Artificial intelligence is increasingly embedded in workplace systems, from automated equipment control and process optimization to scheduling, monitoring, and decision-support tools. As adoption expands across industries, organizations are facing new questions about how these systems affect worker safety and health. New NIOSH AI workplace safety guidance outlines how employers can identify, assess, and manage these risks using established principles of occupational and environmental health and safety.
Rather than treating artificial intelligence as a distinct hazard category, the guidance reframes AI as a software-driven factor that modifies existing occupational hazards. This approach places AI-related safety risks within familiar safety management frameworks, allowing employers to extend current hazard identification, exposure assessment, and control practices to AI-enabled systems.
Defining AI hazards in the workplace
The guidance emphasizes the importance of precise terminology, recommending the term “trained algorithm” over the broader, less specific label “artificial intelligence.” A trained algorithm uses data to influence a system’s output or behavior, such as adjusting operating parameters, prioritizing tasks, or triggering automated actions. This framing helps safety professionals focus on how software functions within real workplace systems.
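To make the definition concrete, here is a minimal sketch of a trained algorithm in this sense: a routine that derives a threshold from historical data and then uses it to trigger an automated action. The sensor readings, threshold rule, and conveyor scenario are illustrative assumptions, not examples from the guidance.

```python
# Minimal sketch of a "trained algorithm": a routine that derives a
# parameter from data and then uses it to influence system behavior.
# All values below are hypothetical.
historical_vibration = [0.8, 1.1, 0.9, 1.3, 1.0]  # past sensor readings

# "Training": derive an action threshold from the observed data.
mean_level = sum(historical_vibration) / len(historical_vibration)
threshold = mean_level * 1.5  # illustrative safety margin

def controller(current_reading: float) -> str:
    """The learned threshold now triggers an automated action."""
    return "slow conveyor" if current_reading > threshold else "continue"

print(controller(1.8))  # -> slow conveyor
```

Even in this toy form, the safety-relevant questions are visible: what data shaped the threshold, and what equipment behavior it now drives.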
NIOSH notes that algorithms have no physical substance and cannot directly create new physical, chemical, or biological hazards. However, trained algorithms can alter how existing hazards are introduced, controlled, or amplified by influencing equipment behavior, process timing, and human interaction with systems. In this way, AI reshapes the overall workplace risk profile rather than replacing traditional hazards.
The guidance also highlights psychosocial risks as a distinct concern. Algorithm-driven changes in work organization, performance monitoring, job autonomy, and skill requirements can increase cognitive load, stress, and uncertainty for workers. These effects are treated as legitimate occupational health risks that warrant the same level of assessment and control as tangible hazards.
The algorithmic hygiene framework
To support systematic evaluation of AI-enabled systems, NIOSH introduces the algorithmic hygiene framework. This proposed framework adapts industrial hygiene principles to software-driven technologies by linking algorithm characteristics to established hazard categories and control strategies.
The framework identifies several system characteristics that influence risk, including data and methodology design, trust between workers and algorithms, trust between workers and management, job reskilling demands, and cybersecurity and hardware integration. These characteristics interact with both tangible and psychosocial hazards, shaping exposure pathways and health outcomes.
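As an illustration of how these characteristics might be put to work, the sketch below models a simple per-system assessment record that a safety team could maintain. The enum values mirror the characteristics above, but the record structure, 1-to-5 concern scale, and flag threshold are hypothetical choices, not part of the NIOSH framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class Characteristic(Enum):
    """System characteristics named in the algorithmic hygiene framework."""
    DATA_AND_METHODOLOGY = "data and methodology design"
    WORKER_ALGORITHM_TRUST = "trust between workers and algorithms"
    WORKER_MANAGEMENT_TRUST = "trust between workers and management"
    RESKILLING_DEMANDS = "job reskilling demands"
    CYBERSECURITY_AND_HARDWARE = "cybersecurity and hardware integration"

@dataclass
class AISystemAssessment:
    """Hypothetical record rating one AI-enabled system per characteristic.

    The 1-to-5 concern scale is an illustrative assumption, not a NIOSH scale.
    """
    system_name: str
    ratings: dict[Characteristic, int] = field(default_factory=dict)

    def rate(self, characteristic: Characteristic, concern: int) -> None:
        if not 1 <= concern <= 5:
            raise ValueError("concern must be between 1 and 5")
        self.ratings[characteristic] = concern

    def flagged(self, threshold: int = 4) -> list[Characteristic]:
        """Return characteristics whose concern meets or exceeds the threshold."""
        return [c for c, v in self.ratings.items() if v >= threshold]

# Example: assessing an automated task scheduler in a lab.
assessment = AISystemAssessment("automated task scheduler")
assessment.rate(Characteristic.WORKER_ALGORITHM_TRUST, 4)
assessment.rate(Characteristic.RESKILLING_DEMANDS, 2)
print([c.value for c in assessment.flagged()])
```

A record like this could feed the same review workflow already used for chemical or equipment hazard logs.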
NIOSH presents algorithmic hygiene as a starting point for fieldwork and future research rather than a finalized standard. The goal is to create a scientific basis for actionable guidance that employers, developers, and policymakers can use to support safe deployment of AI systems.
Prevention and control strategies
The guidance distinguishes between prevention strategies that can be implemented through workplace design and those that must be addressed through software design. Work design controls fall within the responsibility of employers and end users and may include redefining job roles, adding human oversight to automated decisions, updating standard operating procedures, and integrating AI systems into routine safety reviews.
NIOSH emphasizes that existing exposure assessment tools and methods remain relevant, even when algorithmic systems are complex or partially opaque. Applying familiar assessment approaches can help organizations understand how AI changes risk levels and where additional controls may be needed.
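As a small illustration of reusing a familiar method, the sketch below applies a conventional likelihood-times-severity risk score to the same process step before and after an algorithm changes its pacing. The scales and example values are hypothetical.

```python
def risk_score(likelihood: int, severity: int) -> int:
    """Conventional risk-matrix score: likelihood (1-5) times severity (1-5)."""
    return likelihood * severity

# Hypothetical example: an algorithm speeds up a dispensing step,
# raising the likelihood of a spill without changing its severity.
baseline = risk_score(likelihood=2, severity=3)  # manual pacing
with_ai = risk_score(likelihood=4, severity=3)   # algorithm-paced

if with_ai > baseline:
    print(f"Risk rose from {baseline} to {with_ai}; review controls for this step.")
```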
Software design controls, by contrast, must be implemented by developers and technology providers. These include building transparency into systems, conducting alignment evaluations to ensure system behavior matches intended design parameters, and addressing safety considerations early in development. The guidance encourages collaboration between developers and occupational safety professionals to manage risks across the AI system lifecycle.
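One way a development team might operationalize an alignment evaluation is to check an algorithm's proposed actions against the intended design envelope before release. The bounds, setpoints, and function name below are illustrative assumptions rather than anything specified in the guidance.

```python
def within_design_envelope(setpoint: float, low: float, high: float) -> bool:
    """Check that a commanded setpoint stays inside the intended design limits."""
    return low <= setpoint <= high

# Hypothetical evaluation: a trained algorithm proposes oven temperatures,
# and the design specification allows 150-220 degrees C.
proposed_setpoints = [165.0, 240.0, 180.0]
violations = [s for s in proposed_setpoints
              if not within_design_envelope(s, 150.0, 220.0)]
print(f"{len(violations)} of {len(proposed_setpoints)} proposals fall outside the design envelope")
```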
Managing AI risks over time
NIOSH stresses that AI risk management does not end at deployment. Ongoing oversight may include independent audits, algorithmic transparency assessments, and voluntary AI system certification programs that incentivize trustworthy design practices. Structured methodologies such as system safety and safety case approaches can help organizations document how workplace AI safety risks are identified, evaluated, and controlled over time.
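A safety case approach rests on documented, dated evidence that safety claims still hold. The minimal record structure below sketches what such an evidence trail might look like; the field names, claims, and dates are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class SafetyCaseEntry:
    """One dated piece of evidence that a claim about an AI system still holds."""
    recorded_on: date
    claim: str     # e.g., a limit on what the system may do automatically
    evidence: str  # e.g., audit report, transparency assessment, test result

safety_case = [
    SafetyCaseEntry(date(2025, 3, 1),
                    "human override available for all automated stops",
                    "quarterly independent audit, no exceptions found"),
    SafetyCaseEntry(date(2025, 6, 1),
                    "model retrained on updated exposure data",
                    "algorithmic transparency assessment on file"),
]

# Reviewing the trail shows whether evidence is still current as conditions evolve.
for entry in sorted(safety_case, key=lambda e: e.recorded_on):
    print(entry.recorded_on, "-", entry.claim)
```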
These approaches support continuous improvement and provide evidence that AI-enabled systems remain safe to operate as conditions, data, and use cases evolve.
Why this guidance matters to lab managers
By grounding AI risk management in established occupational safety science, the NIOSH guidance gives lab managers a practical pathway to address emerging risks without reinventing their safety programs. As trained algorithms become more common across industries, the algorithmic hygiene framework offers a structured way to integrate AI oversight into routine workplace safety and health management, reinforcing that AI safety is an operational responsibility, not solely a technical one.
This article was created with the assistance of Generative AI and has undergone editorial review before publishing.