Lab Manager | Run Your Lab Like a Business

Design and Implement Lab Automation to Increase Safety

Use the hierarchy of controls, PTD, and human factors to drive down risk

Jonathan Klane, M.S.Ed., CIH, CSP, CHMM, CIT

Jonathan Klane, M.S.Ed., CIH, CSP, CHMM, CIT, is senior safety editor for Lab Manager. His EHS and risk career spans more than three decades in various roles as a...


As the engineering college’s “safety guy,” I was often asked to approve student and researcher projects. 

During one of these instances, I approached the automation testing display that the students had rebuilt from an industry donation. The first thing I noticed was the huge orange robotic arm. 


It looked like someone had torn off The Thing’s arm and plunked it in the middle of a boxing ring. The Thing is a character in the comic book The Fantastic Four who could stop a Mack truck. 

Outrage is an old survival tool

Humans can be downright hostile to machines and robotic applications. About 200 years ago, the Luddites sabotaged textile machinery that outraged them.2 More recently, outraged citizens have attacked self-driving cars with knives and rocks.3 

Outrage is a holdover from our evolution, when it helped us survive. If a member of our tribal group behaved outrageously and brought risk upon us (e.g., from a prowling saber-toothed tiger), he was likely kicked out, became that tiger’s lunch, and his genes weren’t passed on to future generations. 

We still harbor outrage today. If new automated lab equipment is perceived as a threat to value (e.g., economic or sense of self), it may go unused, be worked around, or suffer some sort of mysterious damage. So how can lab managers mitigate these negative outcomes? Design it with human risks in mind and implement it with the team’s willing buy-in. 

A question to ask is: how can automation alter lab configuration or process? People need reassurance, meaningful involvement, and a sense of control. No one wants to be assimilated, and resistance often isn’t futile in work settings. As economist Steven Levitt puts it: “The more rules put in place, the more ways humans will find to circumvent them.” 

It’s all about the design

Machines are designed before being built or sold to users. Some are designed well; others aren’t. One aspect is how well a machine is designed to reduce risks and be safe. There are three design principles to keep in mind—prevention-through-design (PtD), the hierarchy of hazard controls, and Raymond Loewy’s MAYA principle.

PtD is an overarching approach to reducing risk during the design phase of any project. With automated lab equipment, it can involve anticipating typical human curiosity, behaviors, and reactions. It might mean enclosing moving parts, using fasteners that aren’t easy to undo and open, or specifying lower voltages/amperages and slower motions that can be stopped in an instant.


The hierarchy of hazard controls is a prioritized approach to implementing controls. Its two guiding principles are to reduce risk as much as possible and to keep the hazard as far from humans as possible. Here’s the order with specific examples: 

  1. Eliminate the use of a toxic chemical via a new process 
  2. Substitute a hazardous manual process with an automated one
  3. Isolate automated equipment from workers
  4. Engineer guards and auto-shutdown features 
  5. Enhance or alter work practices
  6. Administer effective training about robotic equipment 
  7. Rely on personal protective equipment (PPE) only where still necessary
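The prioritized ordering above can be sketched in code. This is a hypothetical illustration, not a tool from any standard: given the controls a lab finds feasible, it returns them in hierarchy order, most protective first.

```python
# Illustrative sketch of the hierarchy of hazard controls.
# Control names and the feasibility set are assumptions for this example.

HIERARCHY = [
    "eliminate",       # 1. remove the hazard entirely (e.g., a new process)
    "substitute",      # 2. swap a hazardous manual process for a safer one
    "isolate",         # 3. separate automated equipment from workers
    "engineer",        # 4. guards and auto-shutdown features
    "work_practices",  # 5. enhanced or altered procedures
    "administrative",  # 6. training and policies
    "ppe",             # 7. last line of defense
]

def select_controls(feasible: set[str]) -> list[str]:
    """Return the feasible controls sorted by the hierarchy (most protective first)."""
    return [control for control in HIERARCHY if control in feasible]

print(select_controls({"ppe", "engineer", "substitute"}))
# → ['substitute', 'engineer', 'ppe']
```

The point of the ordering is that even when several controls are feasible, you work down the list from the top rather than reaching for PPE first.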

MAYA is a design principle originated by Raymond Loewy, the father of industrial design.4 MAYA stands for most advanced yet acceptable. He discovered through testing his designs that humans want things to feel advanced and innovative, just not too much. They love the familiar but not anything alien or unnatural. They’ll use items that are intuitive but will reject those that are arcane or dense. If that new piece of lab automation is too advanced, don’t be surprised when staff reject it. 

What could occur, will occur

Humans overly rely on technologies to perform to specification. But machinery fails, glitches, or breaks down. Humans are also notoriously inconsistent and unpredictable. The toughest risk factor to determine is probability. Given time and repetition, what could occur, will occur. 

Our intended processes are “work as imagined,” not “work as performed.” Things go awry for many reasons. And when an unanticipated human reaction meets an unstoppable machine cycle, the machine wins the battle every time.  

A successful case 

The Thing’s arm didn’t have a mind of its own; the student-researchers used a manual controller. It was two-handed with a high/low dead man’s switch. If they stopped squeezing it or let go, the arm stopped dead. And if they squeezed it too tightly (as you might if startled or scared), it also stopped dead. 
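The high/low dead man’s switch logic described above can be sketched as a simple pressure band check. The threshold values and function names here are made up for illustration; a real controller would read grip sensors in a hard real-time loop.

```python
# Hypothetical sketch of a two-handed high/low dead man's switch:
# the arm may move only while BOTH grips stay inside a safe pressure band.
# Thresholds are illustrative assumptions, not real specifications.

LOW_THRESHOLD = 5.0    # below this, the operator has let go
HIGH_THRESHOLD = 40.0  # above this, the operator is likely startled

def arm_may_move(left_grip: float, right_grip: float) -> bool:
    """Stop dead if either hand releases (too low) or clenches (too high)."""
    return all(LOW_THRESHOLD <= grip <= HIGH_THRESHOLD
               for grip in (left_grip, right_grip))
```

The design choice worth noting is that both failure directions—letting go and the startle reflex—map to the same safe state: the arm stops.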

The wiring was impressive, too. Everything was neatly arranged, connected, and protected. They’d followed both the design specs and an electrical standard. It was verified by an electrical systems designer and field tested by an electrician. 

After witnessing The Thing in action, I had a reaction reminiscent of the movie Jaws: “You’re gonna need a bigger enclosure.” And they were receptive to my suggestion. They expanded the “hazard zone” by two feet and erected a safety barrier to enforce it.

And thankfully, it operated very slowly; no “rapid jabs” were allowed. We established a safe setback distance with barriers so users or observers wouldn’t inch forward. I approved its use. Their results? No incidents or injuries occurred during my time there. 

Exposure to and control of hazardous energy 

Equipment is powered and stores potentially hazardous energy forms (e.g., electrical, mechanical, thermal, etc.). And these machines occasionally jam, get stuck, or break down. Do your people try to unjam, fix, or service it? If so, what injuries do they risk? Some include electrical shock, mechanical crushing, steam burns, eye splatters, etc.


Training and a lockout/tagout (LOTO) program are needed to reduce the risk of injuries. There are two training levels, depending on what people are allowed to do. Those who will perform work on energized equipment require authorized-level LOTO training. And awareness-level training is necessary for everyone else who could be affected by LOTO operations. 
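The two-level split described above is simple enough to express as a lookup. This is a hypothetical sketch for a lab roster, not language from any regulation; the role flag is an assumption for the example.

```python
# Illustrative sketch of the two LOTO training levels:
# authorized-level for those who service energized equipment,
# awareness-level for everyone else affected by LOTO operations.

def loto_training_level(performs_energized_work: bool) -> str:
    """Map a person's role to the required LOTO training level."""
    return "authorized" if performs_energized_work else "awareness"

# Hypothetical roster: name -> does this person unjam/service equipment?
roster = {"Priya": True, "Marcus": False}
for name, services_equipment in roster.items():
    print(name, loto_training_level(services_equipment))
```

The practical takeaway mirrors the text: nobody on the roster gets zero training—the only question is which of the two levels applies.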

Don’t forget these principles 

Start with PtD and let it drive your risk reduction process. Follow the hierarchy of hazard controls in order. Prosecute the problem statement with vigor. Ask yourself (or others) whether elimination or substitution are feasible options. And don’t automatically resort to a PPE-first mindset: when PPE fails, the wearer is exposed to the hazard. 

Human needs come first over automated equipment or robotic machines. A well-known saying in safety is, “Protect people, property, and the environment”. Note the order of those—people first. Equipment must be designed and used safely to help improve the work setting. 

Train everyone on LOTO. They’re either authorized to perform repairs and unjam it (and need more intensive training) or they’re affected by LOTO procedures done by others (and need awareness level training). 

Implement applicable standards, as they help improve your lab’s PtD approaches. There are too many to adopt them all, so find the one(s) that will do the most for your lab’s safety. 

Test your systems for operational safety. Does it work as intended? What about the human-machine interface? Be sure to perform safe tests on each step in the process.  

And most important, prevent battles between machines and flesh and blood, because machines win every time, and it’s not pretty. 


1. Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux; First Edition. April 2, 2013. ISBN-13: 978-0374533557. 

2. Conniff, Richard. “What the Luddites Really Fought Against.” Smithsonian Magazine. March 2011.

3. Picchi, Aimee. “Waymo's self-driving cars have faced slashed tires, thrown rocks.” CBS News. Dec. 13, 2018.

4. Dam, Rikke Friis. “The MAYA Principle: Design for the Future, but Balance It with Your Users’ Present.” Interaction Design Foundation. Jan. 22, 2021.