
Ask the Expert

The Future of Lab Robotics

Matthew Gombolay, PhD, discusses the hopes and fears surrounding robotic implementation and advises readers on the scenarios where such a change is most likely to succeed

Tanuja Koppal, PhD

Matthew Gombolay, PhD, assistant professor of Interactive Computing at the Georgia Institute of Technology, talks to contributing editor Tanuja Koppal, PhD, about the future of lab robotics. He discusses the hopes and fears surrounding robotic implementation and advises readers on the scenarios where such a change is most likely to succeed.


Q: How is robotics going to impact the workings in a lab in the short and long term?

A: In the short term (one to two years), I do not think we will see significant changes in laboratories resulting from robotics. Traditional robotics has been around for decades, and it has utility only in large-scale, high-throughput operations. As such, laboratories with robust financial support and a high volume of non-dexterous tasks (e.g., pipetting) might continue to benefit from modern robotics. One might speculate that the pandemic could accelerate plans to automate, given the challenge of having humans work in close proximity; however, that remains to be seen.

Back in 2012, there was the potential for an exciting revolution in flexible, adaptable robotics. This charge was led by the likes of Rodney Brooks, who started both iRobot (the Roomba company) and Rethink Robotics. Rethink Robotics came out with Baxter, a two-arm robot priced relatively cheaply at about $20,000-$30,000 that could be programmed by end users (not roboticists!) to pick, place, and stack objects. This robot was considered by some to be “inherently safe” (i.e., it was designed not to seriously injure a user if it were to hit them), owing to its plastic/rubber exterior and series-elastic actuators (motors with springs). Unfortunately, the series-elastic actuators meant to make the robot safe also made it inaccurate. The visual servoing was also imprecise. The company did not fare well and shut down in 2018. The HAHN Group GmbH has since revived it, and I hope they succeed in making Baxter a commercial success.
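To make that safety-versus-accuracy trade-off concrete, here is a minimal sketch that models a series-elastic actuator as a torsional spring between the motor and the link. The stiffness and torque values are invented for illustration and are not Baxter’s actual specifications.

```python
# Toy model of a series-elastic actuator (SEA): a spring of stiffness k
# (N*m/rad) sits between the motor and the link. Illustrative numbers only;
# this is not Rethink's control model.

def link_error_rad(tau_external, k_spring):
    """Steady-state link deflection caused by an external torque.

    With a spring in series, any torque on the link winds up the spring,
    so the link settles tau / k away from the motor's commanded position
    (Hooke's law for a torsional spring).
    """
    return tau_external / k_spring

tau = 2.0  # N*m of contact/gravity torque on the link (assumed)

for k in (50.0, 500.0):  # soft (compliant, safer) vs. stiff spring
    print(f"k = {k:5.0f} N*m/rad -> deflection = {link_error_rad(tau, k):.4f} rad")

# A soft spring limits impact forces, which is safer around people, but it
# lets the same disturbance torque pull the link 10x farther off target.
# That is the accuracy penalty described above.
```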

I think the five- to 10-year time frame is the right window to think about how another attempt at a “Baxter” could revolutionize small-scale commercial operations in labs. If we can make a robot that costs $20,000 while being intelligent enough and physically capable enough to perform helpful tasks (something more than vacuuming), that would be a game-changer. We could see everything from robots in labs studying rat models of disease assisting with snipping and analyzing rat tails (no more bitten hands) to robots that pose their own hypotheses and design their own experiments. However, we are not quite there yet.

Q: What are some of the biggest concerns that people have when it comes to adopting, integrating, and implementing robotics into their workflows? How can some of those concerns be addressed?

A: I think the biggest concern people have when adopting any technology is how long it will take to get a return on their investment (if they ever see a return at all). This concern is no different for robotics. What separates “robots” from, say, a polymerase chain reaction machine is that robots are supposed to be “general purpose”: adaptable to any use case, capable of moving around in their environment, and able to physically interact with that environment. Another key difference is that robots are still niche enough that adopting, integrating, and implementing them in a workflow requires a robotics company to consult on the application, develop the control algorithms, install the robots, and tune the system to work. The fixed cost of this installation can be a showstopper for a purchaser whose throughput is not high enough to make the investment worthwhile. This is why automotive manufacturing is more than 50 percent automated with robotics, aerospace final assembly only about 20 percent, and submarine assembly approximately zero percent: it comes down to throughput. In an academic setting, it is still more cost-effective to pay a graduate student to pipette than it is to purchase and program a robot.
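The throughput argument lends itself to a back-of-the-envelope break-even calculation. The sketch below uses invented costs, not figures from the interview; the point is simply that the same fixed integration cost amortizes very differently at different throughput levels.

```python
# Hypothetical break-even model for a robot purchase. All numbers are
# placeholders; substitute your own quotes and loaded labor rates.

fixed_cost = 150_000.0         # robot + integration/consulting, USD (assumed)
annual_maintenance = 10_000.0  # service contract, USD/year (assumed)
labor_cost_per_task = 2.50     # loaded human cost per pipetting run, USD (assumed)
robot_cost_per_task = 0.25     # consumables + power per run, USD (assumed)

saving_per_task = labor_cost_per_task - robot_cost_per_task

def payback_years(tasks_per_year):
    """Years to recoup the fixed cost at a given annual throughput."""
    net_annual_saving = tasks_per_year * saving_per_task - annual_maintenance
    if net_annual_saving <= 0:
        return float("inf")  # the robot never pays for itself
    return fixed_cost / net_annual_saving

for tasks in (5_000, 20_000, 100_000):
    print(f"{tasks:7,} tasks/year -> payback in {payback_years(tasks):.1f} years")

# Under these assumptions: ~120 years at 5,000 tasks/year, ~4.3 years at
# 20,000, and under a year at 100,000 -- the throughput effect in miniature.
```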

The “elephant in the room,” of course, is the fear of robots replacing human work. Robots have been displacing jobs for decades; that is true. While there is evidence that the productivity of US workers has increased while wages have remained relatively stagnant, we have not yet seen a corresponding increase in the unemployment rate. Nonetheless, if a company wants to adopt a robot technology, there needs to be a plan to address these fears truthfully. For those workers who are displaced, employers should have a training and retention program to help shift them into roles as operators or technicians of collaborative robot technology, or into other roles. Ethically, I believe the productivity gains from adopting robots should be fed back into supporting the displaced workers and helping employees reap the benefits of that increased productivity. Such programs should be developed hand-in-hand with the human workers and agreed to before any adoption of the technology takes place.

Q: Can you share some of the details of your findings to help design robots that can function intelligently and collaboratively?

A: I have been conducting experiments over the last decade to understand how to design robots to be intelligent, collaborative partners for humans. I have found quite a lot! If I were going to share one finding, it would be that intelligent robots are a double-edged sword. In the right settings, robots can increase the satisfaction of human teammates by enhancing productivity and decreasing the burden placed on humans for tasks they are not well-suited to. However, robots can also set people up to fail in critical ways. For example, I found that if a human supervisor is asked to turn over decision-making responsibility for developing a work team's schedule to a robot, the supervisor will immediately lose track of who is doing what (and who is supposed to do what). We call this degraded situational awareness. It is a problem. In my lab, we are developing robot behaviors and human-robot interaction modalities that enable a robot to regulate human situational awareness, ensure the human does not over-trust the robot, and ensure the human can take over in the event the robot fails. Like humans, robots will fail (e.g., the high-profile crashes of test automobiles running on autopilot). I want to make sure we design robots that never lull the user into complacency unless the situation truly is safe.
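As a purely hypothetical illustration of that idea (this is not a system from Dr. Gombolay's lab), the sketch below probes an operator's recall of the current schedule and drops the robot back to a confirm-every-decision mode when awareness has degraded; all names and tasks are invented.

```python
# Toy interaction policy for regulating operator situational awareness (SA):
# before acting autonomously, the robot probes whether the operator still
# knows who owns each task; a failed probe revokes autonomy. Illustrative
# only -- not the interviewee's actual system.

schedule = {"prep samples": "alice", "run assay": "bob", "log data": "cara"}

def sa_probe(operator_answer, task):
    """Return True if the operator correctly recalls who owns `task`."""
    return operator_answer == schedule[task]

def next_mode(currently_autonomous, probe_passed):
    """Autonomy is kept only while the operator demonstrates awareness."""
    return currently_autonomous and probe_passed

# Simulated interaction: the operator has lost track of "run assay".
autonomous = True
for task, answer in [("prep samples", "alice"), ("run assay", "dave")]:
    autonomous = next_mode(autonomous, sa_probe(answer, task))
    mode = "autonomous" if autonomous else "confirm-each-step"
    print(f"After probing '{task}': robot operates in {mode} mode")
```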

Q: What advice would you give to lab managers who are looking to invest in robotic technologies to improve the functioning and productivity of their labs?

A: In one sentence: Be willing to throw away the old way of doing things. Time and time again, I have seen organizations that want to reap the benefits of artificial intelligence and robotics spend millions of dollars to start developing and deploying the technology, only to fail. That failure often occurs because the roboticists identify obstacles to deployment and need the customer to adapt their workflow, but because the customer has honed that workflow for years or decades, there is a fear of changing the status quo. There are really only two successful application models of robotics that I have personally seen. Organizations either build a new facility from the ground up with the desired robotic technology incorporated into the design from the get-go, or they leverage flexible platforms (e.g., Rethink Robotics’ Baxter) for tasks that do not require high precision. In the future, more robots like Baxter will come online, and I think that will bring about a revolution in enabling customers to deploy robots easily and flexibly.


Dr. Matthew Gombolay is an assistant professor of Interactive Computing at the Georgia Institute of Technology. He received a BS in mechanical engineering from Johns Hopkins University in 2011, an MS in aeronautics and astronautics from MIT in 2013, and a PhD in autonomous systems from MIT in 2017. Gombolay’s research interests span robotics, AI/ML, human-robot interaction, and operations research. Between defending his dissertation and joining the faculty at Georgia Tech, Dr. Gombolay served as a technical staff member at MIT Lincoln Laboratory, transitioning his research to the US Navy and earning an R&D 100 Award. His publication record includes a best paper award from the American Institute of Aeronautics and Astronautics, a best student paper finalist selection at the 2020 American Control Conference, and a best paper finalist selection at the 2020 Conference on Robot Learning. Dr. Gombolay was selected as a DARPA Riser in 2018, received first place for the Early Career Award from the National Fire Control Symposium, and was awarded a NASA Early Career Fellowship for increasing science autonomy in space.