Laboratory automation is evolving beyond robotic arms and pipetting systems. Emerging technologies in robotics, data integration, and AI are transforming how labs operate—making experiments faster, more reproducible, and increasingly autonomous. To understand where the field is heading, Lab Manager spoke with Meghav Verma, VP of automation at Axle Informatics, who shared insights on the most exciting developments in lab automation, what’s real in AI/ML, and the challenges that stand between today’s labs and full autonomy.
Note: These responses have been edited for clarity and style.

Q: What recent advancements in lab robotics and automation have caught your attention?
A: (1) Smarter, lower-friction liquid handling. Acoustic droplet ejection has matured from niche to routine, letting us move nanoliters without tips, which is great for miniaturizing assays and cutting plastic and reagent spend. I'm also seeing positive-displacement/non-contact dispensers that handle viscous or volatile liquids far more reliably than air-displacement systems while still reaching nanoliter scale. Together, these are making tiny-volume workflows practical (see the picklist sketch after this list).
(2) Open interfaces and standards. Adoption of SiLA 2 and data standards like the Allotrope Framework is gradually improving plug-and-play integration and data portability. We're not there yet, but vendor support and open-source control layers like PyLabRobot and PyHamilton have meaningfully lowered the integration effort for mixed-vendor cells (see the adapter sketch after this list).
(3) Lab-scale mobility. Autonomous mobile robots (AMRs) are moving plates, consumables, and samples between instruments and rooms—closing the gap between otherwise well-automated islands. Clinical labs led here, but research labs are catching up as fleet software and docking keep improving.
(4) Cloud-operated labs. Cloud labs are real and steadily expanding the menu of instruments you can drive remotely; some universities now run coursework and research this way, and the commercial options continue to harden. They're not for every workflow, but they make high-end methods accessible without buying and maintaining the stack. Virtual training for instruments and lab environments is also proving to be an effective use case.
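To make the tiny-volume point concrete, here is a minimal sketch of how acoustic and non-contact transfers are typically specified in software: as a picklist of source well, destination well, and volume, with volumes snapped to the instrument's droplet increment. The column names and the 2.5 nL droplet size are illustrative assumptions, not any particular vendor's format.

```python
import csv

# Illustrative only: column names and the droplet increment vary by instrument.
DROPLET_NL = 2.5  # assumed minimum droplet size in nanoliters

def build_picklist(transfers, path="picklist.csv"):
    """Write a simple transfer list (source well, destination well, volume in nL),
    rounding each volume to a whole number of droplets."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Source Well", "Destination Well", "Transfer Volume (nL)"])
        for src, dst, vol_nl in transfers:
            droplets = max(1, round(vol_nl / DROPLET_NL))
            writer.writerow([src, dst, droplets * DROPLET_NL])

# Example: miniaturized assay setup, small nanoliter additions into assay wells.
build_picklist([("A1", "B2", 25), ("A1", "B3", 25), ("A2", "B4", 12.5)])
```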
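And to illustrate what open control layers buy you, here is a hypothetical hardware-agnostic driver interface in the spirit of PyLabRobot and PyHamilton (their actual APIs differ): protocol code is written once against a small abstraction, and each vendor instrument gets a thin adapter.

```python
from abc import ABC, abstractmethod

class LiquidHandlerDriver(ABC):
    """Hypothetical minimal driver interface; real open-source layers expose far more."""

    @abstractmethod
    def aspirate(self, well: str, volume_ul: float) -> None: ...

    @abstractmethod
    def dispense(self, well: str, volume_ul: float) -> None: ...

class VendorAAdapter(LiquidHandlerDriver):
    """Thin wrapper that would translate calls into vendor A's SDK or SiLA 2 commands."""
    def aspirate(self, well, volume_ul):
        print(f"[vendor A] aspirate {volume_ul} uL from {well}")
    def dispense(self, well, volume_ul):
        print(f"[vendor A] dispense {volume_ul} uL into {well}")

def transfer(driver: LiquidHandlerDriver, src: str, dst: str, volume_ul: float):
    """Protocol code depends only on the abstract interface, not on any vendor."""
    driver.aspirate(src, volume_ul)
    driver.dispense(dst, volume_ul)

transfer(VendorAAdapter(), "A1", "B1", 50.0)
```

Swapping instruments then means writing a new adapter, not rewriting protocols.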
Q: AI/ML in lab automation—what’s real vs. overhyped?
A: I look for three things: (a) closed-loop sensing (what is the model reading each cycle?), (b) a well-defined objective function (throughput, yield, error rate), and (c) cycle time to learning (how quickly does a result change the next action?). If any of these is fuzzy, it's usually a research demo rather than a production-ready tool.
What’s real today
(1) Closed-loop optimization for narrow problem classes. When the physics are well-instrumented (e.g., reaction yield, catalyst activity, enzyme thermal stability), ML can propose the next best experiment and robots can execute it—continuously. This has delivered measurable improvements in several published campaigns (a minimal loop is sketched after this list).
(2) Computer vision for QC. Off-the-shelf models now catch missing tips, liquid level issues, turbidity, barcode errors, and plate positioning errors. This is inexpensive and has an immediate effect on data quality (a toy example follows this list).
(3) Scheduling and data plumbing AI. Orchestrators and data platforms use rules/ML to de-bottleneck cells, validate instrument outputs, and eliminate manual transcription into ELNs/LIMS. While it is less glamorous than self-driving labs, it's huge for throughput and compliance (sketched below).
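Illustrating point (1), here is a minimal closed-loop sketch: a Gaussian-process surrogate plus an upper-confidence-bound rule proposes the next condition, a (simulated) experiment measures it, and the model updates each cycle. The one-dimensional yield-versus-temperature objective and the run_experiment stub are stand-ins for a real instrument readout.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Stand-in for the instrument: in a real loop this would trigger a robot run
# and return the measured objective (yield, activity, stability, ...).
def run_experiment(temperature_c: float) -> float:
    return -((temperature_c - 62.0) ** 2) / 100.0 + np.random.normal(0, 0.05)

candidates = np.linspace(30, 90, 121).reshape(-1, 1)   # conditions we are allowed to try
X, y = [[45.0]], [run_experiment(45.0)]                 # seed with one initial measurement

for cycle in range(10):
    gp = GaussianProcessRegressor().fit(X, y)           # update surrogate with all data so far
    mean, std = gp.predict(candidates, return_std=True)
    next_x = candidates[np.argmax(mean + 1.5 * std)]    # upper-confidence-bound acquisition
    X.append(list(next_x))
    y.append(run_experiment(next_x[0]))                 # robot executes, sensor closes the loop

best = X[int(np.argmax(y))][0]
print(f"Best temperature after {len(y)} runs: {best:.1f} C")
```

Note that the loop satisfies all three checks above: it reads a measurement every cycle, optimizes an explicit objective, and each result immediately changes the next action.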
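For point (2), a deliberately simple toy version of a vision QC check: compare the brightness of a camera region of interest against a threshold to flag a possibly missing tip. The ROI coordinates and threshold are placeholders; production systems generally use trained models rather than fixed thresholds.

```python
import cv2

def tip_present(image_path: str, roi=(100, 100, 40, 40), threshold=60.0) -> bool:
    """Toy check: a missing tip leaves the ROI close to the dark deck background.
    roi is (x, y, width, height); the values here are placeholders."""
    frame = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if frame is None:
        raise FileNotFoundError(image_path)
    x, y, w, h = roi
    mean_brightness = float(frame[y:y + h, x:x + w].mean())
    return mean_brightness > threshold

# Usage: flag the deck for operator review before the run proceeds.
# if not tip_present("deck_camera_frame.png"):
#     print("QC alert: possible missing tip at position 1")
```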
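And for point (3), the unglamorous plumbing: validate an instrument's CSV export against expected ranges and hand clean records to an ELN/LIMS connector instead of retyping them. The column names, limits, and the post_to_eln stub are assumptions for illustration.

```python
import csv

EXPECTED_RANGE = {"od600": (0.0, 2.0)}  # assumed plausible limits for this readout

def validate_rows(path: str):
    """Yield (row, errors) pairs; an empty error list means the row is safe to post."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            errors = []
            lo, hi = EXPECTED_RANGE["od600"]
            try:
                value = float(row["od600"])
                if not lo <= value <= hi:
                    errors.append(f"od600 {value} outside [{lo}, {hi}]")
            except (KeyError, ValueError):
                errors.append("missing or non-numeric od600")
            if not row.get("sample_id"):
                errors.append("missing sample_id")
            yield row, errors

def post_to_eln(record: dict) -> None:
    # Placeholder: in practice this would call your ELN/LIMS REST API or connector.
    print(f"posted {record['sample_id']} -> ELN")

# for row, errors in validate_rows("plate_reader_export.csv"):
#     post_to_eln(row) if not errors else print("quarantined:", row, errors)
```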
What’s overhyped (for now)
(1) “Press a button and an LLM runs any wet-lab experiment end-to-end.” General-purpose autonomy still runs into non-standard hardware, non-standard protocols, messy metadata, and weak sensing. Even cutting-edge autonomous chemistry often relies on simplified infrastructure and limited analytics that don’t fully mirror human workflows.
(2) Model performance without data foundations. Many labs want AI benefits without FAIR data, instrument telemetry, and standardized context. Without that groundwork (and negative results!), model gains rarely translate into robust day-to-day performance.
Q: What are the biggest barriers to innovation in lab automation?
A: (1) Integration black box. Proprietary drivers and data formats still make multivendor cells costly to stand up and maintain. Standards help, but adoption is uneven across device classes. We should try to build around open protocols first; use vendor SDKs only when you must.
(2) Data hygiene and context. AI and analytics stall when instruments emit PDFs and CSVs with sparse metadata. Invest early in a data layer that standardizes, normalizes, annotates, and moves data automatically to ELN/LIMS/analytics (a minimal sketch follows this answer). It's tedious work upfront but foundationally critical.
(3) Human capital. Automation engineers who are cross-disciplined are scarce. I've had better luck upskilling curious scientists and engineers through structured rotations than hiring unicorns who still need a long ramp-up in the field.
(4) Procurement, security, and facility realities. In large/federal organizations, cloud services need FedRAMP and an Authority to Operate, and new tools are often delayed when continuing resolutions restrict new starts. Plan rollouts in phases, leverage preauthorized platforms, and avoid hard dependencies on a single fiscal window.
What has reliably worked for me is to start with a pilot on a contained workflow that can demonstrate a 3–6× improvement on a metric (throughput, CV, hands-on time, data fidelity), then scale horizontally with standardization (protocol templates, drivers, and data schemas) rather than one-off integrations. Pair every hardware purchase with the data/IT work it implies; otherwise, ROI erodes.
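Returning to point (2) above, here is a minimal sketch of what such a data layer does: map a vendor-specific export onto one standardized, annotated record so that context (instrument, protocol, units, timestamp) travels with every measurement. The schema fields and vendor column names are assumptions, not a formal standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Measurement:
    """Hypothetical standardized record; field names are illustrative, not a formal standard."""
    sample_id: str
    quantity: str
    value: float
    unit: str
    instrument_id: str
    protocol_id: str
    acquired_at: str

def normalize(vendor_row: dict, instrument_id: str, protocol_id: str) -> Measurement:
    """Map one vendor-specific CSV row onto the shared schema, attaching context."""
    return Measurement(
        sample_id=vendor_row["Sample Name"].strip(),   # vendor column names are assumptions
        quantity="absorbance_600nm",
        value=float(vendor_row["OD 600"]),
        unit="AU",
        instrument_id=instrument_id,
        protocol_id=protocol_id,
        acquired_at=datetime.now(timezone.utc).isoformat(),
    )

record = normalize({"Sample Name": "S-0042", "OD 600": "0.87"}, "reader-01", "growth-check-v3")
print(asdict(record))  # ready for ELN/LIMS ingestion or analytics
```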
Q: Is hardware automation plateauing compared to digital automation?
A: Short answer: No—hardware is quietly getting much better; it just ships on physical timelines. A few examples:
(1) Precision at tiny volumes (acoustic, non-contact, positive-displacement) plus inline verification is letting small teams run experiments that used to require a dedicated HTS core.
(2) Modular benches/workcells with integrated transport (e.g., magnetic conveyance layers, compact co-bots) are increasing walk-away time without consuming entire rooms.
(3) Autonomous mobile robots are stitching together islands of automation across rooms and floors; they’re already proven in clinical settings and are now entering research environments.
(4) Open, hardware-agnostic SDKs are shortening the time from idea to working protocol and reducing vendor lock-in. That lowers the risk for forward-leaning labs to try novel hardware.
Digital tools (schedulers, data connectors) are maturing faster, with improvements shipping almost overnight, so we can orchestrate mixed instrumentation with less custom glue. But the physical layer—sensing, motion, liquid physics, sterility—has seen steady, material gains. The best results come from co-designing hardware and digital layers, not choosing one over the other.