Lab Manager | Run Your Lab Like a Business

Harnessing Innovative AI Solutions in Your Lab

Exploring AI-powered analytic and processing tools—and the challenges that accompany them

Holden Galusha

Holden Galusha is the associate editor for Lab Manager. He was a freelance contributing writer for Lab Manager before being invited to join the team full-time. Previously, he was the...


Artificial intelligence (AI) has been a prevalent theme in the news cycle over recent months. AI, which is essentially a computer’s ability to mimic human intelligence, and machine learning (ML), a subset of AI that can improve the accuracy of its output autonomously by “training” on datasets, have the potential to upend entire industries. Some are skeptical, thinking the tech is overhyped. Meanwhile, to the awe of some and the apprehension of others, many believe AI will be a bigger revolution than the internet. One thing is certain: true to the pioneering nature of scientists, many laboratory professionals are finding ways to incorporate AI solutions into their workflows.

Leveraging innovative AI and ML solutions has opened new doors in data analysis, image processing, and lab monitoring—but in opening this Pandora’s Box, there also come significant challenges that the scientific community must address.


Using AI to augment analysis

Currently, data analysis processes likely offer the most opportunities for bolstering your lab’s workflow with AI/ML. AI is particularly well-suited to augment analysis. It can detect patterns in data that are difficult, if not impossible, for humans to detect. This results in two primary benefits: (1) AI can increase lab throughput by quickening the analysis process, and (2) AI offers an additional layer of inspection—humans and machines work in tandem to check each other’s work and cover any gaps.

AI has been applied to several stages of experimental work, including data processing and image analysis.

Data analysis

AI and ML have proven particularly useful in analytic techniques including chromatography, mass spectrometry, and spectroscopy. Scientific instrumentation manufacturers such as METTLER TOLEDO, Agilent Technologies, and JEOL have commercially released AI solutions that enhance the analytic capabilities of labs using these techniques.

One such solution is Agilent’s MassHunter software. MassHunter is a suite of programs that facilitate efficient data collection, qualitative and quantitative analysis, reporting, and other functions involved with gas and liquid chromatography. In summer 2023, Agilent unveiled a new module for MassHunter: AI Peak Integration. AI Peak Integration leverages ML to automate chromatographic peak integration during data analysis, reducing the total processing time. Users can custom train the model by performing manual integrations for it to observe, and it will continue to self-learn and improve.
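Conceptually, automated peak integration boils down to two steps: detecting where a chromatographic trace rises above baseline, and integrating the area under each detected region. The sketch below is a minimal, generic illustration of that idea in Python; the `integrate_peaks` function, its thresholds, and the synthetic trace are illustrative assumptions, not Agilent's algorithm:

```python
import numpy as np

def integrate_peaks(signal, baseline=0.0, min_height=1.0):
    """Find contiguous regions where the signal rises above the
    baseline plus a height threshold, and sum the area of each
    region (a simple Riemann sum with unit spacing)."""
    above = signal > (baseline + min_height)
    peaks, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                       # peak region begins
        elif not flag and start is not None:
            area = float(np.sum(signal[start:i] - baseline))
            peaks.append((start, i - 1, area))
            start = None                    # peak region ends
    if start is not None:                   # region runs to the end
        area = float(np.sum(signal[start:] - baseline))
        peaks.append((start, len(signal) - 1, area))
    return peaks

# Two synthetic Gaussian peaks on a flat baseline
x = np.arange(200, dtype=float)
trace = 10 * np.exp(-((x - 50) ** 2) / 20) + 5 * np.exp(-((x - 140) ** 2) / 40)
print(integrate_peaks(trace))
```

An ML-driven integrator like the one described above replaces the fixed threshold with boundaries learned from an analyst's manual integrations, which is what lets it adapt to noisy or overlapping peaks.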

Similarly, JEOL’s msFineAnalysis AI software, designed for use with their JMS-T2000GC AccuTOF GC-Alpha mass spectrometer, uses two integrated AI models to synthesize GC/electron impact high-resolution data, GC/soft ionization high-resolution data, and structure analysis capabilities to automatically output detailed qualitative analyses. According to JEOL, msFineAnalysis can analyze 100 components in four seconds, while a skilled analyst takes 30 minutes on average to analyze just four components. With msFineAnalysis, an analyst can considerably widen their bandwidth.

...But in opening this Pandora's Box, there also come significant challenges that the scientific community must address.

Finally, METTLER TOLEDO’s AIWizard solution uses a neural network, a type of ML model that mimics biological brains, for intelligent, automated evaluation of thermal effects measured by a differential scanning calorimeter. The network comes pretrained on thousands of data points originating from expert evaluations. Like a real brain, the network will continue to learn and improve as it’s used. With AIWizard, users can redirect their energies toward drawing insight from the AI-evaluated data, saving time and effort.

Image processing

A form of AI commonly used in labs is image analysis and recognition technology. This technology is often used to identify elements of interest in microscopic photographs, medical scans, real-time camera feeds, etc. An example would be automated cell counters, which can independently count cells in microplate wells. Similarly, a 2022 study saw researchers leveraging ML to automatically identify isolated cells, easing the burden on the humans who would otherwise identify them manually in biomedical engineering processes.1

An ambitious project built on this technology is FathomNet, a repository of ocean images intended for use in training ML models for oceanographic applications. Announced in October 2022, FathomNet was founded on more than one million images and more than 28,000 hours of video, all annotated and contributed by the Monterey Bay Aquarium Research Institute. Further data from the National Geographic Society and the National Oceanic and Atmospheric Administration were also added. Ideally, FathomNet will accelerate ocean research and enable more effective ocean health monitoring, among other use cases.2

Open-source software and the future of laboratory AI

Many commercial AI/ML solutions are at the cutting edge of technology, but developments continue to be made in the open-source software space as well. Open-source software is software whose source code is freely available for anyone to download, modify, and run. Numerous AI-powered, open-source lab software solutions are currently in development, some of which have been showcased in journals. While few of them may be ready for production use, they may indicate what the future of AI analytic software looks like.

For instance, a 2022 article published in Bioinformatics introduced PeakBot, an ML model that automates chromatographic peak picking.3 According to the results of the study, “In training and independent validation datasets used for development, PeakBot achieved a high performance with respect to discriminating between chromatographic peaks and background signals (accuracy of 0.99).” While algorithm-driven chromatographic peak detection solutions such as XCMS and MS-Dial already exist, PeakBot, developed by researchers from the University of Vienna, further automates peak picking because it can be trained on user reference data, heightening accuracy. The code used to create the model is publicly available on GitHub, a cloud-based code repository.

AI is particularly well-suited to augment analysis. It can detect patterns in data that are difficult, if not impossible, for humans to detect.

Another 2022 article published in Nature Machine Intelligence details the creation of LC-MS2Struct, an ML model that structurally annotates metabolites more accurately than conventional liquid chromatography-tandem mass spectrometry (LC-MS2) scorers.4 The study authors built the model by compiling publicly available reversed-phase LC-MS2 data from the public reference database MassBank. This training data consisted of 4,327 molecules measured under 18 unique LC conditions from 16 different labs. Having been trained on such a wide breadth of data from so many different labs, LC-MS2Struct can accurately distinguish stereochemical variants, which no other metabolite identification solution can do. Like PeakBot, the code for this project is available on GitHub.

Because open-source AI programs are driven by collaboration and research interest, they offer an agility in development that commercial solutions may find hard to match. As the open-source programs advance, they may serve as a forecast of the features awaiting proprietary solutions.

Using AI in monitoring

AI can be used for monitoring lab assets and their environment. Ultra-low temperature (ULT) freezers are a prime example. Temperature variations larger than a few tenths of a degree within a ULT freezer can deteriorate samples. However, these variations aren’t obvious to the human eye when reviewing ULT temperature logs manually. An AI is much better suited to identifying these patterns. Once its integrated AI flags a pattern of significant temperature variations, a freezer’s monitoring system can alert the user to the impending malfunction or failure, allowing them to address it preemptively and ensure that their valuable samples remain uncompromised. AI can also analyze how samples are arranged within a freezer and recommend different placements to help maintain temperature consistency across the samples.
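A simple way to surface the kind of drift described above is to compare each new log entry against a rolling baseline of recent readings. The sketch below illustrates the idea on a simulated hourly ULT log; the `flag_drift` function, its window size, and its tolerance are assumptions for illustration, and commercial monitoring systems use far more sophisticated models:

```python
import numpy as np

def flag_drift(temps, window=24, tolerance=0.3):
    """Flag log indices that deviate more than `tolerance` degrees C
    from the rolling mean of the previous `window` readings."""
    temps = np.asarray(temps, dtype=float)
    flags = []
    for i in range(window, len(temps)):
        baseline = temps[i - window:i].mean()
        if abs(temps[i] - baseline) > tolerance:
            flags.append(i)
    return flags

# Simulated hourly ULT log: stable near -80 C, then a slow upward drift
rng = np.random.default_rng(0)
log = -80 + 0.05 * rng.standard_normal(72)
log[60:] += np.linspace(0.2, 1.0, 12)   # compressor starting to fail
print(flag_drift(log))
```

The drifted hours at the end of the log are flagged while ordinary sensor noise is not, which is the pattern a human scanning the raw numbers would likely miss.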

Challenges of adopting AI/ML solutions

The advent of generative AI has opened discussion as to which types of workers will be replaced by the technology. Understandably, this has made workers across industries apprehensive about AI. However, in the case of scientific work, scientists need not worry just yet—even if AI performs manual analysis more efficiently than humans, its output must still be reviewed by humans to ensure accuracy and enforce accountability. 

While this does represent job security, it also represents a challenge inherent in automating analysis: verifying results. AI/ML models are far from foolproof, and it would be irresponsible and, arguably, unethical not to closely scrutinize their conclusions. The challenge lies in striking a balance between adequately verifying results and not sinking so much time into verification that the savings afforded by the AI are negated. While some organizations are taking steps to address this issue, such as the College of American Pathologists’ formation of a committee to establish laboratory standards for AI applications, no guidelines for verifying AI models used in labs exist yet.5

Another hurdle that labs may face when incorporating AI/ML solutions is a lack of technical expertise. If a lab wishes to train an AI model on its own data, rather than using a pretrained one, it will need a person or persons with the technical knowledge to compile that data, prepare it for training, carry out the training, and then deploy the trained model on an easy-to-use, reliable digital platform. For this reason, labs that lack in-house expertise or the budget to hire a contractor for that work should stick with out-of-the-box AI solutions. While those solutions won’t incorporate the lab’s internal data, their accessibility and ease of use may still prove valuable.

AI has the potential to revolutionize the way laboratories operate—and for some applications, it already has. With the current explosion in AI advancements, we can expect more innovations to be unveiled. However, challenges such as result verification, accessibility, and the creation of clear, broadly applicable implementation guidelines must be addressed before this technology can reach its full potential.


1. Debnath et al. “Automated detection of patterned single-cells within hydrogel using deep learning.” Scientific Reports. 31 October 2022. Accessed 5 July 2023.

2. “FathomNet.” Accessed 7 July 2023.

3. Bueschl et al. “PeakBot: machine-learning based chromatographic peak picking.” Bioinformatics. 23 May 2022. Accessed 2 June 2023.

4. Bach et al. “Joint structural annotation of small molecules using liquid chromatography retention order and tandem mass spectrometry data.” Nature Machine Intelligence. 19 December 2022. Accessed 19 June 2023.

5. Blum, Karen. “A Status Report on AI in Laboratory Medicine.” American Association for Clinical Chemistry. 1 January 2023. Accessed 17 June 2023.