Artificial Intelligence Trends to Leverage in 2023

AI and machine learning tools are increasing their value to life science labs

by Gail Dutton

Artificial intelligence (AI) and machine learning (ML) are arguably two of the most important technologies affecting life science labs today. For the purposes of this article, think of AI as the means of extracting data and ML as a way to analyze and learn from it. Applications are proliferating and, whether the labs are dry or wet, AI and ML projects are either underway or likely will be soon. 

New tools are emerging to support quality control scheduling and planning, as well as to accelerate lab testing. Others support data set preparation, analysis, and modeling. Still others support drug target discovery.

Incorporating AI/ML into the workflow used to require coding skills and deep familiarity with computing systems. Lab managers still need to know how to access and share data across their system, but the AI/ML applications emerging today are almost plug-and-play. Implementing AI/ML in the lab has never been easier. 

Cloud computing is a prerequisite

Whether you’re starting from scratch or already use some AI/ML applications in your lab, now is the time to move your computing operations into the cloud if you haven’t already done so. 

“Cloud computing is the groundwork that allows for the consolidation of data from several instrument-based silos into a single repository from which historical and real-time information may be accessed…by distributed teams,” says Inga Broerman, vice president of marketing, BluLogix. 

One of the ways to do this is through Software as a Service (SaaS). “SaaS is an online cloud-based service that provides computing power on demand for a certain program,” Broerman says. “Its advantages are greatest for smaller facilities” that have limited compute power.

Another way is to work with your organization’s IT department to migrate all or part of your computing operations to the cloud. Whichever approach you choose, allowing your AI/ML applications to access the broad swath of data your lab handles will enable more comprehensive analyses and, hence, more accurate outcomes. 

AI/ML is the prelude to Industry 4.0

The proliferation of AI/ML solutions for life science labs is accelerating, and there will be a significant push from users who are eager to expand their own access to data, says Jo Varshney, DVM, PhD, founder and CEO of VeriSIM Life. As those users and their colleagues break down the silos that have trapped their data, that data can see more widespread use at their own sites and by sibling teams and sites throughout their organization. 

Labs, consequently, can model extremely complex systems that were unwieldy before cloud computing, AI, and ML were integrated. For example, Amanda Randles, PhD, assistant professor at Duke University, performs a lot of physics-based modeling and simulations in her lab. “We’re seeing ML augment the simulation size, drive inputs to the simulation, and analyze outputs from the simulation. We’re seeing a group of ML algorithms that can take in disparate data—data from electronic health records, physics-based simulations, and wearables, for instance—and connect them.” 

With the influx of such vast quantities of dissimilar data, lab managers are starting to play a bigger role in data management, Randles says. The data is increasingly being stored in data lakes rather than siloed.

Solutions to training challenges

Huge quantities of data create a training challenge, however. “Even with a large computing system, it’s hard to do,” Randles says. “For example, we’re creating a large-scale simulation using some of the world’s biggest supercomputers. We’re using 5 petabytes of data for every time-step and are running a million time-steps, so it’s not physically possible to download all of that (to train the AI/ML). Therefore, we need a way to access that data while it’s being generated and train the model in the midst of the simulation without slowing the simulation.”

One approach leverages central processing units (CPUs) to create the visualization and capture the data while graphics processing units (GPUs) are building the simulation. Because many of the CPUs are essentially idling while the GPUs are doing the heavy compute work, this approach enables Randles’ team to access the data needed to train the AI/ML with little computing overhead.
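
The article doesn’t include Randles’ code, but the general pattern—training in transit while the simulation runs—can be sketched in a few lines of Python. Below, one thread stands in for the GPU-side simulation and a second performs an incremental training update on each snapshot as it arrives, so nothing is ever written to disk. The toy linear model, queue size, and all names are illustrative assumptions, not her lab’s pipeline.

```python
# Minimal sketch of in-transit training: a "simulation" thread produces
# snapshots while a worker thread trains on each one as it arrives, so no
# snapshot ever has to be stored. All names and the toy model are illustrative.
import queue
import threading

import numpy as np

snapshots = queue.Queue(maxsize=4)  # small buffer: train in step with the run
weights = np.zeros(8)               # toy linear model, updated by SGD


def simulate(n_steps):
    """Stand-in for the GPU-side simulation, emitting one snapshot per step."""
    rng = np.random.default_rng(0)
    for _ in range(n_steps):
        x = rng.normal(size=8)               # features derived from sim state
        y = x @ np.arange(8) + rng.normal()  # quantity the model should learn
        snapshots.put((x, y))                # hand off; never persisted to disk
    snapshots.put(None)                      # sentinel: run finished


def train():
    """CPU-side worker: one SGD update per snapshot, then discard the data."""
    global weights
    lr = 0.01
    while (item := snapshots.get()) is not None:
        x, y = item
        grad = (x @ weights - y) * x  # gradient of squared error, one sample
        weights -= lr * grad


sim = threading.Thread(target=simulate, args=(5000,))
trainer = threading.Thread(target=train)
sim.start(); trainer.start()
sim.join(); trainer.join()
print("learned weights:", np.round(weights, 2))  # approaches 0, 1, ..., 7
```

The small, bounded queue is the design point: training keeps pace with data generation rather than accumulating output that could never be stored.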

Federated learning is another approach. Rather than bring data from multiple computer systems to a central unit for processing, it sends the AI/ML algorithm to the many computers involved in the project. Essentially, this method learns from subsets of independent data and then consolidates those learnings into a model or simulation.
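
As a rough sketch of how that consolidation works, the following Python example implements federated averaging over three hypothetical sites: each site takes a few gradient steps on data that never leaves it, and only the resulting model parameters are averaged centrally. The linear model and synthetic site data are assumptions for illustration, not any particular lab’s setup.

```python
# Minimal federated-averaging sketch: raw data stays at each site; only the
# locally fitted parameters travel to a central aggregator.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0, 0.5])

# Three "sites," each holding data that never leaves the site.
site_data = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + 0.1 * rng.normal(size=200)
    site_data.append((X, y))

global_w = np.zeros(3)
for _ in range(10):                      # communication rounds
    local_models = []
    for X, y in site_data:
        w = global_w.copy()
        for _ in range(20):              # a few local gradient steps per round
            grad = X.T @ (X @ w - y) / len(y)
            w -= 0.1 * grad
        local_models.append(w)           # only parameters leave the site
    global_w = np.mean(local_models, axis=0)  # central server averages models

print("federated estimate:", np.round(global_w, 2))  # ~ [2.0, -1.0, 0.5]
```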

Automation and transparency are improving

Another growing trend is using AI/ML to fill in data gaps, which Varshney says can be very helpful. This application could not only inform research by making data-based assumptions, but also identify assumptions that may need resolution early—long before, in drug development, an Investigational New Drug package is assembled. Using this approach to flag early-stage processes that are not amenable to scaleup is another example of how AI can save organizations time and money. “Lab managers have so much information about the asset,” she notes. Leveraging it through AI/ML may provide early insight into plausible risks that could emerge later in an asset’s development cycle. 
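
In practice, gap-filling of this kind is often handled as imputation. Below is a minimal sketch using scikit-learn’s k-nearest-neighbors imputer, which estimates each missing value from the most similar complete records; the assay columns and values are invented for illustration and are not VeriSIM Life’s method.

```python
# Minimal imputation sketch: KNNImputer fills each gap using the most
# similar complete rows. Columns and values are invented for illustration.
import numpy as np
from sklearn.impute import KNNImputer

# Rows are samples; columns might be, say, solubility, permeability, logP.
measurements = np.array([
    [1.2, 0.8, 2.1],
    [1.1, np.nan, 2.0],   # gap: permeability never measured for this sample
    [3.5, 2.9, 0.4],
    [3.4, 3.0, np.nan],   # gap: logP missing here
])

imputer = KNNImputer(n_neighbors=2)
filled = imputer.fit_transform(measurements)
print(filled)  # each gap replaced by the average of the two most similar rows
```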

Newer AI/ML solutions are transparent, clearly elucidating how their conclusions were derived. VeriSIM Life, a drug development company, created a translational index that helps users understand how the machine evaluates specific experiments. The index can then recommend subsequent experiments with an eye toward minimizing risks and redundancy. “We wanted to create a scoring system that’s relatable to users…so you don’t have to be an AI/ML expert,” Varshney says. “The goal is to make this the normalization standard for all the preclinical aspects (of our work).”

What remains to be accomplished is what Varshney calls “the proper integration of human intelligence with artificial intelligence.” She’s talking about the incorporation of inherent knowledge to enhance efficacy and enable more accurate predictions. “AI is only as good as the data,” Varshney says, “but it’s not just about the data.”

As lab managers incorporate AI/ML applications into their workflow, it’s important to understand the assumptions and biases built into the algorithms, and therefore how their outcomes are derived. Does the data being analyzed now match the type of data the application was trained on? Are the inputs appropriate for this particular application? Is there enough variety in the training data for the results to be valid? Do the learnings of the ML application continue to be accurate? Do they make sense for the context in which they are being used? Are there relevant conditions or data points that should be incorporated into the algorithm or considered during the analysis?

These questions show the need for human interaction and validation throughout the applications’ lifecycles if they are to consistently add value to the lab. Implementing AI/ML applications in the lab is not a once-and-done endeavor. Even with the more powerful and intuitive applications that are emerging, human involvement remains vital.
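
The first of those questions—whether the data being analyzed still matches what the application was trained on—lends itself to a simple automated check. As a hedged sketch, a two-sample Kolmogorov–Smirnov test can flag a feature whose incoming distribution has drifted from the training distribution; the threshold and synthetic data below are illustrative assumptions.

```python
# Minimal drift-check sketch: compare an incoming feature's distribution
# against the training distribution with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
training_feature = rng.normal(loc=0.0, scale=1.0, size=1000)  # training data
incoming_feature = rng.normal(loc=0.6, scale=1.0, size=200)   # shifted: drift

stat, p_value = ks_2samp(training_feature, incoming_feature)
if p_value < 0.01:
    print(f"Possible drift (KS={stat:.2f}, p={p_value:.3g}); re-validate.")
else:
    print("Incoming data looks consistent with the training distribution.")
```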

AI/ML tools are emerging to fill in data gaps, streamline decision-making, and add insights to analyses, increasing their value to life science labs. But, while they are powerful tools that can do much in a lab, not even the best can replace human insights and judgment.