Looking Forward: How AI-Powered Integrated Data Analytics Could Strengthen LIMS

Analytic capabilities, cloud computing, and explainable AI are poised to create the next step in LIMS evolution


Data analytics is being increasingly integrated into lab information management systems (LIMS), strengthening existing capabilities and adding others. Empowered by artificial intelligence (AI) and machine learning (ML), the latest LIMS iterations claim to increase accuracy and the speed of analysis. Future advancements are poised to incorporate generative AI (genAI) to provide what proponents hope eventually may become the equivalent of a second set of expert eyes to analyze and contextualize the results. Following are some key ways that LIMS are poised to evolve thanks to transparent AI, genAI, and cloud computing.

Boosting analytic capabilities

Embedded analytics is a key feature of modern LIMS that relies on findable, accessible, interoperable, and reusable (FAIR) data. This minimizes the need to export data from LIMS to other analytic systems, thus saving time. Embedding AI in LIMS allows users to act on decisions made through AI “and for the LIMS to react accordingly, as defined by the configured workflows,” says David Hardy, senior manager for data analytics and AI enablement at Thermo Fisher Scientific.

“This is about embedding AI within LIMS so it’s instant and part of an everyday process,” he adds. That, in turn, improves overall lab efficiency. Incorporating AI into data analytics allows existing tools to analyze more data and identify less obvious correlations that less sophisticated approaches may miss.
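As a simple illustration of the kind of embedded decision-making described above, the Python sketch below flags an out-of-trend result and routes it to a retest step, letting the system react as a configured workflow would. All names, data, and thresholds here are hypothetical, not a real vendor's API:

```python
# Hypothetical sketch: an AI-style quality check embedded in a LIMS workflow.
# The scoring rule, threshold, and workflow names are illustrative assumptions.

def anomaly_score(result: float, history: list[float]) -> float:
    """Score how far a new result sits from the historical mean, in std devs."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = var ** 0.5 or 1.0  # guard against zero spread
    return abs(result - mean) / std

def route_sample(result: float, history: list[float], threshold: float = 3.0) -> str:
    """Let the LIMS react to the model's decision, as a configured workflow would."""
    if anomaly_score(result, history) > threshold:
        return "flag-for-retest"   # out-of-trend result triggers a retest workflow
    return "auto-approve"          # in-trend result flows straight through

history = [10.1, 9.8, 10.0, 10.2, 9.9]
print(route_sample(10.05, history))  # → auto-approve
print(route_sample(14.0, history))   # → flag-for-retest
```

The point of the sketch is the coupling: the model's verdict feeds directly into a workflow decision, with no manual export step in between.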

Another increasingly important feature, predictive analytics, takes insights a step further by using historical data and ML to predict outcomes and trends. These predictions, Hardy continues, “allow labs to anticipate issues, optimize resource allocation, and make informed decisions more quickly . . . which translates directly into cost savings and increased productivity.”

“The newest predictive models can also provide real-time analysis,” says Roosbeh Sadeghian, PhD, associate professor of data analytics at Harrisburg University. For lab managers, predictive analytics can flag pending bottlenecks before they occur and forecast future resource needs, for example.
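To make the bottleneck idea concrete, here is a minimal, hypothetical sketch of predictive analytics for lab workload: it fits a simple least-squares trend to a week of incoming sample counts and projects the next day's load. Real predictive models are far richer, and the data here is invented for illustration:

```python
# Illustrative sketch: projecting tomorrow's sample load from a recent trend.
# A steadily climbing intake is the kind of signal that hints at a bottleneck.

def fit_trend(counts: list[int]) -> tuple[float, float]:
    """Ordinary least-squares slope and intercept over day indices 0..n-1."""
    n = len(counts)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(counts) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, counts))
             / sum((x - x_mean) ** 2 for x in xs))
    return slope, y_mean - slope * x_mean

def predict_next(counts: list[int]) -> float:
    """Extrapolate the fitted line one day past the observed window."""
    slope, intercept = fit_trend(counts)
    return slope * len(counts) + intercept

# Daily incoming samples over a week (invented data).
daily = [40, 44, 47, 52, 55, 61, 64]
print(round(predict_next(daily)))  # → 68, the projected load for tomorrow
```

If the projection exceeds the lab's daily throughput, a manager could reallocate instruments or staff before the queue actually forms.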

Industry-wide, prediction capabilities will continue to improve, Sadeghian says. Citing drug discovery, an area that many researchers are working to elevate with AI, he continues, “We have lots of tools in the ML and AI area, and many [compound] libraries that can identify potential drug candidates and even predict their likelihood of success.”

The evolution of AI-enabled analytics is supplemented by the growth of a supporting technology: cloud computing.

Cloud-based, AI-enabled analytics is growing

“The cloud isn’t new,” acknowledges Andre L’Huillier, PhD, assistant professor of computational social science at Harrisburg University, but it is bringing qualitative changes to data analysis. “The idea of having everything very accessible, and then having these intelligent systems that integrate all of those aspects, is where we’re starting to get to new places.”
Combining the analytic capabilities of AI-powered LIMS with the benefits of cloud computing democratizes access to these insights. After all, running AI models demands computational power. Many (if not most) labs will not have the budget, space, or in-house talent to run on-premises AI models. With a cloud LIMS, computation and storage needs can scale naturally with the lab. Overhead like security, upgrades, and maintenance will also be handled by the LIMS vendor.

Of course, all the benefits of AI remain hindered without transparency into how these models arrive at their output. This is why AI researchers are seeking to fine-tune explainable AI.

Soon, explainable AI

Historically, AI models have arrived at their outputs directly from data, out of view of their developers. As IBM points out, “Not even the engineers or data scientists who create the [AI] algorithm can understand or explain what exactly is happening inside them, or how the AI algorithm arrived at a specific result.”1

Explainable AI, when it is commercialized, promises to add the transparency that earlier versions of AI sorely lacked. With explainable AI, lab managers finally will be able to see the reasoning and biases an algorithm used to reach its conclusions (what data was considered and how it was weighted, for example) to ensure the system is working properly, meet regulatory standards, or challenge data outcomes. Such transparency is instrumental to enable users to trust AI’s conclusions and to explain them to others.

Current explainable AI doesn’t yet provide full transparency, L’Huillier cautions. It does, however, offer insight into the foundations of AI-based models. For example, because ML algorithms continue learning after they are trained, their conclusions may begin to drift as they are exposed to more and more data, and over time that drift can change the model’s outputs. Explainable AI lets even non-technical users check for drift and confirm that the key variables still carry their original weight; in traditional modeling, this would be the rough equivalent of checking whether a regression’s coefficients have shifted.
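A toy version of such a drift check might look like the following sketch, which compares the weight each input carries now against a training-time baseline. The feature names, weights, and tolerance are assumptions for illustration, not any specific vendor's tooling:

```python
# Illustrative drift check in the spirit of explainable AI: has the influence
# of a model's key inputs shifted away from its training-time baseline?

def weight_drift(baseline: dict[str, float], current: dict[str, float]) -> dict[str, float]:
    """Relative change of each feature weight versus its training baseline."""
    return {f: abs(current[f] - w) / abs(w) for f, w in baseline.items()}

def drifted_features(baseline, current, tolerance: float = 0.25):
    """Features whose influence has moved more than the allowed tolerance."""
    return sorted(f for f, d in weight_drift(baseline, current).items() if d > tolerance)

# Invented feature weights for an assay-quality model.
baseline = {"temperature": 0.80, "pH": 0.45, "operator": 0.10}
current  = {"temperature": 0.78, "pH": 0.20, "operator": 0.32}
print(drifted_features(baseline, current))  # → ['operator', 'pH']
```

Here "pH" has lost influence and "operator" has gained it, the kind of shift a lab manager would want surfaced rather than buried inside the model.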

When explainable AI is ready for real-world applications in the lab, it may open the door to the widespread adoption of genAI.

The role of genAI

GenAI differs from conventional AI in that it is designed to create new content by synthesizing data, rather than only analyzing existing data. With platforms like ChatGPT, genAI is evolving from a curiosity into a platform that can support many types of tasks, including analysis. Consequently, genAI will likely become a key feature in data analytics tools. With it, scientists may be able to query their data in natural language, making analysis faster and more intuitive. Of course, explainable AI will be necessary for genAI analysis solutions to be fully adopted: current genAI is prone to hallucinations, which runs counter to the reliability and consistency required for scientific work.
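As a rough sketch of what natural-language querying could look like, the example below substitutes a trivial keyword matcher for the genAI model that would translate a question into a structured query. The sample table and matching logic are entirely invented for illustration:

```python
# Hypothetical sketch of natural-language querying over lab data. The keyword
# matcher stands in for a genAI model that would translate the question into
# a structured query; nothing here is a real LIMS or model API.

samples = [
    {"id": "S1", "assay": "HPLC", "status": "failed"},
    {"id": "S2", "assay": "HPLC", "status": "passed"},
    {"id": "S3", "assay": "ELISA", "status": "failed"},
]

def ask(question: str) -> list[str]:
    """Map a plain-English question to a filter over the sample table."""
    q = question.lower()
    status = "failed" if "fail" in q else "passed" if "pass" in q else None
    assay = next((s["assay"] for s in samples if s["assay"].lower() in q), None)
    return [s["id"] for s in samples
            if (status is None or s["status"] == status)
            and (assay is None or s["assay"] == assay)]

print(ask("Which HPLC samples failed?"))  # → ['S1']
```

In a genuine deployment the translation step is where explainability matters most: users need to see which structured query the model actually ran, not just the answer it returned.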

What lies next?

The next challenge in advancing AI-enabled LIMS will be acceptance, followed by technical harmonization, Sadeghian says. Convincing people that AI can help improve their work is “a bit tricky,” he notes, and that skepticism leaves an innovation gap that slows the uptake of many technologies.

Nonetheless, AI is becoming a valued tool in data analytics for life sciences labs. “By leveraging AI, business intelligence, and generative AI technologies, we are paving the way for more efficient, accurate, and insightful laboratory operations,” Hardy asserts.

References

  1. IBM, “What is explainable AI?” https://www.ibm.com/topics/explainable-ai

About the Author

  • Gail Dutton has covered the business of biotech since the industry’s early days, writing features, whitepapers, and other communications. She has presented comments at the National Defense University and the Genopole Conference near Paris, and writes regularly for the EBD Group, GEN, and other publications. 
