The terms artificial intelligence (AI) and machine learning (ML) draw varied responses from lab scientists, ranging from "That's not a concern for me" to excitement to fear that organizations are building up these technologies to replace them. Regardless of the reaction, AI/ML should be on the radar of every scientist. While far from advanced enough to replace labs full of scientists, AI/ML enable cross-project collaboration and help achieve new breakthroughs.
Machine learning and artificial intelligence are not new
The current excitement around AI/ML can make it difficult to understand what’s real. While the term “AI” is used to describe many different technologies—some more accurately than others—the general definition of AI is technology that enables computers and machines to simulate human learning, problem solving, and comprehension. ML is a subset of AI that uses mathematical algorithms to extract patterns from specific datasets and “learn” from the data to provide analyses, predict future trends, and perform other applications. While these technologies have been evolving for decades, with the first foundational methods tracing as far back as the 1950s, modern AI/ML technologies have only been widely adopted in labs within the past decade or two.
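The idea of "learning from data" described above can be made concrete with a minimal sketch: fitting a straight line to observations by least squares, then using the fitted pattern to predict an unseen value. The dataset here is invented for illustration (hypothetical concentration vs. instrument-signal readings), and this is only the simplest instance of the pattern-extraction idea, not a representation of any particular lab tool.

```python
# Minimal sketch of "learning from data": an ordinary least-squares line fit.
# All numbers below are hypothetical, for illustration only.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the squared prediction error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: concentration (xs) vs. instrument signal (ys)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

slope, intercept = fit_line(xs, ys)

# The "learned" pattern is then used to predict an unobserved point
predicted = slope * 6.0 + intercept
```

Modern ML models differ mainly in scale and flexibility, not in kind: more parameters, more data, and nonlinear structure, but the same principle of fitting patterns in training data and extrapolating from them.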
The revolution of recent years has been a result of the growing availability of large datasets, new algorithmic techniques, and increased computing power. This combination has meant a rapid increase in AI/ML tools and investigations into where and how they can be applied. In the late 2000s, organizations started putting together “proof-of-concept” projects to identify where AI/ML could be applied to provide improvements in efficiency and productivity, with the end goal of accelerating innovation and improving the ROI of R&D.
The impact of AI
The full impact of AI in scientific R&D has yet to emerge, but we are starting to see results. The 2024 Nobel Prize in Physics was awarded to scientists who developed ML technology to predict the 3D structure of proteins based solely on their amino acid sequence. The 200 million predicted protein structures released to the science community hold promise for structural biology, drug discovery, protein design, protein function prediction, and more.
An increasing number of AI-discovered drugs and vaccines have been entering clinical trials over the last five years. AI has been used to identify drug targets, design small molecules, and repurpose existing drugs. Phase I success rates for these candidates are 80–90 percent, substantially higher than the historic industry average of 66–76 percent, while Phase II results, on a limited sample size, are comparable to historic industry averages (40 percent).1
Of course, while AI offers great value in some applications, it still lags behind humans in others. For instance, at one scientific conference I attended, leading scientific informatics organizations presented projects in which AI applications suggested new molecular space for exploration within hours, but they also admitted that the best results came from a team of brainstorming scientists with expertise in the area. Ultimately, AI excels at pattern recognition, while humans still perform better at creative thinking and arriving at novel solutions.
Be curious and choose a balanced approach
A scientist’s greatest asset is their natural curiosity, which AI/ML cannot (yet) replace. AI/ML will, however, continue to shape the way research is conducted. To stay ahead, a strong foundation in data literacy will be increasingly important. Since AI models are only as reliable as the data they are trained on, recognizing issues related to data integrity, biases, and limitations will allow a critical assessment of AI-generated results. Maintaining scientific rigor in the application of AI/ML is just as crucial as conducting experiments in the lab.
At the same time, it is important to evaluate when and where AI is truly beneficial. Not every scientific problem requires AI, and in some cases, traditional statistical methods may be just as effective, if not more reliable. Approaching AI with a critical mindset—questioning its necessity and potential pitfalls—ensures that it is used as a tool to enhance research rather than introduce unnecessary complexity.
While AI/ML offer exciting possibilities, they do not replace scientific expertise. These technologies rely on patterns and historical data but lack the depth of reasoning, creativity, and contextual understanding that human scientists bring to discovery and development. The best approach is to remain open-minded but skeptical, using AI where it can enhance efficiency while continuing to rely on scientific intuition and peer review to validate results.
The future of scientific discovery will be shaped by those who understand how to harness these technologies effectively while maintaining the fundamental principles of research and innovation.
References
1. Jayatunga, M. K. P., Ayers, M., Bruens, L., Jayanth, D., & Meier, C. (2024). How successful are AI-discovered drugs in clinical trials? A first analysis and emerging lessons. Drug Discovery Today, 29(6), 104009. https://doi.org/10.1016/j.drudis.2024.104009