The complexity of the science that modern labs are tasked with investigating is constantly increasing. To explore that complexity more effectively, labs benefit from implementing AI technology to speed up decision-making, improve workflows, and spot subtle trends in data. However, many labs are unsure of how to approach AI applications. For a broader perspective on the impacts of AI technology on labs, we spoke with Christian Olsen, associate vice president, industry principal, biologics at Dotmatics.
Q: Can AI experimentation replace the lab? If not, in what ways do you see AI enhancing hands-on experimentation?
A: Physical experimentation remains irreplaceable when it comes to validating hypotheses, exploring biological complexity, and addressing unexpected outcomes. One area where AI seems especially valuable is augmenting experimental workflows—identifying promising directions, narrowing the parameter space, and accelerating iteration cycles. Rather than eliminating benchwork, AI enhances it by optimizing where time and resources are spent. Think of AI as an experimental partner or force multiplier—one that facilitates evidence-based prioritization without removing the judgment of the scientist or the lab's critical function.

Q: What are the biggest challenges in integrating AI into lab workflows, and how can lab teams overcome them?
A: The biggest challenges include unstructured data, lack of standardization, and poor interoperability of laboratory systems. Most research organizations still grapple with siloed datasets that prevent them from leveraging the full potential of their historical and experimental data. Even when the data are present, they are often in unstructured formats or incompatible systems that AI tools cannot easily process.
Overcoming these barriers requires investing in infrastructure that supports structured, FAIR-aligned data capture and builds collaborative bridges between domain scientists, informatics professionals, and data engineers. At the same time, labs must promote a culture of digital literacy, one in which scientists understand the value of AI, how it works, and the conditions under which it works best. This is why training and change management are essential priorities for ensuring AI tools are adopted and effectively integrated into research pipelines.
Q: Can you explain how wet lab data is structured, standardized, and integrated in a way that continuously improves AI models?
A: The key component is data governance—setting clear rules to keep data organized, consistent, and easy to track. Lab-in-a-Loop relies on appropriately defined data pipelines that transform raw lab output into a form consumable by AI. This means making sure data from different tools and tests all follow the same format, using consistent labels and details, and storing everything in one system where it can be easily accessed in real time or in the future.
In addition to standardization, traceability is crucial. To learn effectively, AI must understand the context in which data were generated. This means linking data back to experimental protocols, laboratory conditions, and reagents used. All this information enables AI models not only to make predictions, but also to offer explanations for the predictions and identify patterns that would otherwise be overlooked.
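To make that concrete, here is a minimal, purely illustrative sketch in Python of what such a standardized, traceable record could look like. The AssayRecord class and its field names are assumptions chosen for the example, not a Dotmatics or Lab-in-a-Loop schema; the point is that each measurement carries consistent labels plus the experimental context an AI model needs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict


@dataclass
class AssayRecord:
    # Illustrative field names only, not a prescribed standard.
    sample_id: str
    assay_type: str                 # standardized vocabulary, e.g. "ELISA"
    protocol_id: str                # links the data back to the experimental protocol
    instrument_id: str              # which instrument produced the data
    reagent_lots: Dict[str, str]    # reagent name -> lot number
    conditions: Dict[str, float]    # lab conditions, e.g. {"temperature_C": 37.0}
    readings: Dict[str, float]      # analyte -> value in consistent units
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# One record captured in the same shape regardless of which instrument produced it.
record = AssayRecord(
    sample_id="S-0042",
    assay_type="ELISA",
    protocol_id="PROT-2024-117",
    instrument_id="PLATE-READER-3",
    reagent_lots={"capture_antibody": "LOT-88A"},
    conditions={"temperature_C": 37.0, "pH": 7.4},
    readings={"IL6_pg_per_mL": 12.7},
)
print(record.protocol_id, record.readings)
```

Because every record carries the same labels and its own context, downstream models can be trained on pooled data without guessing which protocol, instrument, or reagent lot produced a given value.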
Q: Beyond reducing costs and timelines, how is AI helping improve the success rates of drug discovery, and what factors still limit its full potential?
A: AI contributes to higher success rates by refining candidate selection, enabling deeper insight into molecular formats and mechanisms, and facilitating earlier detection of off-target effects. It's also being applied in predictive toxicology, patient stratification, and virtual screening. By reducing trial-and-error, AI enables scientists to exclude dead ends earlier (i.e., to fail fast) and focus time and resources on the most promising candidates.
What still limits AI's full potential is largely fragmentation: disconnected workflows between wet and dry labs cause significant inefficiencies, especially during data handoffs. Manual processes such as spreadsheets and email introduce delays, data loss, and reproducibility challenges. Without seamless integration between experimental and computational systems, dry lab teams often receive incomplete or poorly annotated data, leading to time-consuming reformatting. This disconnect hinders collaboration between biologists, chemists, and computational scientists, impeding the sharing of insights and slowing discovery.
Q: What do lab managers need to know or learn about AI tools to make good decisions about integrating them into their labs?
A: Lab managers are not required to be data scientists, but they should be familiar with the foundational principles behind AI: what it can do, what it requires, and what its limitations are. This involves a big-picture understanding of data integrity, data standardization, and system compatibility. For example, if data lack metadata or are recorded in formats that other systems cannot interpret, even the most advanced AI models won't deliver the meaningful insights scientists need.
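As a purely hypothetical illustration of that point, a simple check like the one below is the kind of gate an informatics team might place in front of an AI pipeline: records missing required metadata are held back before they ever reach a model. The required fields shown are invented for the example, not a prescribed standard.

```python
from typing import Dict, Set

# Minimal sketch of a metadata gate in front of an AI pipeline.
# These required fields are hypothetical examples only.
REQUIRED_METADATA: Set[str] = {"sample_id", "assay_type", "protocol_id", "units"}


def missing_metadata(record: Dict[str, object]) -> Set[str]:
    """Return the required metadata fields absent from a record."""
    return REQUIRED_METADATA - set(record.keys())


record = {"sample_id": "S-0042", "assay_type": "ELISA", "value": 12.7}
gaps = missing_metadata(record)
if gaps:
    print("Record held back; missing metadata:", sorted(gaps))
```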
It's also important to assess the usability of AI tools. Tools that are too complex or not properly integrated into existing workflows will not gain traction. Managers must think about how well AI tools facilitate their specific scientific goals, how easy they are for teams to use, and what support is in place to ensure proper adoption. Fostering a culture of collaboration between scientists and informatics professionals across organizations is critical for long-term success.
Q: Looking ahead, how do you see AI shaping the future of drug discovery over the next decade? What advancements or challenges do you anticipate?
A: As the life sciences race into increasingly complex territory, from cell and gene therapies to mRNA vaccines and multispecific antibodies, the sheer volume and variety of research data are exploding. Today, however, most labs still rely on legacy systems that weren't built for this scale or complexity. To stay competitive, accelerate innovation, and drive impactful discoveries, life sciences companies will need a new or improved foundation: one that unifies data, adapts to evolving science, and connects every team and tool across the full R&D lifecycle. We expect to see broad adoption of multimodal scientific intelligence platforms built specifically to bridge the gaps that traditional systems leave behind.
Over the next decade, AI will become embedded throughout all phases of drug discovery, from target selection and molecular design to clinical strategy. We anticipate the use of autonomous labs, more sophisticated predictive models, and increased integration of multiomic and patient data. But advancing this future will also require addressing challenges in ethical data use, model validation, and equitable access to AI infrastructure. Institutions will need to prioritize transparency in AI decision-making, establish guidelines for responsible data governance, and invest in building talent in both science and data disciplines. Ultimately, AI will not replace human expertise; it will amplify it, enabling researchers to navigate biological complexity with greater breadth, clarity, speed, and precision.
AI is a powerful tool that enables labs to tackle increasingly complex science and drive meaningful results. Lab managers need to partner with data scientists to integrate systems that provide the structured data AI tools require. Strong collaboration around that data will enable labs to use AI to drive important new discoveries.