Stuart Whayman, president of corporate markets at academic publisher Elsevier, offers a practical perspective on applying artificial intelligence (AI) tools in the wet lab. As he notes in a conversation with Lab Manager, “Researchers will typically be using pre-built models with ChatGPT-style interfaces for use cases like analyzing the results of their experiments or planning an experiment and comparing methodologies.” With accuracy and reproducibility as non-negotiables, Whayman emphasizes that lab managers must ensure the quality and security of the data they feed into AI systems while keeping human oversight in the loop.
Here are seven quick, actionable tips for using AI in your lab:
1. Validate your data sources
Researchers “must be sure that any lab data inputted into an LLM [large language model] interface is trustworthy, traceable, and verified—including data from external sources,” Whayman says. This means confirming experimental data is sound and validating inputs from collaborators such as contract research organizations. Poor-quality or unverified inputs can quickly lead to flawed outputs, magnifying errors rather than solving problems.
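For teams that script their data handling, a minimal pre-flight check can catch unverifiable inputs before they ever reach an LLM. The sketch below assumes Python with pandas, and the column names (sample_id, assay, value, units, source) are purely illustrative stand-ins for your own schema.

```python
import pandas as pd

# Hypothetical required schema for lab data; adapt to your own records.
REQUIRED_COLUMNS = {"sample_id", "assay", "value", "units", "source"}

def screen_dataset(path: str) -> pd.DataFrame:
    """Basic pre-flight checks before any rows are shared with an LLM."""
    df = pd.read_csv(path)

    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Dataset is missing required columns: {sorted(missing)}")

    if df["value"].isna().any():
        raise ValueError("Dataset contains empty measurement values")

    untraceable = df[df["source"].isna()]
    if not untraceable.empty:
        raise ValueError(f"{len(untraceable)} rows have no recorded source and cannot be traced")

    return df
```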
2. Apply robust data hygiene standards
Whayman advises that “whenever third-party data is used, lab researchers must check that the data adheres to established guardrails and standards like FAIR data, as well as policies relating to responsible AI use.” The FAIR principles—Findable, Accessible, Interoperable, and Reusable—support reproducibility and prevent skewed AI-generated insights caused by incomplete or inconsistent data.
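One lightweight way to act on this is to screen a third-party dataset's metadata record for FAIR-relevant fields before using it. The sketch below is illustrative only: the field names and their mapping to FAIR principles are assumptions to adapt to whatever metadata standard your lab follows.

```python
# Hypothetical mapping of metadata fields to FAIR principles; names are illustrative.
FAIR_CHECKS = {
    "Findable": ["identifier", "title", "keywords"],
    "Accessible": ["access_url", "license"],
    "Interoperable": ["format", "schema"],
    "Reusable": ["provenance", "license"],
}

def fair_gaps(metadata: dict) -> dict:
    """Return the FAIR principles whose expected metadata fields are absent or empty."""
    return {
        principle: [f for f in fields if not metadata.get(f)]
        for principle, fields in FAIR_CHECKS.items()
        if any(not metadata.get(f) for f in fields)
    }

record = {"identifier": "doi:10.xxxx/example", "title": "Assay results", "format": "csv"}
print(fair_gaps(record))  # lists each principle with its missing fields
```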
3. Use private, secure AI models
According to Whayman, researchers should make sure that any LLM they employ is private and secure, hosted either locally on hardware the organization owns or in enterprise, sandboxed instances from LLM providers. These secure models help protect sensitive intellectual property and minimize compliance concerns.
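In practice, this often means pointing your tooling at an in-house endpoint rather than a public service. The sketch below assumes a locally hosted model that exposes an OpenAI-compatible API; the URL, API key placeholder, and model name are all stand-ins for your organization's own deployment.

```python
from openai import OpenAI

# Point the client at a locally hosted, OpenAI-compatible endpoint instead of a public service.
# The base URL and model name below are placeholders for whatever your organization runs in-house.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-for-local")

response = client.chat.completions.create(
    model="local-llm",
    messages=[
        {"role": "system", "content": "You are assisting with experiment planning."},
        {"role": "user", "content": "Compare two buffer-exchange methodologies for protein purification."},
    ],
)
print(response.choices[0].message.content)
```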
4. Check for responsible AI compliance
Adopting Whayman’s guidance means putting policies in place that align with both internal governance and external regulations. Responsible AI compliance safeguards data integrity and ensures that ethical and legal standards are met at every step of AI use.
5. Keep a human in the loop
“LLMs, when used in labs and R&D organizations, should not be blindly trusted or treated as a black box,” Whayman warns. Even the most advanced LLMs can generate plausible but incorrect answers, so always have domain experts review AI outputs before they influence decisions; this validation step safeguards experimental integrity.
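One way to enforce this in a scripted workflow is a simple review gate that holds AI output until a named expert has approved it. The sketch below is a minimal illustration; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewedOutput:
    """Holds an AI-generated result until a named domain expert signs off on it."""
    prompt: str
    ai_output: str
    approved: bool = False
    reviewer: str | None = None
    reviewed_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        """Record the expert's sign-off with a timestamp."""
        self.approved = True
        self.reviewer = reviewer
        self.reviewed_at = datetime.now(timezone.utc)

    def release(self) -> str:
        """Refuse to release the output downstream until it has been reviewed."""
        if not self.approved:
            raise PermissionError("Output has not been reviewed by a domain expert")
        return self.ai_output
```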
6. Leverage retrieval-augmented generation (RAG)
With current AI technology, hallucinations cannot be fully eliminated, but there are techniques to reduce them. Whayman notes that “using data frameworks such as retrieval-augmented generation can significantly reduce hallucination risk.” RAG grounds the model’s responses in trusted, context-specific sources provided by the user, boosting accuracy and reliability.
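The basic RAG loop is: retrieve the most relevant passages from your trusted sources, then pass them to the model as context for its answer. The sketch below uses TF-IDF retrieval from scikit-learn purely for illustration (production systems typically use embedding-based search), and the documents and prompt wording are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Trusted, context-specific sources supplied by the lab (illustrative snippets).
documents = [
    "SOP-12: Buffer exchange is performed by dialysis at 4 C for 16 hours.",
    "SOP-7: Spectrophotometer calibration uses a blank of the assay buffer.",
    "Run log 2024-03-02: Protein yield dropped after switching to spin columns.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank the trusted documents by TF-IDF similarity to the question and keep the top k."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)
    question_vec = vectorizer.transform([question])
    scores = cosine_similarity(question_vec, doc_matrix).ravel()
    return [documents[i] for i in scores.argsort()[::-1][:k]]

question = "Why did protein yield drop, and what does our SOP say about buffer exchange?"
context = "\n".join(retrieve(question))

# The grounded prompt below is what would be sent to the (private) model.
prompt = f"Answer using only the context provided.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```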
7. Document your AI processes
Maintaining a record of how, when, and why AI is used in your workflows improves reproducibility and supports compliance audits. Thorough documentation also aids in refining processes and troubleshooting issues over time.
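If your lab scripts its AI usage, a small append-only audit log is one way to keep that record. The sketch below writes JSON lines to a local file; the file location, field names, and example values are assumptions to adapt to your own records system.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_usage_log.jsonl")  # illustrative location; adapt to your records system

def log_ai_use(purpose: str, model: str, prompt: str, data_sources: list[str], reviewer: str) -> None:
    """Append one auditable record of how, when, and why an AI tool was used."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "purpose": purpose,
        "model": model,
        "prompt": prompt,
        "data_sources": data_sources,
        "reviewed_by": reviewer,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_use(
    purpose="Compare extraction methodologies",
    model="local-llm",
    prompt="Summarize differences between methods A and B",
    data_sources=["run_log_2024-03-02.csv"],
    reviewer="j.doe",
)
```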