
When and When Not to Use AI Chatbots in the Lab

Exploring the safest and riskiest lab use cases for LLM-based chatbots

Written by Holden Galusha | 5 min read

At the 2025 Lab Manager Leadership Summit, lab leaders discussed both the promises and pitfalls of AI chatbots like ChatGPT. Their conversation highlighted safe, risky, and outright hazardous use cases that lab managers can weigh when deciding whether and how to integrate these tools into their labs.

Safe use cases for chatbots

While there should always be a human in the loop, some use cases carry minimal risk of negative downstream effects because they are not directly tied to research or product quality, or to lab operations.


Brainstorm troubleshooting approaches

There are two ways in which AI can help with troubleshooting:

  1. Performing basic internet searches, and
  2. Considering all possible avenues for solutions

Every major AI chatbot on the market—ChatGPT, Gemini, Claude, etc.—has internet browsing capabilities. Armed with just a brief description of the problem, the bot can synthesize an outline of the possible causes and further diagnostic steps derived from the top results across the internet.

Additionally, chatbots excel at divergent thinking, which makes them particularly well-suited for ideating a wide range of causes. In fact, according to a 2023 paper published in Scientific Reports, large language models outperformed human participants on average in the alternate uses task, a test of how well an individual can generate novel ideas. (That said, the researchers also found that the “best humans” still outperform AI.) If you’re trying to track down the cause of an obscure technical issue, a chatbot can be a very helpful resource.

Writing business cases and capital requests

Writing business cases and capital requests is a necessary—but not exactly fun—part of being a lab manager. The more you can offload this work, the more time you’ll have for the science. “A lot of times,” remarked one roundtable attendee, “you can just put the basics of your capital request in a chatbot and tell it to [re]formulate it in a way that resonates with the finance team or the leadership team. And it can take your core concepts and reframe them in a way that works for other disciplines in your organization.” Even if the bot’s output still needs to be tweaked, it gives you something to iterate on and can help you approach the capital request through different lenses.


Creating meeting summaries

Making time for all the meetings you’re invited to can be difficult. “Say there’s a meeting that you don’t really need to attend, but you still want to get a feel for,” another attendee remarked. “You can use AI to generate a ‘SparkNotes’ summary of the meeting.”

Drafting emails

Email was another opportunity for AI mentioned at the roundtable. “Microsoft Copilot is very helpful with summarizing emails and drafting quick ones as well,” one attendee said. Feeding the key points of what you’d like to say into a chatbot and instructing it to rewrite them as an easy-to-read, structured email is an easy way to save time. What’s more, some chatbot services, such as ChatGPT, now offer persistent memory, so you can instruct your bot to learn and mimic your writing style across chats, helping it generate emails that sound more like you.
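If your organization permits scripted access, the same pattern works against an LLM API rather than the chat window. Below is a minimal sketch using OpenAI’s Python client; the model name, the example key points, and the word limit are illustrative assumptions, not recommendations of any particular vendor or setting.

```python
# Minimal sketch: turning rough key points into a drafted email via an LLM API.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name is a placeholder for whatever your organization has approved.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

key_points = """
- Centrifuge in Room 204 is down; vendor visit scheduled for Thursday
- Move affected runs to the backup unit in Room 210
- Expect roughly a two-day turnaround delay this week
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Rewrite the user's key points as a brief, professional "
                    "email to lab staff. Keep it under 150 words."},
        {"role": "user", "content": key_points},
    ],
)

print(response.choices[0].message.content)  # always review before sending
```

As with any AI draft, the output is a starting point; the final read-through stays with you.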

Risky use cases for chatbots

There are other use cases for chatbots that may have adverse effects if not used wisely, such as threatening research quality or compromising your understanding of a subject. So, proceed with caution when using AI in these ways:

Summarizing scientific literature

Since the launch of ChatGPT and other applications based on large language models (LLMs), startups have cropped up promising to help scientists research and keep up with developments in their fields by using LLMs to summarize key takeaways from scientific studies. However, LLMs are prone to misunderstanding that research or hallucinating information outright.

If you do use AI to summarize scientific literature, make sure to validate what the bot has told you. In a 2024 Nature article, Andy Shepherd, scientific director at Envision Pharma Group, compared several AI-driven summarization tools. “All the platforms produced something that was coherent and read like a reasonable study, but a few of them introduced errors, and two of them actively reversed the conclusion of the paper,” he said.

Of course, after a certain point, taking the time to confirm AI output will negate the time savings of using it in the first place.

Writing code

AI-assisted software development has been hailed as one of the flagship use cases across industries. While helpful, it’s still a far cry from replacing professional software developers. Paul Bauer, a senior software engineer at Datadog, has experienced as much when using AI: “The code [the chatbot] generated the first time had issues and I had to iterate on the prompts quite a bit . . . when it eventually produced something satisfactory, I needed to heavily refactor the output to make the code ergonomic and production ready.”

But Bauer still considers it a win for AI-assisted coding tools. “[If] you need to write a one-off macro or a script . . . it can be faster to ask [a chatbot] to do the thing for you.” Indeed, scientists working in languages like R or Python may also find AI-assisted coding a boon to their workflows.

Ultimately, there are two principles for using AI to write code:

  1. The scope of the program must be highly constrained and well defined. AI does not navigate ambiguity well; you must be able to articulate exactly how the code should transform that genomic dataset in your bioinformatics workflow, for example.
  2. You must be able to program yourself. Even minute errors in code can lead to badly skewed results (especially in statistical analyses). A rule of thumb: if you cannot read and understand the code, you cannot truly validate its output, and it is not safe to use. After all, even if the first few test runs produce clean results, what happens when an edge case appears that the AI did not account for? Would you know how to adjust the code to handle it? The sketch after this list illustrates both principles.
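To make these principles concrete, below is the kind of narrowly scoped, fully reviewable script that AI-assisted coding tends to handle well. It is a minimal sketch, not production code: the FASTA filename and function names are illustrative, and the empty-sequence guard in gc_content is exactly the sort of edge case an AI draft can omit and a reviewing programmer must catch.

```python
# Minimal sketch of a well-scoped task for AI-assisted coding: compute the GC
# content of each sequence in a FASTA file. Filename and names are illustrative.

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line.upper())
    if header is not None:
        yield header, "".join(chunks)

def gc_content(seq):
    """Fraction of G/C bases; None for the empty-sequence edge case."""
    if not seq:  # an AI draft may divide by zero here; a human must catch it
        return None
    return (seq.count("G") + seq.count("C")) / len(seq)

if __name__ == "__main__":
    for header, seq in read_fasta("samples.fasta"):  # illustrative filename
        gc = gc_content(seq)
        print(f"{header}\t{'n/a' if gc is None else f'{gc:.3f}'}")
```

Note that every branch is short enough to verify by reading, which is precisely what the second principle demands.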

Hazardous use cases for chatbots

As you experiment with chatbots for various tasks, it’s essential to recognize their limitations and potential risks. Two use cases in particular warrant caution or avoidance altogether: performing mathematical operations and analyzing confidential information.

Performing calculations

LLMs were not designed to be calculators. They may return a correct result, but just as often the output is subtly (or glaringly) wrong. “Despite [natural language] advancements, the domain of mathematics presents a distinctive challenge, primarily due to its specialized structure and the precision it demands,” notes a 2024 open-access paper titled “Can LLMs Master Math? Investigating Large Language Models on Math Stack Exchange.” Such errors can adversely affect research, where precision and accuracy are paramount.
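One way to sidestep this limitation, offered here as a general suggestion rather than something raised at the Summit, is to ask the chatbot to write explicit, reviewable code for the calculation instead of doing the arithmetic in prose. A minimal sketch of a standard dilution calculation (C1V1 = C2V2) in Python, with illustrative numbers:

```python
# Minimal sketch: doing lab arithmetic in verifiable code rather than trusting
# an LLM's in-prose math. The dilution relation C1*V1 = C2*V2 is standard;
# the concentrations and volumes below are illustrative.

def stock_volume_needed(c_stock, c_final, v_final):
    """Return the stock volume V1 satisfying C1*V1 = C2*V2.

    Concentrations must share one unit and volumes another.
    """
    if c_stock <= 0:
        raise ValueError("stock concentration must be positive")
    if c_final > c_stock:
        raise ValueError("cannot dilute to a higher concentration")
    return (c_final * v_final) / c_stock

# Example: dilute a 10 mM (10,000 uM) stock to 250 uM in 50 mL final volume.
v1_ml = stock_volume_needed(c_stock=10_000, c_final=250, v_final=50)
print(f"Add {v1_ml:.2f} mL of stock, then bring to 50 mL total.")  # 1.25 mL
```

Unlike a chatbot’s in-line arithmetic, every step here can be checked by hand.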

Processing confidential information

Labs must also avoid using general-access bots to handle confidential, secure, or personally identifiable information (PII). “In a clinical lab setting,” one attendee remarked, “you have to be cognizant of HIPAA laws and whatnot when you’re utilizing these new tools. I think there’s a desire to do more with it, but [there are] concerns as far as security goes.”

Indeed, using general-access bots to handle sensitive data poses serious risks to privacy and compliance. Unless your organization has a secure chatbot, inputting such information into publicly accessible models is highly inadvisable. One roundtable attendee remarked that their company rolled out an internal chatbot that allowed them to upload sensitive data safely, a solution that many organizations will need to consider if they wish to take full advantage of AI. Lab managers should consult with their legal team if they are unclear about how privacy laws may interact with AI.

Security risks further compound these privacy concerns. Uploaded documents reside on external servers beyond your organization's control, which exposes sensitive information both to internal access by platform providers and to external threats from hackers. For instance, in March 2023, OpenAI experienced a data leak that exposed names, addresses, emails, and partial credit card details. Such vulnerabilities underscore the importance of exercising caution.

Chatbots hold significant promise for lab efficiency, but their limitations demand careful consideration. By strategically choosing where and how to deploy these tools—and steering clear of inherently risky tasks—lab managers can maximize the benefits while safeguarding their work and data security.

About the Author

Holden Galusha is the associate editor for Lab Manager. He was a freelance contributing writer for Lab Manager before being invited to join the team full-time. Previously, he was the content manager for lab equipment vendor New Life Scientific, Inc., where he wrote articles covering lab instrumentation and processes. Additionally, Holden has an associate of science degree in web/computer programming from Rhodes State College, which informs his content regarding laboratory software, cybersecurity, and other related topics. In 2024, he was one of just three journalists awarded the Young Leaders Scholarship by the American Society of Business Publication Editors. You can reach Holden at hgalusha@labmanager.com.
