
Shadow AI: What It Is and How Lab Managers Should Respond

Like shadow IT, shadow AI can open your organization to legal, security, and regulatory risk

Written by Holden Galusha | 2 min read

As previously reported in Lab Manager, research organizations are struggling to keep pace with the adoption of generative artificial intelligence (genAI), leading researchers to use it without clear guidance on what safe and secure usage looks like or the risks it entails.

But even so, acceptable use policies may not be sufficient alone to protect a lab’s data security. With or without policies, some employees may be tempted to use genAI tools that the organization has not officially sanctioned—a practice called shadow AI—and expose the organization and its data to risk in the process.

Shadow AI as a part of shadow IT

Shadow IT occurs when people in an organization use computer hardware or software that has not been vetted and approved, exposing the company to legal, regulatory, and data security risks. It can introduce vulnerabilities into an otherwise secure environment and is, in the words of CERN chief information security officer Stefan Lüders, “subject to basic security blunders.”

Shadow AI is a subset of shadow IT, but is “riskier—way riskier,” according to Aditya Patel, a cloud security specialist at Amazon Web Services, writing for the Cloud Security Alliance. “Tools like ChatGPT, Claude, Mistral, and open-source LLMs like Llama and DeepSeek are too easy to use, too powerful, and too opaque,” he continues. Indeed, according to a survey done by CybSafe and the National Cybersecurity Alliance in late 2024, 38 percent of 7,000 respondents admitted to uploading protected data to AI chatbots without approval.

How should lab managers address shadow AI?

According to Patel, defining an acceptable use policy is a foundational first step to addressing AI use, sanctioned or not. “[Banning all AI tools] will stifle innovation, [and] it will be hard to keep up with new AI tools popping up every second these days,” he writes. The solution lies in effective governance and a culture of understanding and transparency. Employees use these tools because they are valuable to them; identifying safe use cases for AI chatbots and encouraging their use in that regard can help minimize the odds of rogue usage. As reported in previous Lab Manager coverage, safe use cases can include troubleshooting lab equipment, writing capital requests, creating meeting summaries, and drafting emails—provided that none of these use cases are handling confidential information.


Following the creation of acceptable use policies, Patel recommends building an “internal AI app store” of pre-approved AI tools. Major AI providers offer private enterprise chatbot services for organizations that wish to keep their data private. Alternatively, organizations can run AI models in-house, although this comes with greater administrative overhead and compute demands.

Lab managers can also look into AI training programs that teach employees what the risks are and showcase responsible uses. Allowing users to learn how to use these tools effectively in a controlled environment is an investment in the lab’s future productivity. Finally, Patel emphasizes that AI policies should be audited and reviewed regularly; lab managers can advocate for their lab’s needs as the organization’s IT department carries out these audits.

About the Author

Holden Galusha is the associate editor for Lab Manager. He was a freelance contributing writer for Lab Manager before being invited to join the team full-time. Previously, he was the content manager for lab equipment vendor New Life Scientific, Inc., where he wrote articles covering lab instrumentation and processes. Additionally, Holden has an associate of science degree in web/computer programming from Rhodes State College, which informs his content regarding laboratory software, cybersecurity, and other related topics. In 2024, he was one of just three journalists awarded the Young Leaders Scholarship by the American Society of Business Publication Editors. You can reach Holden at hgalusha@labmanager.com.

