As previously reported in Lab Manager, research organizations are struggling to keep pace with the adoption of generative artificial intelligence (genAI), leaving researchers to use it without clear guidance on what safe and secure usage looks like or what risks it entails.
But even so, acceptable use policies may not be sufficient on their own to protect a lab’s data security. With or without policies, some employees may be tempted to use genAI tools that the organization has not officially sanctioned—a practice called shadow AI—and expose the organization and its data to risk in the process.
Shadow AI as a part of shadow IT
Shadow IT occurs when people in an organization use computer hardware or software that has not been vetted and approved, exposing the company to legal, regulatory, and data security risks. It can introduce vulnerabilities into an otherwise secure environment and is, in the words of CERN’s chief information security officer Stefan Lüders, “subject to basic security blunders.”
Shadow AI is a subset of shadow IT, but is “riskier—way riskier,” according to Aditya Patel, a cloud security specialist at Amazon Web Services, writing for the Cloud Security Alliance. “Tools like ChatGPT, Claude, Mistral, and open-source LLMs like Llama and DeepSeek are too easy to use, too powerful, and too opaque,” he continues. Indeed, in a late-2024 survey of 7,000 people conducted by CybSafe and the National Cybersecurity Alliance, 38 percent of respondents admitted to uploading protected data to AI chatbots without approval.
How should lab managers address shadow AI?
According to Patel, defining an acceptable use policy is a foundational first step to addressing AI use, sanctioned or not. “[Banning all AI tools] will stifle innovation, [and] it will be hard to keep up with new AI tools popping up every second these days,” he writes. The solution lies instead in effective governance and a culture of understanding and transparency. Employees turn to these tools because they find them valuable; identifying safe use cases for AI chatbots and encouraging sanctioned use can help minimize the odds of rogue usage. As reported in previous Lab Manager coverage, safe use cases can include troubleshooting lab equipment, writing capital requests, creating meeting summaries, and drafting emails—provided that none of them involve confidential information.
Following the creation of acceptable use policies, Patel recommends building an “internal AI app store” stocked with pre-approved AI tools. Major AI providers offer private enterprise chatbot services for organizations that wish to keep their data private. Alternatively, organizations can run AI models in-house, although this comes with greater administrative overhead and compute demands.
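For labs weighing the in-house option, the core idea is that prompts and responses stay on hardware the organization controls rather than passing through a third-party service. The sketch below is a minimal, hypothetical illustration of that pattern, assuming an open-source model is already being served locally through Ollama’s HTTP API on its default port; the model name, endpoint, and prompt are placeholders rather than a specific recommendation.

```python
# Minimal sketch: querying a locally hosted LLM so that prompts and
# data never leave the lab's network. Assumes an Ollama server running
# on its default port (11434) with a "llama3" model already pulled;
# adjust the model name and URL for your own deployment.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # local endpoint only

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the in-house model and return its full reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request one complete JSON object, not a stream
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    # A "safe" use case from the article: drafting routine text,
    # with no confidential data included in the prompt.
    print(ask_local_model("Draft a short email scheduling a lab equipment demo."))
```

Because the endpoint resolves to localhost, the exchange never crosses the organization’s network boundary—the property that distinguishes an in-house deployment from a public chatbot.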
Lab managers can also look into AI training programs that teach employees what the risks are and showcase responsible uses. Letting users learn how to use these tools effectively in a controlled environment is an investment in your lab’s future productivity. Finally, Patel emphasizes the need for regular audits and reviews of existing AI policies. Lab managers can advocate for their lab’s needs as the organization’s IT department carries out these audits.