The UK Research Integrity Office (UKRIO) recently released Embracing AI with Integrity: A Practical Guide for Researchers, a guide to using artificial intelligence (AI) in research without compromising research integrity. For lab managers, the guide’s advice can help inform their lab’s acceptable use policy and steer AI training efforts.
A UK-based independent advisory body, UKRIO helps institutions navigate the challenges of upholding research integrity. After conducting a survey in 2024, UKRIO found a “strong demand for clear, practical support on AI in research.” Researchers are adopting AI faster than organizations can develop acceptable use policies and guidelines, potentially introducing risks to a lab’s reputation, regulatory compliance, and funding. For lab managers, these risks are not abstract concerns but direct responsibilities. The guide is framed around five key areas of risk and offers concrete advice for addressing each:
- Compliance and legal breaches: Using AI improperly can expose confidential information, undermine data protection and security principles, violate copyright or licensing agreements, and more. These risks can adversely affect a lab’s legal standing, regulatory compliance, and funding.
- Ethical considerations: AI models are black boxes, meaning they can have biases and discrimination baked into them that are not apparent to the user. Without accounting for this possibility, researchers relying on AI models can compromise research quality, inadequately address conflicts of interest, and blur the lines of accountability. Additionally, running AI models can be highly energy-intensive, raising further concerns about their environmental impact.
- Protecting the research record: The guide lays out three threats to the research record: (1) AI hallucinating nonexistent sources or incorporating low-quality research into its output, (2) a lack of transparency as to how a model arrives at a particular output, and (3) misuse by bad actors to commit research fraud.
- Research dissemination: While AI can be helpful in writing and disseminating research to the public, it can still hallucinate or misinterpret key details. Such errors can lead to accusations of research misconduct, highlighting the need for authors to be transparent about their AI usage throughout the entire publication process.
- Creativity and critical thinking: AI can encourage the kind of novel, divergent thinking that leads to new breakthroughs, but overreliance can undermine a researcher’s development of those same skills. Lab managers will need to strike the right balance between using these tools to support the ideation process and outsourcing it altogether.
UKRIO hopes that the new guide will help bridge the gap between AI adoption and policy development, equipping scientists and leaders alike to ask good questions, understand the risks, and shape their organizations’ acceptable use policies. “Researchers are encouraged to use the guidance for self-assessment, practice improvement, and ethical reflection,” UKRIO said in a press release shared with Lab Manager. Similarly, the organization encourages research leaders and lab managers to incorporate the guide’s advice into their labs’ AI policies and training materials.