Artificial intelligence (AI) continues to transform how laboratories conduct research and manage data, but not every scientist is on board. Elsevier’s Researcher of the Future report finds that one-third (33 percent) of corporate researchers have not yet used AI for work, signaling an opportunity to expand responsible AI adoption across research and development.
Among those using AI, the reported advantages are significant: 63 percent say AI tools save them time, 54 percent believe the technology empowers them, and 47 percent say it brings greater autonomy. Looking ahead, 76 percent expect further time savings over the next two to three years, while 49 percent predict AI will drive new knowledge and 44 percent expect it to improve research quality, demonstrating optimism about AI's role in accelerating scientific discovery.
Training and governance remain major obstacles
Despite the potential, the study highlights persistent barriers that prevent wider AI use in laboratories. Only 35 percent of corporate researchers report receiving adequate training, and just 41 percent believe their organization maintains good AI governance; 21 percent say it does not, suggesting a lack of clear oversight and accountability.
These findings echo results from a Lab Innovations report, published by Lab Manager, which similarly found that training deficits and weak governance remain two of the biggest hurdles to responsible AI adoption.
In Elsevier’s report, quality concerns also remain high. While 46 percent of respondents say AI provides useful answers, 29 percent find its outputs unhelpful, and only 27 percent consider AI tools trustworthy. These doubts have tangible effects: many researchers avoid using AI for high-value applications such as drafting papers, generating hypotheses, or designing experiments.
Scientists want transparent, research-specific AI
To improve trust and accelerate adoption, respondents identified several features that would make AI tools more reliable for research environments:
- Seventy percent want automatic citation and transparent sourcing
- Sixty-four percent seek explicit factual accuracy and safety training
- Sixty-three percent emphasize confidential handling of research inputs
These priorities point to a growing demand for research-specific AI solutions that meet the same standards of accuracy and reproducibility as scientific work itself.
“AI has enormous potential to accelerate discovery, but general-purpose tools were never built for the precision and traceability that scientific research requires,” said Stuart Whayman, president of corporate markets at Elsevier. “As this study shows, researchers need transparent AI that cites trusted sources and explains its reasoning. Above all, it must meet the same standards of evidence and reproducibility as their own work. Achieving that depends on domain-specific data, rigorous validation, and collaboration across the research ecosystem.”
What this means for laboratory leaders
For laboratory managers and R&D directors, Elsevier’s findings reinforce the importance of pairing technological innovation with governance and workforce development. Strengthening data policies, providing ongoing AI training, and evaluating vendors for transparency can help laboratories build trust and maximize value.
As AI becomes more deeply integrated into research workflows, laboratories that focus on responsible implementation—through training, transparency, and ethical oversight—will gain the greatest advantage.
This article was created with the assistance of Generative AI and underwent editorial review prior to publication.