
The Revolution of Artificial Intelligence in the Digital Lab

In this Q&A article, two industry experts look at the fascinating and accelerating world of AI and ML

Artificial intelligence (AI) is increasingly permeating our online lives, but it is more than just television show or playlist suggestions. AI, machine learning (ML), and automated decision-making are infiltrating products and services, from social care to laboratory science. These technologies are now at the forefront of many future-ready laboratories and have optimized lab productivity and efficiency in ways never before imagined. Like other revolutions that took hold quickly, AI requires scrutiny if it is to serve society appropriately. In this Q&A article, two industry experts look at the fascinating and accelerating world of AI and ML, discussing the role these applications could play in the lab of the future and other key learnings as we enter a fully digitalized world.

Governance and ethical principles of AI and ML

Allison Gardner, PhD.

Allison Gardner, PhD, is the program director of the Data Science Degree Apprenticeship at Keele University, based near Newcastle-under-Lyme, United Kingdom, and co-founder of the organization Women Leading in AI (WLinAI). Gardner provides an overview of how AI is used in business across various sectors, as well as the current ethical implications of AI and ML technology. She also shares insight into the values of good governance and the requirements of robust regulation practices to achieve successful AI integration into businesses and society.

Q: On the frontier of health services, and in general, how thoroughly is AI already a part of our lives?

A: AI is much more pervasive in our lives than people think. It is used in many sectors, from management systems in hospitals to flight-path management in the aviation industry, and it even drives the recommendation algorithms behind Netflix and social media. This is a response to big data and the question of how we can use the information embedded in it to classify, predict, and improve the efficiency of a system. My concern is that people think of it as a bit of a cure-all, a system that will only make life easier and augment human experience, but it is not as perfect as people think.

Q: You’re doing a lot of advocating for AI processes to receive diverse human oversight. What’s driving that?

A: Every AI system is inherently biased, not just because of the data it holds, but because of the unconscious, and sometimes conscious, bias of the teams that develop these systems and of the policy that feeds into the rules developers have to follow when building these AI-powered systems. Every type of AI system that is involved with personal data will be biased, so mitigating the risks of this is integral.

Q: This leads into your work with Women Leading in AI. Can you tell us more about that?

A: I saw a significant gap between technologists, policy makers, and lawyers in addressing the problems that we have been seeing with AI systems, particularly with regards to algorithmic bias and the discrimination that can result from it. For instance, these algorithms can misclassify Black women at much greater rates than white men, meaning these women in high-risk situations could forgo necessary health care and benefits.
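To make the disparity Gardner describes concrete, one simple auditing technique is a disaggregated error-rate check: compute the model’s false-negative rate separately for each demographic group and compare. The sketch below is purely illustrative; the group names, labels, and audit records are invented for the example, not drawn from any real system.

```python
# Illustrative sketch of a disaggregated fairness audit.
# All data and group names below are hypothetical.
from collections import defaultdict

def false_negative_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label),
    where 1 means "needs care/benefits" and 0 means "does not"."""
    positives = defaultdict(int)  # people who truly needed care, per group
    misses = defaultdict(int)     # of those, the ones the model missed
    for group, truth, pred in records:
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives if positives[g]}

# Hypothetical audit records: (group, ground truth, model prediction)
audit = [
    ("group_a", 1, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 1, 1), ("group_b", 1, 0),
]
print(false_negative_rate_by_group(audit))
# A large gap between groups is exactly the misclassification problem above.
```

A gap like this is invisible in an aggregate accuracy number, which is why disaggregated reporting is a common first step in the kind of independent audit Gardner advocates.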

Q: What was the reaction of the development community when you spoke to them about this?

A: I noticed that there was a lot of deflection by technologists, who insisted that AI is “a black box” that cannot be managed ethically. In response, I had a little bit of a tantrum, because one of the key reasons for the deployment of biased algorithmic systems is the lack of diversity in the teams that develop these products. Diverse development teams can identify the obvious mistakes that have been made, such as training data that is not diverse. With this in mind, I spoke with others who felt the same, and we decided to bridge this gap by bringing leading thinkers in AI together with leading thinkers in policy and government, so we can fully understand these systems and actually start developing them in an ethically aligned way.

Q: What would be the best advice for people developing AI, to avoid falling into the bias trap?

A: Ensuring diverse input and the engagement of all stakeholders in the design of new systems is integral to using this technology in an unbiased way. Reaching out to impacted and diverse stakeholders so they can have meaningful involvement in the design process is crucial. For high-risk processes, there needs to be a point where, if the application has not been signed off by an independent auditor or an independent internal reviewer outside of the system confirming its suitability, the system should not be deployed. I also advocate for a citizen-focused trust mark, not dissimilar to, for example, food labeling, fair trade marks, nutrition labeling, and recycling labels, informing the person on the receiving end: “An AI system has been involved in this process, go and see this further information.” Ultimately, we can educate people on these issues, but technology is developing so fast that we cannot educate people quickly enough. We can only inform and empower them to be AI aware.
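As a purely hypothetical illustration of what the machine-readable side of such a trust mark could carry, a label might bundle provenance metadata like the following. Every field name and value here is invented for the example; no standard is implied.

```python
# Hypothetical sketch of trust-mark metadata; all fields are invented.
from dataclasses import dataclass, asdict
import json

@dataclass
class AITrustMark:
    system_name: str
    decision_role: str   # e.g. "fully automated" or "human reviewed"
    audited_by: str      # the independent reviewer who signed it off
    audit_date: str
    more_info_url: str   # where the affected person can learn more

mark = AITrustMark(
    system_name="EligibilityModel-v2",
    decision_role="human reviewed",
    audited_by="Example Independent Audit Ltd.",
    audit_date="2021-06-01",
    more_info_url="https://example.org/ai-transparency",
)
print(json.dumps(asdict(mark), indent=2))
```

The point of such a label, like nutrition labeling, is not to explain the model but to tell the person affected that an AI was involved and where to look for further information.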


Transforming scientific research labs with artificial intelligence, data science, and human-computer interaction

Paul Bonnington, PhD.

Paul Bonnington, PhD, professor and director of the Monash eResearch Centre, is at the forefront of driving the digital revolution in laboratory research. The Monash eResearch Centre (MeRC) is a technology research platform within Monash University, based in Melbourne, Australia. Bonnington discusses the role of AI in the life sciences and health care space.


Q: Could you explain a little more about how you define e-research as a concept?

A: eResearch is best thought of as digital research. All aspects of the research process are undergoing a transformation, and that transformation spans all domains, from the humanities, arts, and social sciences through to STEM disciplines such as engineering and medicine. These domains have all been fundamentally changed by digital technologies making their way into research, which is why the Monash eResearch Centre was established in the mid-2000s: to help the university navigate this transformation, given that one of its core business areas is research.

Q: AI is a big part of your work, but it’s also something the public doesn’t always understand, and sometimes even fears. What’s your take on how it’s best applied to medical research, or indeed generally?

A: I believe that the way to apply artificial intelligence is always to make sure that the human is involved. We can see patterns in data that a computer is not necessarily going to find unless we tell it to look for those patterns. Personally, I find the most exciting application of AI is in computer vision and supported decision-making, because it opens up the potential for ordinary people to apply decision-making AI tools that have been trained to think like experts in the field and, more importantly, to do that from almost anywhere. This AI is described as deep learning: essentially, training computer models by showing the computer lots and lots of data that has been annotated by experts. After a while, the computer model begins to “think” like those experts.
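What Bonnington describes maps onto ordinary supervised training: show a model many expert-annotated examples and adjust its parameters until its outputs match the annotations. The sketch below illustrates that loop with a toy network and synthetic stand-in data; nothing here reflects Monash’s actual models.

```python
# Minimal supervised-training sketch of the "deep learning" loop described
# above. The tiny network and synthetic data are placeholders.
import torch
import torch.nn as nn

# Stand-ins for expert-annotated examples: inputs plus expert labels.
X = torch.randn(200, 16)              # 200 samples, 16 features each
y = (X.sum(dim=1) > 0).long()         # synthetic stand-in for annotations

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):               # repeatedly show the annotated data
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)       # how far outputs are from the labels
    loss.backward()
    optimizer.step()

# After training, the model mimics the annotators on similar inputs.
accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print("final training accuracy:", accuracy)
```

The same loop, scaled up to expert-labeled images and much larger networks, is what lets a trained model act as a portable stand-in for expert judgment.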

Q: You and your colleague Kimbal Marriott were awarded an Agilent Thought Leader Award recently for your work on the interface between AI and lab instrumentation. Can you tell us a little about that?

A: Receiving the Agilent Thought Leader Award has enabled me to shift the focus in my own research. I started to see that there were applications of AI which were going to fundamentally change how people use scientific instruments. So, I became much more interested in the use of deep learning capabilities and the use of computer vision to help solve problems.  

Our team has been looking at the sample introduction area of an instrument, which consists of tubes, spray chambers, and nebulizers. A few challenges can arise in the sample introduction area, and we’ve been collaborating with Agilent on a project in which we’re using computer vision to see these potential obstacles before the operator does. In doing so, we will be able to warn operators that the instrument might require attention, [for example] that a component needs to be reattached or that the nebulizer needs to be cleared.
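Conceptually, a watchdog of this kind classifies camera frames of the sample introduction area and alerts the operator when something looks off. The sketch below is a hypothetical illustration only; the model file, class labels, and camera setup are assumptions, not details of the Agilent collaboration.

```python
# Hypothetical sketch of a computer-vision instrument watchdog.
# Model path, class names, and camera index are all invented.
import cv2
import torch

CLASSES = ["ok", "reattach_component", "clear_nebulizer"]   # assumed labels
model = torch.jit.load("sample_intro_monitor.pt")           # assumed model
model.eval()

capture = cv2.VideoCapture(0)          # camera pointed at the instrument
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Resize and normalize the frame into the shape the model expects.
    x = cv2.resize(frame, (224, 224)).transpose(2, 0, 1) / 255.0
    x = torch.from_numpy(x).float().unsqueeze(0)
    with torch.no_grad():
        label = CLASSES[model(x).argmax(dim=1).item()]
    if label != "ok":
        print(f"Attention: instrument may need service ({label})")
capture.release()
```

The design keeps the human in the loop, in line with Bonnington’s earlier point: the model only raises a warning, and the operator decides what to do about it.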

Q: The potential benefits to efficiency and decision-making seem clear, but what are the challenges you face in this frontier of research?

A: A big challenge in our own work is that the people who benefit from our capabilities, techniques, and infrastructure are generating more and more data. Consequently, it is difficult to keep up with the growth we are experiencing in the generation of new data. We therefore need to know whether anything we generate will be useful at all, which is an equally complex problem, because to any human the data will often look like noise, even though a hidden gem might be in there somewhere. So, this is where AI can also help: it can provide algorithms and models to pre-screen the data and give you a good indication of whether anything useful is likely to be found in it.
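One simple form such pre-screening could take is novelty scoring: fit a model to data known to be uninteresting noise, then flag new records that depart from it for human inspection. The sketch below illustrates the idea with synthetic data and an off-the-shelf isolation forest; the noise model and threshold are illustrative assumptions, not the centre’s actual pipeline.

```python
# Illustrative sketch of AI-assisted data pre-screening via novelty scoring.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
background = rng.normal(size=(1000, 8))    # historical "just noise" data
screen = IsolationForest(random_state=0).fit(background)

new_batch = rng.normal(size=(50, 8))
new_batch[:5] += 4.0                       # a hidden "gem": 5 unusual rows

scores = screen.score_samples(new_batch)   # lower score = more anomalous
flagged = np.where(scores < np.quantile(scores, 0.1))[0]
print("rows worth a human look:", flagged)
```

The pre-screen does not decide what is scientifically interesting; it just triages the flood of new data so that human attention goes where a signal is most likely to be hiding.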

Q: Data privacy legislation is another factor to consider too—how are you navigating that?

A: Data privacy is certainly a growing concern for us, as it means we will need to be much more sophisticated with our technical infrastructure and our technologies to ensure that we can maintain privacy. We can train very sophisticated AI models using all the data we acquire, but we need to ensure that the data is used for the purposes for which it was obtained, and that the people who have access to that data are able to maintain privacy. This is really important when we’re working with industry partners, as we want to establish and maintain trusting relationships within our industry; moreover, the opportunities for industry and university research growth lie in the re-use of data from both organizations.

Summary

Advances in AI and machine learning have certainly made for a fast-moving and exhilarating journey toward the digital lab. While there is always a flipside to every exciting innovation, particularly in technology, some of the wider challenges, for example the biases in ML that can lead to misclassification of subjects, are now becoming better understood and can therefore be addressed appropriately and ethically as the AI revolution unfolds.