Effective Data Management Key to Accelerating Drug Discovery and Development

Three considerations to bring insights to the bench

by Alberto Pascual, PhD

Alberto Pascual is the Director of Data Science & Analytics at IDBS. He holds a doctorate in bioinformatics and has long experience in data science and the biomedical/biopharma domains. Graduated in computer...


Biopharmaceutical labs and facilities are filled with millions of dollars of specialized equipment, hard-won intellectual property, and drug products with the potential to change lives. Still, the most important asset in any lab will always be the data, and the resulting knowledge, developed by biopharma teams as they conduct early discovery, manage drug development, build process understanding and optimization, and prepare the organization for scale-up and manufacturing. The ultimate goal is to deliver life-improving therapies to patients.

In turn, for bench scientists to do their best work, it is vital to have a technology strategy and collaborative approach in place that put excellent, accessible data in their hands. With good data, researchers can use machine learning (ML) and artificial intelligence (AI) to unlock insights that would have been impossible a few years ago. Over the next decade, the combination of top talent and accessible data has incredible potential to accelerate drug discovery and development.

Yet in many labs, bench scientists are still working with their hands tied. Siloed, messy data—often organized without scientists' needs in mind—puts deeper learning frustratingly out of reach. For labs that are mapping out a plan for digital transformation, here are three key considerations to guide their journey.

1. Do you have a vision for clean, contextualized data?

Machine learning, artificial intelligence, digital twins, and other prescriptive advanced analytics offer tantalizing possibilities for accelerating drug discovery, development, and manufacturing. But even the most cutting-edge analytics techniques are only as good as the data they build on—in fact, they’re completely reliant upon it. Before jumping in and exploring shiny new approaches, labs need to be honest about what their data looks like under the hood and the practices around how it's managed. 

Legacy data systems were not designed with today's analytics use cases in mind, and even well-respected, global firms are struggling to keep up. One top-10 contract development and manufacturing organization (CDMO) recently estimated that its internal data science team spends 90-95 percent of its time cleaning data. Given the lack of standardization inherent in a paper-based system, let alone in getting peer systems to talk to each other, it takes hours of manual work to pull data from design of experiments (DoE)-driven process characterization campaigns, clean it, and structure it for analysis. All of this is necessary before data science teams can even begin to apply their skills to solving the organization's business challenges.
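To make that cleanup burden concrete, here is a minimal sketch of the kind of harmonization step such teams describe, assuming hypothetical CSV exports whose column names vary by instrument and site. It illustrates the problem, not any particular organization's pipeline; all file names and column labels below are invented.

```python
# A minimal sketch (hypothetical inputs, not a real CDMO pipeline) of the
# manual cleanup step: harmonizing column names and types across instrument
# exports before any analysis can start.
import pandas as pd

# Each instrument or site exports the same measurements under different headers.
COLUMN_ALIASES = {
    "Titer (g/L)": "titer_g_per_l",
    "titer": "titer_g_per_l",
    "Run ID": "run_id",
    "run": "run_id",
    "pH value": "ph",
    "pH": "ph",
}

def clean_export(path: str) -> pd.DataFrame:
    """Load one raw CSV export and coerce it into a standard schema."""
    df = pd.read_csv(path)
    # Map every known header variant onto the standard column name.
    df = df.rename(columns={c: COLUMN_ALIASES.get(c.strip(), c.strip())
                            for c in df.columns})
    # Force numeric types; unparseable entries become NaN and are dropped.
    df["titer_g_per_l"] = pd.to_numeric(df["titer_g_per_l"], errors="coerce")
    return df.dropna(subset=["run_id", "titer_g_per_l"])

# Stitch one campaign together from many per-run exports (file names invented).
campaign = pd.concat(
    [clean_export(p) for p in ["run_001.csv", "run_002.csv"]],
    ignore_index=True,
)
```

Multiply this by every assay, instrument vendor, and site in a characterization campaign and the 90-95 percent figure becomes easy to believe.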

Even in labs that have left paper behind and have gone through multiple waves of digital transformation, data may not be as clean as it appears at first glance. A data architecture designed to facilitate a paper-on-glass approach or to meet regulatory requirements can still lack the context needed to drive useful prescriptive insights.

One tool for assessing data quality is the Pistoia Alliance's FAIR Toolkit for the life sciences industry, which treats findability, accessibility, interoperability, and reusability as foundational qualities. The toolkit helps teams evaluate their readiness and processes for data quality, and it provides methodology, best practices, and use case examples showing how others in the industry have implemented FAIR principles.
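As a rough illustration of how the four FAIR qualities can map onto concrete metadata fields, a record might look like the sketch below. The toolkit itself does not prescribe this schema, and every identifier, URL, and ontology code here is a placeholder.

```python
# Illustrative only: one way FAIR qualities could surface as metadata fields.
# All identifiers, URLs, and ontology codes are placeholders.
dataset_metadata = {
    # Findable: a persistent, globally unique identifier and a rich description.
    "id": "doi:10.0000/example-bioreactor-run-42",     # placeholder DOI
    "title": "Fed-batch run 42, titer and pH time series",
    # Accessible: a retrieval location and a standard protocol.
    "access_url": "https://data.example.org/runs/42",  # hypothetical endpoint
    "protocol": "https",
    # Interoperable: shared vocabularies and explicit units.
    "variables": [
        {"name": "titer", "unit": "g/L", "ontology_term": "ONT:0000001"},
        {"name": "ph", "unit": "pH", "ontology_term": "ONT:0000002"},
    ],
    # Reusable: provenance and usage terms travel with the data.
    "license": "internal-use-only",
    "derived_from": ["doi:10.0000/example-media-prep-12"],
}
```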

Ideally, labs should connect instruments, standardize data, and store data centrally through the entire drug lifecycle. This should include integrating all the context needed to make the most of insights and automation.

One top 10 CDMO research leader explained, “In the short term, you’re dealing with relatively simple data exchanges about a particular molecule. But then over time, you’re building up a dossier of information about that molecule. And we’ve got that funnel of information, of data—where you start very small, but by the time you’re even thinking about the IND you’re gathering a huge amount of data about that molecule… kinetic information, toxicology information. Imagine a package of information you need to present to the FDA.”

At these in-between stages, it is important to build a data backbone with the strength and flexibility to support future goals. To successfully drive insights down the line, data needs appropriate context, unambiguous meaning, and interoperability with other data sets, as sketched below. Figuring out the right data backbone approach is crucial, and scientists must be involved in the design process.
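As a minimal sketch of what "data with context" could mean in practice, each measurement can carry its identifiers, units, and provenance so it remains unambiguous when joined with other data sets years later. The field names below are assumptions for illustration, not a standard.

```python
# A minimal sketch of a contextualized measurement record; field names
# are illustrative assumptions, not any standard schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Measurement:
    molecule_id: str    # stable identifier, not a free-text name
    batch_id: str       # links back to the process conditions that produced it
    assay: str          # controlled-vocabulary term, not an ad hoc label
    value: float
    unit: str           # explicit unit, never implied by convention
    instrument_id: str  # supports traceability and calibration checks
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a binding-affinity readout that stays interpretable on its own.
m = Measurement(
    molecule_id="MOL-0042",
    batch_id="B-2024-117",
    assay="binding_kinetics",
    value=3.2e-9,
    unit="M",            # KD expressed in molar units
    instrument_id="SPR-01",
)
```

The design point is that nothing about the measurement depends on tribal knowledge; the context rides along with the value.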

2. Is IT accountable to the bench?

In any big digital transformation project, it is important to remember that scientists and end users may be skeptical. In many labs, scientists have watched wave after wave of digital transformation come and go. Often, these projects promise value for scientists but ultimately deliver for other stakeholders. 

Despite clear opportunities to improve the work life of researchers, data transformation projects are often the responsibility of the IT team. This is a carryover from earlier digital projects and daily responsibilities, where application/informatics management was the main objective.

When the goal was to capture and store information for simple, backwards-looking recordkeeping, digital transformation was a relatively straightforward task that fell cleanly within IT's wheelhouse. But when the goal is to go beyond IP storage and retention, and to structure data so that new scientific problems can be solved with new analytical and statistical practices, scientists need to be deeply involved from the start. The stakes are high: because a new drug entity is essentially defined by its process data, data that works well for bench scientists is not a nice-to-have.

Most large biopharma organizations currently have a data analytics and insight program or center of excellence in place, but these are often run by IT or groups adjacent to research and development. Small-to-midsized biotechs often don’t have any data science or analytics functions within R&D besides their general IT support. These organizational structures often lead to competing priorities between departments, and the benefits of analytics initiatives do not always reach end users doing science. 

To successfully democratize data access and self-service analytics, and to bring insights to the bench, the functions responsible for data initiatives should be deeply accountable to scientists. At large organizations, that might look like an IT and analytics group embedded within R&D with the authority to oversee a digital transformation roadmap. At smaller organizations, it might be a cross-functional project team in which scientists have a strong voice, backed by clear success metrics for scientist involvement.

3. Are you willing to value unassuming insights?

It will be a long road for whoever is leading a lab's digital transformation. In the end, organizational capabilities like deeper insights from prescriptive analytics are worth the investment, and they will be necessary to keep up with a changing industry. Still, it is also important to prioritize less-flashy insights in the short term, ones that support the change management needed to become a data- and insights-centric organization.

In an MIT Sloan Management Review article, "The Surprising Value of Obvious Insights," Adam Grant explores this phenomenon in the context of people analytics. After developing basic analytics and reporting around performance data, Google found that some of the most "common sense" management moves, like meeting with a direct report on their first day on the job or remembering to meet with reports monthly, had a large impact on performance. While these activities are common practice in most organizations today, it was the data and insights that validated those best practices and confirmed they addressed the business problems Google was trying to solve.

People knew, in theory, that they should do these things. Yet sometimes we need help, Grant writes, "to close the knowing-doing gap. Common sense is rarely common practice." When Google began tracking basic best practices and letting managers know how they were doing, common sense best practices improved.

Common-sense insights can be just as important in the lab. One scientist shared that the insight they most wanted to unlock was "which molecules won't work—so we can stop wasting time with them." This goal is common sense for a researcher, though it might not be obvious to someone sitting outside their team.

It is also a goal that can be addressed in multiple ways as digital systems mature. Advanced analytics will help labs fail faster and smarter. In the meantime, simple insights into human behavior—like the value of tracking and celebrating failures—can help along the way. 

Either way, success comes from ensuring that scientists are deeply involved in the digital transformation journey. Ultimately, if the goal is to bring insights to the bench, the most obvious insight of all is probably the most powerful: let scientists lead.