
Integrated Tools Harmonize Disparate Datasets

Consolidation, collaboration and configurability seem to be the three C’s driving the need for more tightly integrated lab workflows and systems, especially in the life science market. The challenge now is integrating data coming from many different sources and deciphering patterns that lead to insight and innovation.

Successful Adoption of Integrated Research Management Systems in the Life Science Market

Consolidation, collaboration and configurability seem to be the three C’s driving the need for more tightly integrated lab workflows and systems, especially in the life science market. Increased consolidation in the pharmaceutical and biotechnology industries has resulted in organizations with many different research management tools and silos of data that make it challenging (if not impossible) to effectively collaborate across the enterprise. Similarly, the need to communicate across multiple sites and different organizations has intensified as pharma and biotech companies are now outsourcing more to specialized vendors to reduce costs and improve results. “We are seeing increased collaboration not just within an organization but between organizations,” says Mark Everding, managing partner for LabAnswer. Various groups working from different locations are now expected to share more information and leverage resources and expertise on a routine basis.

With the increased use of sophisticated genomics and proteomics tools, there is certainly no dearth of information. The challenge now is being able to integrate, visualize and interpret data coming from multiple sources, and to decipher patterns that lead to insight and innovation. “Companies want a broader view into their data in order to make good decisions,” says Everding. This requires effectively handling massive amounts of diverse data and making it accessible in a fast, convenient format, preferably through a common interface. But the question is whether this data integration can lead to better data interpretation, thereby enabling better decision-making.

To integrate or not to integrate

According to Everding, system integration within labs is inevitable. “There is no doubt that systems in life science research are going to be integrated. However, are we going to have one system, three systems, or ten systems? How many systems are we going to have to integrate together? Are we going to have a fully integrated single system or are we going to use best-in-breed?” Answering these questions is not simple. It calls for careful scrutiny of the organization’s legacy systems, their use of terminology and ontologies, and their need for data model flexibility, and ultimately an examination of the strategic and operational needs of the enterprise, down to the individual user.

Every industry seems to start with the best-in-breed approach in selecting IT systems. Organizations identify their needs in specific operational or functional areas and then pick the best products available in the market to fulfill those needs. Only later do they try to integrate them to work together. Inevitably, this approach falls short of the organization’s productivity expectations. “It hasn’t worked in any other industry and it doesn’t seem to work in life sciences,” says Gary Kennedy, founder and chief executive officer at RemedyMD. “That is not because any of the systems were bad, or that people made bad choices, but because they spent all their time integrating and none of their time realizing productivity benefits.” Hence, individuals and companies have started gravitating towards integrated product offerings. “Even if each of the modules isn’t necessarily the best in the industry, people don’t have to spend time on integration and instead can spend their time solving the problem at hand,” says Kennedy.

The obvious downside to selecting an integrated approach over a best-in-breed approach is that you don’t always get the best product features available in every niche of the marketplace. For example, a standalone electronic lab notebook (ELN) may have more features than the ELN functionality that’s part of an integrated package. A more fundamental and increasingly serious drawback of trying to integrate best-in-breed systems is that you can never fully integrate the data from different systems. This limits the types of queries you can run and ultimately the types of hypotheses that you can test. (See sidebar below for more details on the advantages of integrated systems.)

In essence, the trade-off organizations face is balancing the enterprise’s productivity needs against the individual’s preference for particular features or functionality. With a single integrated system, organizations typically sacrifice some of the requirements of individual users, unless the system is highly configurable so that the organization or its users can adapt it to meet their specific needs. “What it comes down to with the best-in-breed systems versus a single integrated system is balancing the needs of the organization against the needs of the individual,” says Everding. “If the tool is flexible enough, you can configure it such that it meets individual needs within a job category and still have a common enterprise database. Now you’ve got the best of both worlds.”
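To make that trade-off concrete, here is a minimal sketch in Python, purely illustrative and not a description of any vendor’s product, of how role-level configuration can sit on top of a single shared enterprise schema: each job category chooses which fields it sees and how they are labeled, while every record lives once in the common data model. The schema, roles, and field names are hypothetical.

```python
# Minimal sketch (hypothetical, not RemedyMD's or LabAnswer's design):
# one shared enterprise schema, with per-role views layered on top so each
# job category sees the fields and labels it needs without forking the data model.

from dataclasses import dataclass, field

# Common enterprise schema: every record is stored once, in one shared shape.
ENTERPRISE_SCHEMA = {
    "sample_id": str,
    "collection_date": str,
    "assay_type": str,
    "raw_result": float,
    "qc_status": str,
}

@dataclass
class RoleConfig:
    """Configuration for one job category, layered over the shared schema."""
    role: str
    visible_fields: list = field(default_factory=list)
    field_labels: dict = field(default_factory=dict)

    def render(self, record: dict) -> dict:
        """Project a shared record into the view this role has configured."""
        return {
            self.field_labels.get(f, f): record[f]
            for f in self.visible_fields
            if f in record
        }

# Each group configures its own view; the underlying record never changes.
bench_scientist = RoleConfig(
    role="bench scientist",
    visible_fields=["sample_id", "assay_type", "raw_result"],
    field_labels={"raw_result": "Measured value"},
)
qa_reviewer = RoleConfig(
    role="QA reviewer",
    visible_fields=["sample_id", "qc_status", "collection_date"],
)

record = {
    "sample_id": "S-0042",
    "collection_date": "2011-06-01",
    "assay_type": "ELISA",
    "raw_result": 1.73,
    "qc_status": "pending",
}
assert set(record) == set(ENTERPRISE_SCHEMA)  # one common data model for everyone

print(bench_scientist.render(record))
# {'sample_id': 'S-0042', 'assay_type': 'ELISA', 'Measured value': 1.73}
print(qa_reviewer.render(record))
# {'sample_id': 'S-0042', 'qc_status': 'pending', 'collection_date': '2011-06-01'}
```

The point of the sketch is that configuration is layered on top of, rather than forked from, the shared schema, so the enterprise keeps one queryable database while each group works in its own view.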

Ontology-driven data harmonization

That desire to meet the needs of both the organization and its scientists is what drove RemedyMD to develop the Investigate™ Integrated Research Management System. “The only way we felt we could get data together was to build integrated systems from the outset,” says Kennedy. “We basically made a bet on this happening four or five years ago and so we started building all these integrated applications on the Mosaic Platform.” The Mosaic Platform™ offers a common architecture for a laboratory information management system (LIMS), an ELN, a biospecimen management system, a reporting engine, and a data visualization system. “The reason we call it Mosaic is that we’re able to aggregate different pieces of data from disparate sources, like you would in a mosaic, and then see a pattern emerge. The pattern could be a variety of things, but it’s impossible to recognize the pattern until you aggregate the data from a variety of sources and harmonize it via an ontology,” says Kennedy.

These integrated tools share a common user experience and underlying data model. “Then we built an ontology to help harmonize and link disparate data into a single format that can be queried,” says Kennedy. The reporting system, the electronic data capture forms, the data visualization tools and other parts of the system were now under the control of the same master ontology, which eliminated the need to push data back and forth between applications and prevented spurious data from getting into the system. “When you visualize data, you know that it’s harmonized with other data elements in the system and that was the hardest thing for us to build,” says Kennedy. “One of the most difficult things we have built is the ability to extend the ontology so you can add new concepts on the fly. This means that your ontology never gets out of sync with your data model as you encounter new concepts or terminology in your research.”
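As a rough illustration of how an ontology layer can harmonize and extend in this way, the Python sketch below maps source-specific field names onto canonical concepts and registers unknown terms on the fly; the classes, concepts, and synonyms are hypothetical and are not drawn from the Mosaic Platform itself.

```python
# Minimal sketch of ontology-driven harmonization (illustrative only; the
# field names, synonyms, and extend-on-the-fly behavior are assumptions,
# not details of the Mosaic Platform).

class Ontology:
    """Maps source-specific terms to canonical concepts so data from
    disparate systems can be queried in a single, consistent format."""

    def __init__(self):
        # canonical concept -> set of known synonyms from source systems
        self.concepts = {}

    def add_concept(self, canonical, synonyms=()):
        """Register a concept; can be called at any time, so the ontology
        never falls out of sync with the data model as new terms appear."""
        self.concepts.setdefault(canonical, set()).update(synonyms)

    def resolve(self, term):
        """Return the canonical concept for a source term, or None."""
        for canonical, synonyms in self.concepts.items():
            if term == canonical or term in synonyms:
                return canonical
        return None

    def harmonize(self, record):
        """Rewrite a source record's keys onto canonical concepts,
        extending the ontology on the fly for unknown terms."""
        harmonized = {}
        for key, value in record.items():
            concept = self.resolve(key)
            if concept is None:
                # New concept encountered in the data: add it rather than
                # rejecting the record or letting spurious fields slip through.
                self.add_concept(key)
                concept = key
            harmonized[concept] = value
        return harmonized


ontology = Ontology()
ontology.add_concept("hemoglobin_g_dl", synonyms={"HGB", "Hgb", "hemoglobin"})
ontology.add_concept("subject_id", synonyms={"patient_id", "participant"})

# Two source systems describing the same measurement differently:
lims_record = {"patient_id": "P-101", "HGB": 13.2}
eln_record = {"subject_id": "P-101", "hemoglobin": 13.2, "instrument": "Sysmex"}

print(ontology.harmonize(lims_record))  # {'subject_id': 'P-101', 'hemoglobin_g_dl': 13.2}
print(ontology.harmonize(eln_record))   # {'subject_id': 'P-101', 'hemoglobin_g_dl': 13.2, 'instrument': 'Sysmex'}
# 'instrument' was unknown, so it was added to the ontology on the fly.
```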

Selecting the right system

Given the diversity in the size and type of information being generated, the key question is how to decide which data management tool is right for you. It’s not as simple as picking up a third-party report and finding out who has the best software. “The first thing we have to do is understand the high-level requirements of the organization,” says Everding. The enterprise has requirements for security, for collaboration with other organizations and for reducing overall costs. The individual researcher has a completely different set of requirements. “In today’s world, there are a lot of different tools that are very specific to what a researcher wants to do. So the organization has to balance the need for a niche tool against the needs of the organization and that balance is incredibly important.”

After identifying those needs, you have to assess the environment where your data originates. How is it generated? Is it clean and can it be integrated? Once you understand the environment and the business and individual requirements, then it’s time to start discussions with prospective vendors. “When there is a disciplined approach to selecting what tools you’re going to use, it pays huge dividends,” says Everding.

As part of the selection process, it’s important to build use cases and ask the software vendors to demonstrate their products against those use cases. “Rather than saying, ‘come demonstrate your software to us’, we first describe our environment and our requirements and then let the vendors demonstrate to those,” says Everding. “Otherwise, you get a marketing pitch and a little bit of smoke and mirrors.”

SAIC-Frederick Inc. and RemedyMD recently entered into a collaboration in which researchers at the National Cancer Institute’s (NCI) Advanced Technology Program in Frederick, Maryland, will implement RemedyMD’s Investigate Integrated Research Management System to help store, query, analyze and report data generated from their research in cancer and AIDS. “We needed something that can handle diverse workflows and also something that would be able to manage and interface with the tsunami of -omic data that is coming our way,” says Bruce Crise, Ph.D., director, business development, scientific and technical operations with the Advanced Technology Partnerships (ATP) Initiative at SAIC-Frederick Inc. “We were looking for an ontology-based system that would be capable of asking integrated questions across the different areas of expertise that the ATP is involved in.”

SAIC-Frederick expects RemedyMD’s Investigate™ application built on its Mosaic Platform to improve workflows in the nine laboratories of the NCI Advanced Technology Program and in NCI’s Patient Characterization Center by offering researchers access to a fully web-enabled solution for biospecimen management, instrument integration, data visualization, and project review and approvals. “For us, our selection criteria centered around [system] scalability, the enablement of the end-user to build workflows, and accessibility,” says Crise. In essence, SAIC-Frederick has decided to integrate the research systems across the NCI Frederick laboratories to achieve its organizational productivity goals while relying on Investigate’s highly configurable platform to empower its researchers to customize the application to suit their specific needs.

In the two months since the partnership was announced, Crise believes they now have a very active and engaged set of end-users. “I believe that what we’ll build will be more streamlined and more user-friendly and that will well make up for any developmental hurdle or cost that might be involved.” The other upside is that the people who are building it will actually be the people who are using it. “They’re ultimately vested and responsible for the outcome,” says Crise.

The integration of a system into the workplace and its adoption by the end-user is the ultimate yardstick for success. “If we build a system on time and on budget, and with the functionality that you asked for, do we have a success?” asks Everding. “The answer is maybe. But if no one uses the system, it’s a failure. So you have to have a system that people will use.”

Next generation of integrated systems

“I think the future is moving inexorably toward pattern recognition and toward self-monitoring systems,” says Kennedy. “So one thing that we don’t do, and I don’t think anybody does, is to have the system recognize patterns and intervene to make some changes. So they ought to be self-monitoring and self-adapting.” But for that to happen, everything has to be integrated, flexible, and ontology-driven. “If you want the system to make those adjustments, the only place you can make it would be in the ontology because that would force the adjustments throughout your integrated system,” says Kennedy.
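One way to picture that argument, as a hedged sketch rather than anyone’s actual architecture, is a master ontology that notifies every subscribed component when a concept changes, so that a single adjustment, whether made by a curator today or by a self-monitoring system in the future, propagates through forms, reports, and visualizations. The component names and concept below are made up for illustration.

```python
# Illustrative sketch (an assumption about the architecture, not vendor code):
# if forms, reports, and visualizations all subscribe to one master ontology,
# a single change made there propagates everywhere, which is the precondition
# for a system that can adjust itself.

class MasterOntology:
    def __init__(self):
        self._subscribers = []
        self.concepts = {}

    def subscribe(self, component):
        self._subscribers.append(component)

    def update_concept(self, name, definition):
        """The one place an adjustment is made; every integrated component
        is notified so nothing drifts out of sync."""
        self.concepts[name] = definition
        for component in self._subscribers:
            component.on_ontology_change(name, definition)


class DataCaptureForm:
    def on_ontology_change(self, name, definition):
        print(f"Form: added field '{name}' ({definition['unit']})")


class ReportingEngine:
    def on_ontology_change(self, name, definition):
        print(f"Report: column '{name}' is now available for queries")


ontology = MasterOntology()
ontology.subscribe(DataCaptureForm())
ontology.subscribe(ReportingEngine())

# A self-monitoring system that spotted a new pattern (or a human curator)
# would intervene here, and the change flows through the whole integrated stack.
ontology.update_concept("viral_load_copies_ml", {"type": "float", "unit": "copies/mL"})
```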

Although they do have all the building blocks required for the next generation of applications, it’s going to be at least five to ten years before there’s massive adoption of self-learning systems. “Right now, if people can adopt something that’s integrated, has an ontology, and is able to recognize patterns, even if the pattern recognition has to be manual, that’s a great step forward,” says Kennedy.