With the architectural basis for integration still undeveloped, users must focus on their own requirements.
“Integration” within lab automation is a bit like the word “free” in a store window: It gets your attention and you want to find out more; sometimes you’re glad you did and other times not. The reason it grabs our attention is that an integrated lab system can yield highly desirable benefits. It implies:
- Ease of use - integrated systems are expected to require less effort to get things done
- Improved productivity, streamlined operations - we expect the number of steps needed to accomplish a task to be reduced
- Less duplicate data - you shouldn’t have to look in multiple places to find what you need
- Fewer transcription errors - integration means data moves electronically and accurately, eliminating the need to manually enter and verify transfers
- Improved workflow and movement of lab data - this will reduce the need for people to make connections between systems because integration facilitates workflow
- More cost-effective and efficient lab operations
Integration also brings us closer to another highly marketed goal: the paperless workplace. Integration is a necessary step toward that objective.
The examples we have of integrated software environments bear that out. Office productivity suites from Microsoft, Apple, Google and OpenOffice that combine word processing, spreadsheets, email, calendars and other functions are good examples. If I wanted to insert a chart, the word processor would bring up the appropriate application, create the chart and insert it. If I wanted to edit that chart, I’d click on it and the application would open automatically. An invitation sent via email can be clicked on and added to my calendar with the option to accept or decline.
There are examples in hardware as well. Your laptop has USB and/or Firewire connections. Plug the cables in and things work. The telephone is another. Modular connectors and tone dialing standards allow fax machines, computers and point-of-sale components to work without a lot of effort. Networks with modular connectors or wireless components also make the assembly of integrated computer or entertainment systems less difficult. When we think about integration, the models we have in mind are things like those, or perhaps Lego blocks.
All of these examples, as well as others, have something in common: They were designed to work together, and there is an architectural basis for integration. There are rules for how things should work in an integrated environment, and standards (hardware and software) that dictate how connections are made, how information and signals move, and what the operational priorities are. Integration doesn’t “happen,” and it isn’t just a matter of plugs that fit together. It is the result of a complete engineered system with all parts designed to work together. And that systems engineering is what we are missing in lab applications (both hardware and software).
The examples noted have something else in common: Their development and behavior are controlled by single vendors or organizations that carefully manage the critical elements that keep things working. Working within a given vendor’s application environment offers a lot of flexibility. Working between vendor environments reduces that flexibility to data exchanges through reduced-capability common data formats (those warning messages telling you that saving in a .txt format will result in loss of access to features). An integration architecture includes the rules by which interacting components need to function, in the form of standards and programming interfaces. An organization exists to manage those elements and alter them in a controlled manner. One reason the World Wide Web works is that the World Wide Web Consortium (www.w3.org) manages its underlying architecture.
What does this mean for laboratory systems?
We are a long way from having the integrated environment we’d like to have, one where we can connect the outputs and inputs of various elements—regardless of who the vendors are—and have them function. That is a bit of a simplification, but if you boil it all down, that is what people are looking for; we’ve seen it work in other environments and want the same level of flexibility in our labs. We’re missing the overall architectural plan that enables integration between systems, regardless of what industry you work in. This isn’t a life science or petrochemical problem—this is a fundamental issue that cuts across all industries that engage in scientific and laboratory work.
Today, integration is a function of individual vendor efforts. Those efforts result in the following:
- Connections between specific sets of instruments and software packages that are supported by the vendor
- Facilities that a programmer can use to make connections
- Partnerships between vendors to link specific products or product sets
- Integration performed by third-party systems integrators and consultants
In the first case, the vendor provides support for certain models of instruments (balances, etc.) or software systems. The level of support is uneven. A balance may be electronically supported for data access and control (you tell the system when you are ready to make a measurement, and the software takes the reading and enters it into its data structure; then it may tare the balance automatically). In other cases the interface is one-way; the instrument or software system sends a data stream to the vendor’s software, which extracts the information it needs. You may find that you can use only equipment the vendor has supported, or you may have to work with the vendor to add support for the devices you currently have.
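As a concrete illustration, a one-way interface often amounts to parsing the text stream an instrument emits. The sketch below assumes a hypothetical balance output format — the two-character stability code and trailing unit are invented for illustration, not any specific vendor’s protocol:

```python
import re

# Hypothetical one-way balance stream: lines like "ST,+  12.3456 g" (stable
# reading) or "US,+  12.3460 g" (unstable). Format is illustrative only.
BALANCE_LINE = re.compile(r"^(ST|US),([+-])\s*(\d+\.\d+)\s*(\w+)$")

def parse_reading(line: str):
    """Extract (value, unit, stable) from one line of the stream, or None."""
    m = BALANCE_LINE.match(line.strip())
    if not m:
        return None  # ignore status chatter we don't recognize
    status, sign, value, unit = m.groups()
    weight = float(value) * (1 if sign == "+" else -1)
    return weight, unit, status == "ST"

# The receiving software keeps only stable readings:
lines = ["US,+  12.3460 g", "ST,+  12.3456 g"]
stable = [r for line in lines if (r := parse_reading(line)) and r[2]]
```

The point of the sketch is how brittle this is: if the instrument’s output format changes with a firmware revision, the parser silently stops matching, which is exactly the kind of coupling vendor-supported interfaces are meant to absorb.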
Programming interfaces—facilities designed by the vendor to allow controlled program access to database elements—give you more control over how things happen, but you have to do the development and support it. If either of the systems you are connecting changes, the code you developed may have to change. This can be a serious problem if you go beyond the facilities provided by the vendor and make changes to the underlying code. Upgrades to the vendor’s products can compromise any modifications you’ve made, either to the connecting systems or to the underlying components in database applications. This problem can exist even if the vendor has made the changes under contract with you (the contract should specify that programming will be supported or corrected if product upgrades cause it to fail).
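One defensive measure for custom glue code is to check the vendor system’s reported version at startup, so an unannounced upgrade fails loudly instead of silently misbehaving. A minimal sketch, assuming the vendor system exposes a version string — the version numbers here are hypothetical:

```python
# Versions this connector was actually validated against (hypothetical values).
SUPPORTED_VERSIONS = {"4.2", "4.3"}

def check_compatibility(reported_version: str) -> None:
    """Raise if the vendor system's version is outside the tested range."""
    major_minor = ".".join(reported_version.split(".")[:2])
    if major_minor not in SUPPORTED_VERSIONS:
        raise RuntimeError(
            f"Connector untested against vendor version {reported_version}; "
            "revalidate before use."
        )
```

A guard like this doesn’t prevent breakage, but it converts a subtle data-integrity problem into an obvious, diagnosable failure at upgrade time.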
Vendors will form partnerships to solve what they recognize as a mutual need. An instrument data system vendor may want to connect to another’s LIMS or electronic lab notebook (ELN). The combination would offer a market advantage to both. The point the customer needs to examine is whether this connection is designed to facilitate a marketing relationship or is the result of a serious engineering investment to provide a robust, supportable interface. Will it be supported in the current and subsequent versions of each product, particularly since products are going to be on asynchronous development cycles? Marketing partnerships come and go, but once you’ve invested in the products, you have a system you have to live with—so ask the right questions.
Finally, there are a number of companies that will offer integration services. These can be an effective solution to specific requirements. However, the final responsibility lies with lab management: An integrator has to be chosen carefully to ensure that the vendor is reliable and will be in a position to support the work over successive generations of upgrades (upgrade processes tend to cause things to break if not well-engineered). This is an outsourcing exercise, and all the rules and issues apply.
What can be done to change the situation?
The problems we are looking at are the result of a lack of maturity in the marketplace. Vendors are building products based on their perception of need, and if the customer base doesn’t demand integration capability, they won’t invest the engineering resources to provide it. Before that investment can happen, the architectural basis for integration has to be developed; you can’t ask developers to provide a capability without defining the requirements and the underlying structure to support it.
This problem has been faced by others and successfully addressed. Manufacturing industries have developed standards that permit integration. The same has been done in clinical chemistry. Clinical labs were faced with a fixed cost-per-test structure and variable costs. Increased efficiency and productivity were the means of addressing the situation, and those were enabled by the development of standards for integration and communications between lab systems under a program of Total Laboratory Automation—in other words, the architectural basis was planned.
The onus is on the customer base. This problem has been addressed on John Trigg’s Integrated Lab1 website and in a proposal on the website2 of the Institute for Laboratory Automation. The latter describes a proposal for studying the work done in clinical laboratory systems to see how it can be applied to other environments, with the possibility of capitalizing on 20 years of standards development work. We need to work as a multi-industry community to address the issue. There are considerable benefits to the development of an industry-wide lab integration architecture. It moves labs away from the problems and limitations noted above and gives them the ability to choose among best-of-breed solutions. This should foster competition between vendors to produce better products and help new vendors develop offerings that meet lab needs.
How do you address the issue of lab automation integration in your lab today?
The bottom line is planning. Labs have to develop rational lab-wide plans for the specification, purchase and use of laboratory technologies. You have to purchase from the selection of products that exist, but you can make informed choices.
1. Take a look at your lab and decide how you want it to operate. What is the desired workflow? Where is data generated, and where does it need to go? That includes instruments, data systems, LIMS, ELNs, etc. Draw a map showing the flow of data and information.
2. Specify products that facilitate that map. Make conformity to your ideal data flow part of the functional requirements.
3. Make provision for flexibility. The most expensive products are those that will store data for the long term, the ones you will go to for answers about samples and experiments. They may not be the most expensive on a dollar basis, but that view changes when you consider the cost of changing systems, training people and transferring data from one environment to another. These are the centerpieces of your plan. Choose wisely.
4. Purchase products that generate data (balances, instruments, etc.) that are compatible with the data storage/management systems.
5. Allow for change. When planning a project, particularly if software development is involved, design systems so that change is simple and all instrument model-specific code is localized in case replacement is necessary.
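Point 5 can be sketched in code: put everything model-specific behind an instrument-neutral interface, so replacing a device means writing one new class rather than rewriting the application. A minimal Python sketch, with a hypothetical “Model X” balance standing in for a real driver:

```python
from abc import ABC, abstractmethod

class Balance(ABC):
    """Instrument-neutral interface the rest of the system codes against."""
    @abstractmethod
    def read_weight(self) -> float: ...

class ModelXBalance(Balance):
    """All model-specific details live here ("Model X" is hypothetical).
    Swapping instruments means adding a new subclass, not touching callers."""
    def read_weight(self) -> float:
        # A real driver would talk to the device; stubbed for the sketch.
        return 12.3456

def record_sample(balance: Balance) -> float:
    # Application code depends only on the Balance interface.
    return balance.read_weight()
```

The design choice is the point: the application never imports anything model-specific, so the cost of replacing an instrument is confined to one class.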
The ability to integrate laboratory systems isn’t where it needs to be. That can change; it has been done successfully in other industries.