The concept of laboratory benchmarking is easy to grasp. It simply entails comparing the quality of one lab to measures taken from a range of other laboratories or from the standard bearers—the top performers in the field. The details involved in the benchmarking process, however, can be quite maddening.
For most laboratories, “Benchmarking is always an issue, and it is not as simple as it sounds,” says Niek Klooster, senior global consultant to laboratory management and benchmarking with Intertek (Analytical and OCA Divisions). Part of the reason is that labs have quite different backgrounds, functions and objectives, and these differences make it difficult to compare them in an objective, like-to-like manner.
Paul Mathew, staff scientist with the Lawrence Berkeley National Laboratory, who specializes in energy benchmarking in laboratories, concurs. “In principle, benchmarking is simple. It starts with a selected metric such as total annual energy use per square foot, which can be simply and inexpensively calculated and compared for a number of laboratory buildings.” The goal generally is to identify opportunities for improvements in efficiency.
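The metric Mathew describes is often called energy use intensity (EUI): annual energy use divided by floor area. A minimal sketch of the calculation, using invented figures for three hypothetical lab buildings:

```python
# Annual energy use intensity (EUI) for a few hypothetical lab buildings.
# All numbers below are illustrative, not real benchmarking data.

buildings = {
    # name: (annual energy use in kBtu, gross floor area in sq ft)
    "Lab A": (12_000_000, 40_000),
    "Lab B": (9_500_000, 25_000),
    "Lab C": (21_000_000, 60_000),
}

def energy_use_intensity(annual_kbtu: float, area_sqft: float) -> float:
    """Annual energy use per square foot (kBtu/sq ft/yr)."""
    return annual_kbtu / area_sqft

# Rank the buildings from lowest to highest EUI.
for name, (kbtu, sqft) in sorted(
    buildings.items(), key=lambda kv: energy_use_intensity(*kv[1])
):
    print(f"{name}: {energy_use_intensity(kbtu, sqft):.0f} kBtu/sq ft/yr")
```

The simplicity is the point: one division per building yields a first-pass ranking, and everything after that is normalization for the differences Mathew goes on to describe.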
“Where it gets complicated is in the details,” Mathew continues. “Each lab building is unique, so how to normalize for the differences associated with each setting—climate, installed equipment and other programmatic elements—becomes a key question.” Each of these attributes influences the outcomes in benchmarking, so controlling for different variables is always a major hurdle, according to Mathew.
Still, Klooster says, “Lab managers should not be afraid of benchmarking. I see fear at the beginning of each project with laboratory directors. They are afraid of having the discussion because they believe the benchmarking results will cause everything in their labs to change—and they don’t want that.”
Typically, laboratory benchmarking involves a short study, which could take as long as a week, during which there are discussions about the organization, its quality, information technology (IT) and maintenance, among other subjects. “After collecting a variety of data on the number of tests and methods, and on different protocols, a report is generated for the management of the company,” says Klooster.
“Intertek’s benchmarking staff would typically prepare such reports for laboratory directors in the base chemical, petrochemical, refinery and related industries, where the same tests are done on a regular—daily or weekly—basis. We get most requests for benchmarking work from site management teams; they decide when to request our services to help them work out their efficiency and benchmarking issues.”
Among the top reasons for benchmarking laboratories are to improve quality; to satisfy payer, regulatory and accreditation requirements; and to enhance the competitive position of companies. These are perennial goals, according to Klooster, who is based in the Netherlands, and who has also served as the chairman of the Dutch Laboratory Managers Association.
He says that laboratories, especially in the bigger companies, are always under pressure to improve their quality, excellence and competitiveness, and that this is the main driver behind the growing need to do benchmarking. Today, Intertek operates 1,000 labs in 100 countries, and employs about 26,000 people worldwide.
Turning to how benchmarking projects are initiated, Klooster says that lab directors often commission benchmarking exercises, frequently with the help of external specialists, to prove to top management that their labs are performing at full efficiency and cannot be made any more productive.
Related to that approach are the situations in which top management officials indicate to their lab directors that their facilities could be more efficient than they are currently. To support their position, they will request benchmarking, which in turn could provide specifics on the extent of current deficiencies and pinpoint areas of weakness.
“Third, and for an entirely different purpose, which we at Intertek call outsourcing, is another important reason for doing benchmarking,” says Klooster. In such situations, Intertek will take over an entire laboratory, including the staff, all the assets, and the rent or lease for the site and facilities. The laboratory is thereby transitioned from an in-house operation to an outsourced one. This could be advantageous for the original owners because they are no longer responsible for any of the fixed costs.
“Intertek performs a lot of outsourcing, and the company is continuously in discussions for additional outsourced labs. In all these situations, we perform benchmarking of the laboratories to ascertain their levels of efficiency and the reasons behind any deficiencies,” says Klooster.
As practiced today, laboratory benchmarking is not an ongoing exercise. Rather, it generally takes the form of a site study that culminates in a report that, depending on the circumstances, could point out that the laboratory is not making the most efficient use of its resources because tests take longer than the normal time required for the method being used. “In such circumstances, the report to the lab’s management will recommend that steps be taken to improve efficiency—and specific recommendations for improvement are then transmitted down to individual managers,” says Klooster.
“This is always a one-time exercise, and it is hardly ever the case that complete benchmarking exercises are done every year or every other year,” he says.
According to Mathew, there are two broad categories of benchmarking. One form, cross-sectional benchmarking, essentially compares one laboratory and its facilities with other labs and the buildings in which they are located. Longitudinal benchmarking, on the other hand, requires the study of one lab facility over time. In the case of energy, this means comparing usage over a number of years and selecting one as the baseline year.
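The two modes lend themselves to two different calculations: a cross-sectional comparison ranks a lab against its peers at one point in time, while a longitudinal comparison tracks the same lab against a chosen baseline year. A toy sketch of both, with made-up EUI figures (kBtu/sq ft/yr):

```python
# Illustrative sketch of the two benchmarking modes, using invented
# EUI figures (kBtu/sq ft/yr) rather than real survey data.

def cross_sectional_rank(my_eui, peer_euis):
    """Fraction of peer labs that use more energy per sq ft than we do."""
    return sum(1 for e in peer_euis if e > my_eui) / len(peer_euis)

def longitudinal_change(eui_by_year, baseline_year):
    """Percent change in EUI for each year relative to the baseline year."""
    base = eui_by_year[baseline_year]
    return {yr: 100.0 * (eui - base) / base for yr, eui in eui_by_year.items()}

# Cross-sectional: where does our lab sit among five hypothetical peers?
peers = [220, 310, 280, 350, 265]
print(cross_sectional_rank(290, peers))  # 0.4: two of five peers use more

# Longitudinal: how has our own EUI moved since the baseline year?
history = {2006: 320, 2007: 305, 2008: 290, 2009: 270}
print(longitudinal_change(history, baseline_year=2006))
```

Both outputs are only as meaningful as the peer set or baseline chosen, which is exactly the normalization problem Mathew raises.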
Turning to the approach used by Intertek to conduct benchmarking, Klooster says the first step is to obtain a variety of information about different aspects of a laboratory, including its financials, a menu of the analyses it performs and details of its IT system, among other information.
“Then there is a series of meetings with site managers on a selected number of areas of concern such as work flow, number of samples handled per unit time and related questions,” says Klooster.
“On the basis of all that information, we organize site visits during which we interview the financial team, the technical staff, the IT team and the maintenance team, among others, depending on the areas of concern we identified,” he says. During these visits, the consulting team also ensures that its members interview some of the most important participants in this process—customers and other key stakeholders such as vendors and suppliers.
“All the different views and positions are then coordinated and compiled into the benchmark report and given to the appropriate decision makers,” he says. “What actions are taken after the benchmarking is conducted depends on the laboratory management that reviews the report.” He notes that a well-prepared report should have some commentary on best practices collected from labs at other similar and different industries, and provide some recommendations on how to streamline organizational roles and improve efficiency. There should be enough information to provide management with a basis to take appropriate action, according to Klooster.
“Still, this is not obligatory, and just represents recommendations on which site management can base their decisions,” Klooster adds.
Mathew agrees. “Benchmarking is really a first step in helping to identify the opportunities. It is not the last step; it helps to point in the right direction and to figure out which areas need to be improved.”
Eight years ago Mathew and his colleagues launched the Labs 21 Energy Benchmarking Tool, based on the recognition that there was no publicly available data on laboratory buildings for benchmarking. At the request of several labs, they started collecting relevant data, and today have more than 200 laboratories in their core data set.
“Today, we can filter the data set for certain key characteristics because the tool has six different filters—lab area ratio (lab area relative to the building area), occupancy, lab type, lab use, climate zone and data type (such as real versus estimated data),” he says.
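Filtering of this kind amounts to selecting a comparable peer set by attributes before comparing energy figures. A toy version of the idea, with invented field names and records (not the actual Labs 21 schema):

```python
# Toy illustration of attribute-based peer filtering. The records and
# field names below are invented for illustration only.

labs = [
    {"name": "Lab 1", "lab_area_ratio": 0.60, "lab_type": "chemistry",
     "climate_zone": "3C", "eui": 310},
    {"name": "Lab 2", "lab_area_ratio": 0.30, "lab_type": "biology",
     "climate_zone": "3C", "eui": 240},
    {"name": "Lab 3", "lab_area_ratio": 0.55, "lab_type": "chemistry",
     "climate_zone": "5A", "eui": 360},
]

def filter_peers(records, **criteria):
    """Keep only records whose fields match every criterion exactly."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# Compare only against chemistry labs in the same climate zone.
chem_3c = filter_peers(labs, lab_type="chemistry", climate_zone="3C")
print([r["name"] for r in chem_3c])
```

Each added filter shrinks the peer set but makes the remaining comparison more like-to-like, the trade-off at the heart of the normalization question raised earlier.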
This is still the only publicly available tool for laboratory energy benchmarking in the United States. “The idea behind the tool is not to set standards but to create a basis for comparisons with other labs. The idea is to let the benchmarking data speak for themselves,” Mathew says.
“After labs discover, for example, that there is scope for improvement, they may go on to benchmark individual systems such as lighting, ventilation, heating and cooling to discern the areas that are deficient and where action is needed.”
Mathew says that many pharmaceutical companies focus sharply on benchmarking. “They are very keen on managing costs and increasing profits, and one of the ways to manage costs is to reduce utilities. So anything that helps them get a grasp on how they are faring in the energy sector is of value to them.
“Interestingly enough, the top five or six big pharmaceutical companies have gotten together and done benchmarking studies on their laboratories—this sends a strong signal that there is growing interest in energy benchmarking,” he adds.
In fact, another major sector in the healthcare field, the clinical laboratories, initiated quality benchmarking measures more than 60 years ago, when about a dozen laboratories in Philadelphia cooperated on a plan to compare their results for hemoglobin testing, which were widely discrepant at the time. Over the years, a number of accreditation standards have emerged for clinical labs, including the Clinical Laboratory Improvement Amendments of 1988 and some of the rules developed as part of the Joint Commission’s patient safety and quality improvement goals.
Meanwhile, a number of city and state agencies are making performance against benchmarked standards mandatory. LEED, an environmental certification system for buildings that promotes improvements across metrics such as energy savings, efficient water use, carbon dioxide emissions and better use of resources, is one of the strongest drivers of benchmarking. Furthermore, a number of state and federal laws are making such efforts mandatory.
For a variety of reasons, Intertek’s Klooster foresees an increase in benchmarking activity, and he notes that in 2009 the requests his company received for benchmarking from labs increased. Globally, he sees quite different patterns in how benchmarking will be approached by U.S., Canadian and European labs compared to those that are located in the rapidly developing Asian countries (the tigers).
“In the Western developed nations, the benchmarking issue will be about efficiency, and the big financial question will be whether to keep the practices of the last 50 years. These will be the key questions for some time, especially now that their economies are in a tough position,” he says.
“The Asian tigers are quite different because they’ve passed all the stages in the economic development cycle much faster than we have. They have different interests—their focus is to grow, grow and grow.”