
Managing Quality

The lab manager's role in building a robust, reliable analytical quality system

by Wayne Collins

It can be said that a laboratory’s reputation must be like Caesar’s wife—beyond reproach. One of the laboratory manager’s top priorities must be to ensure that testing quality is handled impeccably and honestly to safeguard this reputation. Responsibility for the quality of work coming out of the lab rests squarely on the lab manager’s shoulders—tasks can be delegated but, in the end, the buck stops with the manager. So how can a manager be sure that the results coming out of the lab are correct when business responsibilities leave so little time to oversee the science? The answer, of course, is to build systems to protect the quality and to monitor measures of system performance in order to react quickly to any indication of a failure.

Analytical quality is ultimately defined by the client, whether internal or external to the organization. The basic expectations that can reasonably be applied to any lab are listed in Figure 1. The manager has several options as to how to meet these expectations, but all systems have certain elements in common—a robust calibration program, well-defined methods, well-trained analysts, and so forth. The elements are typically defined within the framework of a quality assurance program that is meant to fulfill management responsibility for the quality of the lab’s outputs, assure analysts of the quality of their work, inform clients of the quality of data, inspire confidence in the lab’s results, provide documentation for present and future use, and protect the lab’s interests.


Figure 1. Customer's Expectations for Laboratory Quality

- Analytical measurements should be made to satisfy an agreed requirement.
- Analytical measurements should be made using methods and equipment that have been tested to ensure they are fit for purpose.
- Staff making analytical measurements should be qualified and competent and able to demonstrate that they can perform the analysis properly.
- There should be a regular independent assessment of the technical performance of the laboratory.
- Analytical measurements made in one location should be consistent with those made elsewhere.
- Laboratories should have well-defined quality control and quality assurance procedures.
- Laboratories should use validated methods.

The first step in establishing an analytical quality assurance program is so basic that it is sometimes overlooked by the laboratory—it is to simply define what quality means for the particular test. Quality is not a universal concept but is a relative determination based on the requirements of the end user of the results. Test quality always includes two aspects: qualitative identification beyond a reasonable doubt and numerical accuracy. But the specifics for each test are determined by the intended use of the result.

For each test, if sensitivity, consistency, and uncertainty are adequate compared to end-use requirements, then quality is acceptable; however, what is considered high quality in one situation could be unacceptable in another.

For example, a measurement at the parts-per-million level for a client who simply needs a result to the nearest percent represents very high testing quality, although it is perhaps not the best choice if it adds extra costs for the client. If the same parts-per-million measurement were made for a client that required parts-per-billion results, the testing quality would be considered low, since it would not meet the client’s expectations.

Many factors determine the correctness and reliability of the tests and/or calibrations performed by a laboratory. These include contributions from human factors, accommodation and environmental conditions, test and calibration methods and method validation, equipment, measurement traceability, sampling, and the handling of test and calibration items. The extent to which the factors contribute to the total uncertainty of measurement differs considerably between types of tests and between types of calibrations.
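Once the individual contributions have been estimated as standard uncertainties, they are commonly combined by root-sum-of-squares into a total measurement uncertainty, assuming the contributions are uncorrelated. The sketch below illustrates that arithmetic with purely illustrative component values and names.

```python
# Sketch of combining independent uncertainty contributions into a total
# (root-sum-of-squares of standard uncertainties). The component names and
# values are illustrative only, not taken from the article.
import math

contributions = {
    "method precision":         0.8,   # standard uncertainties in result units
    "calibration/traceability": 0.5,
    "sampling":                 1.2,
    "environment":              0.3,
}

combined = math.sqrt(sum(u**2 for u in contributions.values()))
expanded = 2 * combined          # coverage factor k = 2 (~95 % confidence)

print(f"Combined standard uncertainty: {combined:.2f}")
print(f"Expanded uncertainty (k=2):    {expanded:.2f}")
```

This makes the point in the paragraph above concrete: the largest single contribution (here, sampling) dominates the total, so effort spent reducing minor contributions yields little improvement.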

The fundamental premise of analytical quality assurance is that measurement can be established as a process that can be brought to a state of statistical control, with a characteristic precision and accuracy that can be assigned to the data output. The basic requirements for applying statistics are that the measurement system is stable, individual measurements are independent of one another, and individual measurements are random representatives of the general population of data. Unfortunately, it is nearly impossible to confirm that these conditions are met, so the solution is to look for evidence of nonconformance. This is typically done through a statistical process control scheme in which a well-characterized reference material is routinely analyzed over time, with the result plotted on a control chart. By analyzing the chart against a specified set of rules each time a new result is entered, nonconforming results are easily identified and corrective action can be taken if needed. Specific details of such a system have previously been described.1
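As an illustration, the sketch below shows how such a scheme might flag a nonconforming reference-material result, assuming a baseline run is used to set the limits. The two rules shown (a point beyond 3-sigma limits and a run of consecutive points on one side of the center line) follow common Shewhart-chart practice and are not necessarily those of the referenced system.

```python
# Minimal control-chart sketch for a routinely analyzed reference material.
# Limits come from an initial baseline run; two common rules are checked.
import numpy as np

def control_limits(baseline):
    """Estimate center line and 3-sigma limits from a baseline run."""
    mean = np.mean(baseline)
    sd = np.std(baseline, ddof=1)
    return mean, mean - 3 * sd, mean + 3 * sd

def check_new_result(x, history, mean, lcl, ucl, run_length=9):
    """Flag a new reference-material result against two simple rules."""
    if x < lcl or x > ucl:
        return "out of control: beyond 3-sigma limits"
    recent = history[-(run_length - 1):] + [x]
    if len(recent) == run_length and (all(v > mean for v in recent)
                                      or all(v < mean for v in recent)):
        return f"out of control: run of {run_length} points on one side of center"
    return "in control"

# Example: 20 baseline measurements of the reference material, then a new result
baseline = [10.02, 9.98, 10.01, 10.00, 9.97, 10.03, 9.99, 10.01, 10.02, 9.98,
            10.00, 10.01, 9.99, 10.02, 9.97, 10.00, 10.03, 9.98, 10.01, 10.00]
mean, lcl, ucl = control_limits(baseline)
print(check_new_result(10.12, baseline, mean, lcl, ucl))
```

A result that trips either rule prompts investigation and corrective action before further sample results are reported.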

Properly validated procedures are another essential element in managing the quality of laboratory testing. Nearly all labs have implemented a system of controlled written methods that specify exactly how each test is to be performed; these may be backed by a policy requiring all analysts to follow these procedures exactly (without any deviation). While this system might be sufficient to ensure consistency, it might still hide a weakness if the validation of the procedures was not performed properly.

Method validation is the process of verifying that a procedure is fit for its purpose, i.e., for solving a particular analytical problem. It establishes performance characteristics and limitations, identifies influences that might change those characteristics, and determines the extent of the changes those influences can produce. Thus, the demonstration of scientific validity under a given set of circumstances that is the focus of most method development is a necessary but not sufficient condition—it must also be shown that the method is reliable and appropriate for all circumstances relevant to the particular purpose for which it was developed. Analysts charged with method development for their own internal use sometimes fail to maintain the rigor necessary to complete the full validation of the method due to time constraints. Since method development is typically included in an analyst’s performance objectives, lab managers can ensure that appropriate rigor is achieved by reviewing each element of the validation process with the analyst during periodic evaluation sessions.
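As a minimal sketch of what part of that validation evidence can look like, the example below computes two common performance characteristics, repeatability (%RSD) and recovery, from hypothetical spiked replicates. The acceptance limits shown are illustrative only and would come from the lab's own fitness-for-purpose requirements.

```python
# Sketch of two common validation checks (repeatability and recovery) on
# spiked replicate data. The data and acceptance limits are illustrative.
import statistics

spike_level = 5.00                                   # known amount added (e.g., mg/L)
replicates = [4.91, 5.07, 4.98, 5.03, 4.95, 5.02]    # hypothetical replicate results

mean = statistics.mean(replicates)
rsd_percent = 100 * statistics.stdev(replicates) / mean
recovery_percent = 100 * mean / spike_level

print(f"Repeatability (%RSD): {rsd_percent:.2f}")
print(f"Recovery (%):         {recovery_percent:.1f}")
print("Repeatability OK" if rsd_percent <= 5 else "Repeatability fails")
print("Recovery OK" if 95 <= recovery_percent <= 105 else "Recovery fails")
```

A full validation would extend the same treatment to the other characteristics mentioned above (selectivity, working range, detection limits, robustness, and so on) across all relevant sample types.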

A calibration program is a primary element of any laboratory quality plan. Calibration is defined as the process of establishing how the response of a measurement process varies with respect to the parameter being measured. The usual way to perform calibration is to subject known amounts of the parameter (e.g., using a measurement standard or reference material) to the measurement process and then to monitor the measurement response. The two major aims of calibration are to establish a mathematical function that describes the dependency of the system’s parameter (e.g., concentration) on the measured value and to gain statistical information for the analytical system (e.g., sensitivity, precision). Calibration methods typically define acceptable tolerances and give instructions on how to make the instrument adjustments, but they often neglect to describe how the data are to be treated or to define the rules for when an adjustment should be made. When a calibration standard is measured and the instrument is adjusted, it is rare that the measurement is centered on the exact value of the calibration standard.
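As a simple illustration of the calibration function itself, the sketch below fits a straight line to hypothetical standard measurements, reports the slope as the sensitivity and the residual scatter as a rough precision estimate, and then inverts the function to convert a sample response into a concentration. The data and the assumption of a linear response are illustrative only.

```python
# Minimal sketch of a linear calibration: fit response vs. concentration for
# a set of standards, then report slope (sensitivity) and residual scatter
# (a simple precision estimate). Values are illustrative.
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])            # standard concentrations
response = np.array([0.02, 0.98, 2.05, 4.95, 10.10])   # instrument responses

slope, intercept = np.polyfit(conc, response, 1)
residuals = response - (slope * conc + intercept)
residual_sd = np.std(residuals, ddof=2)                 # 2 fitted parameters

print(f"Sensitivity (slope): {slope:.4f}")
print(f"Intercept:           {intercept:.4f}")
print(f"Residual std. dev.:  {residual_sd:.4f}")

# Invert the calibration function to convert a sample response to concentration
sample_response = 3.40
sample_conc = (sample_response - intercept) / slope
print(f"Sample concentration: {sample_conc:.3f}")
```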

Chasing exact agreement with the standard by continuing to adjust the instrument is an exercise in futility. Surprisingly, many analysts fail to grasp this concept and continue to “tweak” the instrument each time they measure the standard, even if the measurement is within the tolerance range. This actually increases the error, due to an effect known as “overcontrol”—as illustrated in Figure 2. The preferred way to manage calibrations is to institute the same type of statistical process control system previously described for monitoring test quality. The rules of the system then dictate when to make an adjustment to the instrument rather than relying on the analyst’s judgment.

Figure 2. Illustration of increased variability in measurement due to overcontrol
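The effect shown in Figure 2 can be reproduced numerically. The sketch below simulates a stable instrument that is "tweaked" by the full observed deviation after every standard measurement and compares the resulting spread with leaving the instrument alone (the same behavior illustrated by Deming's funnel experiment). The numbers are illustrative.

```python
# Simulation sketch of overcontrol: compensating for the full deviation after
# every standard measurement roughly doubles the variance compared with
# leaving a stable instrument alone. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
target, sigma, n = 100.0, 1.0, 10000

# No adjustment: a stable instrument with random measurement noise
no_adjust = target + rng.normal(0, sigma, n)

# Overcontrol: adjust the instrument by the full observed deviation each time
adjusted = np.empty(n)
offset = 0.0
for i in range(n):
    reading = target + offset + rng.normal(0, sigma)
    adjusted[i] = reading
    offset -= reading - target     # "tweak" to cancel the apparent error

print(f"Std. dev., no adjustment: {np.std(no_adjust):.3f}")
print(f"Std. dev., overcontrol:   {np.std(adjusted):.3f}")   # ~ sqrt(2) larger
```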

When the calibration standard measurement is charted, limits are based on actual instrument precision rather than on arbitrary tolerances, the graphical presentation reveals potential problems not easily discoverable by other techniques, and the defined rules for when to make an adjustment eliminate the excess error introduced by overcontrol. If the standard is measured perhaps 20 to 30 times while making no adjustments to the instrument, the data can be used to calculate the average measured value, which can then be compared with the specified value for the standard. The difference between these values is the bias, which should then be applied as a correction to every subsequent measurement; however, it is not uncommon for labs to introduce errors by failing to consider this calibration bias correction when reporting results. Ideally, the bias correction should be applied to measurements of the calibration standard prior to charting as well as in reporting sample results.
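A minimal sketch of that bias-correction step, assuming roughly 20 unadjusted measurements of a standard with a certified value, might look like this:

```python
# Sketch of the bias-correction step described above: estimate the bias from
# repeated measurements of the calibration standard (no adjustments made),
# then apply it to subsequent results. All values are illustrative.
import statistics

certified_value = 50.00                        # specified value of the standard
standard_runs = [50.21, 50.18, 50.25, 50.19, 50.22, 50.17, 50.24, 50.20,
                 50.23, 50.19, 50.21, 50.18, 50.26, 50.20, 50.22, 50.19,
                 50.24, 50.21, 50.18, 50.23]   # ~20 unadjusted measurements

bias = statistics.mean(standard_runs) - certified_value

def report(measured):
    """Apply the calibration bias correction before reporting a result."""
    return measured - bias

print(f"Estimated bias: {bias:+.3f}")
print(f"Raw result 75.40 reported as {report(75.40):.2f}")
```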

The next element in managing test quality is participation in an organized proficiency testing program to demonstrate that measurements made in the lab are in agreement with measurements made by the majority of labs performing the same test. While labs can collect statistical data to determine the precision of their tests, proficiency testing adds the extra dimension of accuracy to help detect and repair any unacceptably large inaccuracy in their reported results. The process consists of many labs measuring samples drawn from the same population using the identical method and reporting results to the organizer, who evaluates the data using statistical tests.

Most schemes convert the participant’s result into a “z-score” that reflects two separate features—the actual accuracy achieved (i.e., the difference between the participant’s result and the accepted true value) and the organizer’s judgment of what degree of accuracy is fit for purpose. While proficiency testing serves a vital purpose within the lab’s quality management program, its limitations must also be recognized. It cannot be used as a substitute for routine internal quality control; it is not a means of training individual analysts or a way of validating analytical methods; it does not provide any diagnostics to help solve testing problems; and success in a proficiency test for one analyte does not indicate that a laboratory is equally competent in determining an unrelated analyte.
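For reference, the usual z-score calculation is the deviation of the lab's result from the assigned value divided by the organizer's fitness-for-purpose standard deviation. The sketch below shows that arithmetic with hypothetical values and the commonly used interpretation bands (an absolute z of 2 or less read as satisfactory, 3 or more as unsatisfactory), which can vary between schemes.

```python
# Sketch of the usual proficiency-testing z-score: deviation from the assigned
# value scaled by the organizer's fitness-for-purpose standard deviation
# (sigma_p). Interpretation bands are the common convention and may differ
# between schemes; all numbers here are hypothetical.
def z_score(result, assigned_value, sigma_p):
    return (result - assigned_value) / sigma_p

# Hypothetical round: assigned value 12.0, fit-for-purpose sigma 0.5
for lab_result in (12.3, 13.4, 10.2):
    z = z_score(lab_result, 12.0, 0.5)
    if abs(z) <= 2:
        verdict = "satisfactory"
    elif abs(z) < 3:
        verdict = "questionable"
    else:
        verdict = "unsatisfactory"
    print(f"result {lab_result}: z = {z:+.2f} ({verdict})")
```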

The product of laboratories is data—measurements that can be applied in some manner to solve a problem, build a new product, control a process, or otherwise contribute value toward the objectives of the client. The unimpeachable integrity of these data increases the lab’s value by allowing the client to proceed toward its objectives with confidence while enhancing the lab’s stature. The lab manager’s time in building a robust, reliable analytical quality system is well spent.