Understanding Sensor-based Lab and Storage Equipment Measurement Uncertainty
There are countless examples of how failing to protect stored products from environmental effects can compromise the integrity of a research laboratory’s work, making it nearly impossible for other labs to reproduce research findings.
Biomaterials that are readily damaged by poorly controlled temperatures include biologics such as blood or plasma, tissues, cell cultures, and organs. In incubators, refrigerators/freezers, water baths, and controlled rooms, monitoring of temperature first and foremost, along with RH and CO2, is critical to research integrity. In many research laboratories, temperature monitoring is also essential for safety: monitoring liquid nitrogen levels ensures both that samples remain chilled and that gaseous nitrogen is not escaping into the air and asphyxiating research staff.
If lab or storage equipment breaks down (for example, the power goes out and a freezer starts to warm up), a temperature-monitoring system must be in place that alerts lab managers immediately so that problems can be addressed quickly and potential damage minimized. Research laboratories that lack such monitoring systems not only waste specimens that are rare, difficult to obtain, or prohibitively costly; they also risk their reputation for research integrity when other laboratories find themselves unable to reproduce reported findings. The ultimate cost of a problem that is not fixed in a timely fashion is that blow to the organization's reputation. Reliable environmental-monitoring systems are therefore a must for nearly every research facility.
It follows that when a research experiment stipulates that the materials being studied or the research environment (chamber, incubator, etc.) be at a specified temperature, the question becomes how accurate that temperature measurement is. The notion that a temperature can be exactly X is, in the strictest sense, erroneous. In reality, all measurements are subject to uncertainty, and a measured value is complete only if it is accompanied by a statement of the associated uncertainty.
What accuracy is and is not
Accuracy is established through calibration, so that the entire measuring system (sensors and instrument) traces back to a known standard. The standard used to calibrate an instrument should be two to four times more accurate than the instrument being calibrated; best-in-class measuring instruments generally trace back to standards with a 4:1 accuracy ratio. If you calibrate a temperature-measuring device such as a data logger against a crude thermometer with much lower accuracy, the calibration is meaningless.
Similarly, “NIST traceable” is meaningless without knowing the accuracy of the measuring system that was used for calibration. “NIST traceable” simply means something can be traced back to a national standard—it does not begin to suggest an accuracy level. Any calibration measurement can be shown to be NIST traceable if there is a succession of standards that originates with a national standard. Therefore, being NIST traceable does not mean the same thing as being accurate.
ISO 17025 and calibration accuracy
A decade or so ago, it was somewhat difficult for laboratory managers to be sure of the accuracy of the calibration services available for the various metrology instruments in their facilities. ISO 17025 quality standards should make this a non-issue for any laboratory manager who does his or her homework when acquiring instrumentation or calibration services.
Initially introduced by the International Organization for Standardization (ISO) in 1999, the ISO 17025 quality standard was written specifically for calibration facilities, going beyond the ISO 9000 quality standard to compel such laboratories to demonstrate competence (i.e., performance) through documented quality management systems. A2LA (the American Association for Laboratory Accreditation), NVLAP (the National Voluntary Laboratory Accreditation Program), the Laboratory Accreditation Bureau, and similar accrediting bodies certify calibration facilities to the ISO 17025 standard.
ISO 17025 accreditation certificates clearly state the calibrations that a calibration laboratory is certified as capable of performing and stipulate the “best uncertainty” for those calibrations.
“Best uncertainty” is the smallest uncertainty of measurement that a calibration laboratory can achieve within its scope of accreditation when performing more or less routine calibrations of nearly ideal measurement standards on nearly ideal measuring equipment. Best uncertainties represent expanded uncertainties expressed at approximately the 95 percent level of confidence, usually using a coverage factor of k = 2.
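The k = 2 coverage factor mentioned above works by combining the individual standard uncertainties in a budget and then expanding the result. The sketch below shows the arithmetic under the usual assumption of independent contributions; the three budget entries are hypothetical numbers chosen for illustration, not a real laboratory's budget.

```python
import math

def expanded_uncertainty(standard_uncertainties, k=2.0):
    """Combine independent standard uncertainties in quadrature (root sum
    of squares), then apply coverage factor k (k=2 is roughly 95% confidence)."""
    combined = math.sqrt(sum(u ** 2 for u in standard_uncertainties))
    return k * combined

# Illustrative budget in deg C: reference standard, readout resolution,
# and repeatability (hypothetical values).
U = expanded_uncertainty([0.008, 0.003, 0.005])
print(f"Expanded uncertainty (k=2): +/-{U:.3f} deg C")
```

Note that the expanded result (about +/-0.02 deg C here) is what an accreditation certificate would report as a best uncertainty, not the raw combined value.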
Instrument accuracy is not the same as “best uncertainty”
While it is useful to know the “best uncertainty” of a calibration service, it is also quite erroneous to equate measurement uncertainty with the accuracy of your measuring instrument.
For example, a calibration laboratory accredited for RH and temperature calibration could take a $10 dial thermometer with an accuracy of +/- 1°C and, even with an accredited “best uncertainty” of +/- 0.02°C, never overcome the crudeness of the thermometer, which is 50 times less accurate than the lab’s calibration capability. In short, you cannot make a bad device better than it is. A crude dial thermometer, in fact, will often come with a so-called lifetime guarantee. That, too, is meaningless with respect to accuracy.
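The point can be made numerically: when the calibration uncertainty and the instrument's own accuracy are combined (in quadrature, assuming they are independent), the cruder term dominates. This is a minimal sketch using the figures from the example above; the function name is an assumption for illustration.

```python
import math

def total_uncertainty(instrument_u: float, calibration_u: float) -> float:
    """Combine an instrument's accuracy spec with the lab's calibration
    uncertainty, assuming the two contributions are independent."""
    return math.sqrt(instrument_u ** 2 + calibration_u ** 2)

# $10 dial thermometer (+/-1 deg C) calibrated by a lab with a
# +/-0.02 deg C best uncertainty:
dial = total_uncertainty(1.0, 0.02)
print(f"+/-{dial:.3f} deg C")  # essentially +/-1 deg C: the lab's
                               # precision barely moves the result
```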
Maintaining accuracy is of paramount importance
Broadly speaking, there are two main issues when it comes to measurement accuracy. First is the actual accuracy (and measurement uncertainty) of the measuring instrument used, as discussed above. But equally, if not more, important is how that accuracy is maintained over time.
Although it is baffling from the perspective of an engineering organization whose sole focus is developing accurate environmental-monitoring technology, many of the environmental-monitoring instruments used in research laboratories (and indeed, in highly regulated industries such as pharmaceuticals) are sold and used without any statement from the manufacturer of what the instrument’s accuracy will be after some period of time. Frankly, if a measuring device is released for laboratory use without stated accuracies over a predefined time period, lab managers are unwittingly introducing a wild-card factor that could readily undermine research integrity.
A2LA-certified data loggers are calibrated when they are released to market, and the measurement accuracy (and measurement uncertainty) is detailed in the A2LA certificate for that particular instrument, as shown in Figure 1. However, it is an immutable law of metrology that all sensors drift. Humidity sensors are especially prone to drift because they are “air breathers”: they must be in direct contact with the environment. Not only does the air constantly change temperature (which affects RH), but it also contains contaminants that affect sensors. The measuring ability of humidity sensors especially, but of ALL sensors in fact, degrades over time. The question is how you manage and control this inevitable degradation.
What sets apart temperature data loggers that are released to research laboratories WITH a stated accuracy at the time of next calibration (e.g., what the sensor will read one year later, or its “as found” accuracy) is that they use highly stable components, including sensors, along with best-practice calibration methods. This holds no matter which sensor type is used: thermistors, resistance temperature detectors, and especially thermocouples.
Ideally, manufacturers should stipulate the accuracy of measuring instruments such as data loggers over a specified time period (usually out to the recommended recalibration date), and some do. Doing so implies historical knowledge of how those instruments behave when they are recalibrated.
The next time you look at your data, consider how much the values may have deviated from the original measurements. If the instrument manufacturer has not stated accuracy between calibration intervals, you are merely hoping the measured values are correct. Since it is a given that ALL sensors drift over time to some degree, a sensor-based instrument whose drift behavior has not been characterized over time offers at best a presumed accuracy, not a studied and stipulated one.
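A stated "as found" accuracy amounts to projecting worst-case drift forward from the initial calibration. This sketch assumes a simple linear drift model; the drift rate, initial accuracy, and function name are hypothetical illustrations, not any vendor's characterization.

```python
def as_found_accuracy(initial_accuracy: float,
                      drift_per_year: float,
                      years: float) -> float:
    """Worst-case accuracy after `years` in service, assuming the sensor
    drifts linearly at a characterized maximum rate."""
    return initial_accuracy + drift_per_year * years

# Hypothetical logger: +/-0.10 deg C at release, characterized worst-case
# drift of 0.05 deg C per year, one-year recalibration interval.
acc = as_found_accuracy(initial_accuracy=0.10, drift_per_year=0.05, years=1.0)
print(f"Stated accuracy at next calibration: +/-{acc:.2f} deg C")
```

A manufacturer that has actually studied its sensors can publish such a figure; one that has not can only quote the initial accuracy, which is the "wild card" the text warns about.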
Don’t mistake a specification of initial accuracy for how the instrument will perform.
For more discussion of the factors underlying the stability of humidity sensors, please see http://www.veriteq.com/download/whitepaper/catching-the-drift.htm. On differentiating stable vs. unstable temperature sensors, please see http://www.veriteq.com/validation/pay-off-thermal-validation-data-logger.htm.