The process of calibrating a turbidimeter for low-level measurement, and subsequently verifying that calibration, is very sensitive to user technique and the surrounding environment. As measured turbidity levels drop below 1.0 NTU, interferences caused by bubbles, particulate contamination, and stray light become major factors. Several approaches can be taken to minimize these errors; the two most common for low-level calibration are one-point and two-point calibrations.
After identifying and addressing turbidity interferences, the calibration approach that is best suited for a specific low-level measurement can be identified. This article discusses possible errors and the resulting calibration approach Hach uses for its turbidimeters.
Interferences in Low Turbidity Measurement
Hach Company’s approach to turbidimeter calibration has been very consistent over several generations of turbidimeters. In turbidity science it is well known that the correlation between turbidity and nephelometric light scatter (light scattered by particles at an angle of 90 degrees to the incident light source) is highly linear over the range of 0.012 to 40 NTU. Because of this relationship, an approach that reduces calibration error while extending measurement accuracy to the lowest turbidity levels can be taken.
At very low turbidity levels, measurement errors become very significant. The major source of error is stray light, defined as any light reaching the light scatter (90 degree) detector that is not scattered by the sample. Stray light is always a positive interference. Its sources include dust contamination in the instrument optics, low-quality or damaged sample cells, and instrument electronics. The best way to reduce stray light is to combine robust instrument design and high-quality optics with good laboratory and measurement technique. Hach leads the market in the production of instruments with extremely low stray light levels.
Another significant error can be attributed to bubbles in the sample. Bubbles in most samples are easily eliminated by allowing the sample to stand for a short time (2–5 minutes) before measurement. For process instruments, bubble removal is best accomplished through the use of bubble traps. At low turbidity levels, a 2–5 minute wait will not result in significant particulate settling.
The third error source that must be considered is introduced during the preparation of the calibration standards. Calibration standards can be easily and accurately prepared at higher values (greater than 1 NTU) if close attention is paid to cleanliness, dilution water of high quality (low turbidity) is available, and good analytical technique is applied. By addressing these issues, an analyst can prepare standards to within 2 percent at turbidity levels down to 10 NTU and to within 3 percent at turbidity levels between 10 NTU and 1 NTU. Calibration standards below 1.0 NTU can be prepared if dilution water of the highest quality (ultra-filtered or reverse osmosis) is used to prepare all standards and to clean sample chambers, cells, and glassware. Excellent laboratory technique is necessary to reduce any source of contamination. Unfortunately, high-purity water is often difficult to obtain, and even when all cleanliness issues are addressed, a 0.30 NTU standard typically carries a best-case error of 10 percent.
Hach designs its instruments for higher-level calibration because the accuracy of a prepared 20 NTU standard is typically better than 2 percent. Preparing standards below 1 NTU introduces much greater error, mostly due to residual turbidity in the dilution water. Filtration greatly lowers this interference; if possible, use distilled or deionized water that has been further purified by ultra-filtration or reverse osmosis.
The One-point Calibration Algorithm
The calibration of Hach turbidimeters was designed to reduce or eliminate the interferences presented above. The calibration algorithm is very simple and is based on two readings: a calibration standard and a zero point, which requires no standard. The basis for this approach is discussed below.
• The nephelometric detector response to turbidity is highly linear in the range of 0.012 to 40 NTU. This linearity allows a calibration to be performed using a single standard at any point in this range. For Hach turbidimeters, the 20 NTU calibration point is preferred due to ease of preparation and high accuracy. Even if the dilution water has some turbidity (less than 0.5 NTU), the error contributed to the standard is low. In addition, errors due to instrument stray light are negligible at 20 NTU. Since a calibration performed at 20 NTU has a very low error, measurement error will theoretically also be low when a turbidimeter is calibrated at this level. If the sum of the errors between the standard and the instrument equals 2 percent at 20 NTU, the same accuracy can be extrapolated to low-level measurements.
• The zero point is taken with the instrument light source turned off, but with all other measurement parameters in place. This reading captures any residual light in the optics or environment, which is then effectively removed from all subsequent measurements. It is often referred to as the blank or the dark value. Typically, the zero point is determined when the instrument power is cycled.
• After the zero measurement and standard measurement have been taken, the instrument algorithm draws a straight line between these two points, constructing the linear calibration curve. The only remaining error is from stray light contributed by the instrument lamp. This error becomes a factor at the lowest measurements, typically below 0.05 NTU.
• Most Hach turbidimeters require the measurement of the dilution water (DW) so the turbidity contribution of the dilution water used to prepare the high-end calibration standard(s) can be compensated. Once measured, the value of the DW standard is stored in the software of the turbidimeter. The value is then subtracted from the measured value of the other calibration standards. By subtracting this value from the measured value of the calibration standards, the absolute value of each calibration standard is calculated and the accuracy of the high-end calibration points is maximized.
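The steps above can be sketched in a few lines. This is an illustrative model only, not Hach firmware: the function names, the count values, and the assumption that the detector reports linear "counts" are all hypothetical.

```python
def one_point_calibration(dark, dw_reading, std_reading, nominal_ntu=20.0):
    """Return a counts-per-NTU slope from a lamp-off zero point, a
    dilution water reading, and one high-level standard reading."""
    # Remove the lamp-off (dark) level from both lit readings, then
    # subtract the dilution water's contribution so that only the
    # formazin signal remains for the nominal 20 NTU standard.
    net_standard = (std_reading - dark) - (dw_reading - dark)
    return net_standard / nominal_ntu

def measure(raw_counts, dark, slope):
    """Convert a raw sample reading to NTU on the linear curve."""
    return (raw_counts - dark) / slope

# Hypothetical readings: dark = 50 counts, dilution water = 90 counts,
# prepared 20 NTU standard = 4050 counts.
slope = one_point_calibration(50, 90, 4050)   # 198 counts per NTU
low_sample = measure(446, 50, slope)          # reads 2.0 NTU
```

The zero (dark) reading anchors the low end of the line without requiring a low-level standard, which is the point of the one-point approach.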
The one-point calibration algorithm addresses all the interferences noted above except for stray light observed when the instrument light source is on. This interference is mitigated by designing the instrument to produce highly collimated light and to eliminate light exposure to all surfaces inside the instrument optics except the sample cell. This is exactly what Hach does. With high-quality components coupled with strict design standards, Hach turbidimeters bring incident lamp stray light to very low levels (<0.025 NTU). Only at the lowest turbidity levels may a minute portion of stray light be observed, and it is always a positive interference.
Using a high-level standard for calibration adequately addresses the interferences that would result from the use of a low-level standard. The accuracy of the instrument remains consistent with the accuracy of the standard throughout the range of linearity (0.012 to 40 NTU). This means that if a 2 percent error is made at 20 NTU, the measurement error will be 2 percent anywhere in the range of linearity. However, if a low-level standard were used, and its accuracy were 10 percent (which is typical), the accuracy of measurements over the range of linearity would also be 10 percent. A high-level standard that can be prepared accurately therefore yields the highest accuracy for low-level measurements.
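The extrapolation argument can be checked with simple arithmetic. The sketch below uses a hypothetical detector slope; it shows that a fixed percent error in the calibration slope of a zero-intercept linear curve produces the same percent error at every turbidity level.

```python
true_slope = 200.0                 # hypothetical detector response, counts per NTU
miscal_slope = true_slope * 1.02   # slope recorded with a 2 percent calibration error

relative_errors = []
for true_ntu in (0.05, 0.3, 5.0, 40.0):
    raw = true_ntu * true_slope          # counts the detector would produce
    reported = raw / miscal_slope        # NTU the miscalibrated meter reports
    relative_errors.append((reported - true_ntu) / true_ntu)

# The relative error is identical (about -1.96 percent) at every level.
```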
The Two-point Calibration Algorithm
The two-point calibration algorithm is a common approach to calibration of turbidimeters. This method uses two calibration standards that are typically set at the extreme ends of the measurement range. Both standards must be defined, one with a high turbidity value and one with a low (non-zero) turbidity value. The algorithm draws a straight line through the two calibration points and then extrapolates the resultant slope of this line over the entire linear range of 0.012 to 40 NTU.
In theory, this approach should address the low-level interferences mentioned above and compensate for stray light, regardless of the level. Extremely high-quality instrument components and design become less important because stray light is accounted for in the calibration. Calibration standard preparation is therefore the key to accuracy, because the errors introduced by poorly prepared standards can be substantial. The difficulty of preparing and properly measuring the low-level calibration standard is the major concern with the two-point method: as discussed above, it is nearly impossible to prepare a low turbidity standard accurately.
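A minimal sketch of the two-point scheme, using the same hypothetical linear-counts model for illustration (not an actual Hach algorithm):

```python
def two_point_calibration(low_reading, low_ntu, high_reading, high_ntu):
    """Fit the line through two (NTU, counts) calibration points."""
    slope = (high_reading - low_reading) / (high_ntu - low_ntu)  # counts per NTU
    offset = low_reading - slope * low_ntu                       # counts at 0 NTU
    return slope, offset

def measure(raw_counts, slope, offset):
    """Invert the calibration line to report NTU for a raw reading."""
    return (raw_counts - offset) / slope

# Hypothetical detector: 10 counts of stray light plus 200 counts per NTU,
# with accurately valued 0.30 NTU and 20.0 NTU standards.
slope, offset = two_point_calibration(70, 0.30, 4010, 20.0)
sample_ntu = measure(110, slope, offset)   # 0.5 NTU
```

Because the fitted offset absorbs the stray-light counts, the method compensates for stray light, provided both standard values are accurate.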
Hach turbidimeters do not typically use the two-point calibration method because of the high risk of creating false negative measurement error. A false negative error means that a measured reading is lower than the true value of the sample. In turbidity measurement, a false negative could leave water treatment plant personnel thinking they are producing low turbidity water when, in reality, the turbidity of the water they are producing is higher than is being reported. It is very easy to create a false negative condition if a calibration standard's actual value is higher than its reported value. When calibrating with a low-level standard at the low end of a two-point calibration, the preparation error is usually positive (i.e., the actual value of the standard is higher than the reported value). This positive error can be attributed to not accounting for the turbidity of the dilution water used in the standard preparation. When such a standard is used for calibration, subsequent measurements typically carry a false negative bias. Many instrument manufacturers attempt to use lower quality components as a cost-saving measure; doing so increases the importance of accurately prepared standards that account for all turbidity introduced during preparation.
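The false negative mechanism can be illustrated numerically. All values below are hypothetical: a detector producing 200 counts per NTU with 10 counts of stray light, and a low standard whose true turbidity is 0.35 NTU because 0.05 NTU of dilution water turbidity was not accounted for when it was labeled 0.30 NTU.

```python
STRAY = 10.0        # counts of lamp stray light (hypothetical)
SLOPE_TRUE = 200.0  # true detector response, counts per NTU (hypothetical)

def detector_counts(ntu):
    """What the detector actually reads for a sample of the given turbidity."""
    return STRAY + SLOPE_TRUE * ntu

# The "0.30 NTU" standard really has 0.35 NTU of turbidity, but 0.30 is
# the value entered during the two-point calibration.
low_label, low_true = 0.30, 0.35
high_label = high_true = 20.0   # high standard prepared accurately

cal_slope = (detector_counts(high_true) - detector_counts(low_true)) / (high_label - low_label)
cal_offset = detector_counts(low_true) - cal_slope * low_label

sample_true = 0.50
reported = (detector_counts(sample_true) - cal_offset) / cal_slope
# reported is about 0.45 NTU: the meter under-reads a 0.50 NTU sample.
```

Even a 0.05 NTU error in the low standard biases every low-level reading downward, which is exactly the false negative condition described above.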
The Use of StablCal® for Calibrations
As mentioned above, when formazin standards are prepared, the turbidity of the standard is the sum of the turbidity of the dilution water and the turbidity associated with the formazin polymer. Since the turbidity of the dilution water may not be known, it must be measured on the turbidimeter so it can then be subtracted from the measured turbidity of the prepared standard. This subtraction yields a highly accurate standard value for use in the calibration.
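The compensation amounts to simple subtraction; the numbers below are hypothetical.

```python
dilution_water_ntu = 0.08   # dilution water turbidity, measured on the instrument
prepared_reading = 20.08    # instrument reading for the freshly prepared "20 NTU" standard

# Subtract the dilution water contribution to recover the formazin-only value.
formazin_only = prepared_reading - dilution_water_ntu
# formazin_only is 20.0 NTU, the value assigned to the standard in the calibration.
```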
However, when StablCal® standards are prepared at the factory, the dilution water value is accounted for and the standard is adjusted to the theoretical value prior to packaging. Thus, there is no need for the instrument to measure the value of the dilution water when StablCal® standards are being used for calibration. Newer Hach instrumentation has modified calibration curves that do not require the measurement of the dilution water during the calibration. These include any process instruments that are on the SC platform and the 2100Q and 2100Qis turbidimeters. These instruments have eliminated the steps associated with this measurement when a calibration curve that references StablCal is selected.
StablCal calibration kits for Hach laboratory turbidimeter models 2100AN, 2100AN IS, 2100N, and 2100N IS, and for the 2100P and 2100P IS portable turbidimeters, have been modified to accommodate the existing algorithms of these models. These instruments have existing calibration software that still requires the measurement of the dilution water as part of the calibration procedure. To accommodate this step, these kits include a <0.1 NTU StablCal solution that is used in the dilution water measurement step. This <0.1 NTU solution introduces a very small error to the calibration curve, less than 0.2 percent, which still allows the instrument to perform within its published specifications.
The best calibration approach should minimize all errors that contribute to low-level turbidity measurement. The prominent instrument error at low levels is stray light, which is best minimized through instrument design. Errors associated with calibration standards can be reduced through the use of accurately prepared higher-level standards. Hach turbidimeters successfully use the one-point calibration method by combining high-quality instrumentation with highly accurate standards. Interferences are minimized, and low-level measurements can be made accurately and consistently for optimal turbidity measurement. Regardless of which calibration method is used, the instrument's performance must be verified after calibration using an independent verification standard. Verification standards must be accurately assayed, with defined error tolerances. Examples of standards that meet these criteria include StablCal® Stabilized Formazin Certified Standards and dry verification standards with defined accuracy tolerances. It is through verification that end-users can gain confidence that their instruments are optimized for performance.