2. Measurement Process Characterization
2.3. Calibration
2.3.3. What are calibration designs?
2.3.3.3. Uncertainties of calibrated values
Change over time | Type A evaluations for calibration processes must take into account changes in the measurement process that occur over time.
Historically, uncertainties considered only instrument imprecision | Historically, computations of uncertainties for calibrated values have treated the precision of the comparator instrument as the primary source of random uncertainty in the result. However, as the precision of instrumentation has improved, effects of other sources of variability have begun to show themselves in measurement processes. This is not universally true, but for many processes, instrument imprecision (short-term variability) cannot explain all the variation in the process.
Effects of environmental changes | Effects of humidity, temperature, and other environmental conditions which cannot be closely controlled or corrected must be considered. These tend to exhibit themselves over time, say, as between-day effects. The discussion of between-day (level-2) effects relating to gauge studies carries over to the calibration setting, but the computations are not as straightforward.
Assumptions which are specific to this section | The computations in this section depend on specific assumptions:
These assumptions have proved useful but may need to be expanded in the future | These assumptions have proved useful for characterizing high precision measurement processes, but more complicated models may eventually be needed which take the relative magnitudes of the test items into account. For example, in mass calibration, a 100 g weight can be compared with a summation of 50 g, 30 g, and 20 g weights in a single measurement. A sophisticated model might consider the size of the effect as relative to the nominal masses or volumes.
Example of the two models for a design for calibrating a test item using one reference standard | To contrast the simple model with the more complicated model, consider a measurement of the difference between X, the test item with unknown and yet to be determined value X*, and a reference standard R with known value R*, along with the reverse measurement, as shown below.
Model (1) takes into account only instrument imprecision, so that:

(1) \begin{eqnarray} Y_1 = X - R + error_1 \\ Y_2 = R - X + error_2 \end{eqnarray}

where the error terms are random errors arising from the imprecision of the measuring instrument. Model (2) allows for both instrument imprecision and level-2 effects, such that:

(2) \begin{eqnarray} Y_1 = (X + \Delta_X) - (R + \Delta_R) + error_1 \\ Y_2 = (R + \Delta_R) - (X + \Delta_X) + error_2 \end{eqnarray}

where the delta terms account for small changes in the values of the artifacts that occur over time. For both models, the value of the test item is estimated as $$ \widehat{Test} = X^* = \frac{1}{2} (Y_1 - Y_2) + R^* $$
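As a worked sketch of the estimator above, a short Python function computes \( X^* = \frac{1}{2}(Y_1 - Y_2) + R^* \); the numeric values in the example are hypothetical, chosen only for illustration.

```python
def estimate_test_value(y1, y2, r_star):
    """Estimate the test item's value from the two difference
    measurements of the design: X* = (1/2)(Y1 - Y2) + R*."""
    return 0.5 * (y1 - y2) + r_star

# Hypothetical example: a test weight 2 micrograms (0.000002 g) heavier
# than a 100 g reference standard, with the two difference measurements
# taken without error:  Y1 = X - R = +2 ug,  Y2 = R - X = -2 ug.
x_star = estimate_test_value(0.000002, -0.000002, 100.0)
print(x_star)
```

With error-free measurements the estimator recovers the test item's value exactly; in practice the two difference readings average out part of the instrument error, which is where the \( \sqrt{2} \) reduction in the next paragraph comes from.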
Standard deviations from both models | For model (1), the standard deviation of the test item is $$ {\large s}_{test} = \frac{{\large s}_1}{\sqrt{2}} \, .$$ For model (2), the estimate \( \frac{1}{2}(Y_1 - Y_2) + R^* \) averages the instrument errors but retains the offsets \( \Delta_X \) and \( \Delta_R \) at full weight, so with independent offsets, each with standard deviation \( {\large s}_2 \), the standard deviation of the test item is $$ {\large s}_{test} = \sqrt{\frac{{\large s}_1^2}{2} + 2 {\large s}_2^2} \, . $$
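These two standard deviations can be encoded directly. A minimal sketch follows; the function names and the illustrative values of \( s_1 \) and \( s_2 \) are assumptions, not part of the handbook. The level-2 term enters as \( 2 s_2^2 \) because, under model (2), the estimator carries the independent offsets \( \Delta_X \) and \( -\Delta_R \) each at full weight.

```python
import math

def s_test_model1(s1):
    # Model (1): only instrument imprecision, reduced by sqrt(2)
    # from averaging the two difference measurements.
    return s1 / math.sqrt(2)

def s_test_model2(s1, s2):
    # Model (2): instrument imprecision plus independent level-2
    # offsets Delta_X and Delta_R, each with standard deviation s2.
    return math.sqrt(s1**2 / 2 + 2 * s2**2)

# Illustrative (assumed) values, e.g. in milligrams:
print(s_test_model1(0.010))          # instrument-only uncertainty
print(s_test_model2(0.010, 0.020))   # level-2 effects included
```

With \( s_2 = 0 \) the two models agree; as \( s_2 \) grows it quickly dominates the total.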
Note on relative contributions of both components to uncertainty | In both cases, \( {\large s}_1 \) is the repeatability standard deviation that describes the precision of the instrument and \( {\large s}_2 \) is the level-2 standard deviation that describes day-to-day changes. One thing to notice in the standard deviation for the test item is the contribution of \( {\large s}_2 \) relative to the total uncertainty. If \( {\large s}_2 \) is large relative to \( {\large s}_1 \), or dominates, the uncertainty will not be appreciably reduced by adding measurements to the calibration design.
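This closing point can be illustrated with a small Monte Carlo sketch of model (2); all numeric values (the standard deviations, seed, and replication counts) are assumed for illustration. Replicating the within-day difference measurements shrinks only the \( {\large s}_1 \) term, so when \( {\large s}_2 \) dominates, the overall standard deviation barely moves.

```python
import math
import random

S1, S2 = 0.5, 2.0   # assumed level-1 (instrument) and level-2 (day) sds

def simulated_s_test(n_repeats, n_days=20000, seed=1):
    """Monte Carlo standard deviation of X* - X under model (2),
    with each of the two difference measurements repeated n_repeats
    times within a day and averaged (R* taken as exact)."""
    rng = random.Random(seed)
    errs = []
    for _ in range(n_days):
        d_x = rng.gauss(0, S2)   # the day's offset of the test item
        d_r = rng.gauss(0, S2)   # the day's offset of the reference
        e1 = sum(rng.gauss(0, S1) for _ in range(n_repeats)) / n_repeats
        e2 = sum(rng.gauss(0, S1) for _ in range(n_repeats)) / n_repeats
        errs.append(d_x - d_r + 0.5 * (e1 - e2))
    m = sum(errs) / len(errs)
    return math.sqrt(sum((e - m) ** 2 for e in errs) / (len(errs) - 1))

# Tenfold replication of the instrument readings leaves the standard
# deviation nearly unchanged, because the level-2 term dominates.
print(simulated_s_test(1), simulated_s_test(10))
```

With these assumed values the level-2 contribution swamps the repeatability term, so both simulated standard deviations come out close to each other; only reducing the day-to-day variability itself would tighten the calibrated value.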