
## Quantifying uncertainties from a gauge study

**Gauge studies can be used as the basis for uncertainty assessment.** One reason for conducting a gauge study is to quantify uncertainties in the measurement process that would be difficult to quantify under conditions of actual measurement.

This is a reasonable approach to take if the results are truly representative of the measurement process in its working environment. Consideration should be given to all sources of error, particularly those sources of error which do not exhibit themselves in the short-term run.

**Potential problem with this approach.** The potential problem with this approach is that the calculation of uncertainty depends totally on the gauge study. If the measurement process changes its characteristics over time, the standard deviation from the gauge study will not be the correct standard deviation for the uncertainty analysis. One way to try to avoid such a problem is to carry out a gauge study both before and after the measurements that are being characterized for uncertainty. The 'before' and 'after' results should indicate whether or not the measurement process changed in the interim.
**Uncertainty analysis requires information about the specific measurement.** The computation of uncertainty depends on the particular measurement that is of interest. The gauge study gathers the data and estimates standard deviations for sources that contribute to the uncertainty of the measurement result. However, specific formulas are needed to relate these standard deviations to the standard deviation of a measurement result.
**General guidance.** The following sections outline the general approach to uncertainty analysis and give methods for combining the standard deviations into a final uncertainty.
**Type A evaluations of random error.** Data collection methods and analyses of random sources of uncertainty are given in the sections that follow.
**Biases - Rule of thumb.** The approach for biases is to estimate the maximum bias from a gauge study and compute a standard uncertainty from the maximum bias assuming a suitable distribution. The formulas shown below assume a uniform distribution for each bias.
**Determining resolution.** If the resolution of the gauge is $$\delta$$, the standard uncertainty for resolution is $${\large s}_{resolution} = \delta / \sqrt{3}$$.
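As a minimal sketch of this rule, the resolution formula can be applied directly; the function name and the 0.01-unit resolution below are illustrative, not from the handbook:

```python
import math

def u_resolution(delta: float) -> float:
    """Standard uncertainty from gauge resolution delta,
    assuming a uniform distribution: s = delta / sqrt(3)."""
    return delta / math.sqrt(3)

# Hypothetical gauge with a resolution of 0.01 units
print(u_resolution(0.01))  # about 0.00577
```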
**Determining non-linearity.** If the maximum departure from linearity for the gauge has been determined from a gauge study, and it is reasonable to assume that the gauge is equally likely to be engaged at any point within the range tested, the standard uncertainty for linearity is $${\large s}_{linearity} = \mbox{Max} \left| Y_{observed} - Y_{fitted} \right| \, / \sqrt{3}$$.
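A sketch of how $$Y_{fitted}$$ might be obtained and the linearity uncertainty computed, assuming a straight-line fit by ordinary least squares; the calibration points are hypothetical:

```python
import math

def u_linearity(x, y_observed):
    """Fit a straight line by least squares, take the maximum absolute
    residual Max|Y_observed - Y_fitted|, and divide by sqrt(3)
    (uniform distribution over the tested range)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y_observed) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y_observed))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    max_dev = max(abs(yi - (intercept + slope * xi))
                  for xi, yi in zip(x, y_observed))
    return max_dev / math.sqrt(3)

# Hypothetical calibration points: reference values vs. gauge readings
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.02, 1.01, 1.98, 3.03, 3.99]
print(u_linearity(x, y))  # about 0.0162
```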
**Hysteresis.** Hysteresis, as a performance specification, is defined (NCSL RP-12) as the maximum difference between the upscale and downscale readings on the same artifact during a full range traverse in each direction. The standard uncertainty for hysteresis is $${\large s}_{hysteresis} = \mbox{Max} \left| Y_{upscale} - Y_{downscale} \right| \, / \sqrt{3}$$.
**Determining drift.** Drift in direct reading instruments is defined for a specific time interval of interest. The standard uncertainty for drift is $${\large s}_{drift} = \mbox{Max} \left| Y_0 - Y_t \right| \, / \sqrt{3}$$ where $$Y_0$$ and $$Y_t$$ are measurements at time zero and $$t$$, respectively.
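The hysteresis and drift formulas share the same form, a maximum absolute difference between paired readings divided by $$\sqrt{3}$$, so one helper covers both. All readings below are hypothetical:

```python
import math

def u_from_max_difference(a_readings, b_readings):
    """Standard uncertainty from the maximum absolute difference between
    two sets of paired readings, assuming a uniform distribution:
    Max|a - b| / sqrt(3)."""
    max_diff = max(abs(a - b) for a, b in zip(a_readings, b_readings))
    return max_diff / math.sqrt(3)

# Hysteresis: upscale vs. downscale readings on the same artifact
upscale = [10.00, 20.01, 30.02, 39.99]
downscale = [10.03, 20.04, 30.03, 40.00]
s_hysteresis = u_from_max_difference(upscale, downscale)

# Drift: readings at time zero vs. after the time interval of interest
y0 = [5.000, 5.001]
yt = [5.004, 5.003]
s_drift = u_from_max_difference(y0, yt)

print(s_hysteresis, s_drift)
```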
**Other biases.** Other sources of bias are discussed elsewhere in this chapter.
**Case study: Type A uncertainties from a gauge study.** A case study on Type A uncertainty analysis from a gauge study is recommended as a guide for bringing together the principles and elements discussed in this section. The study in question characterizes the uncertainty of resistivity measurements made on silicon wafers.