2. Measurement Process Characterization
2.4. Gauge R & R studies
2.4.5. Analysis of bias


Definition of linearity for gauge studies  Linearity is given a narrow interpretation in this Handbook to indicate that gauge response increases in equal increments to equal increments of stimulus, or, if the gauge is biased, that the bias remains constant throughout the course of the measurement process.  
Data collection and repetitions  A determination of linearity requires \(Q\) \((Q > 4)\) reference standards that cover the range of interest in fairly equal increments and \(J\) \((J > 1)\) measurements on each reference standard. One measurement is made on each of the reference standards, and the process is repeated \(J\) times.  
Plot of the data  A test of linearity starts with a plot of the measured values versus corresponding values of the reference standards to obtain an indication of whether or not the points fall on a straight line with slope equal to 1, which indicates linearity.  
Least-squares estimates of bias and slope  A least-squares fit of the data to the model $$ Y = a + bX + \mbox{measurement error} $$ where \(Y\) is the measurement result and \(X\) is the value of the reference standard, produces an estimate of the intercept, \(a\), and the slope, \(b\).  
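As an illustration (not part of the Handbook), the least-squares fit can be computed directly with NumPy; the reference values and measurements below are hypothetical, with \(Q = 5\) standards and \(J = 3\) repetitions:

```python
import numpy as np

# Hypothetical data: Q = 5 reference standards, J = 3 repetitions each
X = np.repeat([2.0, 4.0, 6.0, 8.0, 10.0], 3)   # reference standard values
Y = np.array([2.1, 2.0, 2.2, 4.1, 4.0, 4.1,
              6.2, 6.1, 6.1, 8.2, 8.1, 8.2,
              10.2, 10.1, 10.2])                # measured values

# Design matrix [1, X] for the model Y = a + b*X + measurement error
A = np.column_stack([np.ones_like(X), X])
(a, b), _, _, _ = np.linalg.lstsq(A, Y, rcond=None)

print(f"intercept a = {a:.4f}, slope b = {b:.4f}")
```

A slope near 1 and an intercept near 0 suggest a linear, unbiased gauge; formal tests of these hypotheses are described below.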
Output from software package  The intercept and slope are estimated using a statistical software package that should provide estimates of the intercept and slope along with their standard deviations and the residual standard deviation of the fit.  

Test for linearity  Tests for the slope and bias are described in the section on instrument calibration. If the slope is significantly different from one, the gauge is nonlinear and requires calibration or repair. If the intercept is significantly different from zero, the gauge has a bias.  
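A minimal sketch (with assumed data, not from the Handbook) of the two test statistics: a t-statistic for the hypothesis slope = 1 and another for intercept = 0, built from the standard errors of the least-squares estimates:

```python
import numpy as np

# Hypothetical data: Q = 5 reference standards, J = 3 repetitions each
X = np.repeat([2.0, 4.0, 6.0, 8.0, 10.0], 3)
Y = np.array([2.1, 2.0, 2.2, 4.1, 4.0, 4.1,
              6.2, 6.1, 6.1, 8.2, 8.1, 8.2,
              10.2, 10.1, 10.2])
n = len(X)

# Ordinary least-squares fit of Y = a + b*X
A = np.column_stack([np.ones_like(X), X])
coef, _, _, _ = np.linalg.lstsq(A, Y, rcond=None)
a, b = coef

# Residual variance with n - 2 degrees of freedom
resid = Y - A @ coef
s2 = resid @ resid / (n - 2)

# Covariance matrix of (a, b); square roots of diagonal give standard errors
cov = s2 * np.linalg.inv(A.T @ A)
se_a, se_b = np.sqrt(np.diag(cov))

t_intercept = a / se_a          # test intercept = 0 (no bias)
t_slope = (b - 1.0) / se_b      # test slope = 1 (linearity)

print(f"t for intercept = {t_intercept:.2f}, t for slope = {t_slope:.2f}")
```

Each statistic is compared with a critical value from the t-distribution with \(n - 2\) degrees of freedom; the critical values themselves are given in the instrument calibration section referenced above.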
Causes of nonlinearity  The reference manual on Measurement Systems Analysis (MSA) lists possible causes of gauge nonlinearity that should be investigated if the gauge shows symptoms of nonlinearity.  


Note on artifact calibration  The requirement of linearity for artifact calibration is not so stringent. Where the gauge is used as a comparator for measuring small differences among test items and reference standards of the same nominal size, as with calibration designs, the only requirement is that the gauge be linear over the small on-scale range needed to measure both the reference standard and the test item.  
Situation where the calibration of the gauge is neglected  Sometimes it is not economically feasible to correct for the calibration of the gauge (Turgel and Vecchia). In this case, the bias that is incurred by neglecting the calibration is estimated as a component of uncertainty. 