Measurement Process Characterization
2.1.1. What are the issues for characterization?
Definition of Accuracy and Bias

Accuracy is a qualitative term referring to whether there is agreement between a measurement made on an object and its true (target or reference) value. Bias is a quantitative term describing the difference between the average of measurements made on the same object and its true value. In particular, for a measurement laboratory, bias is the difference (generally unknown) between the laboratory's average value (over time) for a test item and the average that would be achieved by the reference laboratory if it undertook the same measurements on the same test item.
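The definition above can be illustrated with a minimal sketch: the bias estimate is simply the laboratory's average over repeated measurements minus the reference value. The numbers below are made up for illustration.

```python
# Hypothetical repeated measurements of one test item (units arbitrary);
# the reference value would be supplied by a reference laboratory.
measurements = [10.13, 10.11, 10.15, 10.12, 10.14]
reference_value = 10.05

# Bias = laboratory average minus true (reference) value.
lab_average = sum(measurements) / len(measurements)
bias = lab_average - reference_value

print(f"laboratory average: {lab_average:.3f}")  # 10.130
print(f"estimated bias:     {bias:.3f}")         # 0.080
```

In practice the reference value is itself uncertain, so a bias estimate of this kind is only as good as the reference it is compared against.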
Depiction of biased and unbiased measurements (figure omitted)
Identification of bias

Bias in a measurement process can be identified by calibration of standards and/or instruments against a reference laboratory and by comparisons with artifacts or instruments circulated for that purpose.
Reduction of bias
Bias can be eliminated or reduced by calibration of standards
and/or instruments. Because of costs and time constraints,
the majority of calibrations are performed by secondary or
tertiary laboratories and are related to the reference base
via a chain of intercomparisons that start at the reference laboratory.
Bias can also be reduced by corrections to in-house measurements based on comparisons with artifacts or instruments circulated for that purpose (reference materials).
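A correction of this kind amounts to subtracting the bias estimated from comparisons with the circulated artifact. The sketch below uses made-up numbers; the function name and values are illustrative, not part of the source.

```python
def correct_for_bias(raw_value, bias_estimate):
    """Subtract the estimated laboratory bias from a raw in-house measurement."""
    return raw_value - bias_estimate

# Bias estimated earlier from comparisons with a circulated reference artifact
# (lab average minus reference value); illustrative value only.
bias_estimate = 0.08

corrected = correct_for_bias(10.21, bias_estimate)
print(f"corrected value: {corrected:.2f}")  # 10.13
```

Note that the corrected value inherits the uncertainty of the bias estimate, so the correction reduces bias at the cost of a (usually small) increase in reported uncertainty.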
Errors that contribute to bias can be present
even where all equipment and standards are properly
calibrated and under control. Temperature probably has
the most potential for introducing this type of bias
into the measurements. For example, a constant heat
source will introduce serious errors in dimensional
measurements of metal objects. Temperature affects
chemical and electrical measurements as well.
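For dimensional work, the temperature effect described above is often handled with a linear thermal-expansion correction. The sketch below assumes a steel part, a nominal expansion coefficient, and the common 20 °C reference temperature; all of these are assumptions for illustration, not values from the text.

```python
ALPHA_STEEL = 11.5e-6   # per degC, nominal expansion coefficient for steel (assumed)
T_REFERENCE = 20.0      # degC, common reference temperature for dimensional work

def length_at_reference(measured_length_mm, part_temperature_c, alpha=ALPHA_STEEL):
    """Convert a length measured at part_temperature_c to its value at T_REFERENCE,
    assuming linear expansion: L_meas = L_ref * (1 + alpha * delta_t)."""
    delta_t = part_temperature_c - T_REFERENCE
    return measured_length_mm / (1.0 + alpha * delta_t)

# A nominal 100 mm steel gauge warmed to 25 degC reads long by about 5.75 um:
corrected_mm = length_at_reference(100.00575, 25.0)
print(f"length at 20 degC: {corrected_mm:.5f} mm")  # 100.00000 mm
```

A few micrometres may sound negligible, but it is far larger than the tolerance on many gauge blocks, which is why a steady heat source near the measuring setup can introduce a serious systematic error.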
Generally speaking, errors of this type can be identified only by those who are thoroughly familiar with the measurement technology. The reader is advised to consult the technical literature and experts in the field for guidance.