2. Measurement Process Characterization
2.3. Calibration


Instrument calibration over a regime

This section discusses the creation of a calibration curve for calibrating instruments (gauges) whose responses cover a large range.
Purpose of instrument calibration
Instrument calibration is intended to eliminate or reduce bias in an instrument's readings over its entire working range. For this purpose, reference standards with known values at selected points covering the range of interest are measured with the instrument in question. Then a functional relationship is established between the values of the standards and the corresponding measurements. There are two basic situations.
Instruments which require correction for bias
  • The instrument reads in the same units as the reference standards. The purpose of the calibration is to identify and eliminate any bias in the instrument relative to the defined unit of measurement. For example, optical imaging systems that measure the width of lines on semiconductors read in micrometers, the unit of interest. Nonetheless, these instruments must be calibrated to values of reference standards if line width measurements across the industry are to agree with each other.
Instruments whose measurements act as surrogates for other measurements
  • The instrument reads in different units than the reference standards. The purpose of the calibration is to convert the instrument readings to the units of interest. An example is densitometer measurements that act as surrogates for measurements of radiation dosage. For this purpose, reference standards are irradiated at several dosage levels and then measured by radiometry. The same reference standards are measured by densitometer. The calibrated results of future densitometer readings on medical devices are the basis for deciding if the devices have been sterilized at the proper radiation level.
Basic steps for correcting the instrument for bias
The calibration method is the same for both situations and requires the following basic steps:
  • Selection of reference standards with known values to cover the range of interest.
  • Measurements on the reference standards with the instrument to be calibrated.
  • Calculation of a functional relationship between the known and measured values of the reference standards (usually a least-squares fit to the data), called the calibration curve.
  • Correction of all measurements by the inverse of the calibration curve.
Schematic example of a calibration curve and resulting value
A schematic explanation is provided by the figure below for a load cell calibration. The load cell measurements (shown as *) are plotted on the y-axis against the corresponding known loads on the x-axis.

[Figure: quadratic calibration curve for the load cell]

A quadratic fit to the load cell data produces the calibration curve, shown as the solid line. For a future measurement with the load cell, Y' = 1.344 on the y-axis, a dotted line is drawn through Y' parallel to the x-axis. At the point where it intersects the calibration curve, another dotted line is drawn parallel to the y-axis. Its point of intersection with the x-axis at X' = 13.417 is the calibrated value.
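Analytically, the dotted-line construction amounts to solving the fitted quadratic for X' at the observed Y'. A minimal sketch, with hypothetical coefficients (the handbook does not list the fitted values for the load cell curve):

```python
import math

# Hypothetical calibration-curve coefficients: y = a + b*x + c*x**2.
a, b, c = 0.1, 0.095, 0.0002

def invert(y_prime):
    # Solve c*x**2 + b*x + (a - y_prime) = 0 via the quadratic formula;
    # for a monotonically increasing curve the '+' root is the one that
    # lies in the calibrated range.
    disc = b * b - 4.0 * c * (a - y_prime)
    return (-b + math.sqrt(disc)) / (2.0 * c)
```

Of the two roots of the quadratic, only one falls within the range covered by the reference standards; that root is taken as the calibrated value.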
