
Detailed Table of Contents for the Handbook


1. Exploratory Data Analysis


  1. EDA Introduction  [1.1.]
    1. What is EDA?  [1.1.1.]
    2. How Does Exploratory Data Analysis Differ from Classical Data Analysis?  [1.1.2.]
      1. Model  [1.1.2.1.]
      2. Focus  [1.1.2.2.]
      3. Techniques  [1.1.2.3.]
      4. Rigor  [1.1.2.4.]
      5. Data Treatment  [1.1.2.5.]
      6. Assumptions  [1.1.2.6.]
    3. How Does Exploratory Data Analysis Differ from Summary Analysis?  [1.1.3.]
    4. What are the EDA Goals?  [1.1.4.]
    5. The Role of Graphics  [1.1.5.]
    6. An EDA/Graphics Example  [1.1.6.]
    7. General Problem Categories  [1.1.7.]

  2. EDA Assumptions  [1.2.]
    1. Underlying Assumptions  [1.2.1.]
    2. Importance  [1.2.2.]
    3. Techniques for Testing Assumptions  [1.2.3.]
    4. Interpretation of 4-Plot  [1.2.4.]
    5. Consequences  [1.2.5.]
      1. Consequences of Non-Randomness  [1.2.5.1.]
      2. Consequences of Non-Fixed Location Parameter  [1.2.5.2.]
      3. Consequences of Non-Fixed Variation Parameter  [1.2.5.3.]
      4. Consequences Related to Distributional Assumptions  [1.2.5.4.]

  3. EDA Techniques  [1.3.]
    1. Introduction  [1.3.1.]
    2. Analysis Questions  [1.3.2.]
    3. Graphical Techniques: Alphabetic  [1.3.3.]
      1. Autocorrelation Plot  [1.3.3.1.]
        1. Autocorrelation Plot: Random Data  [1.3.3.1.1.]
        2. Autocorrelation Plot: Moderate Autocorrelation  [1.3.3.1.2.]
        3. Autocorrelation Plot: Strong Autocorrelation and Autoregressive Model  [1.3.3.1.3.]
        4. Autocorrelation Plot: Sinusoidal Model  [1.3.3.1.4.]
      2. Bihistogram  [1.3.3.2.]
      3. Block Plot  [1.3.3.3.]
      4. Bootstrap Plot  [1.3.3.4.]
      5. Box-Cox Linearity Plot  [1.3.3.5.]
      6. Box-Cox Normality Plot  [1.3.3.6.]
      7. Box Plot  [1.3.3.7.]
      8. Complex Demodulation Amplitude Plot  [1.3.3.8.]
      9. Complex Demodulation Phase Plot  [1.3.3.9.]
      10. Contour Plot  [1.3.3.10.]
        1. DOE Contour Plot  [1.3.3.10.1.]
      11. DOE Scatter Plot  [1.3.3.11.]
      12. DOE Mean Plot  [1.3.3.12.]
      13. DOE Standard Deviation Plot  [1.3.3.13.]
      14. Histogram  [1.3.3.14.]
        1. Histogram Interpretation: Normal  [1.3.3.14.1.]
        2. Histogram Interpretation: Symmetric, Non-Normal, Short-Tailed  [1.3.3.14.2.]
        3. Histogram Interpretation: Symmetric, Non-Normal, Long-Tailed  [1.3.3.14.3.]
        4. Histogram Interpretation: Symmetric and Bimodal  [1.3.3.14.4.]
        5. Histogram Interpretation: Bimodal Mixture of 2 Normals  [1.3.3.14.5.]
        6. Histogram Interpretation: Skewed (Non-Normal) Right  [1.3.3.14.6.]
        7. Histogram Interpretation: Skewed (Non-Symmetric) Left  [1.3.3.14.7.]
        8. Histogram Interpretation: Symmetric with Outlier  [1.3.3.14.8.]
      15. Lag Plot  [1.3.3.15.]
        1. Lag Plot: Random Data  [1.3.3.15.1.]
        2. Lag Plot: Moderate Autocorrelation  [1.3.3.15.2.]
        3. Lag Plot: Strong Autocorrelation and Autoregressive Model  [1.3.3.15.3.]
        4. Lag Plot: Sinusoidal Models and Outliers  [1.3.3.15.4.]
      16. Linear Correlation Plot  [1.3.3.16.]
      17. Linear Intercept Plot  [1.3.3.17.]
      18. Linear Slope Plot  [1.3.3.18.]
      19. Linear Residual Standard Deviation Plot  [1.3.3.19.]
      20. Mean Plot  [1.3.3.20.]
      21. Normal Probability Plot  [1.3.3.21.]
        1. Normal Probability Plot: Normally Distributed Data  [1.3.3.21.1.]
        2. Normal Probability Plot: Data Have Short Tails  [1.3.3.21.2.]
        3. Normal Probability Plot: Data Have Long Tails  [1.3.3.21.3.]
        4. Normal Probability Plot: Data are Skewed Right  [1.3.3.21.4.]
      22. Probability Plot  [1.3.3.22.]
      23. Probability Plot Correlation Coefficient Plot  [1.3.3.23.]
      24. Quantile-Quantile Plot  [1.3.3.24.]
      25. Run-Sequence Plot  [1.3.3.25.]
      26. Scatter Plot  [1.3.3.26.]
        1. Scatter Plot: No Relationship  [1.3.3.26.1.]
        2. Scatter Plot: Strong Linear (positive correlation) Relationship  [1.3.3.26.2.]
        3. Scatter Plot: Strong Linear (negative correlation) Relationship  [1.3.3.26.3.]
        4. Scatter Plot: Exact Linear (positive correlation) Relationship  [1.3.3.26.4.]
        5. Scatter Plot: Quadratic Relationship  [1.3.3.26.5.]
        6. Scatter Plot: Exponential Relationship  [1.3.3.26.6.]
        7. Scatter Plot: Sinusoidal Relationship (damped)  [1.3.3.26.7.]
        8. Scatter Plot: Variation of Y Does Not Depend on X (homoscedastic)  [1.3.3.26.8.]
        9. Scatter Plot: Variation of Y Does Depend on X (heteroscedastic)  [1.3.3.26.9.]
        10. Scatter Plot: Outlier  [1.3.3.26.10.]
        11. Scatterplot Matrix  [1.3.3.26.11.]
        12. Conditioning Plot  [1.3.3.26.12.]
      27. Spectral Plot  [1.3.3.27.]
        1. Spectral Plot: Random Data  [1.3.3.27.1.]
        2. Spectral Plot: Strong Autocorrelation and Autoregressive Model  [1.3.3.27.2.]
        3. Spectral Plot: Sinusoidal Model  [1.3.3.27.3.]
      28. Standard Deviation Plot  [1.3.3.28.]
      29. Star Plot  [1.3.3.29.]
      30. Weibull Plot  [1.3.3.30.]
      31. Youden Plot  [1.3.3.31.]
        1. DOE Youden Plot  [1.3.3.31.1.]
      32. 4-Plot  [1.3.3.32.]
      33. 6-Plot  [1.3.3.33.]
    4. Graphical Techniques: By Problem Category  [1.3.4.]
    5. Quantitative Techniques  [1.3.5.]
      1. Measures of Location  [1.3.5.1.]
      2. Confidence Limits for the Mean  [1.3.5.2.]
      3. Two-Sample t-Test for Equal Means  [1.3.5.3.]
        1. Data Used for Two-Sample t-Test  [1.3.5.3.1.]
      4. One-Factor ANOVA  [1.3.5.4.]
      5. Multi-factor Analysis of Variance  [1.3.5.5.]
      6. Measures of Scale  [1.3.5.6.]
      7. Bartlett's Test  [1.3.5.7.]
      8. Chi-Square Test for the Standard Deviation  [1.3.5.8.]
        1. Data Used for Chi-Square Test for the Standard Deviation  [1.3.5.8.1.]
      9. F-Test for Equality of Two Standard Deviations  [1.3.5.9.]
      10. Levene Test for Equality of Variances  [1.3.5.10.]
      11. Measures of Skewness and Kurtosis  [1.3.5.11.]
      12. Autocorrelation  [1.3.5.12.]
      13. Runs Test for Detecting Non-randomness  [1.3.5.13.]
      14. Anderson-Darling Test  [1.3.5.14.]
      15. Chi-Square Goodness-of-Fit Test  [1.3.5.15.]
      16. Kolmogorov-Smirnov Goodness-of-Fit Test  [1.3.5.16.]
      17. Grubbs' Test for Outliers  [1.3.5.17.]
      18. Yates Analysis  [1.3.5.18.]
        1. Defining Models and Prediction Equations  [1.3.5.18.1.]
        2. Important Factors  [1.3.5.18.2.]
    6. Probability Distributions  [1.3.6.]
      1. What is a Probability Distribution?  [1.3.6.1.]
      2. Related Distributions  [1.3.6.2.]
      3. Families of Distributions  [1.3.6.3.]
      4. Location and Scale Parameters  [1.3.6.4.]
      5. Estimating the Parameters of a Distribution  [1.3.6.5.]
        1. Method of Moments  [1.3.6.5.1.]
        2. Maximum Likelihood  [1.3.6.5.2.]
        3. Least Squares  [1.3.6.5.3.]
        4. PPCC and Probability Plots  [1.3.6.5.4.]
      6. Gallery of Distributions  [1.3.6.6.]
        1. Normal Distribution  [1.3.6.6.1.]
        2. Uniform Distribution  [1.3.6.6.2.]
        3. Cauchy Distribution  [1.3.6.6.3.]
        4. t Distribution  [1.3.6.6.4.]
        5. F Distribution  [1.3.6.6.5.]
        6. Chi-Square Distribution  [1.3.6.6.6.]
        7. Exponential Distribution  [1.3.6.6.7.]
        8. Weibull Distribution  [1.3.6.6.8.]
        9. Lognormal Distribution  [1.3.6.6.9.]
        10. Fatigue Life Distribution  [1.3.6.6.10.]
        11. Gamma Distribution  [1.3.6.6.11.]
        12. Double Exponential Distribution  [1.3.6.6.12.]
        13. Power Normal Distribution  [1.3.6.6.13.]
        14. Power Lognormal Distribution  [1.3.6.6.14.]
        15. Tukey-Lambda Distribution  [1.3.6.6.15.]
        16. Extreme Value Type I Distribution  [1.3.6.6.16.]
        17. Beta Distribution  [1.3.6.6.17.]
        18. Binomial Distribution  [1.3.6.6.18.]
        19. Poisson Distribution  [1.3.6.6.19.]
      7. Tables for Probability Distributions  [1.3.6.7.]
        1. Cumulative Distribution Function of the Standard Normal Distribution  [1.3.6.7.1.]
        2. Upper Critical Values of the Student's-t Distribution  [1.3.6.7.2.]
        3. Upper Critical Values of the F Distribution  [1.3.6.7.3.]
        4. Critical Values of the Chi-Square Distribution  [1.3.6.7.4.]
        5. Critical Values of the t* Distribution  [1.3.6.7.5.]
        6. Critical Values of the Normal PPCC Distribution  [1.3.6.7.6.]

  4. EDA Case Studies  [1.4.]
    1. Case Studies Introduction  [1.4.1.]
    2. Case Studies  [1.4.2.]
      1. Normal Random Numbers  [1.4.2.1.]
        1. Background and Data  [1.4.2.1.1.]
        2. Graphical Output and Interpretation  [1.4.2.1.2.]
        3. Quantitative Output and Interpretation  [1.4.2.1.3.]
        4. Work This Example Yourself  [1.4.2.1.4.]
      2. Uniform Random Numbers  [1.4.2.2.]
        1. Background and Data  [1.4.2.2.1.]
        2. Graphical Output and Interpretation  [1.4.2.2.2.]
        3. Quantitative Output and Interpretation  [1.4.2.2.3.]
        4. Work This Example Yourself  [1.4.2.2.4.]
      3. Random Walk  [1.4.2.3.]
        1. Background and Data  [1.4.2.3.1.]
        2. Test Underlying Assumptions  [1.4.2.3.2.]
        3. Develop A Better Model  [1.4.2.3.3.]
        4. Validate New Model  [1.4.2.3.4.]
        5. Work This Example Yourself  [1.4.2.3.5.]
      4. Josephson Junction Cryothermometry  [1.4.2.4.]
        1. Background and Data  [1.4.2.4.1.]
        2. Graphical Output and Interpretation  [1.4.2.4.2.]
        3. Quantitative Output and Interpretation  [1.4.2.4.3.]
        4. Work This Example Yourself  [1.4.2.4.4.]
      5. Beam Deflections  [1.4.2.5.]
        1. Background and Data  [1.4.2.5.1.]
        2. Test Underlying Assumptions  [1.4.2.5.2.]
        3. Develop a Better Model  [1.4.2.5.3.]
        4. Validate New Model  [1.4.2.5.4.]
        5. Work This Example Yourself  [1.4.2.5.5.]
      6. Filter Transmittance  [1.4.2.6.]
        1. Background and Data  [1.4.2.6.1.]
        2. Graphical Output and Interpretation  [1.4.2.6.2.]
        3. Quantitative Output and Interpretation  [1.4.2.6.3.]
        4. Work This Example Yourself  [1.4.2.6.4.]
      7. Standard Resistor  [1.4.2.7.]
        1. Background and Data  [1.4.2.7.1.]
        2. Graphical Output and Interpretation  [1.4.2.7.2.]
        3. Quantitative Output and Interpretation  [1.4.2.7.3.]
        4. Work This Example Yourself  [1.4.2.7.4.]
      8. Heat Flow Meter 1  [1.4.2.8.]
        1. Background and Data  [1.4.2.8.1.]
        2. Graphical Output and Interpretation  [1.4.2.8.2.]
        3. Quantitative Output and Interpretation  [1.4.2.8.3.]
        4. Work This Example Yourself  [1.4.2.8.4.]
      9. Fatigue Life of Aluminum Alloy Specimens  [1.4.2.9.]
        1. Background and Data  [1.4.2.9.1.]
        2. Graphical Output and Interpretation  [1.4.2.9.2.]
      10. Ceramic Strength  [1.4.2.10.]
        1. Background and Data  [1.4.2.10.1.]
        2. Analysis of the Response Variable  [1.4.2.10.2.]
        3. Analysis of the Batch Effect  [1.4.2.10.3.]
        4. Analysis of the Lab Effect  [1.4.2.10.4.]
        5. Analysis of Primary Factors  [1.4.2.10.5.]
        6. Work This Example Yourself  [1.4.2.10.6.]
    3. References For Chapter 1: Exploratory Data Analysis  [1.4.3.]


2.   Measurement Process Characterization


  1. Characterization  [2.1.]
    1. What are the issues for characterization?  [2.1.1.]
      1. Purpose  [2.1.1.1.]
      2. Reference base  [2.1.1.2.]
      3. Bias and Accuracy  [2.1.1.3.]
      4. Variability  [2.1.1.4.]
    2. What is a check standard?  [2.1.2.]
      1. Assumptions  [2.1.2.1.]
      2. Data collection  [2.1.2.2.]
      3. Analysis  [2.1.2.3.]

  2. Statistical control of a measurement process  [2.2.]
    1. What are the issues in controlling the measurement process?  [2.2.1.]
    2. How are bias and variability controlled?  [2.2.2.]
      1. Shewhart control chart  [2.2.2.1.]
        1. EWMA control chart  [2.2.2.1.1.]
      2. Data collection  [2.2.2.2.]
      3. Monitoring bias and long-term variability  [2.2.2.3.]
      4. Remedial actions  [2.2.2.4.]
    3. How is short-term variability controlled?  [2.2.3.]
      1. Control chart for standard deviations  [2.2.3.1.]
      2. Data collection  [2.2.3.2.]
      3. Monitoring short-term precision  [2.2.3.3.]
      4. Remedial actions  [2.2.3.4.]

  3. Calibration  [2.3.]
    1. Issues in calibration  [2.3.1.]
      1. Reference base  [2.3.1.1.]
      2. Reference standards  [2.3.1.2.]
    2. What is artifact (single-point) calibration?  [2.3.2.]
    3. What are calibration designs?  [2.3.3.]
      1. Elimination of special types of bias  [2.3.3.1.]
        1. Left-right (constant instrument) bias  [2.3.3.1.1.]
        2. Bias caused by instrument drift  [2.3.3.1.2.]
      2. Solutions to calibration designs  [2.3.3.2.]
        1. General matrix solutions to calibration designs  [2.3.3.2.1.]
      3. Uncertainties of calibrated values  [2.3.3.3.]
        1. Type A evaluations for calibration designs  [2.3.3.3.1.]
        2. Repeatability and level-2 standard deviations  [2.3.3.3.2.]
        3. Combination of repeatability and level-2 standard deviations  [2.3.3.3.3.]
        4. Calculation of standard deviations for 1,1,1,1 design  [2.3.3.3.4.]
        5. Type B uncertainty  [2.3.3.3.5.]
        6. Expanded uncertainties  [2.3.3.3.6.]
    4. Catalog of calibration designs  [2.3.4.]
      1. Mass weights  [2.3.4.1.]
        1. Design for 1,1,1  [2.3.4.1.1.]
        2. Design for 1,1,1,1  [2.3.4.1.2.]
        3. Design for 1,1,1,1,1  [2.3.4.1.3.]
        4. Design for 1,1,1,1,1,1  [2.3.4.1.4.]
        5. Design for 2,1,1,1  [2.3.4.1.5.]
        6. Design for 2,2,1,1,1  [2.3.4.1.6.]
        7. Design for 2,2,2,1,1  [2.3.4.1.7.]
        8. Design for 5,2,2,1,1,1  [2.3.4.1.8.]
        9. Design for 5,2,2,1,1,1,1  [2.3.4.1.9.]
        10. Design for 5,3,2,1,1,1  [2.3.4.1.10.]
        11. Design for 5,3,2,1,1,1,1  [2.3.4.1.11.]
        12. Design for 5,3,2,2,1,1,1  [2.3.4.1.12.]
        13. Design for 5,4,4,3,2,2,1,1  [2.3.4.1.13.]
        14. Design for 5,5,2,2,1,1,1,1  [2.3.4.1.14.]
        15. Design for 5,5,3,2,1,1,1  [2.3.4.1.15.]
        16. Design for 1,1,1,1,1,1,1,1 weights  [2.3.4.1.16.]
        17. Design for 3,2,1,1,1 weights  [2.3.4.1.17.]
        18. Design for 10 and 20 pound weights  [2.3.4.1.18.]
      2. Drift-elimination designs for gage blocks  [2.3.4.2.]
        1. Doiron 3-6 Design  [2.3.4.2.1.]
        2. Doiron 3-9 Design  [2.3.4.2.2.]
        3. Doiron 4-8 Design  [2.3.4.2.3.]
        4. Doiron 4-12 Design  [2.3.4.2.4.]
        5. Doiron 5-10 Design  [2.3.4.2.5.]
        6. Doiron 6-12 Design  [2.3.4.2.6.]
        7. Doiron 7-14 Design  [2.3.4.2.7.]
        8. Doiron 8-16 Design  [2.3.4.2.8.]
        9. Doiron 9-18 Design  [2.3.4.2.9.]
        10. Doiron 10-20 Design  [2.3.4.2.10.]
        11. Doiron 11-22 Design  [2.3.4.2.11.]
      3. Designs for electrical quantities  [2.3.4.3.]
        1. Left-right balanced design for 3 standard cells  [2.3.4.3.1.]
        2. Left-right balanced design for 4 standard cells  [2.3.4.3.2.]
        3. Left-right balanced design for 5 standard cells  [2.3.4.3.3.]
        4. Left-right balanced design for 6 standard cells  [2.3.4.3.4.]
        5. Left-right balanced design for 4 references and 4 test items  [2.3.4.3.5.]
        6. Design for 8 references and 8 test items  [2.3.4.3.6.]
        7. Design for 4 reference zeners and 2 test zeners  [2.3.4.3.7.]
        8. Design for 4 reference zeners and 3 test zeners  [2.3.4.3.8.]
        9. Design for 3 references and 1 test resistor  [2.3.4.3.9.]
        10. Design for 4 references and 1 test resistor  [2.3.4.3.10.]
      4. Roundness measurements  [2.3.4.4.]
        1. Single trace roundness design  [2.3.4.4.1.]
        2. Multiple trace roundness designs  [2.3.4.4.2.]
      5. Designs for angle blocks  [2.3.4.5.]
        1. Design for 4 angle blocks  [2.3.4.5.1.]
        2. Design for 5 angle blocks  [2.3.4.5.2.]
        3. Design for 6 angle blocks  [2.3.4.5.3.]
      6. Thermometers in a bath  [2.3.4.6.]
      7. Humidity standards  [2.3.4.7.]
        1. Drift-elimination design for 2 reference weights and 3 cylinders  [2.3.4.7.1.]
    5. Control of artifact calibration  [2.3.5.]
      1. Control of precision  [2.3.5.1.]
        1. Example of control chart for precision  [2.3.5.1.1.]
      2. Control of bias and long-term variability  [2.3.5.2.]
        1. Example of Shewhart control chart for mass calibrations  [2.3.5.2.1.]
        2. Example of EWMA control chart for mass calibrations  [2.3.5.2.2.]
    6. Instrument calibration over a regime  [2.3.6.]
      1. Models for instrument calibration  [2.3.6.1.]
      2. Data collection  [2.3.6.2.]
      3. Assumptions for instrument calibration  [2.3.6.3.]
      4. What can go wrong with the calibration procedure  [2.3.6.4.]
        1. Example of day-to-day changes in calibration  [2.3.6.4.1.]
      5. Data analysis and model validation  [2.3.6.5.]
        1. Data on load cell #32066  [2.3.6.5.1.]
      6. Calibration of future measurements  [2.3.6.6.]
      7. Uncertainties of calibrated values  [2.3.6.7.]
        1. Uncertainty for quadratic calibration using propagation of error  [2.3.6.7.1.]
        2. Uncertainty for linear calibration using check standards  [2.3.6.7.2.]
        3. Comparison of check standard analysis and propagation of error  [2.3.6.7.3.]
    7. Instrument control for linear calibration  [2.3.7.]
      1. Control chart for a linear calibration line  [2.3.7.1.]

  4. Gauge R & R studies  [2.4.]
    1. What are the important issues?  [2.4.1.]
    2. Design considerations  [2.4.2.]
    3. Data collection for time-related sources of variability  [2.4.3.]
      1. Simple design  [2.4.3.1.]
      2. 2-level nested design  [2.4.3.2.]
      3. 3-level nested design  [2.4.3.3.]
    4. Analysis of variability  [2.4.4.]
      1. Analysis of repeatability  [2.4.4.1.]
      2. Analysis of reproducibility  [2.4.4.2.]
      3. Analysis of stability  [2.4.4.3.]
        1. Example of calculations  [2.4.4.4.4.]
    5. Analysis of bias  [2.4.5.]
      1. Resolution  [2.4.5.1.]
      2. Linearity of the gauge  [2.4.5.2.]
      3. Drift  [2.4.5.3.]
      4. Differences among gauges  [2.4.5.4.]
      5. Geometry/configuration differences  [2.4.5.5.]
      6. Remedial actions and strategies  [2.4.5.6.]
    6. Quantifying uncertainties from a gauge study  [2.4.6.]

  5. Uncertainty analysis  [2.5.]
    1. Issues  [2.5.1.]
    2. Approach  [2.5.2.]
      1. Steps  [2.5.2.1.]
    3. Type A evaluations  [2.5.3.]
      1. Type A evaluations of random components  [2.5.3.1.]
        1. Type A evaluations of time-dependent effects  [2.5.3.1.1.]
        2. Measurement configuration within the laboratory  [2.5.3.1.2.]
      2. Material inhomogeneity  [2.5.3.2.]
        1. Data collection and analysis  [2.5.3.2.1.]
      3. Type A evaluations of bias  [2.5.3.3.]
        1. Inconsistent bias  [2.5.3.3.1.]
        2. Consistent bias  [2.5.3.3.2.]
        3. Bias with sparse data  [2.5.3.3.3.]
    4. Type B evaluations  [2.5.4.]
      1. Standard deviations from assumed distributions  [2.5.4.1.]
    5. Propagation of error considerations  [2.5.5.]
      1. Formulas for functions of one variable  [2.5.5.1.]
      2. Formulas for functions of two variables  [2.5.5.2.]
      3. Propagation of error for many variables  [2.5.5.3.]
    6. Uncertainty budgets and sensitivity coefficients  [2.5.6.]
      1. Sensitivity coefficients for measurements on the test item  [2.5.6.1.]
      2. Sensitivity coefficients for measurements on a check standard  [2.5.6.2.]
      3. Sensitivity coefficients for measurements from a 2-level design  [2.5.6.3.]
      4. Sensitivity coefficients for measurements from a 3-level design  [2.5.6.4.]
      5. Example of uncertainty budget  [2.5.6.5.]
    7. Standard and expanded uncertainties  [2.5.7.]
      1. Degrees of freedom  [2.5.7.1.]
    8. Treatment of uncorrected bias  [2.5.8.]
      1. Computation of revised uncertainty  [2.5.8.1.]

  6. Case studies  [2.6.]
    1. Gauge study of resistivity probes  [2.6.1.]
      1. Background and data  [2.6.1.1.]
        1. Database of resistivity measurements  [2.6.1.1.1.]
      2. Analysis and interpretation  [2.6.1.2.]
      3. Repeatability standard deviations  [2.6.1.3.]
      4. Effects of days and long-term stability  [2.6.1.4.]
      5. Differences among 5 probes  [2.6.1.5.]
      6. Run gauge study example using Dataplot™  [2.6.1.6.]
      7. Dataplot™ macros  [2.6.1.7.]
    2. Check standard for resistivity measurements  [2.6.2.]
      1. Background and data  [2.6.2.1.]
        1. Database for resistivity check standard  [2.6.2.1.1.]
      2. Analysis and interpretation  [2.6.2.2.]
        1. Repeatability and level-2 standard deviations  [2.6.2.2.1.]
      3. Control chart for probe precision  [2.6.2.3.]
      4. Control chart for bias and long-term variability  [2.6.2.4.]
      5. Run check standard example yourself  [2.6.2.5.]
      6. Dataplot™ macros  [2.6.2.6.]
    3. Evaluation of type A uncertainty  [2.6.3.]
      1. Background and data  [2.6.3.1.]
        1. Database of resistivity measurements  [2.6.3.1.1.]
        2. Measurements on wiring configurations  [2.6.3.1.2.]
      2. Analysis and interpretation  [2.6.3.2.]
        1. Difference between 2 wiring configurations  [2.6.3.2.1.]
      3. Run the type A uncertainty analysis using Dataplot™  [2.6.3.3.]
      4. Dataplot™ macros  [2.6.3.4.]
    4. Evaluation of type B uncertainty and propagation of error  [2.6.4.]

  7. References  [2.7.]


3.   Production Process Characterization


  1. Introduction to Production Process Characterization  [3.1.]
    1. What is PPC?  [3.1.1.]
    2. What are PPC Studies Used For?  [3.1.2.]
    3. Terminology/Concepts  [3.1.3.]
      1. Distribution (Location, Spread and Shape)  [3.1.3.1.]
      2. Process Variability  [3.1.3.2.]
        1. Controlled/Uncontrolled Variation  [3.1.3.2.1.]
      3. Propagating Error  [3.1.3.3.]
      4. Populations and Sampling  [3.1.3.4.]
      5. Process Models  [3.1.3.5.]
      6. Experiments and Experimental Design  [3.1.3.6.]
    4. PPC Steps  [3.1.4.]

  2. Assumptions / Prerequisites  [3.2.]
    1. General Assumptions  [3.2.1.]
    2. Continuous Linear Model  [3.2.2.]
    3. Analysis of Variance Models (ANOVA)  [3.2.3.]
      1. One-Way ANOVA  [3.2.3.1.]
        1. One-Way Value-Splitting  [3.2.3.1.1.]
      2. Two-Way Crossed ANOVA  [3.2.3.2.]
        1. Two-way Crossed Value-Splitting Example  [3.2.3.2.1.]
      3. Two-Way Nested ANOVA  [3.2.3.3.]
        1. Two-Way Nested Value-Splitting Example  [3.2.3.3.1.]
    4. Discrete Models  [3.2.4.]

  3. Data Collection for PPC  [3.3.]
    1. Define Goals  [3.3.1.]
    2. Process Modeling  [3.3.2.]
    3. Define Sampling Plan  [3.3.3.]
      1. Identifying Parameters, Ranges and Resolution  [3.3.3.1.]
      2. Choosing a Sampling Scheme  [3.3.3.2.]
      3. Selecting Sample Sizes  [3.3.3.3.]
      4. Data Storage and Retrieval  [3.3.3.4.]
      5. Assign Roles and Responsibilities  [3.3.3.5.]

  4. Data Analysis for PPC  [3.4.]
    1. First Steps  [3.4.1.]
    2. Exploring Relationships  [3.4.2.]
      1. Response Correlations  [3.4.2.1.]
      2. Exploring Main Effects  [3.4.2.2.]
      3. Exploring First Order Interactions  [3.4.2.3.]
    3. Building Models  [3.4.3.]
      1. Fitting Polynomial Models  [3.4.3.1.]
      2. Fitting Physical Models  [3.4.3.2.]
    4. Analyzing Variance Structure  [3.4.4.]
    5. Assessing Process Stability  [3.4.5.]
    6. Assessing Process Capability  [3.4.6.]
    7. Checking Assumptions  [3.4.7.]

  5. Case Studies  [3.5.]
    1. Furnace Case Study  [3.5.1.]
      1. Background and Data  [3.5.1.1.]
      2. Initial Analysis of Response Variable  [3.5.1.2.]
      3. Identify Sources of Variation  [3.5.1.3.]
      4. Analysis of Variance  [3.5.1.4.]
      5. Final Conclusions  [3.5.1.5.]
      6. Work This Example Yourself  [3.5.1.6.]
    2. Machine Screw Case Study  [3.5.2.]
      1. Background and Data  [3.5.2.1.]
      2. Box Plots by Factors  [3.5.2.2.]
      3. Analysis of Variance  [3.5.2.3.]
      4. Throughput  [3.5.2.4.]
      5. Final Conclusions  [3.5.2.5.]
      6. Work This Example Yourself  [3.5.2.6.]

  6. References  [3.6.]


4.   Process Modeling


  1. Introduction to Process Modeling  [4.1.]
    1. What is process modeling?  [4.1.1.]
    2. What terminology do statisticians use to describe process models?  [4.1.2.]
    3. What are process models used for?  [4.1.3.]
      1. Estimation  [4.1.3.1.]
      2. Prediction  [4.1.3.2.]
      3. Calibration  [4.1.3.3.]
      4. Optimization  [4.1.3.4.]
    4. What are some of the different statistical methods for model building?  [4.1.4.]
      1. Linear Least Squares Regression  [4.1.4.1.]
      2. Nonlinear Least Squares Regression  [4.1.4.2.]
      3. Weighted Least Squares Regression  [4.1.4.3.]
      4. LOESS (aka LOWESS)  [4.1.4.4.]

  2. Underlying Assumptions for Process Modeling  [4.2.]
    1. What are the typical underlying assumptions in process modeling?  [4.2.1.]
      1. The process is a statistical process.  [4.2.1.1.]
      2. The means of the random errors are zero.  [4.2.1.2.]
      3. The random errors have a constant standard deviation.  [4.2.1.3.]
      4. The random errors follow a normal distribution.  [4.2.1.4.]
      5. The data are randomly sampled from the process.  [4.2.1.5.]
      6. The explanatory variables are observed without error.  [4.2.1.6.]

  3. Data Collection for Process Modeling  [4.3.]
    1. What is design of experiments (DOE)?  [4.3.1.]
    2. Why is experimental design important for process modeling?  [4.3.2.]
    3. What are some general design principles for process modeling?  [4.3.3.]
    4. I've heard some people refer to "optimal" designs; shouldn't I use those?  [4.3.4.]
    5. How can I tell if a particular experimental design is good for my application?  [4.3.5.]

  4. Data Analysis for Process Modeling  [4.4.]
    1. What are the basic steps for developing an effective process model?  [4.4.1.]
    2. How do I select a function to describe my process?  [4.4.2.]
      1. Incorporating Scientific Knowledge into Function Selection  [4.4.2.1.]
      2. Using the Data to Select an Appropriate Function  [4.4.2.2.]
      3. Using Methods that Do Not Require Function Specification  [4.4.2.3.]
    3. How are estimates of the unknown parameters obtained?  [4.4.3.]
      1. Least Squares  [4.4.3.1.]
      2. Weighted Least Squares  [4.4.3.2.]
    4. How can I tell if a model fits my data?  [4.4.4.]
      1. How can I assess the sufficiency of the functional part of the model?  [4.4.4.1.]
      2. How can I detect non-constant variation across the data?  [4.4.4.2.]
      3. How can I tell if there was drift in the measurement process?  [4.4.4.3.]
      4. How can I assess whether the random errors are independent from one to the next?  [4.4.4.4.]
      5. How can I test whether or not the random errors are distributed normally?  [4.4.4.5.]
      6. How can I test whether any significant terms are missing or misspecified in the functional part of the model?  [4.4.4.6.]
      7. How can I test whether all of the terms in the functional part of the model are necessary?  [4.4.4.7.]
    5. If my current model does not fit the data well, how can I improve it?  [4.4.5.]
      1. Updating the Function Based on Residual Plots  [4.4.5.1.]
      2. Accounting for Non-Constant Variation Across the Data  [4.4.5.2.]
      3. Accounting for Errors with a Non-Normal Distribution  [4.4.5.3.]

  5. Use and Interpretation of Process Models  [4.5.]
    1. What types of predictions can I make using the model?  [4.5.1.]
      1. How do I estimate the average response for a particular set of predictor variable values?  [4.5.1.1.]
      2. How can I predict the value and estimate the uncertainty of a single response?  [4.5.1.2.]
    2. How can I use my process model for calibration?  [4.5.2.]
      1. Single-Use Calibration Intervals  [4.5.2.1.]
    3. How can I optimize my process using the process model?  [4.5.3.]

  6. Case Studies in Process Modeling  [4.6.]
    1. Load Cell Calibration  [4.6.1.]
      1. Background & Data  [4.6.1.1.]
      2. Selection of Initial Model  [4.6.1.2.]
      3. Model Fitting - Initial Model  [4.6.1.3.]
      4. Graphical Residual Analysis - Initial Model  [4.6.1.4.]
      5. Interpretation of Numerical Output - Initial Model  [4.6.1.5.]
      6. Model Refinement  [4.6.1.6.]
      7. Model Fitting - Model #2  [4.6.1.7.]
      8. Graphical Residual Analysis - Model #2  [4.6.1.8.]
      9. Interpretation of Numerical Output - Model #2  [4.6.1.9.]
      10. Use of the Model for Calibration  [4.6.1.10.]
      11. Work This Example Yourself  [4.6.1.11.]
    2. Alaska Pipeline  [4.6.2.]
      1. Background and Data  [4.6.2.1.]
      2. Check for Batch Effect  [4.6.2.2.]
      3. Initial Linear Fit  [4.6.2.3.]
      4. Transformations to Improve Fit and Equalize Variances  [4.6.2.4.]
      5. Weighting to Improve Fit  [4.6.2.5.]
      6. Compare the Fits  [4.6.2.6.]
      7. Work This Example Yourself  [4.6.2.7.]
    3. Ultrasonic Reference Block Study  [4.6.3.]
      1. Background and Data  [4.6.3.1.]
      2. Initial Non-Linear Fit  [4.6.3.2.]
      3. Transformations to Improve Fit  [4.6.3.3.]
      4. Weighting to Improve Fit  [4.6.3.4.]
      5. Compare the Fits  [4.6.3.5.]
      6. Work This Example Yourself  [4.6.3.6.]
    4. Thermal Expansion of Copper Case Study  [4.6.4.]
      1. Background and Data  [4.6.4.1.]
      2. Rational Function Models  [4.6.4.2.]
      3. Initial Plot of Data  [4.6.4.3.]
      4. Quadratic/Quadratic Rational Function Model  [4.6.4.4.]
      5. Cubic/Cubic Rational Function Model  [4.6.4.5.]
      6. Work This Example Yourself  [4.6.4.6.]

  7. References For Chapter 4: Process Modeling  [4.7.]

  8. Some Useful Functions for Process Modeling  [4.8.]
    1. Univariate Functions  [4.8.1.]
      1. Polynomial Functions  [4.8.1.1.]
        1. Straight Line  [4.8.1.1.1.]
        2. Quadratic Polynomial  [4.8.1.1.2.]
        3. Cubic Polynomial  [4.8.1.1.3.]
      2. Rational Functions  [4.8.1.2.]
        1. Constant / Linear Rational Function  [4.8.1.2.1.]
        2. Linear / Linear Rational Function  [4.8.1.2.2.]
        3. Linear / Quadratic Rational Function  [4.8.1.2.3.]
        4. Quadratic / Linear Rational Function  [4.8.1.2.4.]
        5. Quadratic / Quadratic Rational Function  [4.8.1.2.5.]
        6. Cubic / Linear Rational Function  [4.8.1.2.6.]
        7. Cubic / Quadratic Rational Function  [4.8.1.2.7.]
        8. Linear / Cubic Rational Function  [4.8.1.2.8.]
        9. Quadratic / Cubic Rational Function  [4.8.1.2.9.]
        10. Cubic / Cubic Rational Function  [4.8.1.2.10.]
        11. Determining m and n for Rational Function Models  [4.8.1.2.11.]


5.   Process Improvement


  1. Introduction  [5.1.]
    1. What is experimental design?  [5.1.1.]
    2. What are the uses of DOE?  [5.1.2.]
    3. What are the steps of DOE?  [5.1.3.]

  2. Assumptions  [5.2.]
    1. Is the measurement system capable?  [5.2.1.]
    2. Is the process stable?  [5.2.2.]
    3. Is there a simple model?  [5.2.3.]
    4. Are the model residuals well-behaved?  [5.2.4.]

  3. Choosing an experimental design  [5.3.]
    1. What are the objectives?  [5.3.1.]
    2. How do you select and scale the process variables?  [5.3.2.]
    3. How do you select an experimental design?  [5.3.3.]
      1. Completely randomized designs  [5.3.3.1.]
      2. Randomized block designs  [5.3.3.2.]
        1. Latin square and related designs  [5.3.3.2.1.]
        2. Graeco-Latin square designs  [5.3.3.2.2.]
        3. Hyper-Graeco-Latin square designs  [5.3.3.2.3.]
      3. Full factorial designs  [5.3.3.3.]
        1. Two-level full factorial designs  [5.3.3.3.1.]
        2. Full factorial example  [5.3.3.3.2.]
        3. Blocking of full factorial designs  [5.3.3.3.3.]
      4. Fractional factorial designs  [5.3.3.4.]
        1. A 2^(3-1) design (half of a 2^3)  [5.3.3.4.1.]
        2. Constructing the 2^(3-1) half-fraction design  [5.3.3.4.2.]
        3. Confounding (also called aliasing)  [5.3.3.4.3.]
        4. Fractional factorial design specifications and design resolution  [5.3.3.4.4.]
        5. Use of fractional factorial designs  [5.3.3.4.5.]
        6. Screening designs  [5.3.3.4.6.]
        7. Summary tables of useful fractional factorial designs  [5.3.3.4.7.]
      5. Plackett-Burman designs  [5.3.3.5.]
      6. Response surface designs  [5.3.3.6.]
        1. Central Composite Designs (CCD)  [5.3.3.6.1.]
        2. Box-Behnken designs  [5.3.3.6.2.]
        3. Comparisons of response surface designs  [5.3.3.6.3.]
        4. Blocking a response surface design  [5.3.3.6.4.]
      7. Adding centerpoints  [5.3.3.7.]
      8. Improving fractional factorial design resolution  [5.3.3.8.]
        1. Mirror-Image foldover designs  [5.3.3.8.1.]
        2. Alternative foldover designs  [5.3.3.8.2.]
      9. Three-level full factorial designs  [5.3.3.9.]
      10. Three-level, mixed-level and fractional factorial designs  [5.3.3.10.]

  4. Analysis of DOE data  [5.4.]
    1. What are the steps in a DOE analysis?  [5.4.1.]
    2. How to "look" at DOE data  [5.4.2.]
    3. How to model DOE data  [5.4.3.]
    4. How to test and revise DOE models  [5.4.4.]
    5. How to interpret DOE results  [5.4.5.]
    6. How to confirm DOE results (confirmatory runs)  [5.4.6.]
    7. Examples of DOEs  [5.4.7.]
      1. Full factorial example  [5.4.7.1.]
      2. Fractional factorial example  [5.4.7.2.]
      3. Response surface model example  [5.4.7.3.]

  5. Advanced topics  [5.5.]
    1. What if classical designs don't work?  [5.5.1.]
    2. What is a computer-aided design?  [5.5.2.]
      1. D-Optimal designs  [5.5.2.1.]
      2. Repairing a design  [5.5.2.2.]
    3. How do you optimize a process?  [5.5.3.]
      1. Single response case  [5.5.3.1.]
        1. Single response: Path of steepest ascent  [5.5.3.1.1.]
        2. Single response: Confidence region for search path  [5.5.3.1.2.]
        3. Single response: Choosing the step length  [5.5.3.1.3.]
        4. Single response: Optimization when there is adequate quadratic fit  [5.5.3.1.4.]
        5. Single response: Effect of sampling error on optimal solution  [5.5.3.1.5.]
        6. Single response: Optimization subject to experimental region constraints  [5.5.3.1.6.]
      2. Multiple response case  [5.5.3.2.]
        1. Multiple responses: Path of steepest ascent  [5.5.3.2.1.]
        2. Multiple responses: The desirability approach  [5.5.3.2.2.]
        3. Multiple responses: The mathematical programming approach  [5.5.3.2.3.]
    4. What is a mixture design?  [5.5.4.]
      1. Mixture screening designs  [5.5.4.1.]
      2. Simplex-lattice designs  [5.5.4.2.]
      3. Simplex-centroid designs  [5.5.4.3.]
      4. Constrained mixture designs  [5.5.4.4.]
      5. Treating mixture and process variables together  [5.5.4.5.]
    5. How can I account for nested variation (restricted randomization)?  [5.5.5.]
    6. What are Taguchi designs?  [5.5.6.]
    7. What are John's 3/4 fractional factorial designs?  [5.5.7.]
    8. What are small composite designs?  [5.5.8.]
    9. An EDA approach to experimental design  [5.5.9.]
      1. Ordered data plot  [5.5.9.1.]
      2. DOE scatter plot  [5.5.9.2.]
      3. DOE mean plot  [5.5.9.3.]
      4. Interaction effects matrix plot  [5.5.9.4.]
      5. Block plot  [5.5.9.5.]
      6. DOE Youden plot  [5.5.9.6.]
      7. |Effects| plot  [5.5.9.7.]
        1. Statistical significance  [5.5.9.7.1.]
        2. Engineering significance  [5.5.9.7.2.]
        3. Numerical significance  [5.5.9.7.3.]
        4. Pattern significance  [5.5.9.7.4.]
      8. Half-normal probability plot  [5.5.9.8.]
      9. Cumulative residual standard deviation plot  [5.5.9.9.]
        1. Motivation: What is a Model?  [5.5.9.9.1.]
        2. Motivation: How do we Construct a Goodness-of-fit Metric for a Model?  [5.5.9.9.2.]
        3. Motivation: How do we Construct a Good Model?  [5.5.9.9.3.]
        4. Motivation: How do we Know When to Stop Adding Terms?  [5.5.9.9.4.]
        5. Motivation: What is the Form of the Model?  [5.5.9.9.5.]
        6. Motivation: What are the Advantages of the Linear-Combinatoric Model?  [5.5.9.9.6.]
        7. Motivation: How do we use the Model to Generate Predicted Values?  [5.5.9.9.7.]
        8. Motivation: How do we Use the Model Beyond the Data Domain?  [5.5.9.9.8.]
        9. Motivation: What is the Best Confirmation Point for Interpolation?  [5.5.9.9.9.]
        10. Motivation: How do we Use the Model for Interpolation?  [5.5.9.9.10.]
        11. Motivation: How do we Use the Model for Extrapolation?  [5.5.9.9.11.]
      10. DOE contour plot  [5.5.9.10.]
        1. How to Interpret: Axes  [5.5.9.10.1.]
        2. How to Interpret: Contour Curves  [5.5.9.10.2.]
        3. How to Interpret: Optimal Response Value  [5.5.9.10.3.]
        4. How to Interpret: Best Corner  [5.5.9.10.4.]
        5. How to Interpret: Steepest Ascent/Descent  [5.5.9.10.5.]
        6. How to Interpret: Optimal Curve  [5.5.9.10.6.]
        7. How to Interpret: Optimal Setting  [5.5.9.10.7.]

  6. Case Studies  [5.6.]
    1. Eddy Current Probe Sensitivity Case Study  [5.6.1.]
      1. Background and Data  [5.6.1.1.]
      2. Initial Plots/Main Effects  [5.6.1.2.]
      3. Interaction Effects  [5.6.1.3.]
      4. Main and Interaction Effects: Block Plots  [5.6.1.4.]
      5. Estimate Main and Interaction Effects  [5.6.1.5.]
      6. Modeling and Prediction Equations  [5.6.1.6.]
      7. Intermediate Conclusions  [5.6.1.7.]
      8. Important Factors and Parsimonious Prediction  [5.6.1.8.]
      9. Validate the Fitted Model  [5.6.1.9.]
      10. Using the Fitted Model  [5.6.1.10.]
      11. Conclusions and Next Step  [5.6.1.11.]
      12. Work This Example Yourself  [5.6.1.12.]
    2. Sonoluminescent Light Intensity Case Study  [5.6.2.]
      1. Background and Data  [5.6.2.1.]
      2. Initial Plots/Main Effects  [5.6.2.2.]
      3. Interaction Effects  [5.6.2.3.]
      4. Main and Interaction Effects: Block Plots  [5.6.2.4.]
      5. Important Factors: Youden Plot  [5.6.2.5.]
      6. Important Factors: |Effects| Plot  [5.6.2.6.]
      7. Important Factors: Half-Normal Probability Plot  [5.6.2.7.]
      8. Cumulative Residual Standard Deviation Plot  [5.6.2.8.]
      9. Next Step: DOE Contour Plot  [5.6.2.9.]
      10. Summary of Conclusions  [5.6.2.10.]
      11. Work This Example Yourself  [5.6.2.11.]

  7. A Glossary of DOE Terminology  [5.7.]

  8. References  [5.8.]


6.   Process or Product Monitoring and Control

[TOP] [NEXT] [PREV]

  1. Introduction  [6.1.]
    1. How did Statistical Quality Control Begin?  [6.1.1.]
    2. What are Process Control Techniques?  [6.1.2.]
    3. What is Process Control?  [6.1.3.]
    4. What to do if the process is "Out of Control"?  [6.1.4.]
    5. What to do if "In Control" but Unacceptable?  [6.1.5.]
    6. What is Process Capability?  [6.1.6.]

  2. Test Product for Acceptability: Lot Acceptance Sampling  [6.2.]
    1. What is Acceptance Sampling?  [6.2.1.]
    2. What kinds of Lot Acceptance Sampling Plans (LASPs) are there?  [6.2.2.]
    3. How do you Choose a Single Sampling Plan?  [6.2.3.]
      1. Choosing a Sampling Plan: MIL Standard 105D  [6.2.3.1.]
      2. Choosing a Sampling Plan with a given OC Curve  [6.2.3.2.]
    4. What is Double Sampling?  [6.2.4.]
    5. What is Multiple Sampling?  [6.2.5.]
    6. What is a Sequential Sampling Plan?  [6.2.6.]
    7. What is Skip Lot Sampling?  [6.2.7.]

  3. Univariate and Multivariate Control Charts  [6.3.]
    1. What are Control Charts?  [6.3.1.]
    2. What are Variables Control Charts?  [6.3.2.]
      1. Shewhart X-bar and R and S Control Charts  [6.3.2.1.]
      2. Individuals Control Charts  [6.3.2.2.]
      3. Cusum Control Charts  [6.3.2.3.]
        1. Cusum Average Run Length  [6.3.2.3.1.]
      4. EWMA Control Charts  [6.3.2.4.]
    3. What are Attributes Control Charts?  [6.3.3.]
      1. Counts Control Charts  [6.3.3.1.]
      2. Proportions Control Charts  [6.3.3.2.]
    4. What are Multivariate Control Charts?  [6.3.4.]
      1. Hotelling Control Charts  [6.3.4.1.]
      2. Principal Components Control Charts  [6.3.4.2.]
      3. Multivariate EWMA Charts  [6.3.4.3.]

  4. Introduction to Time Series Analysis  [6.4.]
    1. Definitions, Applications and Techniques  [6.4.1.]
    2. What are Moving Average or Smoothing Techniques?  [6.4.2.]
      1. Single Moving Average  [6.4.2.1.]
      2. Centered Moving Average  [6.4.2.2.]
    3. What is Exponential Smoothing?  [6.4.3.]
      1. Single Exponential Smoothing  [6.4.3.1.]
      2. Forecasting with Single Exponential Smoothing  [6.4.3.2.]
      3. Double Exponential Smoothing  [6.4.3.3.]
      4. Forecasting with Double Exponential Smoothing (LASP)  [6.4.3.4.]
      5. Triple Exponential Smoothing  [6.4.3.5.]
      6. Example of Triple Exponential Smoothing  [6.4.3.6.]
      7. Exponential Smoothing Summary  [6.4.3.7.]
    4. Univariate Time Series Models  [6.4.4.]
      1. Sample Data Sets  [6.4.4.1.]
        1. Data Set of Monthly CO₂ Concentrations  [6.4.4.1.1.]
        2. Data Set of Southern Oscillations  [6.4.4.1.2.]
      2. Stationarity  [6.4.4.2.]
      3. Seasonality  [6.4.4.3.]
        1. Seasonal Subseries Plot  [6.4.4.3.1.]
      4. Common Approaches to Univariate Time Series  [6.4.4.4.]
      5. Box-Jenkins Models  [6.4.4.5.]
      6. Box-Jenkins Model Identification  [6.4.4.6.]
        1. Model Identification for Southern Oscillations Data  [6.4.4.6.1.]
        2. Model Identification for the CO₂ Concentrations Data  [6.4.4.6.2.]
        3. Partial Autocorrelation Plot  [6.4.4.6.3.]
      7. Box-Jenkins Model Estimation  [6.4.4.7.]
      8. Box-Jenkins Model Diagnostics  [6.4.4.8.]
        1. Box-Ljung Test  [6.4.4.8.1.]
      9. Example of Univariate Box-Jenkins Analysis  [6.4.4.9.]
      10. Box-Jenkins Analysis on Seasonal Data  [6.4.4.10.]
    5. Multivariate Time Series Models  [6.4.5.]
      1. Example of Multivariate Time Series Analysis  [6.4.5.1.]

  5. Tutorials  [6.5.]
    1. What do we mean by "Normal" data?  [6.5.1.]
    2. What do we do when data are "Non-normal"?  [6.5.2.]
    3. Elements of Matrix Algebra  [6.5.3.]
      1. Numerical Examples  [6.5.3.1.]
      2. Determinant and Eigenstructure  [6.5.3.2.]
    4. Elements of Multivariate Analysis  [6.5.4.]
      1. Mean Vector and Covariance Matrix  [6.5.4.1.]
      2. The Multivariate Normal Distribution  [6.5.4.2.]
      3. Hotelling's T squared  [6.5.4.3.]
        1. T² Chart for Subgroup Averages -- Phase I  [6.5.4.3.1.]
        2. T² Chart for Subgroup Averages -- Phase II  [6.5.4.3.2.]
        3. Chart for Individual Observations -- Phase I  [6.5.4.3.3.]
        4. Chart for Individual Observations -- Phase II  [6.5.4.3.4.]
        5. Charts for Controlling Multivariate Variability  [6.5.4.3.5.]
        6. Constructing Multivariate Charts  [6.5.4.3.6.]
    5. Principal Components  [6.5.5.]
      1. Properties of Principal Components  [6.5.5.1.]
      2. Numerical Example  [6.5.5.2.]

  6. Case Studies in Process Monitoring  [6.6.]
    1. Lithography Process  [6.6.1.]
      1. Background and Data  [6.6.1.1.]
      2. Graphical Representation of the Data  [6.6.1.2.]
      3. Subgroup Analysis  [6.6.1.3.]
      4. Shewhart Control Chart  [6.6.1.4.]
      5. Work This Example Yourself  [6.6.1.5.]
    2. Aerosol Particle Size  [6.6.2.]
      1. Background and Data  [6.6.2.1.]
      2. Model Identification  [6.6.2.2.]
      3. Model Estimation  [6.6.2.3.]
      4. Model Validation  [6.6.2.4.]
      5. Work This Example Yourself  [6.6.2.5.]

  7. References  [6.7.]


7.   Product and Process Comparisons

[TOP] [NEXT] [PREV]

  1. Introduction  [7.1.]
    1. What is the scope?  [7.1.1.]
    2. What assumptions are typically made?  [7.1.2.]
    3. What are statistical tests?  [7.1.3.]
      1. Critical values and p values  [7.1.3.1.]
    4. What are confidence intervals?  [7.1.4.]
    5. What is the relationship between a test and a confidence interval?  [7.1.5.]
    6. What are outliers in the data?  [7.1.6.]
    7. What are trends in sequential process or product data?  [7.1.7.]

  2. Comparisons based on data from one process  [7.2.]
    1. Do the observations come from a particular distribution?  [7.2.1.]
      1. Chi-square goodness-of-fit test  [7.2.1.1.]
      2. Kolmogorov-Smirnov test  [7.2.1.2.]
      3. Anderson-Darling and Shapiro-Wilk tests  [7.2.1.3.]
    2. Are the data consistent with the assumed process mean?  [7.2.2.]
      1. Confidence interval approach  [7.2.2.1.]
      2. Sample sizes required  [7.2.2.2.]
    3. Are the data consistent with a nominal standard deviation?  [7.2.3.]
      1. Confidence interval approach  [7.2.3.1.]
      2. Sample sizes required  [7.2.3.2.]
    4. Does the proportion of defectives meet requirements?  [7.2.4.]
      1. Confidence intervals  [7.2.4.1.]
      2. Sample sizes required  [7.2.4.2.]
    5. Does the defect density meet requirements?  [7.2.5.]
    6. What intervals contain a fixed percentage of the population values?  [7.2.6.]
      1. Approximate intervals that contain most of the population values  [7.2.6.1.]
      2. Percentiles  [7.2.6.2.]
      3. Tolerance intervals for a normal distribution  [7.2.6.3.]
      4. Tolerance intervals based on the largest and smallest observations  [7.2.6.4.]

  3. Comparisons based on data from two processes  [7.3.]
    1. Do two processes have the same mean?  [7.3.1.]
      1. Analysis of paired observations  [7.3.1.1.]
      2. Confidence intervals for differences between means  [7.3.1.2.]
    2. Do two processes have the same standard deviation?  [7.3.2.]
    3. How can we determine whether two processes produce the same proportion of defectives?  [7.3.3.]
    4. Assuming the observations are failure times, are the failure rates (or Mean Times To Failure) for two distributions the same?  [7.3.4.]
    5. Do two arbitrary processes have the same central tendency?  [7.3.5.]

  4. Comparisons based on data from more than two processes  [7.4.]
    1. How can we compare several populations with unknown distributions (the Kruskal-Wallis test)?  [7.4.1.]
    2. Assuming the observations are normal, do the processes have the same variance?  [7.4.2.]
    3. Are the means equal?  [7.4.3.]
      1. 1-Way ANOVA overview  [7.4.3.1.]
      2. The 1-way ANOVA model and assumptions  [7.4.3.2.]
      3. The ANOVA table and tests of hypotheses about means  [7.4.3.3.]
      4. 1-Way ANOVA calculations  [7.4.3.4.]
      5. Confidence intervals for the difference of treatment means  [7.4.3.5.]
      6. Assessing the response from any factor combination  [7.4.3.6.]
      7. The two-way ANOVA  [7.4.3.7.]
      8. Models and calculations for the two-way ANOVA  [7.4.3.8.]
    4. What are variance components?  [7.4.4.]
    5. How can we compare the results of classifying according to several categories?  [7.4.5.]
    6. Do all the processes have the same proportion of defects?  [7.4.6.]
    7. How can we make multiple comparisons?  [7.4.7.]
      1. Tukey's method  [7.4.7.1.]
      2. Scheffe's method  [7.4.7.2.]
      3. Bonferroni's method  [7.4.7.3.]
      4. Comparing multiple proportions: The Marascuillo procedure  [7.4.7.4.]

  5. References  [7.5.]


8.   Assessing Product Reliability

[TOP] [PREV]
  1. Introduction  [8.1.]
    1. Why is the assessment and control of product reliability important?  [8.1.1.]
      1. Quality versus reliability  [8.1.1.1.]
      2. Competitive driving factors  [8.1.1.2.]
      3. Safety and health considerations  [8.1.1.3.]
    2. What are the basic terms and models used for reliability evaluation?  [8.1.2.]
      1. Repairable systems, non-repairable populations and lifetime distribution models  [8.1.2.1.]
      2. Reliability or survival function  [8.1.2.2.]
      3. Failure (or hazard) rate  [8.1.2.3.]
      4. "Bathtub" curve  [8.1.2.4.]
      5. Repair rate or ROCOF  [8.1.2.5.]
    3. What are some common difficulties with reliability data and how are they overcome?  [8.1.3.]
      1. Censoring  [8.1.3.1.]
      2. Lack of failures  [8.1.3.2.]
    4. What is "physical acceleration" and how do we model it?  [8.1.4.]
    5. What are some common acceleration models?  [8.1.5.]
      1. Arrhenius  [8.1.5.1.]
      2. Eyring  [8.1.5.2.]
      3. Other models  [8.1.5.3.]
    6. What are the basic lifetime distribution models used for non-repairable populations?  [8.1.6.]
      1. Exponential  [8.1.6.1.]
      2. Weibull  [8.1.6.2.]
      3. Extreme value distributions  [8.1.6.3.]
      4. Lognormal  [8.1.6.4.]
      5. Gamma  [8.1.6.5.]
      6. Fatigue life (Birnbaum-Saunders)  [8.1.6.6.]
      7. Proportional hazards model  [8.1.6.7.]
    7. What are some basic repair rate models used for repairable systems?  [8.1.7.]
      1. Homogeneous Poisson Process (HPP)  [8.1.7.1.]
      2. Non-Homogeneous Poisson Process (NHPP) - power law  [8.1.7.2.]
      3. Exponential law  [8.1.7.3.]
    8. How can you evaluate reliability from the "bottom-up" (component failure mode to system failure rate)?  [8.1.8.]
      1. Competing risk model  [8.1.8.1.]
      2. Series model  [8.1.8.2.]
      3. Parallel or redundant model  [8.1.8.3.]
      4. R out of N model  [8.1.8.4.]
      5. Standby model  [8.1.8.5.]
      6. Complex systems  [8.1.8.6.]
    9. How can you model reliability growth?  [8.1.9.]
      1. NHPP power law  [8.1.9.1.]
      2. Duane plots  [8.1.9.2.]
      3. NHPP exponential law  [8.1.9.3.]
    10. How can Bayesian methodology be used for reliability evaluation?  [8.1.10.]

  2. Assumptions/Prerequisites  [8.2.]
    1. How do you choose an appropriate life distribution model?  [8.2.1.]
      1. Based on failure mode  [8.2.1.1.]
      2. Extreme value argument  [8.2.1.2.]
      3. Multiplicative degradation argument  [8.2.1.3.]
      4. Fatigue life (Birnbaum-Saunders) model  [8.2.1.4.]
      5. Empirical model fitting - distribution free (Kaplan-Meier) approach  [8.2.1.5.]
    2. How do you plot reliability data?  [8.2.2.]
      1. Probability plotting  [8.2.2.1.]
      2. Hazard and cum hazard plotting  [8.2.2.2.]
      3. Trend and growth plotting (Duane plots)  [8.2.2.3.]
    3. How can you test reliability model assumptions?  [8.2.3.]
      1. Visual tests  [8.2.3.1.]
      2. Goodness of fit tests  [8.2.3.2.]
      3. Likelihood ratio tests  [8.2.3.3.]
      4. Trend tests  [8.2.3.4.]
    4. How do you choose an appropriate physical acceleration model?  [8.2.4.]
    5. What models and assumptions are typically made when Bayesian methods are used for reliability evaluation?  [8.2.5.]

  3. Reliability Data Collection  [8.3.]
    1. How do you plan a reliability assessment test?  [8.3.1.]
      1. Exponential life distribution (or HPP model) tests  [8.3.1.1.]
      2. Lognormal or Weibull tests  [8.3.1.2.]
      3. Reliability growth (Duane model)  [8.3.1.3.]
      4. Accelerated life tests  [8.3.1.4.]
      5. Bayesian gamma prior model  [8.3.1.5.]

  4. Reliability Data Analysis  [8.4.]
    1. How do you estimate life distribution parameters from censored data?  [8.4.1.]
      1. Graphical estimation  [8.4.1.1.]
      2. Maximum likelihood estimation  [8.4.1.2.]
      3. A Weibull maximum likelihood estimation example  [8.4.1.3.]
    2. How do you fit an acceleration model?  [8.4.2.]
      1. Graphical estimation  [8.4.2.1.]
      2. Maximum likelihood  [8.4.2.2.]
      3. Fitting models using degradation data instead of failures  [8.4.2.3.]
    3. How do you project reliability at use conditions?  [8.4.3.]
    4. How do you compare reliability between two or more populations?  [8.4.4.]
    5. How do you fit system repair rate models?  [8.4.5.]
      1. Constant repair rate (HPP/exponential) model  [8.4.5.1.]
      2. Power law (Duane) model  [8.4.5.2.]
      3. Exponential law model  [8.4.5.3.]
    6. How do you estimate reliability using the Bayesian gamma prior model?  [8.4.6.]
    7. References For Chapter 8: Assessing Product Reliability  [8.4.7.]
