

1. Exploratory Data Analysis
1.3. EDA Techniques
1.3.5. Quantitative Techniques

1.3.5.15. Chi-Square Goodness-of-Fit Test

Purpose:
Test for distributional adequacy
The chi-square test (Snedecor and Cochran, 1989) is used to test if a sample of data came from a population with a specific distribution.

An attractive feature of the chi-square goodness-of-fit test is that it can be applied to any univariate distribution for which you can calculate the cumulative distribution function. The chi-square goodness-of-fit test is applied to binned data (i.e., data put into classes). This is not really a restriction, since for non-binned data you can simply calculate a histogram or frequency table before applying the chi-square test. However, the value of the chi-square test statistic depends on how the data are binned. Another disadvantage of the chi-square test is that it requires a sufficient sample size in order for the chi-square approximation to be valid.

The chi-square test is an alternative to the Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests. The chi-square goodness-of-fit test can be applied to discrete distributions such as the binomial and the Poisson. The Kolmogorov-Smirnov and Anderson-Darling tests are restricted to continuous distributions.
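For example, the following R sketch (using simulated counts and hypothetical variable names) applies the test to data hypothesized to follow a Poisson distribution; the upper tail is pooled so that each expected frequency is at least 5, and the degrees of freedom are reduced for the estimated mean (c = 1 + 1 = 2).

    # Sketch: chi-square goodness-of-fit test for a Poisson fit to count data.
    # 'counts' is illustrative simulated data; replace with your own sample.
    set.seed(1)
    counts <- rpois(200, lambda = 2)

    lambda.hat <- mean(counts)                    # estimated Poisson mean

    # Observed frequencies for counts 0, 1, 2, 3, 4, and a pooled ">= 5" bin
    obs <- as.vector(table(factor(pmin(counts, 5), levels = 0:5)))

    # Expected frequencies: N times the hypothesized cell probabilities
    p <- dpois(0:4, lambda.hat)
    p <- c(p, 1 - sum(p))                         # P(X >= 5) for the pooled bin
    exp.freq <- length(counts) * p

    chi2 <- sum((obs - exp.freq)^2 / exp.freq)    # test statistic
    df <- length(obs) - 2                         # k - c, with c = 1 + 1 = 2
    p.value <- pchisq(chi2, df, lower.tail = FALSE)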

Additional discussion of the chi-square goodness-of-fit test is contained in the product and process comparisons chapter (chapter 7).

Definition The chi-square test is defined for the hypotheses:

H0: The data follow a specified distribution.
Ha: The data do not follow the specified distribution.
Test Statistic: For the chi-square goodness-of-fit computation, the data are divided into k bins and the test statistic is defined as
    \[ \chi^{2} = \sum_{i=1}^{k}(O_{i} - E_{i})^{2}/E_{i} \]
where \(O_{i}\) is the observed frequency for bin i and \(E_{i}\) is the expected frequency for bin i. The expected frequency is calculated by
    \[ E_{i} = N(F(Y_{u}) - F(Y_{l})) \]
where F is the cumulative distribution function for the distribution being tested, \(Y_{u}\) is the upper limit for class i, \(Y_{l}\) is the lower limit for class i, and N is the sample size.

This test is sensitive to the choice of bins. There is no optimal choice for the bin width (since the optimal bin width depends on the distribution). Most reasonable choices should produce similar, but not identical, results. For the chi-square approximation to be valid, the expected frequency should be at least 5. This test is not valid for small samples, and if some of the counts are less than five, you may need to combine some bins in the tails.
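As a minimal sketch of this computation in R (chisq.gof.stat is a hypothetical helper, not a built-in function), the observed frequencies can be obtained by binning the sample and the expected frequencies from the hypothesized cumulative distribution function. The bins below are chosen to be equally probable under the hypothesized distribution, which is one reasonable choice that keeps all expected counts equal. If parameters are estimated from the data, the fitted CDF is used and the degrees of freedom are reduced to k - c, as described next.

    # Sketch: chi-square goodness-of-fit statistic for a continuous distribution.
    chisq.gof.stat <- function(y, cdf, breaks) {
      O <- as.vector(table(cut(y, breaks = breaks)))  # observed frequency per bin
      E <- length(y) * diff(cdf(breaks))              # E_i = N (F(Y_u) - F(Y_l))
      sum((O - E)^2 / E)                              # chi-square statistic
    }

    # Example: test a simulated sample against the standard normal distribution,
    # using k = 32 bins that are equally probable under the hypothesized CDF.
    set.seed(123)
    y <- rnorm(1000)
    k <- 32
    breaks <- qnorm(seq(0, 1, length.out = k + 1))    # edges run from -Inf to Inf
    chisq.gof.stat(y, pnorm, breaks)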

Significance Level: α
Critical Region: The test statistic follows, approximately, a chi-square distribution with (k - c) degrees of freedom, where k is the number of non-empty cells and c is the number of estimated parameters for the distribution (including location, scale, and shape parameters) plus one. For example, for a 3-parameter Weibull distribution, c = 4.

Therefore, the hypothesis that the data are from a population with the specified distribution is rejected if

    \[ \chi^2 > \chi^2_{1-\alpha, \, k-c} \]
where \(\chi^2_{1-\alpha, \, k-c}\) is the chi-square critical value with k - c degrees of freedom and significance level α.
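In R, for instance, the critical value can be obtained from the qchisq function (shown here for α = 0.05 and k - c = 29 degrees of freedom, which matches the example below).

    alpha <- 0.05
    df <- 29                      # k - c = 32 - 3 in the example below
    qchisq(1 - alpha, df)         # 42.557; reject H0 if the statistic exceeds this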
Chi-Square Test Example
We generated 1,000 random numbers from normal, double exponential, t with 3 degrees of freedom, and lognormal distributions. In all cases, a chi-square test with k = 32 bins was applied to test for normally distributed data. Because the normal distribution has two estimated parameters, c = 2 + 1 = 3.

The normal random numbers were stored in the variable Y1, the double exponential random numbers were stored in the variable Y2, the t random numbers were stored in the variable Y3, and the lognormal random numbers were stored in the variable Y4.

H0:  the data are normally distributed
Ha:  the data are not normally distributed  

Y1 Test statistic:  \(\chi^{2}\) =   32.256
Y2 Test statistic:  \(\chi^{2}\) =   91.776
Y3 Test statistic:  \(\chi^{2}\) =  101.488
Y4 Test statistic:  \(\chi^{2}\) = 1085.104

Significance level:  α = 0.05
Degrees of freedom:  k - c = 32 - 3 = 29
Critical value:  \(\chi^{2}_{1-\alpha, \, k-c}\) = 42.557
Critical region: Reject H0 if \(\chi^{2}\) > 42.557
As we would hope, the chi-square test fails to reject the null hypothesis for the normally distributed data set and rejects the null hypothesis for the three non-normal data sets.
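A hedged R sketch of this example is given below. The random draws and the binning (equal-probability bins under the fitted normal) will not match the Dataplot analysis exactly, so the statistics will differ from the values quoted above, but the conclusions should typically be the same.

    # Sketch: repeat the normality test for four simulated samples.
    set.seed(100)
    n  <- 1000
    y1 <- rnorm(n)                                       # normal
    y2 <- sample(c(-1, 1), n, replace = TRUE) * rexp(n)  # double exponential
    y3 <- rt(n, df = 3)                                  # t with 3 degrees of freedom
    y4 <- rlnorm(n)                                      # lognormal

    test.normality <- function(y, k = 32, alpha = 0.05) {
      mu <- mean(y); sigma <- sd(y)                   # 2 estimated parameters, so c = 3
      breaks <- qnorm(seq(0, 1, length.out = k + 1), mu, sigma)
      O <- as.vector(table(cut(y, breaks = breaks)))  # observed frequencies
      E <- length(y) * diff(pnorm(breaks, mu, sigma)) # expected frequencies
      stat <- sum((O - E)^2 / E)
      crit <- qchisq(1 - alpha, k - 3)
      c(statistic = stat, critical.value = crit, reject.H0 = stat > crit)
    }

    sapply(list(Y1 = y1, Y2 = y2, Y3 = y3, Y4 = y4), test.normality)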
Questions The chi-square test can be used to answer the following types of questions:
  • Are the data from a normal distribution?
  • Are the data from a log-normal distribution?
  • Are the data from a Weibull distribution?
  • Are the data from an exponential distribution?
  • Are the data from a logistic distribution?
  • Are the data from a binomial distribution?
Importance Many statistical tests and procedures are based on specific distributional assumptions. The assumption of normality is particularly common in classical statistical tests. Much reliability modeling is based on the assumption that the distribution of the data follows a Weibull distribution.

There are many non-parametric and robust techniques that are not based on strong distributional assumptions. By non-parametric, we mean a technique, such as the sign test, that is not based on a specific distributional assumption. By robust, we mean a statistical technique that performs well under a wide range of distributional assumptions. However, techniques based on specific distributional assumptions are in general more powerful than these non-parametric and robust techniques. By power, we mean the ability to detect a difference when that difference actually exists. Therefore, if the distributional assumption can be confirmed, the parametric techniques are generally preferred.

If you are using a technique that makes a normality (or some other type of distributional) assumption, it is important to confirm that this assumption is in fact justified. If it is, the more powerful parametric techniques can be used. If the distributional assumption is not justified, a non-parametric or robust technique may be required.

Related Techniques Anderson-Darling Goodness-of-Fit Test
Kolmogorov-Smirnov Test
Shapiro-Wilk Normality Test
Probability Plots
Probability Plot Correlation Coefficient Plot
Software Some general purpose statistical software programs provide a chi-square goodness-of-fit test for at least some of the common distributions. Both Dataplot code and R code can be used to generate the analyses in this section.