7. Product and Process Comparisons
7.4. Comparisons based on data from more than two processes

7.4.5. How can we compare the results of classifying according to several categories?

Contingency Table approach When items are classified according to two or more criteria, it is often of interest to decide whether these criteria act independently of one another.

For example, suppose we wish to classify defects found in wafers produced in a manufacturing plant, first according to the type of defect and, second, according to the production shift during which the wafers were produced. If the proportions of the various types of defects are constant from shift to shift, then classification by defects is independent of the classification by production shift. On the other hand, if the proportions of the various defects vary from shift to shift, then the classification by defects depends upon or is contingent upon the shift classification and the classifications are dependent.

In the process of investigating whether one method of classification is contingent upon another, it is customary to display the data by using a cross classification in an array consisting of \(r\) rows and \(c\) columns called a contingency table. A contingency table consists of \(r \times c\) cells representing the \(r \times c\) possible outcomes in the classification process. Let us construct an industrial case.

Industrial example A total of 309 wafer defects were recorded and the defects were classified as being one of four types, \(A, \, B, \, C,\) or \(D\). At the same time each wafer was identified according to the production shift in which it was manufactured, 1, 2, or 3.
Contingency table classifying defects in wafers according to type and production shift These counts are presented in the following table.


                           Type of Defects
  Shift        A            B            C            D       Total

    1      15 (22.51)   21 (20.99)   45 (38.94)   13 (11.56)    94
    2      26 (22.99)   31 (21.44)   34 (39.77)    5 (11.81)    96
    3      33 (28.50)   17 (26.57)   49 (49.29)   20 (14.63)   119

  Total        74           69          128           38       309

(Note: the numbers in parentheses are the estimated expected cell frequencies.)

Column probabilities Let \(p_A\) be the probability that a defect will be of type A. Likewise, define \(p_B, \, p_C,\) and \(p_D\) as the probabilities of observing the other three types of defects. These probabilities, which are called the column probabilities, will satisfy the requirement $$ p_A + p_B + p_C + p_D = 1 \, . $$
Row probabilities By the same token, let \(p_i (i = 1, \, 2, \, 3)\) be the row probability that a defect will have occurred during shift \(i\), where $$ p_1 + p_2 + p_3 = 1 \, . $$
Multiplicative Law of Probability Then if the two classifications are independent of each other, a cell probability will equal the product of its respective row and column probabilities in accordance with the Multiplicative Law of Probability.
Example of obtaining column and row probabilities For example, the probability that a particular defect occurs in shift 1 and is of type \(A\) is \((p_1)(p_A)\). While the numerical values of the cell probabilities are unspecified, the null hypothesis states that each cell probability equals the product of its respective row and column probabilities. This condition implies independence of the two classifications. The alternative hypothesis is that this equality does not hold for at least one cell.

In other words, we state the null hypothesis as \(\mbox{H}_0\): the two classifications are independent, while the alternative hypothesis is \(\mbox{H}_1\): the classifications are dependent.

To obtain the observed column probability, divide the column total by the grand total, \(n\). Denoting the total of column \(j\) as \(c_j\), we get $$ \begin{eqnarray} \hat{p}_A = \frac{c_1}{n} = \frac{74}{309} & \,\,\,\,\, & \hat{p}_C = \frac{c_3}{n} = \frac{128}{309} \\ & & \\ \hat{p}_B = \frac{c_2}{n} = \frac{69}{309} & \,\,\,\,\, & \hat{p}_D = \frac{c_4}{n} = \frac{38}{309} \, . \\ \end{eqnarray} $$ Similarly, the row probabilities \(p_1, \, p_2,\) and \(p_3\) are estimated by dividing the row totals \(r_1, \, r_2,\) and \(r_3\) by the grand total \(n\), respectively: $$ \begin{eqnarray} \hat{p}_1 & = & \frac{r_1}{n} = \frac{94}{309} \\ & & \\ \hat{p}_2 & = & \frac{r_2}{n} = \frac{96}{309} \\ & & \\ \hat{p}_3 & = & \frac{r_3}{n} = \frac{119}{309} \\ \end{eqnarray} $$
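These marginal estimates are simple ratios of the row and column totals to the grand total. A minimal Python sketch using the counts from the example (the variable names are ours, chosen for illustration):

```python
# Observed defect counts from the wafer example:
# rows are shifts 1-3, columns are defect types A, B, C, D.
observed = [
    [15, 21, 45, 13],
    [26, 31, 34,  5],
    [33, 17, 49, 20],
]

row_totals = [sum(row) for row in observed]        # r_1, r_2, r_3
col_totals = [sum(col) for col in zip(*observed)]  # c_1, ..., c_4
n = sum(row_totals)                                # grand total

# Estimated marginal probabilities: p-hat_i = r_i / n, p-hat_j = c_j / n
row_probs = [r / n for r in row_totals]
col_probs = [c / n for c in col_totals]

print(n)            # 309
print(row_totals)   # [94, 96, 119]
print(col_totals)   # [74, 69, 128, 38]
```

By construction the row probabilities sum to 1, and likewise for the column probabilities.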

Expected cell frequencies Denote the observed frequency of the cell in row \(i\) and column \(j\) of the contingency table by \(n_{ij}\). Then we have $$ \hat{E}(n_{ij}) = n(\hat{p}_i \, \hat{p}_j) = n \, \left( \frac{r_i}{n} \right) \left( \frac{c_j}{n} \right) = \frac{r_i \cdot c_j}{n} \, . $$
Estimated expected cell frequency when \(\mbox{H}_0\) is true In other words, when the row and column classifications are independent, the estimated expected value of the observed cell frequency \(n_{ij}\) in an \(r \times c\) contingency table is equal to the product of its respective row and column totals divided by the total frequency. $$ \hat{E}(n_{ij}) = \frac{r_i \cdot c_j}{n} $$ The estimated cell frequencies are shown in parentheses in the contingency table above.
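The rule \(\hat{E}(n_{ij}) = r_i c_j / n\) is straightforward to apply programmatically. A self-contained sketch that reproduces the parenthesized values in the table (variable names are illustrative):

```python
# Observed counts: rows are shifts 1-3, columns are defect types A-D.
observed = [
    [15, 21, 45, 13],
    [26, 31, 34,  5],
    [33, 17, 49, 20],
]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Estimated expected frequency for cell (i, j) under H0: r_i * c_j / n
expected = [[r * c / n for c in col_totals] for r in row_totals]

for row in expected:
    print([round(e, 2) for e in row])
# [22.51, 20.99, 38.94, 11.56]
# [22.99, 21.44, 39.77, 11.81]
# [28.5, 26.57, 49.29, 14.63]
```

Note that the expected frequencies preserve the margins: each row of `expected` sums to the corresponding row total, and each column to the column total.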
Test statistic From here we use the expected and observed frequencies shown in the table to calculate the value of the test statistic. $$ \begin{eqnarray} \chi^2 & = & \sum_{i=1}^3 \sum_{j=1}^4 \frac{[n_{ij} - \hat{E}(n_{ij})]^2}{\hat{E}(n_{ij})} \\ & & \\ \chi^2 & = & \frac{(15 - 22.51)^2}{22.51} + \frac{(26 - 22.99)^2}{22.99} + \cdots + \frac{(20 - 14.63)^2}{14.63} = 19.18 \\ \end{eqnarray} $$
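The double sum above can be checked with a few lines of Python; the sketch below recomputes the expected frequencies and accumulates \((n_{ij} - \hat{E}(n_{ij}))^2 / \hat{E}(n_{ij})\) over all twelve cells:

```python
# Observed counts: rows are shifts 1-3, columns are defect types A-D.
observed = [
    [15, 21, 45, 13],
    [26, 31, 34,  5],
    [33, 17, 49, 20],
]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)
expected = [[r * c / n for c in col_totals] for r in row_totals]

# Chi-square statistic: sum over all cells of (observed - expected)^2 / expected
chi2 = sum((o - e) ** 2 / e
           for obs_row, exp_row in zip(observed, expected)
           for o, e in zip(obs_row, exp_row))

print(round(chi2, 2))  # 19.18
```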

Degrees of freedom = \((r-1)(c-1)\) The next step is to find the appropriate number of degrees of freedom associated with the test statistic. Leaving out the details of the derivation, we state the result:
The number of degrees of freedom associated with a contingency table consisting of \(r\) rows and \(c\) columns is \((r-1)(c-1)\).
So for our example we have \((3-1)(4-1) = 6\) degrees of freedom.
Testing the null hypothesis To test the null hypothesis, we compare the test statistic with the critical value \(\chi_{1 - \alpha}^2\) at a selected significance level \(\alpha\). Let us use \(\alpha\) = 0.05. Then the critical value is \(\chi_{0.95, \, 6}^2\) = 12.5916 (see the chi-square table in Chapter 1). Since the test statistic of 19.18 exceeds the critical value, we reject the null hypothesis and conclude that there is significant evidence that the proportions of the different defect types vary from shift to shift. In this case, the \(p\)-value of the test statistic is 0.00387.
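The whole test can be reproduced end to end. If SciPy is available, `scipy.stats.chi2_contingency(observed)` performs all of these steps in one call; the sketch below instead uses only the standard library, exploiting the exact closed form of the chi-square survival function for even degrees of freedom (which applies here since \(\nu = 6\)):

```python
import math

# Observed counts: rows are shifts 1-3, columns are defect types A-D.
observed = [
    [15, 21, 45, 13],
    [26, 31, 34,  5],
    [33, 17, 49, 20],
]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)
expected = [[r * c / n for c in col_totals] for r in row_totals]

chi2 = sum((o - e) ** 2 / e
           for obs_row, exp_row in zip(observed, expected)
           for o, e in zip(obs_row, exp_row))
dof = (len(observed) - 1) * (len(observed[0]) - 1)  # (r-1)(c-1) = 6

def chi2_sf(x, df):
    """P(X > x) for a chi-square variable with EVEN df:
    exp(-x/2) * sum_{j=0}^{df/2 - 1} (x/2)^j / j!  (exact closed form)."""
    assert df % 2 == 0, "this closed form requires an even number of df"
    half = x / 2.0
    return math.exp(-half) * sum(half ** j / math.factorial(j)
                                 for j in range(df // 2))

p_value = chi2_sf(chi2, dof)
print(dof, round(chi2, 2), round(p_value, 5))  # 6 19.18 0.00387
print("reject H0" if p_value < 0.05 else "fail to reject H0")  # reject H0
```

For odd degrees of freedom the survival function has no such elementary form, and a library routine (e.g. SciPy's `chi2.sf`) is the practical choice.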