3. Production Process Characterization
3.3. Data Collection for PPC
3.3.3. Define Sampling Plan


Consider these things when selecting a sample size 
When choosing a sample size, we must consider the following issues:


Cost of taking samples  The cost of taking samples helps us determine how precise our estimates need to be. As we will see below, when choosing sample sizes we need to select risk values. If the decisions to be made from the sampling activity are very valuable, then we will want low risk values and hence larger sample sizes.  
Prior information  If our process has been studied before, we can use that prior information to reduce sample sizes. This can be done by using prior mean and variance estimates and by stratifying the population to reduce variation within groups.  
Inherent variability 
We take samples to form estimates of some characteristic of the population
of interest. The variance of that estimate is proportional to the inherent
variability of the population divided by the sample size:
\( \mathrm{Var}(\hat{p}) \propto \frac{\sigma^{2}}{n} \)
with \(\hat{p}\) denoting the parameter we are trying to estimate. This means that if the variability of the population is large, then we must take many samples. Conversely, a small population variance means we don't have to take as many samples. 

Practicality  Of course the sample size you select must make sense. This is where the tradeoffs usually occur. We want to take enough observations to obtain reasonably precise estimates of the parameters of interest but we also want to do this within a practical resource budget. The important thing is to quantify the risks associated with the chosen sample size.  
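The inherent-variability point above can be illustrated with a short simulation: the variance of the sample mean shrinks like the population variance divided by the sample size. This is only a sketch; the process values (mean 500, standard deviation 20) are borrowed from the film-thickness example later in this section, and the function name is ours.

```python
import random
import statistics

random.seed(1)
mu, sigma = 500.0, 20.0   # illustrative process mean and standard deviation

def var_of_sample_mean(n, trials=2000):
    """Empirical variance of the sample mean over many repeated samples of size n."""
    means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
             for _ in range(trials)]
    return statistics.pvariance(means)

# The empirical variance should track sigma^2 / n as n grows.
for n in (4, 16, 64):
    print(n, round(var_of_sample_mean(n), 1), round(sigma**2 / n, 1))
```

Quadrupling the sample size cuts the variance of the estimate by a factor of four, which is why a highly variable population forces larger samples.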
Sample size determination 
In summary, the steps involved in estimating a sample size are:

  1. Specify the parameter to be estimated, the precision desired, and the acceptable risk.
  2. Find a probability statement that connects the desired precision to the sample size; examples are given below.
  3. Supply values for any unknown population quantities in that equation, from prior information or an engineering guess.
  4. Solve for the sample size and check that it is practical; if not, revisit the precision or risk values and examine the tradeoffs.

Sampling proportions 
When we are sampling proportions we start with a probability statement
about the desired precision. This is given by:
\( P(|\hat{p} - p| \geq \delta) = \alpha \)
where \(\hat{p}\) is the sample proportion, \(p\) is the true process proportion, \(\delta\) is the desired precision, and \(\alpha\) is the risk. Using the Normal approximation and solving for the sample size gives:
\( n = \frac{z_{\alpha}^{2} \, p \, q}{\delta^{2}} \)
where \(z_{\alpha}\) is the critical value of the standard Normal distribution corresponding to the risk \(\alpha\), and \(q = 1 - p\). 

Example  Let's say we have a new process we want to try. We plan to run the new process and sample the output for yield (good/bad). Our current process has been yielding 65% (p=.65, q=.35). We decide that we want the estimate of the new process yield to be accurate to within δ = .10 at 95% confidence (α = .05, z_{α} = 2). Using the formula above we get a sample size estimate of n=91. Thus, if we draw 91 random parts from the output of the new process and estimate the yield, then we are 95% sure the yield estimate is within .10 of the true process yield.  
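The arithmetic in the example can be checked with a few lines of Python. The function name is ours; the values are the ones from the example (p = .65, δ = .10, z = 2).

```python
import math

def sample_size_proportion(p, delta, z=2.0):
    """Smallest n giving precision delta for a proportion p at the risk implied by z."""
    q = 1.0 - p
    return math.ceil(z**2 * p * q / delta**2)

print(sample_size_proportion(0.65, 0.10))  # 91, matching the example
```

Note the formula needs a rough guess for p; the example uses the current process yield of 65% as that prior estimate.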
Estimating location: relative error 
If we are sampling continuous normally distributed variables, quite
often we are concerned about the relative error of our estimates rather
than the absolute error. The probability statement connecting the desired
precision to the sample size is given by:
\( P(|\bar{y} - \mu| \geq \epsilon \mu) = \alpha \)
where \(\mu\) is the (unknown) population mean, \(\bar{y}\) is the sample mean, and \(\epsilon\) is the desired relative precision. Again, using the normality assumptions we obtain the estimated sample size to be:
\( n = \frac{z_{\alpha}^{2} \, \sigma^{2}}{\epsilon^{2} \mu^{2}} \)
with \(\sigma^{2}\) denoting the population variance. 
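A minimal sketch of the relative-error calculation, under assumed inputs: the mean guess (500), standard deviation (20), and 1% relative precision below are illustrative numbers we chose, not values from the text, and the function name is ours.

```python
import math

def sample_size_relative(sigma, mu, eps, z=2.0):
    """Sample size for estimating the mean to within a relative error eps of mu."""
    return math.ceil(z**2 * sigma**2 / (eps**2 * mu**2))

# Hypothetical inputs: sigma = 20, rough mean guess mu = 500, eps = 1%.
print(sample_size_relative(20.0, 500.0, 0.01))  # 64
```

Because the formula contains the unknown mean \(\mu\), a rough prior guess for it is required before the sample size can be computed.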

Estimating location: absolute error 
If instead of relative error, we wish to use absolute error, the equation
for sample size looks a lot like the one for the case of proportions:
\( n = \left( \frac{z_{\alpha} \, \sigma}{\delta} \right)^{2} \)
where \(\sigma\) is the population standard deviation (but in practice is usually replaced by an engineering guesstimate). 

Example  Suppose we want to sample a stable process that deposits a 500 Angstrom film on a semiconductor wafer in order to determine the process mean so that we can set up a control chart on the process. We want to estimate the mean within 10 Angstroms (δ = 10) of the true mean with 95% confidence (α = .05, z_{α} = 2). Our initial guess regarding the variation in the process is that one standard deviation is about 20 Angstroms. This gives a sample size estimate of n = 16. Thus, if we take at least 16 samples from this process and estimate the mean film thickness, we can be 95% sure that the estimate is within 10 Angstroms of the true mean value. 
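The absolute-error example can be verified the same way; the function name is ours, and the inputs (σ guess = 20 Angstroms, δ = 10, z = 2) come from the example.

```python
import math

def sample_size_absolute(sigma, delta, z=2.0):
    """Sample size for estimating the mean to within +/- delta at the risk implied by z."""
    return math.ceil((z * sigma / delta)**2)

print(sample_size_absolute(20.0, 10.0))  # 16, matching the example
```

Since σ here is only an engineering guesstimate, it is worth recomputing n for a couple of plausible σ values to see how sensitive the required sample size is.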