Assessing Product Reliability
8.1.3. What are some common difficulties with reliability data and how are they overcome?
Failure data is needed to accurately assess and improve reliability - this poses problems when testing highly reliable parts
When fitting models and estimating failure rates from reliability data, the precision of the estimates (as measured by the width of the confidence intervals) tends to vary inversely with the square root of the number of failures observed - not the number of units on test or the length of the test. In other words, a test in which 5 of 10 units fail gives more information than a test with 1000 units but only 2 failures.
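The dependence on the number of failures can be seen directly in the standard chi-square confidence interval for a constant failure rate \(\lambda\). For a failure-censored (Type II) exponential life test with \(r\) failures in \(T\) total unit-hours, a two-sided interval is \(\chi^2_{\alpha/2,\,2r}/(2T) \le \lambda \le \chi^2_{1-\alpha/2,\,2r}/(2T)\). A minimal sketch (assuming SciPy is available) shows that the ratio of the upper to the lower bound - a scale-free measure of interval width - depends only on \(r\), not on \(n\) or \(T\):

```python
# Sketch, not a definitive procedure: two-sided chi-square confidence
# interval for the failure rate lambda of an exponential (constant
# failure rate) model, given r failures in T total unit-hours:
#   lower = chi2.ppf(alpha/2, 2r) / (2T)
#   upper = chi2.ppf(1 - alpha/2, 2r) / (2T)
# The ratio upper/lower cancels T, so it depends only on r.
from scipy.stats import chi2

def ci_width_ratio(r, conf=0.90):
    """Ratio of upper to lower confidence bound for lambda, given r failures."""
    a = (1 - conf) / 2
    return chi2.ppf(1 - a, 2 * r) / chi2.ppf(a, 2 * r)

# 5 failures (e.g. 5 of 10 units) vs. 2 failures (e.g. 2 of 1000 units)
print(f"r=5: upper/lower = {ci_width_ratio(5):.2f}")
print(f"r=2: upper/lower = {ci_width_ratio(2):.2f}")
```

With 5 failures the 90% interval spans roughly a factor of 4.6; with only 2 failures it spans roughly a factor of 13, no matter how many units were on test.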
Since the number of failures \(r\) is critical, and not the sample size \(n\) on test, it becomes increasingly difficult to assess the failure rates of highly reliable components. Parts like memory chips, that in typical use have failure rates measured in parts per million per thousand hours, will have few or no failures when tested for reasonable time periods with affordable sample sizes. This gives little or no information for accomplishing the two primary purposes of reliability testing, namely:
  * accurately assessing population failure rates
  * obtaining failure mode information to use for product improvement
Testing at much higher than typical stresses can yield failures, but models are then needed to relate these back to use stress
How can tests be designed to overcome an expected lack of failures?
The answer is to make failures occur by testing at much higher stresses than the units would normally see in their intended application. This creates a new problem: how can these failures at higher-than-normal stresses be related to what would be expected to happen over the course of many years at normal use stresses? The models that relate high stress reliability to normal use reliability are called acceleration models.
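One widely used acceleration model for temperature stress is the Arrhenius model, under which time-to-fail scales by an acceleration factor \(AF = \exp\left[\frac{E_a}{k}\left(\frac{1}{T_{use}} - \frac{1}{T_{stress}}\right)\right]\), with temperatures in kelvin, \(k\) the Boltzmann constant, and \(E_a\) an activation energy. The numbers below (\(E_a = 0.7\) eV, use at 25 °C, stress at 125 °C) are illustrative assumptions, not values from this section:

```python
# Sketch of an Arrhenius acceleration factor calculation.
# All parameter values here are illustrative assumptions:
# activation energy Ea = 0.7 eV, use at 25 C, stress at 125 C.
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV per kelvin

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor relating stress-test time to use-condition time."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_use_k - 1 / t_stress_k))

af = arrhenius_af(0.7, 25.0, 125.0)
print(f"Acceleration factor: {af:.0f}")
```

Under these assumed values the factor comes out in the hundreds, meaning each hour of testing at the high stress stands in for hundreds of hours at use conditions - which is exactly how a short, affordable test can produce the failures needed to estimate use-stress reliability.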