4. Process Modeling
4.2. Underlying Assumptions for Process Modeling
4.2.1. What are the typical underlying assumptions in process modeling?


"Statistical" Implies Random Variation  The most basic assumption inherent to all statistical methods for process modeling is that the process to be described is actually a statistical process. This assumption seems so obvious that it is sometimes overlooked by analysts immersed in the details of a process or in a rush to uncover information of interest from an exciting new data set. However, in order to successfully model a process using statistical methods, it must include random variation. Random variation is what makes the process statistical rather than purely deterministic.  
Role of Random Variation  The overall goal of all statistical procedures, including those designed for process modeling, is to enable valid conclusions to be drawn from noisy data. As a result, statistical procedures are designed to compare apparent effects found in a data set to the noise in the data in order to determine whether the effects are more likely to be caused by a repeatable underlying phenomenon of some sort or by fluctuations in the data that happened by chance. Thus the random variation in the process serves as a baseline for drawing conclusions about the nature of the deterministic part of the process. If there were no random noise in the process, then conclusions based on statistical methods would no longer make sense or be appropriate.  
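The comparison of an apparent effect to the noise baseline can be sketched with a small simulation. In this hypothetical example (the process, coefficients, and noise level are all assumed for illustration), a straight-line response is observed with random measurement error, and the fitted slope is judged against the standard error implied by the residual noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical process: a deterministic straight line plus random noise.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

# Least-squares fit of the deterministic part of the process.
X = np.column_stack([np.ones_like(x), x])
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# The residual standard deviation estimates the random variation,
# which serves as the baseline for judging the fitted effects.
resid = y - X @ coef
sigma = np.sqrt(resid @ resid / (x.size - 2))

# Ratio of the estimated slope to its standard error: a large ratio
# indicates the effect is unlikely to be a chance fluctuation.
cov = sigma**2 * np.linalg.inv(X.T @ X)
t_slope = coef[1] / np.sqrt(cov[1, 1])
print(f"slope = {coef[1]:.3f}, t = {t_slope:.1f}")
```

If there were no random noise, sigma would be zero and this comparison would be meaningless, which is exactly the point made above.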
This Assumption Usually Valid  Fortunately, this assumption is valid for most physical processes. Almost any time something must be measured, there will be random error in the measurements. In fact, complex, real-life processes often contain other sources of random error over and above measurement error. Not every process is statistical, however; a process dominated by deterministic behavior, such as the numerical computations discussed below, may include no appreciable random variation.


Distinguishing Process Types  One sure indicator that a process is statistical is that repeated observations of the process response under a particular fixed condition yield different results. The converse, however, is not a sure indication of a nonstatistical process: repeated observations may always yield the same value even when the process is statistical. For example, in some types of computations in which complex numerical methods are used to approximate the solutions of theoretical equations, the results of a computation might deviate from the true solution in an essentially random way because of the interactions of round-off errors, multiple levels of approximation, stopping rules, and other sources of error. Even so, the result of the computation might be the same each time it is repeated because all of the initial conditions of the calculation are reset to the same values each time the calculation is made. As a result, scientific or engineering knowledge of the process must always be used as well to determine whether or not a given process is statistical.
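The contrast can be made concrete with a minimal sketch (the particular computation and measurement model are assumed for illustration): a deterministic numerical routine reproduces its answer exactly on every run, even though round-off keeps it from being exactly correct, while repeated noisy measurements of a fixed quantity do not repeat.

```python
import numpy as np

def newton_sqrt2(n_iter=20):
    """Deterministic computation: Newton's method for sqrt(2).

    Round-off error makes the answer deviate slightly from the true
    value, but the same initial conditions give the same result every
    time the calculation is repeated.
    """
    x = 1.0
    for _ in range(n_iter):
        x = 0.5 * (x + 2.0 / x)
    return x

# Repeating the computation reproduces the result bit-for-bit,
# so repeatability alone does not prove the process is nonstatistical.
runs = {newton_sqrt2() for _ in range(5)}
print(len(runs))  # a single distinct value

# Repeated physical measurements of a fixed quantity, by contrast,
# differ from one another because of random measurement error.
rng = np.random.default_rng()
measurements = {float(10.0 + rng.normal(scale=0.1)) for _ in range(5)}
print(len(measurements))  # several distinct values
```

This is why, as noted above, subject-matter knowledge of the process is needed in addition to the observed repeatability of its output.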