Invited Session: Statistics in Information Technology

Organizer: Gordon Lyon, NIST
Session Chair: Jon Kettenring, Bellcore

Network Traffic Self-Similarity & the World Wide Web

Mark E. Crovella
Dept. of Computer Science, Boston Univ.

Recently, the fractal-like property of self-similarity has been found in time series of both wide-area and local-area computer network traffic measurements. Since a self-similar entity appears unchanged over a wide range of viewing scales, this discovery has serious implications for the performance modeling and evaluation of networks and network protocols. The mechanisms that give rise to self-similar network traffic are illustrated nicely by a common yet representative example: traffic on the World Wide Web (WWW). I shall discuss empirically obtained distributions, both from our own traces and from data collected independently at over thirty WWW sites. These records of actual user executions of NCSA Mosaic give strong evidence that WWW traffic is self-similar. The fractal-like nature originates in the underlying distribution of WWW document sizes, the effects of caching and user preference on file transfer, user "think time," and the superimposition of many transfers in a local-area network.

[Mark E. Crovella, Dept. of Computer Science, Boston Univ., Boston, MA USA; crovella@cs.bu.edu. ]
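A minimal sketch of the phenomenon the abstract describes, using only assumed parameters (Pareto shape 1.2, series length, aggregation scales are illustrative choices, not values from the talk): heavy-tailed "transfer sizes" are sampled by inverse-CDF, and the variance of the series aggregated at increasing scales m is computed. For short-range-dependent traffic this variance falls off roughly like 1/m; heavy-tailed input, of the kind observed for WWW document sizes, makes it decay much more slowly, which is the signature probed by a variance-time plot.

```python
import random
import statistics

def pareto_sample(alpha, n, rng):
    # Inverse-CDF sampling from a Pareto distribution (shape alpha, scale 1);
    # for alpha < 2 the distribution has infinite variance (heavy tail).
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def aggregated_variances(series, scales):
    # Variance of the m-aggregated series (block means) for each scale m.
    out = []
    for m in scales:
        blocks = [sum(series[i:i + m]) / m
                  for i in range(0, len(series) - m + 1, m)]
        out.append(statistics.pvariance(blocks))
    return out

rng = random.Random(42)          # fixed seed for reproducibility
traffic = pareto_sample(1.2, 20000, rng)
scales = [1, 10, 100]
variances = aggregated_variances(traffic, scales)
for m, v in zip(scales, variances):
    print(m, v)
```

Plotting log(variance) against log(m) and comparing the slope to -1 is the usual visual check; a shallower slope indicates long-range dependence.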


Lessons Learned in Developing & Applying Software Reliability & Metrics Models: NASA Space Shuttle Example

Norman F. Schneidewind
Dept. of Information Sciences, Naval Postgraduate School

On the NASA Space Shuttle software project, we learned that remaining failures, total failures, test time required to attain a given fraction of remaining failures, and time to next failure are useful reliability metrics for: 1) providing confidence that the software has achieved reliability goals; 2) rationalizing how long to test a piece of software; and 3) analyzing the risk of not achieving remaining-failure and time-to-next-failure goals. Predictions of the extent to which the software is not fault free (remaining failures) and of whether it is likely to survive a mission (time to next failure) provide criteria for assessing the risk of deploying the software. Furthermore, the fraction of remaining failures can be used both as a program quality goal in predicting test time requirements and, conversely, as an indicator of program quality as a function of test time expended.

[Norman F. Schneidewind, Dept. of Information Sciences, Naval Postgraduate School, Monterey, CA 93943-5000 USA; schneidewind@nps.navy.mil ]
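The metrics named in the abstract can be sketched with a generic exponential NHPP mean-value function m(t) = a(1 - e^(-bt)); this is a stand-in for illustration, not the Schneidewind model itself, and the parameter values (a = 100 total expected failures, b = 0.05 detection rate, t = 30 units of test time) are assumed, not Shuttle data. Remaining failures, the test time to reach a given fraction of remaining failures, and time to next failure all have closed forms under this model.

```python
import math

def mean_failures(a, b, t):
    # Expected cumulative failures observed by test time t:
    # m(t) = a * (1 - exp(-b * t)).
    return a * (1.0 - math.exp(-b * t))

def remaining_failures(a, b, t):
    # Predicted failures still latent in the software at time t.
    return a - mean_failures(a, b, t)

def time_for_fraction(a, b, p):
    # Test time at which the fraction of remaining failures drops to p:
    # solve a * exp(-b * t) = p * a  =>  t = -ln(p) / b.
    return -math.log(p) / b

def time_to_next_failure(a, b, t):
    # Time dt after t at which one more failure is expected, i.e.
    # m(t + dt) - m(t) = 1, giving dt = -ln(1 - 1/r) / b for r remaining.
    r = remaining_failures(a, b, t)
    if r <= 1.0:
        return math.inf  # fewer than one failure expected to remain
    return -math.log(1.0 - 1.0 / r) / b

a, b, t = 100.0, 0.05, 30.0   # illustrative parameters only
print("remaining failures:", remaining_failures(a, b, t))
print("time to reach 10% remaining:", time_for_fraction(a, b, 0.10))
print("time to next failure:", time_to_next_failure(a, b, t))
```

In practice a and b would be estimated from observed failure-count data (e.g. by maximum likelihood) rather than assumed, and the predictions compared against the reliability and risk goals described above.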

Date created: 6/5/2001
Last updated: 6/21/2001