Information Technology Laboratory
National Institute of Standards and Technology
Gaithersburg, MD 20899, USA
Conformance testing for the Virtual Reality Modeling Language (VRML) provides a means for determining whether an implementation satisfies the requirements and specifications of the standard. The National Institute of Standards and Technology (NIST) is developing a VRML test suite (VTS) to systematically address some of the problems posed by the nature of testing three-dimensional graphics. In addition, we address the use of the WWW as a vehicle for delivering the VTS. This paper discusses the test development strategy and the design issues encountered in developing the VTS.
Conformance tests capture the technical description of a specification and measure whether a product faithfully implements that specification. Such testing provides relevant parties, such as developers, purchasers, and users, with increased confidence in product quality and increases the probability of successful interoperability.
Standards for graphics and the World Wide Web (WWW) present special challenges for conformance testing. Due to the interactive, inter-networked, 3-dimensional nature of VRML, a testing methodology must address VRML's ability to represent and properly render or capture multimedia-based content, network accessible links to reusable content, and static or animated scenes. A complete VRML test system needs to address these requirements for VRML content, browsers, and authoring tools in a reliable and intuitive manner.
The Information Technology Laboratory (ITL) of the National Institute of Standards and Technology (NIST) of the US Department of Commerce has undertaken the development of a VRML Test System (VTS). In this test suite, we are systematically addressing some of the problems posed by the nature of testing 3-dimensional graphics. In addition, we will address usage of the WWW as a vehicle for delivering the VTS system.
Conformance testing is the process of testing the functionality and correctness of a given implementation against a specification. Typically, this process produces tests that provide an objective measure of conformity. Each test lends itself to providing reproducible, unambiguous, and accurate results. These tests determine whether an implementation performs as required on a pass/fail basis. The test suite therefore emphasizes error detection, not error diagnosis or correction. It is important, however, that the conformance tests are informative about the relationship of the test to the specification. A well-organized test system will provide an intuitive mechanism for verifying that the expected outcome of each test is correctly grounded in the specifications of the standard.
Depending on the type of specification under test, various methodologies can be applied in developing conformance tests. For example, tests for application programming interfaces (APIs) are often designed from a functional point of view. In this case the API, viewed simply as a set of functions, is treated as a black box; the internal structure of the implementation remains unknown. Tests are written to generate a series of inputs and to verify that the outputs conform to the specified behavior.
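This black-box approach can be sketched in a few lines. The integer-square-root "implementation" and the test vectors below are invented stand-ins, used only to illustrate the input/output comparison; no internals of the implementation are ever inspected.

```python
# Sketch of black-box conformance testing: drive the implementation
# through its public interface and compare outputs against the
# behavior required by the specification.
import math

def isqrt_impl(n):
    # Stand-in for the implementation under test.
    return int(math.sqrt(n))

# (input, required output) pairs derived from the (hypothetical) specification.
test_vectors = [(0, 0), (1, 1), (8, 2), (9, 3), (10000, 100)]

def run_black_box_tests(impl, vectors):
    """Return a list of (input, actual, required) triples for every failure."""
    return [(x, impl(x), y) for x, y in vectors if impl(x) != y]

print(run_black_box_tests(isqrt_impl, test_vectors))  # → [] (all tests pass)
```

An empty failure list means only that no tested input misbehaved; it says nothing about untested inputs, a point the paper returns to below.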
The VRML specification is not written as an API specification, but rather as a metafile language. Similar to programming language standards, both the syntax and semantics of the language are captured within the specification. This type of metafile language specification lends itself to testing in three areas, including metafile syntax, interpreters, and generators. Each of these areas poses a different problem from a conformance testing perspective. As such, in developing the VTS, we will apply a combination of testing methodologies to provide a complete and robust solution.
Admittedly, metafile testing begins with syntax testing, and is best accomplished through the introduction of various types and levels of errors. Instead of pursuing this type of test case generation, we decided to expend our efforts during the first phase in building and releasing to the VRML community a public domain parser. This reference parser is a follow-on to the parser released by Silicon Graphics, Inc. [SGI96]. Using this reference parser, content providers can check the validity of their VRML file syntax and browser and authoring tool developers can use the reference implementation as a starting point for their implementations.
VRML interpreters, more commonly known as browsers, must be able to read syntactically correct files and render them as specified by the minimum requirements of the specification. Conformance testing for VRML browsers suffers from the same problems of indirect and inaccessible effects that were outlined for testing the PHIGS standard [CUGI91]. The VRML standard, unlike programming language standards, is built around a state-machine concept. Many constructs, such as pointing device sensors, do not immediately generate graphical output. Rather, these constructs set an internal state that will later apply when the user points to the geometry that is influenced by a specific pointing device. Thus, some of the VRML language constructs to be tested have effects which are either indirect, or inaccessible to the test program.
The fact that a VRML scene may be static, dynamic, 3-dimensional, and/or contain sound necessitates human interpretation. Consider a "dotted line", a "green" box, or a "barking" sound: all are common-sense vocabulary of human visual and audio perception. Barring exotic technology or extreme measures, we must rely on human operators and their ability to "recognize" these terms. To minimize the subjectivity inherent in testing browsers, careful consideration must be given to test file design and to the criteria for evaluating the tests.
VRML generators, more commonly known as authoring tools, must be able to produce syntactically correct metafiles. There is no minimum complexity that must be supported by a conforming VRML authoring tool, except that the file must contain the required VRML header. Given the pressing needs of the VRML community with respect to metafile and browser testing, and the limited ITL resources, we decided to defer conformance testing of authoring tools. In the meantime, the VRML reference parser can serve as a syntax checker for generated VRML worlds.
By reviewing the testable areas that were apparent from the VRML specification, we developed a model that provided some guidance in the construction of test cases, rather than approaching these categories in an ad hoc fashion. Three major design considerations arose from our review of testing methodologies:
Two essential components in interpreting languages are the ability to syntactically recognize and categorize the constructs of a language and the ability to discover the relationships among those constructs. All languages are composed of an alphabet and a grammar. The alphabet consists of a finite set of tokens, which are used, in turn, to compose sentences. The process of identifying these tokens is called lexical analysis; the tokens themselves can be succinctly described through the use of regular expressions. The grammar specifies a set of rules that defines how tokens may be combined into sentences. The process of recognizing the relationships defined by a grammar is known as parsing. For a complete discussion of language theory, see [BARR79, DENN78].
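As a minimal sketch, the token-description idea can be illustrated with regular expressions over a tiny VRML-like subset. The token set here is invented for illustration; a generated lexer such as the one Flex produces covers the full alphabet.

```python
# Sketch of lexical analysis: each token class is described by a regular
# expression, and the lexer emits (token_name, lexeme) pairs.
import re

TOKEN_SPEC = [
    ("NUMBER", r"[+-]?\d+(?:\.\d+)?"),      # integers and decimals
    ("IDENT",  r"[A-Za-z_][A-Za-z0-9_]*"),  # node and field names
    ("LBRACE", r"\{"),
    ("RBRACE", r"\}"),
    ("SKIP",   r"[\s,]+"),                  # VRML treats commas as whitespace
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    """Yield (token_name, lexeme) pairs; whitespace is skipped, and any
    character outside the toy token set is silently ignored."""
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("Box { size 2 2 2 }")))
```

The ordering of `TOKEN_SPEC` matters: `NUMBER` precedes `IDENT` so that a digit sequence is never misread, mirroring the rule-ordering behavior of Lex-style tools.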
The VRML parser is built using Flex and Bison, the GNU implementations of Lex and Yacc. Flex takes as input a set of descriptions of possible tokens and produces as output source code that will identify the defined tokens; this routine is called the lexical analyzer. Bison takes as input a concise description of the VRML grammar and produces source code that can parse that grammar.
The VRML parser uses a combination of lexical analysis and parsing to check for syntactically correct VRML files. At the time of its release, it did not implement all VRML nodes and it made no attempt to validate the semantic correctness of a VRML metafile. In addition, it was only available through a rudimentary command-line interface. These limitations of the VRML parser provided us with an initial direction. We decided to proceed in the following manner:
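The kind of surface-level check such a parser performs can be sketched as follows, here reduced to verifying the required header and balanced braces. This is only illustrative: the actual reference parser implements the full Flex/Bison grammar, and this toy check ignores complications such as braces inside comments or strings.

```python
# Sketch of a syntax-level check: required VRML 2.0 header plus
# balanced braces. Not the reference parser, just an illustration
# of what "syntactically correct" means at the coarsest level.
def check_vrml_syntax(text):
    lines = text.splitlines()
    if not lines or lines[0].strip() != "#VRML V2.0 utf8":
        return False                 # required header is missing
    depth = 0
    for ch in text:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth < 0:
                return False         # unmatched closing brace
    return depth == 0

print(check_vrml_syntax("#VRML V2.0 utf8\nBox { }"))  # → True
```

A real syntax checker would of course reject far more malformed inputs; the point is that the parser answers a pass/fail question about the file without ever rendering it.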
A well-known proof technique used in conformance testing is based on the axiomatic method. In this technique, we start with a set of axioms and some rules of inference. Axioms are simply premises that we agree to accept as true. The rules of inference specify how the truth of new premises, or theorems, can be logically deduced from the initial axioms and already established theorems. Consider the following two premises: (1) P is true, and (2) P=>Q is true (read as "P implies Q", where P is a set of premises and Q is a set of conclusions). If the premise P and the conditional P=>Q are true, then, by direct proof, Q must also be true. This fundamental rule of inference is called modus ponens by logicians and is used in direct proofs [DENN78]. Conversely, if the inference P=>Q is valid and the conclusion Q is false, then, by negative inference, one of the premises, P, must be false. This latter mode of reasoning forms the basis for testing conformance of VRML browsers.
Testing conformance for VRML browsers can be accomplished via the direct proof method. A conforming implementation can be stated as:
Theorem: For all X, if X is a conforming implementation, then X behaves as defined by the minimum requirements for browsers in the VRML specification.
To test this theorem, assume the premise P, that X is a conforming implementation, and the conditional P=>Q. Then attempt to find a q, where q is an element of Q, that causes the implementation to behave improperly. If such a case can be found, then by negative inference the original assumption must be false. Note, however, that this process only proves that an implementation does not conform to the specification. It does not prove that an implementation conforms in all cases; it is quite possible that the implementation may not conform in untested areas of the specification. Even for relatively simple problems, it is computationally infeasible to generate a set of tests that exhaustively covers the specification.
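The negative-inference argument can be sketched as a small test harness, with Q approximated by a finite list of (input, required output) pairs; any failing q refutes the conformance premise. The doubling "implementation" below is a toy stand-in.

```python
def conforms_on_tested_cases(implementation, test_cases):
    """Search the tested subset of Q for a counterexample q.

    Returns (False, q) when some q refutes conformance by negative
    inference, and (True, None) when no tested case fails -- which is
    not a proof of conformance, merely an absence of counterexamples.
    """
    for q, required in test_cases:
        if implementation(q) != required:
            return (False, q)   # non-conformance is proven
    return (True, None)

# Toy usage: an "implementation" that doubles its input, checked
# against required behavior drawn from a hypothetical specification.
verdict, counterexample = conforms_on_tested_cases(lambda n: 2 * n, [(1, 2), (3, 7)])
print(verdict, counterexample)  # → False 3
```

The asymmetry in the return values mirrors the asymmetry in the logic: a single failure is conclusive, while universal success over the tested cases is not.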
Proof by negative inference works quite nicely in proving mathematical theorems, where a set of premises already exists in axiomatic form. Applying this technique to information processing standards poses the problem of translating a document written in English prose into a set of logical premises. This translation is accomplished through the creation of semantic requirements, SRs, which play the role of premises. The SRs, in turn, can be used to generate actual test cases, TCs; the TC results play the role of conclusions. The following sections provide a set of guidelines for specifying semantic requirements and generating test cases. This approach was first discussed in [CUGI91], and is discussed here as it applies to the VTS.
The VTS system consists of many components that are organized in both a modular and a hierarchical fashion. The modules correspond to the node groupings specified in section 5 of the VRML specification [VRML]. We considered using the sequence of node definitions in the order they appear, but opted for the node categories, since they will eventually be mapped to subclauses within the section 4 concepts. This organization provides a natural grouping, where each module deals primarily with the strongly related requirements of a single topic. The basic philosophy in determining ordering among and within these modules is to start with basic capabilities (i.e., geometric primitives with default values) and progressively add complexity. To accomplish this ordering, it is necessary to further subdivide a given module into basic and more advanced capabilities.
Each module contains a set of semantic requirements (SRs). These requirements attempt to capture behavior that is known to be true and can therefore be used as premises in our logic-based system. Well-designed SRs should be:
Even given these guidelines, there is no magic formula for specifying the SRs for a given module. These SRs do, however, provide a clear indication of the requirements for conformance to the standard. The level of understanding needed to specify SRs serves to sharpen any interpretation questions that may emerge; such cases provide feedback to the standardization process so that inconsistent or incomplete specifications in the standard can be corrected.
An individual test case, TC, is a testable conclusion that is derived from one or more SRs. Each TC should be written so as to explicitly state the behavior of a conforming implementation. In the VTS, these TCs are realized as actual test worlds. Loading the test world into an implementation will generate a pass/fail condition. The result of this process is used in determining conformance to the specification.
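One possible sketch of how a TC might be packaged as a test world together with its expected result follows; the field names and the green-box world are illustrative inventions, not the actual VTS format.

```python
# Sketch of a test case record: a VRML test world, the expected result
# a human operator must confirm, and a slot for the pass/fail verdict.
GREEN_BOX_WORLD = """#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material { diffuseColor 0 1 0 }
  }
  geometry Box { }
}
"""

test_case = {
    "id": "TC-material-001",   # illustrative identifier
    "world": GREEN_BOX_WORLD,
    "expected": "a green box is rendered at the origin",
    "verdict": None,           # set to "pass" or "fail" by the operator
}

def record_verdict(tc, passed):
    """Record the operator's visual judgment as a pass/fail condition."""
    tc["verdict"] = "pass" if passed else "fail"
    return tc["verdict"]
```

Note that the machine records only the verdict; the comparison of rendered output against the expected description remains, as discussed earlier, a human judgment.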
Typically, a single SR does not lend itself to a single testable conclusion; rather, there exists a many to many relationship between SRs and TCs; that is, one TC may be derived from many SRs, and each SR may be used to specify several TCs.
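This many-to-many relationship can be sketched as a simple mapping; the SR and TC identifiers below are invented for illustration.

```python
# Sketch of SR-to-TC traceability: each TC lists the SRs it is derived
# from, and the reverse lookup recovers every TC exercising a given SR.
TC_TO_SRS = {
    "TC-1": ["SR-3.1", "SR-3.2"],   # one TC derived from several SRs
    "TC-2": ["SR-3.2"],             # one SR reused by several TCs
}

def srs_for(tc):
    """SRs from which a given test case is derived."""
    return TC_TO_SRS.get(tc, [])

def tcs_for(sr):
    """Test cases that exercise a given semantic requirement."""
    return [tc for tc, srs in TC_TO_SRS.items() if sr in srs]

print(sorted(tcs_for("SR-3.2")))  # → ['TC-1', 'TC-2']
```

Extending each SR entry with its source subclause would complete the chain of inference back to the standard described in the next paragraph.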
It is important to establish a complete chain of inference from the standard to a particular TC. This chain is created by relating a TC back to its SRs and ultimately to specific subclauses within the standard itself. This explicit mapping of TCs back to SRs exhibits to the user the validity of the TC. As was previously stated, passing all of the TCs does not prove that an implementation conforms to the specification. Failure in a particular TC does strictly imply failure to conform.
In past testing efforts, we have had available a mutually agreed upon reference implementation that was used to definitively capture proper behavior. In the case of the VTS, there was an immediate need within the VRML community for conformance test cases. In the interest of time, we decided to use "whatever browser worked best for a given test case" to display and capture a graphical representation of the test results. Results are captured as GIF89a images for tests involving static scenes and as MPEG movies for tests involving animated scenes. Although this approach introduces some subjectivity on the part of the test developer, we felt it was necessary to give the user a graphical representation of the test case results wherever possible.
Thus far, we have seen that there are several types of entities involved in our conformance test system, including the specification, semantic requirements, test cases, test worlds, expected results, and actual results. Actual results are obtained by loading a specific test world into the implementation under test. Users of this system need an intuitive and easy-to-use interface to these entities. Moreover, in order to support the logical model set forth for browser conformance testing, there must be a well-understood mapping of test cases to semantic requirements and, ultimately, to specific subclauses within the standard itself.
Since the VRML specification has been developed as a hypertext document on the Web, it is already, and will continue to be, accessible in electronic form. Furthermore, the document will contain a unique HTML anchor for each subclause, allowing test cases to link directly to the relevant portion of the specification.
Access to the VTS is realized through a graphical user interface (GUI) that serves as a front-end responsible for obtaining input and presenting output. As a user navigates through the system, he is presented with a hierarchically organized view of all VTS modules. Each module contains a set of SRs that serve as reference points to the other entities; a database schema defines the relationships among the various entities in the VTS. Navigating further, the user is eventually presented with test worlds. Selecting a test world causes the implementation under test to load the VRML scene. In addition, the system automatically displays the associated semantic requirements, test case descriptions, expected results, and the appropriate subclause(s) within the specification. This information provides the user with the ability to quickly understand the relationship between a specific test case and its derivation from the standard.
The complexity involved in creating usable conformance tests increases as information processing standards address interdisciplinary applications. The proliferation of multimedia content and network accessible resources adds to this complexity. Nevertheless, by utilizing a variety of testing methodologies, some of which have originated in other computer science disciplines, the testing process can be made reliable, comprehensible, and predominantly automated.
[BARR79] Barrett, William A., and John D. Couch. Compiler Construction: Theory and Practice, Science Research Associates, Inc., Chicago, IL, 1979.
[BEIZ90] Beizer, Boris. Software Testing Techniques, Second Edition, Van Nostrand Reinhold, New York, NY, 1990.
[GERS82] Gersting, Judith L. Mathematical Structures for Computer Science, W. H. Freeman and Company, New York, NY, 1982.
[CUGI91] Cugini, John V. "Interactive Conformance Testing for PHIGS," Eurographics '91, edited by F. H. Post and W. Barth, Elsevier Science, New York, NY, 1991.
[DENN78] Denning, Peter J., Jack B. Dennis, and Joseph E. Qualitz. Machines, Languages, and Computation, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1978.
[SGI96] VRML 2.0 Parser, Silicon Graphics, Inc., http://vrml.sgi.com/.
[VRML] The Virtual Reality Modeling Language Specification, ISO/IEC WD 14772:1996, International Organization for Standardization, 1996.