

What does getting WET (Web Event-logging Tool)
Mean for Web Usability?

Michael Etgen & Judy Cantor
etgen@att.com,   jcantor@att.com
User Experience Engineering Division
AT&T Labs
Middletown, NJ, USA

 

Approaches to Usability Data Collection on the Web

Usability data collection for web sites and applications has proven to be a challenging task, and many usability engineers crave a tool that makes the process easy, automated, and useful. Several approaches are used to gather web usability information, but each has important disadvantages. The following is a brief discussion of some of these approaches in terms of their advantages and disadvantages.

The traditional technique for capturing usability data is hand-coding interactions in person or from videotape. The advantages of hand-coding are that the usability engineer can see exactly what the user is doing, ask questions, and view subjective reactions. The disadvantages are that it can be expensive in terms of time and money, may require a full lab setup, and can introduce human error into the logging process. Unfortunately, web projects usually leave little time for carrying out the traditional usability testing activities that provide the most useful data. Therefore, those concerned with usability have often turned to other techniques for acquiring usability information.

Server Logs

One of the most popular ways to get web usability data quickly is to examine the logs that are saved on servers. The server generates an entry in the log file each time it receives a request from a client. The kinds of data typically logged include the requesting host's IP address, the date and time of the request, the URL requested, the status of the response, and identifying information about the browser.

The benefit of server logs is that they are automatic and inexpensive. There are some important disadvantages to server logs though. First, servers cannot collect data on some potentially crucial user interactions that occur on the client side only (e.g., within-page anchor links, form element interactions, Java applets, etc.). A second, related point is that the validity of the data is highly suspect because of caching by proxy servers and browsers and because of dynamic IP addressing. Caching refers to the functionality built into browsers and proxy servers that allows them to store frequently accessed web pages, which cuts down on web congestion. However, this means that the server will not know when a cached page is requested because it never receives the request. Dynamic IP addressing is the practice of assigning a client a potentially different IP address each time it accesses the internet, which gives the server misleading data about the identity of users visiting the site (the same IP address could represent one person or many different people).
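
For concreteness, a single page request recorded in a server log might look roughly like the following (shown here in the widely used Common Log Format; the host name, path, and values are invented for illustration):

proxy3.example.com - - [10/Jun/1999:10:48:35 -0400] "GET /orderforms/order_form.html HTTP/1.0" 200 4512

Nothing in such an entry reveals within-page interactions, and the requesting host shown may be a proxy or a dynamically assigned address rather than an individual user.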

Automated Data Collection Software

A technique that has been commercially available for some time is to run software on the client operating system in addition to the browser while testing [1], or to alter the browser software itself to enable some automated data collection [2]. The main benefit of this technique is the automatic collection and storage of some client-side usability information, such as all URL requests and browser menu selections. There are many disadvantages though, including the necessity of installing the special software on each user's computer, the cost of developing the software or buying it commercially, difficulties in retrieving the data once collected, and the lack of flexibility inherent in using special software that is not platform independent.

WebVIP from NIST

In order to overcome the lack of client-side data collection with server logs and the problems with automated operating system software, researchers at NIST have developed a tool (WebVIP) which copies an entire web site and adds identifying and event handling code to the HTML "links" on the site [3]. This approach represents a real departure from server logs and special software. It can be used on many browsers because the data logging capability lies within the page's HTML, and so is not tied to specific operating systems on the client. WebVIP also has access to page-specific content because it is embedded within the HTML itself. Finally, WebVIP is available without cost to the usability engineer. Still, WebVIP has some important disadvantages. First, it can be very difficult to instrument a mid- to large-sized site, as the usability engineer must oversee the altering of the event handling code for each individual link on each web page. Second, copying entire web sites for instrumentation often leads to invalid path specifications and difficulties getting the copied site to function properly, which is a general problem with copying complex interactive sites and not peculiar to WebVIP. Third, WebVIP seems to lack the ability to collect data on HTML objects other than standard HTML links, such as form elements, which is a potential problem for path logging if navigation is executed by elements other than HTML links. Finally, the direct altering of the event handling code within the tags of each link also has the potential to interfere with the functioning of the site if it already uses inline event handling on links (as many sites do today).

Web Event Logging

As an alternative to the above techniques, we developed the Web Event-logging Tool (WET). WET embodies a technique that overcomes many of the important limitations of other usability data collection methods by taking advantage of the global event handling capabilities built into Netscape and Microsoft browsers. Because the logging is performed by JavaScript within the pages themselves, WET captures client-side interactions, requires no special software on the user's machine, and can be added to or removed from a site with minimal, modular changes to the pages.

Browser Events and WET

In general, browsers are applications that run on a client, send requests to servers, and receive responses from the server in the form of documents. The browser displays the content and provides interactivity with the document by way of events, which are generated when a user manipulates the elements on a web page (they can also be generated in cases in which the user does not directly spawn them). The way that events flow through a browser environment is called the browser's event model. The two most popular browsers have different event models (Netscape Communicator events "trickle down" while Microsoft IE events "bubble up"); however, they both deal similarly with a common set of events for the interactions that occur on a web page (see Table 1).

Table 1. Events supported in NS Communicator and MS IE4

Event     | Source Objects
abort     | image
blur      | window, text, textarea, password, select
change    | text, textarea, select
click     | link, area, button, radio, checkbox, reset, submit
dblclick  | link
error     | window, image
focus     | window, text, textarea, password, select
keydown   | text, textarea, password
keypress  | text, textarea, password
keyup     | text, textarea, password
load      | window, image
mousedown | link, button, radio, checkbox, reset, submit
mouseout  | link, area
mouseover | link, area
mouseup   | link, button, radio, checkbox, reset, submit
move      | window
reset     | form
resize    | window
select    | text, textarea, password
submit    | form
unload    | window

When events are triggered, the browser creates an event object that contains a great deal of information about the event and the "source" which triggered it. As one can see from Table 1, the sources of events are generally the interactive elements on web pages. Using global event handling functions assigned at the window and document level, WET reads certain properties of the event objects in both browsers and records those properties along with time stamps and document-window location, thereby giving a rather complete view of an individual user's interaction with a web site or application (see Table 2).

Table 2. Communicator and IE4 event object properties [4]

Communicator Property | Values | Property Description | Values | Internet Explorer 4 Property
modifiers | | Modifier keys pressed when the event occurred | Boolean | altKey, ctrlKey, shiftKey
pageX | pixel count | Horizontal coordinate of event in content region of browser window | pixel count | clientX
pageY | pixel count | Vertical coordinate of event in content region of browser window | pixel count | clientY
screenX | pixel count | Horizontal coordinate of event relative to entire screen | pixel count | screenX
screenY | pixel count | Vertical coordinate of event relative to entire screen | pixel count | screenY
target | object | Object that is to receive, or that fired, the event | object | srcElement
target.type | source type | String value of event source type (e.g., checkbox, text, radio, button) | source type | srcElement.type
target.name | source name | String value of event source name (e.g., "mycheckbox#1") | source name | srcElement.name
target.value | source value | String value of event source value | source value | srcElement.value
target.href | source link destination | Specified URL for link objects | source link destination | srcElement.href
type | event name | String value of event name (e.g., "click", "mousedown", "keypress") | event name | type
which | integer | Mouse button or keyboard key code (some code values differ between browsers) | integer | button, keyCode

After WET reads the event properties, it stores them in a log file. As an example, a brief log generated by WET could look like the following, in which the user loads a page, clicks a checkbox in a form, and then submits the form. The order of items in the log is event time (the date can be added if desired), event type, source type - source name - source value (for form elements only), and event/source location.

10:48.35.528, load, /orderforms/order_form.html
10:48.39.384, click, checkbox - Peanut Butter - Skippy, /orderforms/order_form.html
10:48.45.932, click, submit - Submit - Submit Order, /orderforms/order_form.html
10:48.45.978, unload, /orderforms/order_form.html

The storage and retrieval of the log generated by WET can be handled in many different ways and depend largely upon the structure and technologies used by the web site or application. So far though, it appears that WET can be adapted and implemented within the common structures and technologies in use on the web today (e.g., layers, frames, DHTML, ASP, CGI, ColdFusion, SSI, etc.). In general, it does not matter whether the pages on the site are dynamically generated or static; if they contain HTML, then the interactions with them can be logged. More detail on a specific implementation of WET is provided in the next section.

The collection and analysis of data vary with the usability engineer's goals. The usability engineer may wish to collect detailed data on all interactions with the elements of the user interface, which would generate an enormous amount of data. It is possible to log all of the events mentioned in Table 1. In fact, it is important that the usability engineer understand that events are often generated in groups for what is perceived as a "single" interaction. For example, when a user clicks on a link in a web page, the following events are all generated based upon that seemingly "single" interaction with the link: mouseover, mousedown, mouseup, and click. Thus, the practical approach is to tailor WET so that it attends to certain aspects of the user's interaction, such as clicks on objects, changes to objects, mouseovers on navigation items, and page loads. The usability engineer may have some specific issues in mind for which they wish to gather user data, and so could track a subset of the events that occur, track events only upon certain pages, or even only upon certain object designs. Basically, if you can describe what you wish to include or exclude in the form of an IF-THEN statement, then WET can be adapted to your needs.
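
For example, if the tester cares only about clicks on links and form controls within a particular part of a site, the click handling function can apply such an IF-THEN filter before logging anything. The following is a minimal sketch (the directory test and the logEvent routine are hypothetical placeholders, not WET's actual code):

function clickHandler(e) {
  var evt = (e) ? e : window.event;                      // Communicator passes e; IE4 uses window.event
  var src = (evt.target) ? evt.target : evt.srcElement;  // event source object in each browser
  // IF the page is part of the order process AND the click landed on a link or form control...
  if (window.location.pathname.indexOf("/orderforms/") != -1 && (src.href || src.type)) {
    logEvent(evt, src);   // ...THEN log it (hypothetical logging routine); otherwise ignore it
  }
  return true;            // let the event proceed normally either way
}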

Implementation of WET in a Web Application

Background

In a current project, a web application was designed for the top Technical Executive Officers and their staff in our company. It was not possible to get these people to sit down for a traditional usability test while someone was watching their actions, and scheduling anything formal was nearly impossible. But it was possible to send some subset of users a special login and password, and request that they complete a set of predefined tasks.

We envisioned that the primary use of WET in the web usability process would be within the context of usability testing, whether formal or informal, in a lab or remote. The nice feature of a tool like WET is that the usability expert does not have to be taping, watching, or even in the same room as a user in order to collect data. In any usability study that includes usability goals and objectives, user scenarios, and models of user interactions, WET can simply be the data collection method. As was mentioned above though, it is important that the tester have a good idea of the events, pages, and elements for which they would like data; otherwise the amount of data and the analysis would be overwhelming. For the executive web application project, we decided to utilize WET for gathering usability data from some users in a (temporally and physically) remote usability test study.

The web application is essentially used as a planning and coordination tool for an Executive Council. As reflected in the task list below, its basic functionality includes viewing meeting agendas and schedules, tracking action items, requesting meeting topics, reviewing materials from past meetings, and contacting council members by e-mail.

Behind the application is an MS Access database where the information generated by the executives is stored. Interaction with the database is provided by way of ColdFusion, which is software that runs concurrently with the web server. CFML (ColdFusion Markup Language) tags are embedded in the web pages and work much like Server Side Includes (SSI), that is, they are interpreted at the server and not sent to the client.

Configuring WET

As a first step, WET requires that the usability engineer specify the events and properties that they wish to log, and how they will be stored and retrieved. For this project, we decided to track the following events: clicks, changes, loads, and mouseovers. These basic events provided a thorough record of how the users interacted with the elements on the pages in the application. The event model used by Communicator demands that javascript be written to "capture" the events of interest at the window, document, or layer levels, and then assign them to an event handling function. In the case of WET, the event handling function logs the usability data from the event at the level of interest (usually window and/or document), then "releases" the event so that whatever actions are supposed to occur for the source of the event will occur (e.g., link requests a URL, checkbox shows a check, etc.). For Internet Explorer, there is only a single event object for the browser whose properties dynamically change according to the interactions with the page. To read the properties of the event object, one must merely assign an event handler to the event at the window, document, or layer level as is the case with Communicator, but the event need not be "captured" or "released". So in the example blocks of code below, the first block is for Communicator only, while the second block is necessary for both Communicator and Internet Explorer.

if (navigator.appName=="Netscape") {
document.captureEvents(Event.CLICK | Event.CHANGE | Event.MOUSEOVER);
window.captureEvents(Event.LOAD); }
...
This top block checks whether the browser is Netscape (Communicator), and if so executes the event capturing code.
****** event handling functions ******  
...
document.onclick = clickHandler;
document.onchange = changeHandler;
document.onmouseover = mouseoverHandler;
window.onload = loadHandler;
This bottom block executes for both browsers, assigning the appropriate event handling function to each event type at the chosen level.
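
As an illustration, one of the event handling functions referred to above might look roughly like the following sketch, which reads the cross-browser properties from Table 2, builds a log entry, and then lets the event proceed (storeLogEntry is a hypothetical storage routine; a cookie-based version is sketched in the next section):

function clickHandler(e) {
  var evt = (e) ? e : window.event;                      // Communicator passes e; IE4 uses window.event
  var src = (evt.target) ? evt.target : evt.srcElement;  // "target" in Communicator, "srcElement" in IE4
  var entry = new Date() + ", " + evt.type;              // time stamp and event name
  if (src.type) {                                        // form elements: add type - name - value
    entry += ", " + src.type + " - " + src.name + " - " + src.value;
  }
  entry += ", " + window.location.pathname;              // document-window location
  storeLogEntry(entry);                                  // hypothetical storage routine
  return true;                                           // "release" the event so normal actions still occur
}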

After the events to be logged have been specified, that code and the set of event handling functions that do the logging must be placed in an external text file on the web server and saved with a "js" extension (e.g., WET.js). A single call to the WET.js file is then inserted anywhere within the HEAD tags of each document that is to be logged:

<HEAD>
...
<SCRIPT LANGUAGE="JavaScript" SRC="WET.js"></SCRIPT>
...
</HEAD>

This is the common method for making a set of javascript functions available to all pages on a site without rewriting them in each document. This technique of gathering usability data therefore allows access to content-specific information about the elements that are interacted with on the page, while requiring minimal modularized changes to the code of the pages themselves (i.e., logging code is easily inserted and removed).

The storage method used for this project entailed temporarily writing task log data to session-only cookies (i.e., cookies that are deleted when the user leaves the site). As each cookie approached its size limit (~4K), WET would begin writing to another cookie so that no data would be lost. Though cookies were used in this project, other options that are nonobtrusive (to the developers and users) are available for storing the log data as it accrues; however, each has its own set of difficulties. The problem of persistence in the "stateless" web environment and its solutions have been discussed elsewhere [4, 5], and so will not be further discussed here.
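
To make this concrete, the cookie writing might look roughly like the following sketch (the cookie names, the size threshold, and the readCookie helper are assumptions for illustration, not the project's actual code):

var WET_COOKIE_LIMIT = 3800;   // stay safely under the ~4K per-cookie limit
var wetCookieIndex = 0;        // index of the cookie currently being written

function storeLogEntry(entry) {
  var name = "WETlog" + wetCookieIndex;
  var current = readCookie(name);
  if (current.length + entry.length > WET_COOKIE_LIMIT) {
    wetCookieIndex++;                        // roll over to a new cookie so no data is lost
    name = "WETlog" + wetCookieIndex;
    current = "";
  }
  // No "expires" attribute is set, so the cookie lasts only for the browser session.
  document.cookie = name + "=" + escape(current + entry + "|");
}

function readCookie(name) {
  var parts = document.cookie.split("; ");
  for (var i = 0; i < parts.length; i++) {
    if (parts[i].indexOf(name + "=") == 0) {
      return unescape(parts[i].substring(name.length + 1));
    }
  }
  return "";
}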

In order to provide "turn on/turn off" capability (i.e., WET becomes active only for certain users or certain situations), the external javascript file call was encapsulated within a CFML tag that checked the user's login name before writing the call to the WET.js file into the page. If the user was a usability test participant, the CFML wrote the WET.js file call into the document before it was sent to the client; if the user was not a participant, the call was not written. Though not a necessary part of WET, code was also added which displayed start/stop links; that code was included within a CFML tag like the one just described but located within the <BODY> tag of the documents. The start/stop links were implemented using layers and were coded to move dynamically so that scrolling or page resizing would never remove them from the upper right corner of the document (like the GeoCities icon on http://www.geocities.com). The stop link was also used to send log data back to the database and to clear the log storage cookies.
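
As an illustration, the conditional inclusion in the head of each page might be expressed roughly as follows (the participant list and the Session.LoginName variable are hypothetical; the project's actual CFML may have differed):

<!--- Write the WET.js call into the page only for designated usability test participants --->
<CFIF ListFind("execuser1,execuser2,execuser3", Session.LoginName)>
  <SCRIPT LANGUAGE="JavaScript" SRC="WET.js"></SCRIPT>
</CFIF>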

Overall, configuring WET to log usability data for the executive web application project generally required the insertion of some CFML conditional statements wrapped around calls to external javascript files. Altogether this meant that two very similar modular blocks of code were inserted at the top (within the head tags) and bottom (just above the closing body tag) of the pages, no matter what the overall page size or contents. The addition of this code to the web pages was not difficult to do by hand for the current project (copy and paste the code); however, the same task may seem quite daunting for web sites and applications of any significant size. There are two ways to overcome this. One way is to write a CGI script that (given the correct permissions) will open each file, place the appropriate script in the appropriate places, and then close the file. Another is to use Allaire HomeSite to perform an automatic, extended find and replace on each page in the site. For instance, find the closing head tag ("</HEAD>") and replace it with the call to the WET.js file followed by the same piece of the head tag ("<SCRIPT LANGUAGE="JavaScript" SRC="WET.js"></SCRIPT></HEAD>").

So far, we have found that configuring WET in any site or application seems to require some tinkering because of usability tester preferences and site architecture. It may be the case though that there are some general rules that can be used to decide how to configure WET for different types of sites or applications. More experience with implementing WET must be acquired before those rules can be documented. In general though, the following high-level steps are necessary for setting up WET in a web site or application.

  1. Specify events to log and the logging method (e.g., cookies, parent window/frame variable, hidden form element) in an external javascript file
  2. Place external javascript file on web server
  3. Insert call to external javascript file in head tags of web pages
  4. Add code for the log retrieval method, which is often intricately tied to the logging method (a sketch follows this list)
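
Building on the cookie sketch in the previous section, step 4 might be handled as follows for a hidden-form retrieval method triggered by the stop link (the form and field names are hypothetical):

function stopTask() {
  var log = "";
  for (var i = 0; i <= wetCookieIndex; i++) {            // gather the pieces of the task log
    log += readCookie("WETlog" + i);
    // expire the cookie immediately so the next task starts with an empty log
    document.cookie = "WETlog" + i + "=; expires=Thu, 01 Jan 1970 00:00:00 GMT";
  }
  wetCookieIndex = 0;
  document.logForm.logData.value = log;   // hidden form field, e.g., <INPUT TYPE="hidden" NAME="logData">
  document.logForm.submit();              // posts to a server-side page that writes the log to the database
}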

Task List and Metrics

The tasks that we supplied to the usability test participants covered the basic functionality of the application and were as follows:

  1. See what is on the agenda for the next meeting, and find a description of the agenda item for the 2:00 - 3:00 PM slot.
  2. Identify when the next meeting is after May 10.
  3. For the July meeting, you will be out of town on business. Please find a way to provide that information in this website.
  4. Do you have any open action items for the next meeting? Do you have any action items at all for this council?
  5. You wish to talk about "The Growth of the Web" at a meeting in the future. Request this topic for the meeting on June 10, 1999 with the following title and description: "The Growth of the Web"; "Discuss the impact of the Growth of the Web on Our Business."
  6. You remembered something interesting from the February meeting. Find what topics were on the agenda for the meeting on February 12, 1999.
  7. Contact a specific member through e-mail with an indication that you have completed the usability test.

Expert users with a great deal of experience using the application performed each of the above tasks to provide a benchmark for usability objectives. The actions performed by the expert users as they completed the tasks were also used to create an ideal interaction model for each task. Based upon the expert users' performance, usability metrics and objectives were identified for each task: task completion, time to complete the task, and deviation from the ideal interaction model (the objectives and benchmarks appear in the table in the Results section).

Results

The comparison of the testing data to the usability objectives is presented in the table below for each task. We received complete data from a total of 4 people during the testing period:

Metric: Task completion (objective: 75% complete the task, for all tasks)
Task   | Participants' Result Compared to Benchmark | Satisfy Objective?
Task 1 | 100% completed task                        | yes
Task 2 | 100% completed task                        | yes
Task 3 | 75% completed task                         | yes
Task 4 | 50% completed task                         | no
Task 5 | 100% completed task                        | yes
Task 6 | 75% completed task                         | yes
Task 7 | 100% completed task                        | yes

Metric: Time to complete task*
Task   | Objective and Benchmark    | Participants' Result Compared to Benchmark | Satisfy Objective?
Task 1 | 75% less than 20 seconds   | 100% completed task in time                | yes
Task 2 | 75% less than 25 seconds   | 75% completed task in time                 | yes
Task 3 | 75% less than 45 seconds   | 100% completed task in time**              | no
Task 4 | 75% less than 35 seconds   | 0% completed task in time***               | no
Task 5 | 75% less than 90 seconds   | 75% completed task in time                 | yes
Task 6 | 75% less than 30 seconds   | 33% completed task in time**               | no
Task 7 | 75% less than 120 seconds  | 75% completed task in time                 | yes

Metric: Deviation from ideal model (objective: 75% have no more than 2 deviations, for all tasks)
Task   | Participants' Result Compared to Benchmark | Satisfy Objective?
Task 1 | 100% had no more than 2 deviations         | yes
Task 2 | 100% had no more than 2 deviations         | yes
Task 3 | 75% had no more than 2 deviations          | yes
Task 4 | 50% had no more than 2 deviations          | no
Task 5 | 100% had no more than 2 deviations         | yes
Task 6 | 75% had no more than 2 deviations          | yes
Task 7 | 100% had no more than 2 deviations         | yes

* The Time to complete task metric calculation was revised due to inconsistent clicking of the start link; the mouseover event on the first link followed off the home page was used as the starting point, producing a slightly liberal time estimate.
** The total number of participants upon which the calculation was based was 3.

*** The total number of participants upon which the calculation was based was 2.

Though the total number of participants was small, the results showed that the users had problems with some of the tasks. Task 4 seemed especially troublesome, as the usability objectives for that task failed to be satisfied for each metric. One of the main reasons for participant trouble with Task 4 was likely the fact that the default settings for the action item page displayed only the open action items (which is not necessarily a poor design decision). In order to see all action items, the user must select a radio button for all action items and then reload the page by way of a submit button. Apparently, interpreting the default status and comparing it to the status necessary for completing the task (i.e., view all action items, not just open ones) led users to take more time than should have been required, and also led some to pursue erroneous interactions or to indicate incorrectly that they had completed the task.

Task 6 was also problematic for users, but mostly in terms of the time necessary to complete the task. This may have been due to the fact that participants needed more time to translate the task description into the appropriate paths from the home page (i.e., they needed to realize that topics from past meetings were displayed on a "minutes and presentations" page). Once they reached the "minutes and presentations" page, they may have paused again as they were presented with two independent but equally viable paths to the data (i.e., an "archives" link and a direct link to the meeting description). Thus, the participants may not have detected a clear path to the information when they reached each juncture, and so paused as they considered their options.

The design of the Executive Council web application seemed to hold up well to the testing overall, as most subjects were able to complete most tasks, almost as quickly as the expert users, and without much deviation from ideal paths. Even though Task 4 results showed some difficulties, the design decision to show only open action items as default is probably still the most advantageous. Closed action items may need to be documented and archived in the database, but certainly the most important action items for the next or future meetings will be the open ones.

Overall, WET was able to provide log data that uncovered specific usability difficulties, and in the most austere of circumstances (i.e., a physically and temporally remote usability test). Since the logs for each participant were analyzed by hand, we were able to infer, based upon the order and presence of events in the log, whether there was any likely data loss from the logging process. Out of the total number of events logged (approximately 600 across all participants), there was only one instance where we could infer that data was probably lost (a load event seemed not to be logged). However, that particular user had some technical difficulties whose predictable effects were observed in the data in most cases, and it was unclear whether the missed load event was due to data loss or to those technical difficulties.

Lessons Learned

As with most tools and techniques in early stages of development, we learned a great deal about the adaptation and implementation of WET for use on real projects doing real usability testing. Some problems we experienced were inherent to situations where the usability testing is quickly prepared, informal, and remote. For instance, the test directions did not clearly indicate that the home page of the site was the second page reached after logging in, and so several participants were confused when they did not see the "start/stop" links on the post-login page. This problem was quickly remedied by an addendum to the original task directions, but it would have been avoided with more time for preparation and if the testing had been temporally contiguous.

Another problem was related to the design of part of the testing apparatus, the start/stop links. They were prominently displayed and followed the user as the window scrolled, and users did not object to their placement or scrolling behavior. However, most users complained that, after becoming involved in a task, they could not remember whether they had clicked the start link at its beginning. The links did not change in appearance after being clicked, so no design aid helped them determine whether they had in fact remembered to click the start link as they should. In general, most users failed to remember to click the start link as they began their tasks, even after copious reminders throughout the task document. Curiously though, the opposite was true for the stop link, as the vast majority did remember to click it promptly upon finishing a task. Thus, future tests (especially remote tests) may need to provide even more robust visual cues for the presence and status of the start element, and somehow provide better affordance for clicking it at the beginning of tasks. One potential solution may be to display the start link prominently in the center of the page, over the existing site display, so that the user would have difficulty avoiding the link when they wish to start a task.

A final problem was related to the temporary log storage method used for the application. As noted above, we initially used session-only cookies to store the data for each task. Unfortunately, a prior brief inspection of the application code did not reveal that the developers had also extensively used session-only cookies to handle small bits of login authorization information. WET was therefore allowed the use of only about 2 cookies for logging each task (only 20 cookies are allowed per domain). Without the logging of mouseover events, 2 cookies would probably have been enough. However, we had decided to log mouseover events because we suspected that they might provide some interesting data. We specified mouseover logging after the initial testing of WET in the application (in which all data collection seemed to be in good order), and so we were unprepared for the fact that many participants were automatically logged out of the application in the middle of testing as WET began to overwrite the session-only cookies that stored authorization information. A quick fix was created which utilized a nonvisible frameset and a persistent frame variable; however, the temporary halt in testing eliminated several users from participation. Thus, the tester must know, or be able to find out, about the use of such things as cookies in order to be sure that data collection will not be interrupted and application functioning will not be disturbed.

Related Work and Future Directions

Others have utilized event logging in non-web software systems that incorporate events, such as Macintosh and X Windows software. Balbo and colleagues [6] described an approach to automated usability evaluation that incorporated capturing user behavior through system events, analyzing the event logs for behavioral patterns that signify usability problems, and comparing the event logs to formalized task models. That work detailed an ambitious approach to automated usability testing, but the detection of informative behavioral patterns in complex systems has proven difficult [7]. Though WET could possibly strive to incorporate such intelligence in the future, there is currently no code written equipping WET to do so. For the immediate future, WET is considered an additional tool and technique in the usability tester's arsenal for simply collecting behavioral data during usability testing so that the tester does not need to.

One future direction that becomes apparent is to apply the technique embodied by WET to other web technologies that use events, such as Java. We have some preliminary ideas on how WET can be used to log events from Java applets and then communicate the log data back to the web page where the applet resides. To the extent that the configuration of WET can be generalized across sites and applications, we would also like to develop a wizard that helps the usability engineer configure WET for their specific situation. Finally, we are also considering the development of a log analysis tool that can summarize and analyze the data for the usability engineer in many ways.

Conclusion

In conclusion, WET provides the usability engineer with a simple method of collecting web usability data that is automated, customizable, and comprehensive. WET was able to provide interpretable logs of user interactions for the testing of the executive web application. Usability problems with specific pages and with elements on those pages became evident as the logs were examined and compared to the usability objectives we had established. Though WET shows a great deal of promise, it is in the early stages of development and needs further testing in order to evolve into the lithe and helpful usability tool that it could be. Usability engineers must also be practical in their approach to data collection, as WET could potentially gather much more data than is necessary or useful. As we found, including mouseover events, though potentially interesting, caused some unforeseen problems. Finally, it must be stressed that WET simply collects usage data. It does not capture the more subjective elements of user interaction such as motivations, frustrations, satisfaction, and all of the important aspects that can never be logged by the computer. The ideal usage model for WET would be as a method for relieving the usability engineer of tedious interaction logging during or after usability testing in a lab or on the road. However, it can also be useful in the imperfect world where users can be hard to come by when you need them, schedules are tight, and sometimes the only way to get data is to quickly put together and send an e-mail message to a few people with some tasks to try out on your site. WET can provide detailed usability data in all types of usability testing, from highly controlled formal testing in a lab to the much less controlled temporally and physically remote testing described in this paper.

References

  1. Choo, C.W., Detlor, B., & Turnbull, D. A behavioral model of information seeking on the web: Preliminary results of a study of how managers and IT specialists use the web. 1998 ASIS Annual Meeting.
  2. Tauscher, L. Evaluating history mechanisms: An empirical study of reuse patterns in world wide web navigation. MSc Thesis, Department of Computer Science, University of Calgary, Alberta, Canada. May 1996.
  3. GeoScroll: http://www.bratta.com/dhtml/scripts.html
  4. Goodman, D. Client-side persistence without cookies. http://developer.netscape.com/viewsource/goodman_nocookies/goodman_nocookies.html
  5. Goodman, D. Cookie recipes: Client-side persistent data. http://developer.netscape.com/viewsource/archive/goodman_cookies.html
  6. Balbo, S., Coutaz, J., & Salber, D. Towards Automatic Evaluation of Multimodal User Interfaces. Proceedings of the 1993 International Workshop on Intelligent User Interfaces, 1993, pp. 201-208.
  7. Tam, R. C-M., Maulsby, D., & Puerta, A.R. U-TEL: A Tool for Eliciting User Task Models from Domain Experts. Proceedings of the 1998 International Conference on Intelligent User Interfaces, 1998, pp. 77-80.

"What does getting WET (Web Event-logging Tool) Mean for Web Usability?"
<- Back

Thanks to our conference sponsors:
A T and T Laboratories
ORACLE Corporation  
National Association of Securities Dealers, Inc.

Thanks to our conference event sponsor:

Bell Atlantic


Site Created: Dec. 12, 1998
Last Updated: June 10, 1999
Contact hfweb@nist.gov with corrections.