AVSS 2009 Multiple Camera Tracking Challenge
Information Technology Laboratory, Information Access Division, National Institute of Standards and Technology (NIST)



    Multimodal Information Group

    The 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2009) is sponsoring the Multiple Camera Person Tracking Challenge Evaluation in conjunction with the Home Office Scientific Development Branch (HOSDB), the Centre for the Protection of National Infrastructure (CPNI), and the National Institute of Standards and Technology (NIST).

    The goal of the effort is to facilitate research via a common evaluation task that focuses on one aspect of person tracking technologies: the ability to track a specified person within a video sensor field using a small set of in situ exemplar video images to specify the person. We refer to these technologies as Single Person Tracking (SPT) technologies. The results of the evaluation were discussed at a special session during the 2009 AVSS conference where NIST delivered the 2009 AVSS Multiple Camera Person Tracking Results Summary presentation.

    Evaluation Tasks and Evaluation Plan

    The evaluation supports three evaluation tasks: Multi-Camera Single Person Tracking (MCSPT), Single Camera Single Person Tracking (SCSPT), and Camera Pair Single Person Tracking (CPSPT). The first task, MCSPT, is the compulsory task that all participants must build systems to address. The latter two are voluntary, contrastive evaluation tasks designed to assess factors affecting the performance of single person tracking systems.

    Version 2 of the 2009 AVSS Multiple Camera Person Tracking Evaluation Plan describes the evaluation tasks in more detail.

    To facilitate discussion of the data set, we use the AVSS Scoring Schematic to illustrate the annotations and their use in the evaluation.

    Teleconference Minutes

    Evaluation Tools

    Evaluation submissions will be scored using the Framework for Detection Evaluations (F4DE) toolkit downloadable from the MIG Tools Web Page.

    Evaluation Cookbook

    The procedure for participating in the AVSS "dry run" and formal evaluation is as follows:

    • Acquire the data by completing the data licenses as prescribed in the "Training and Evaluation Data Sets" section below. You will receive a username and password to access the data.
    • Download the latest F4DE software release from http://www.itl.nist.gov/iad/mig/tools/
    • Download the latest annotation release from http://www.itl.nist.gov/iad/mig/tests/avss/2009/iLIDSData/ (you will need your iLIDS username/password)
    • Download the video data from the web (NIST will send the URL for the data; access uses your iLIDS username/password) or send a 1TB disk to HOSDB. A download sketch appears after this procedure.
      • The MCTTR data set consists of 12 days of data collection. The data is divided into single-day chunks for easier download. Each day's data contains a set of excerpts, and each excerpt has been annotated for a single person track.
      • The MCTTR-extra.tar file contains the textual data for the corpus.
      • 8 of the 12 days of data are designated development resources. They are: MCTTR{02,03,04,06,07,09,10,12}
      • 4 of the 12 days of data are designated evaluation resources. They are: MCTTR{01,05,08,11}
      • The multiple camera tracking Dry Run is specified using MCTTR02 data, so get that one first.
    • Complete an optional Dry Run Evaluation.
      • A 'Dry Run' evaluation is an opportunity to shake down the evaluation process at your site by running a small validation set and sending the system outputs to NIST for scoring. The process is not geared towards testing performance; rather, it is successfully completed when both the site and NIST score the dry run test and generate the same results.
      • Step 1: Refer to the Experiment Control Files (ECFs) included in the annotation release:
        • multiple camera tracking: expt_2009_MCSPT_DRYRUN09_ENG_NIST_1.xml
        • camera pair tracking: expt_2009_CPSPT_DRYRUN_ENG_NIST_1.xml
        • single camera tracking: expt_2009_SCSPT_DRYRUN_ENG_NIST_1.xml
          • These define the temporal extent over which to track the person and the target tracking frames. They also point to template system output files into which the system "adds" its output. (See the eval plan for ECF documentation; an ECF-reading sketch appears after this procedure.)
      • Step 2: Run experimental system(s) on the video intervals specified in the ECFs.
      • Step 3: Package system outputs in ViPER XML format and make sure they validate (refer to the AVSS "scoring primer" -- part of the F4DE software, AVSS09/doc/AVSSScoringPrimer.html -- for details; a quick well-formedness check is sketched after this procedure).
      • Step 4: Score your system as prescribed in the "scoring primer".
      • Step 5: Prepare a tar/gzip of all outputs as described in the evaluation plan and submit the archive via FTP (a packaging sketch appears after this procedure).
      • Step 6: Notify jonathan.fiscus@nist.gov, martial.michel@nist.gov
      • Step 7: NIST sends a scoring report to the site for verification
    • Complete the Formal Evaluation
      • Refer to the Experiment Control Files (ECFs) included in the testing annotation release:
        • multiple camera tracking: expt_2009_MCSPT_EVAL09_ENG_NIST_1.xml
        • camera pair tracking: expt_2009_CPSPT_EVAL09_ENG_NIST_1.xml
        • single camera tracking: expt_2009_SCSPT_EVAL09_ENG_NIST_1.xml
          • These define the temporal extent over which to track the person and the target tracking frames. They also point to template system output files into which the system "adds" its output. (See the eval plan for ECF documentation.)
      • Run experimental system(s) on the video intervals specified in the ECFs
      • Package system outputs in ViPER XML format and make sure they validate (refer to the AVSS "scoring primer" -- part of the F4DE software, AVSS09/doc/AVSSScoringPrimer.html -- for details)
      • Prepare a tar/gzip of all outputs as described in the evaluation plan and submit the archive via FTP
      • Notify jonathan.fiscus@nist.gov, martial.michel@nist.gov
      • Extended Evaluation Submissions are due January 27, 2009
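
    To script the video download referenced in the procedure above, a minimal Python sketch is shown below. It assumes the NIST data area is protected by HTTP basic authentication and that each day is a single tar file; the base URL and chunk name are hypothetical placeholders, since NIST e-mails the real data URL after the license is processed.

        # Sketch: fetch one day's MCTTR chunk with the assigned iLIDS credentials.
        # The base URL and chunk name are hypothetical placeholders; NIST sends
        # the real data URL after the license is processed.
        import getpass
        import shutil
        import urllib.request

        BASE_URL = "https://example.nist.gov/avss2009/data"  # placeholder, not the real server
        CHUNK = "MCTTR02.tar"                                 # dry-run day; name is illustrative

        def download_chunk(user: str, password: str, chunk: str = CHUNK) -> None:
            # HTTP basic auth for the password-protected data area (assumed scheme)
            mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
            mgr.add_password(None, BASE_URL, user, password)
            opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
            with opener.open(f"{BASE_URL}/{chunk}") as resp, open(chunk, "wb") as out:
                shutil.copyfileobj(resp, out)  # stream to disk; the chunks are large

        if __name__ == "__main__":
            download_chunk(input("iLIDS username: "), getpass.getpass("iLIDS password: "))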
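
    The ECFs are XML files, and a system typically begins by reading out the excerpts and tracking intervals they specify. The sketch below is illustrative only: the excerpt element name and the filename/begin/duration attributes are assumptions, and the authoritative ECF schema is the one documented in the evaluation plan.

        # Sketch: list the video excerpts and tracking intervals named in an ECF.
        # The tag and attribute names ("excerpt", "filename", "begin", "duration")
        # are hypothetical; consult the evaluation plan for the real ECF schema.
        import xml.etree.ElementTree as ET

        def read_ecf(path: str):
            root = ET.parse(path).getroot()
            for excerpt in root.iter("excerpt"):           # hypothetical element name
                yield (excerpt.get("filename"),            # source video file
                       float(excerpt.get("begin", 0)),     # start of tracking interval
                       float(excerpt.get("duration", 0)))  # length of tracking interval

        if __name__ == "__main__":
            for fname, begin, dur in read_ecf("expt_2009_MCSPT_DRYRUN09_ENG_NIST_1.xml"):
                print(f"{fname}: track from {begin:.2f} for {dur:.2f}")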
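
    Before running the authoritative F4DE validator described in the scoring primer, a quick well-formedness pass over the ViPER output files can catch obvious packaging mistakes. The sketch below only checks that each file parses as XML; it does not replace the AVSS09 validation step, and the output/*.xml layout is an assumption.

        # Sketch: quick well-formedness pass over ViPER system-output files before
        # running the authoritative F4DE validator (see AVSSScoringPrimer.html).
        import glob
        import sys
        import xml.etree.ElementTree as ET

        def check_outputs(pattern: str = "output/*.xml") -> bool:
            ok = True
            for path in sorted(glob.glob(pattern)):
                try:
                    ET.parse(path)                 # a parse error means the file is not well-formed
                    print(f"OK      {path}")
                except ET.ParseError as err:
                    print(f"FAILED  {path}: {err}")
                    ok = False
            return ok

        if __name__ == "__main__":
            sys.exit(0 if check_outputs() else 1)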
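
    Packaging for FTP submission amounts to collecting the system-output files into a single tar/gzip archive. A minimal sketch follows; the archive name and file pattern are illustrative, and the required naming convention is the one given in the evaluation plan.

        # Sketch: bundle system-output files into one tar/gzip archive for FTP
        # submission. The archive name and output layout below are illustrative;
        # follow the naming convention in the evaluation plan.
        import glob
        import tarfile

        def package_submission(archive: str = "AVSS09_submission.tgz",
                               pattern: str = "output/*.xml") -> None:
            with tarfile.open(archive, "w:gz") as tar:
                for path in sorted(glob.glob(pattern)):
                    tar.add(path)  # keep relative paths inside the archive
            print(f"wrote {archive}")

        if __name__ == "__main__":
            package_submission()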

     

    Training and Evaluation Data Sets

    i-LIDS Logo

    The data for the evaluation will come from the i-LIDS MCTTR data set. The MCTTR data set will be divided into two balanced subsets: a training set and an evaluation test set. The partitioning will be defined at a later date. The MCTTR data set has been distributed to some potential participants as part of the i-LIDS project. Prospective participants who have already used the data must declare the extent of their use of the evaluation subset and, if possible, train their evaluation systems without using the evaluation subset. The video data will be distributed as a single release (i.e., including both the training and evaluation test subsets). Participants must take care not to use the evaluation test set until the appointed time.

    The i-LIDS MCTTR data set is a real-world, frame-synchronized, 5-camera data set. The previous link shows the views of the 5 cameras, and the camera layout schematic shows the relative camera positions and overlap. The example camera views include images of a ROTAKIN calibration target. These ROTAKIN images provide the only intrinsic and extrinsic calibration material available.

    The i-LIDS MCTTR data set is available at no cost as part of the public Multiple Camera Tracking data sets published by HOSDB. The complete data set may be obtained from HOSDB; instructions on how to do this are provided on the i-LIDS website at http://www.ilids.co.uk.

    To obtain the MPEG-2 video from NIST, participants must submit an End User License Agreement to the UK Home Office Scientific Development Branch (HOSDB), after which they can download the data (128 GB) from one of the NIST data servers using an assigned username and password, or ship a hard drive to the UK.

    In order to obtain the complete data set (MPEG-2 and AVI), you will need to send an appropriate hard drive (1TB) to HOSDB in the United Kingdom. You will also need to print and return a signed copy of the i-LIDS MCT End User Licence Agreement. Please ensure that you read and understand the terms and conditions of this licence before you return it to HOSDB.

    MCT Application Form: http://scienceandresearch.homeoffice.gov.uk/hosdb/publications/cctv-publications/FMCT_Dataset_Application_301.pdf

    End User License Agreement (EULA) for Commercial Companies: http://scienceandresearch.homeoffice.gov.uk/images/106966/356423/Legal_-_MCT_EULA_Commercial1.pdf

    End User License Agreement (EULA) for Academic Institutions: http://scienceandresearch.homeoffice.gov.uk/images/106966/356423/Legal_-_MCT_EULA_Academic_01.pdf

    On the application form, please be sure to indicate that you are applying in order to participate in the AVSS challenge. For more details please contact the i-LIDS team at HOSDB.

    email: i-lids@homeoffice.gsi.gov.uk
    voicemail: +44 1403 213823
    fax: +44 1403 213827
    http://science.homeoffice.gov.uk/hosdb

    Schedule (as of September 17, 2009)

    Date              Milestone
    Feb 24            V0 Eval plan posted on web site
    April 1           Evaluation code distributed
    June 19           Revised evaluation code distributed
    May 1 - July 10   Dry run evaluation period
    July 1            Registration deadline
    July 10           Dry run evaluation ends; evaluation period begins
    July 29           Submission of system outputs due by 17:00 GMT
    Aug 5             Results distributed
    Sept 2-4          AVSS Conference
    Nov 30            Extended evaluation period ends - system outputs due by 17:00 GMT
    Nov 4             Results distributed

    Related

    AVSS 2009 Home Page

    AVSS 2009 Registration Details

     

     

     

    Page Created: January 23, 2009
    Last Updated: April 8, 2010
