AVSS May 6, 2009 Telecon Minutes:
Attendees: Keni, Martin (Karlsruhe)
- Mark has sent data to: Karlsruhe (Keni), Brno (Peter), QMUL
(Comp. Sci. Dept.)
- The dataset for AVSS is different from the TRECVid 2008 data.
- The AVSS and TRECVid 2009 evaluations will use the same MCTTR data set
- Training/test division
- The MCTTR data will be divided into a Train/Development set and
an evaluation set with roughly 2/3 and 1/3 proportions
respectively. Participants MUST take care not to use the
eval set for training.
- Question: What will be the evaluation procedure?
- Participants will send their system outputs to NIST for
scoring. Reference annotations will be released afterwards for self-scoring.
- Question: Is anything else annotated besides bounding boxes for
the tracked person?
- No. Some subject information is available in the
- Question: Are the ground truth files ViPER compatible?
- Yes and No. The base annotations were created with
SABRE. The SABRE file format uses the ViPER format, but the files do
not load in the ViPER annotation tool. For the evaluation, the
SABRE annotations will be converted to the CLEAR-specific ViPER format
used for VACE and CLEAR so that the CLEAR evaluation tools can be used.
The training annotations will be released shortly.
- Action Item: HOSDB to
post download instructions for their tool (SABRE)
- Question: What is annotated, every frame or only i-frames?
- Answer: The person is annotated every 5 frames after the
subject is 100% within frame for Cams 1, 2, 3, and 5, and 75% within frame
for Cam 4 (elevator).
- NOTE: As indicated in the graphic, the annotated frames are NOT
the same across cameras for a particular person. It is not
possible to define (by rule, for instance) which frames were
annotated. If participants want to down-sample the video, NIST will
provide a starter file for each camera to indicate which frames are annotated.
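The every-5th-frame rule above can be sketched as follows. This is only an illustration: the helper name, 0-based frame indices, and the entry/exit conventions are assumptions, and (as the NOTE states) the actual annotated frames cannot be reconstructed by rule, so this shows the spacing only.

```python
def annotated_frames(entry_frame, last_frame, step=5):
    """List the frames a subject would be annotated on, starting at the
    frame where the subject first satisfies the visibility rule (100%
    within frame for Cams 1, 2, 3, 5; 75% for Cam 4) and stepping every
    5 frames until the last in-view frame."""
    return list(range(entry_frame, last_frame + 1, step))

# e.g., subject fully in view at frame 103, last in view at frame 120
print(annotated_frames(103, 120))  # [103, 108, 113, 118]
```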
- The annotation/scoring
graphic was discussed.
- In the annotation graphic, there is a set of frames between when
the person begins appearing in the frame (pointed-up triangle) and when
the annotation begins (blue annotated frames) (see the above
rules). Currently the subject's entrance into the frame is not
annotated. NIST does not think these frames should be
evaluated. There are two possible solutions for the missing
information: annotate the data with entrance times, or declare a fixed
no-score region before the subject annotation begins and after the
subject annotation ends.
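The second option (a fixed no-score region) could look like the sketch below. The function name and the window size `pad` are hypothetical, chosen only to illustrate the idea of excluding frames near the annotation boundaries from scoring.

```python
def in_no_score_region(frame, first_annot, last_annot, pad=25):
    """True if `frame` falls inside a fixed window immediately before the
    first annotated frame or immediately after the last one; such frames
    would be excluded from scoring. `pad` (in frames) is hypothetical."""
    before = first_annot - pad <= frame < first_annot
    after = last_annot < frame <= last_annot + pad
    return before or after

# frames just outside the annotated span are not scored
print(in_no_score_region(95, first_annot=100, last_annot=200))   # True
print(in_no_score_region(150, first_annot=100, last_annot=200))  # False
```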
- Action Item: Mark
will check how the HO scorer handles the condition.
- Action Item: NIST will
create an additional slide illustrating the metrics
- Question: What is the motivation for the camera pair task?
- More trials (i.e., instances of a tracked person) are possible
with the camera pair task vs. multi-camera tracking (i.e., all 5 cameras).
- Question: What is the required task?
- There is a desire to have one strong condition and have
additional, optional tasks for diagnostic purposes
- After discussion, HO, Karlsruhe, and NIST prefer making the
multi-camera tracking task the primary task.
- An evaluation tool will be released shortly
- Question: Is the evaluation tool up to date?
- Answer: No. Action Item:
NIST will revise it
- Question: Is there a participation registration form?
- Answer: No. Action Item:
NIST will provide a registration form