SHREC 2014 - Extended Large Scale Sketch-Based 3D Shape Retrieval

CVIU journal information
We published an extended journal version of the SHREC'14 Sketch Track and Comprehensive Track in CVIU! Please see the CVIU paper at the bottom of this page.

The objective of this track is to evaluate the performance of different sketch-based 3D model retrieval algorithms using a large-scale hand-drawn sketch query dataset on a generic 3D model dataset.

Sketch-based 3D model retrieval retrieves relevant 3D models using one or more sketches as input. This scheme is intuitive and convenient: it is easy for users to learn and to use when searching for 3D models. It is also important for related applications such as sketch-based modeling and recognition, as well as 3D animation production via 3D reconstruction of a scene from a 2D storyboard [1].

However, most existing 3D model retrieval algorithms target the Query-by-Model framework, which uses existing 3D models as queries; much less research has been done on the Query-by-Sketch framework. In SHREC'12 [2] and SHREC'13 [3], two tracks were successfully organized on sketch-based 3D model retrieval. They fostered this research area by providing a small-scale and a large-scale sketch-based retrieval benchmark, respectively, and by attracting state-of-the-art algorithms to participate and compete with each other. However, even the large-scale SHREC'13 Sketch Track Benchmark (SHREC13STB) [3], built on Eitz et al. [4] and the Princeton Shape Benchmark (PSB) [5], covers only 90 classes, with 7,200 sketches and 1,258 models. Compared to the complete dataset of 250 classes built by Eitz et al. [4], there is still much room for improvement in terms of the completeness of object classes found in the real world. It is therefore necessary to build an even larger sketch-based 3D retrieval benchmark, in terms of both sketches and models, to further evaluate the scalability of existing and newly developed sketch-based 3D model retrieval algorithms.

Considering this, we built the SHREC'14 Large Scale Sketch Track Benchmark (SHREC14LSSTB) by extending the SHREC'13 Sketch Track Benchmark (SHREC13STB) [3]: we identified and consolidated relevant models for the 250 classes of sketches from the major previously proposed 3D object retrieval benchmarks. These benchmarks were compiled with different goals in mind and have not, to date, been considered together. Our work is the first to integrate them into a new, larger benchmark corpus for sketch-based retrieval.

Specifically, besides the PSB used in SHREC13STB, the other 3D model benchmark sources considered are the SHREC'12 Generic Track Benchmark (SHREC12GTB) [6], the Toyohashi Shape Benchmark (TSB) [7], the Konstanz 3D Model Benchmark (CCCC) [8], the Watertight Model Benchmark (WMB) [9], the McGill 3D Shape Benchmark (MSB) [10], the Bonn Architecture Benchmark (BAB) [11], and the Engineering Shape Benchmark (ESB) [12]. Table 1 lists their basic classification information, while Fig. 1 shows example models from four of these benchmarks. In total, this large-scale benchmark contains 13,680 sketches and 8,987 models, classified into 171 classes.

Table 1 Classification information of the 8 selected source benchmarks.

Fig. 1 Example 3D models in ESB, MSB, WMB and BAB datasets

Based on this new benchmark, we organize this track to further foster this challenging research area by soliciting retrieval results from current state-of-the-art retrieval methods for comparison, especially in terms of scalability. We also provide evaluation code for computing a set of performance metrics similar to those used for Query-by-Model retrieval.

Benchmark Overview
Our extended large-scale sketch-based 3D model retrieval benchmark is motivated by the recent large collection of human sketches built by Eitz et al. [4] and by the SHREC'13 Sketch Track Benchmark (SHREC13STB) [3].

To explore how humans draw and recognize sketches, Eitz et al. [4] collected 20,000 human-drawn sketches, categorized into 250 classes, each with 80 sketches. This sketch dataset is exhaustive in terms of the number of object categories. More importantly, it avoids bias, since the same number of sketches was collected for every class, and 80 sketches per class is adequate for a large-scale retrieval benchmark. The sketch variation within each class is also adequate.

The SHREC'13 Sketch Track Benchmark (SHREC13STB) [3] identified 1,258 relevant models in the PSB benchmark for 90 of the 250 classes; the majority of classes (160) was not covered, so it is neither complete nor large enough. We therefore believe that a new sketch-based 3D model retrieval benchmark based on these two datasets, extended by finding more models from other 3D data sources, will be more comprehensive and more appropriate for evaluating sketch-based 3D model retrieval algorithms, especially with respect to scalability, which is very important in practical applications.

Considering this, and to build a better and more comprehensive large-scale sketch-based 3D retrieval benchmark, we extended the search to the other available benchmarks mentioned above. We found 8,987 models for 171 classes, which substantially increases the scale of the benchmark and makes it the largest sketch-based retrieval benchmark to date. We adopted a voting scheme to classify the models: each classification received at least two votes. If the two votes agreed with each other, we confirmed the classification; otherwise, a third vote finalized it. This benchmark provides an important resource for the sketch-based 3D retrieval community and will foster the development of practical sketch-based 3D retrieval applications. Fig. 2 shows several example sketches and their relevant models.
Fig. 2 Example 2D sketches and their relevant 3D models in the benchmark
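The voting scheme described above can be sketched as follows; the function name and the third-vote callback are illustrative assumptions, not part of the track's materials:

```python
def consolidate_label(vote_a, vote_b, request_third_vote):
    """Consolidate a model's class label from independent votes.

    Two votes are collected first; if they agree, the label is
    confirmed, otherwise a tie-breaking third vote finalizes it.
    """
    if vote_a == vote_b:
        return vote_a
    # Disagreement: the third vote decides the final classification.
    return request_third_vote()
```

In practice `request_third_vote` would prompt another annotator; the point is that a third opinion is only requested when the first two conflict.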

We randomly select 50 sketches from each class for training and use the remaining 30 sketches per class for testing, while the relevant models as a whole are retained as the target dataset. Participants need to submit results on the training and testing datasets, respectively. To provide a complete reference for future users of our benchmark, we will evaluate the participating algorithms on both the testing dataset and the complete benchmark.
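The split protocol above (50 training and 30 testing sketches per class, drawn from the 80 available) can be sketched in Python; the function name and the per-class data layout are illustrative assumptions:

```python
import random

def split_sketches(sketches_by_class, n_train=50, n_test=30, seed=0):
    """Randomly split each class's sketches into training and testing
    subsets, mirroring the benchmark's 50/30 protocol."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    train, test = {}, {}
    for cls, sketches in sketches_by_class.items():
        shuffled = sketches[:]
        rng.shuffle(shuffled)
        train[cls] = shuffled[:n_train]
        test[cls] = shuffled[n_train:n_train + n_test]
    return train, test
```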

2D Sketch Dataset
The 2D sketch query set contains the 13,680 sketches (171 classes, 80 sketches each) from the human sketch recognition dataset [4], each of which has relevant shapes in the 3D model collection.

3D Model Dataset
The 3D model dataset in this benchmark is built on the data collections from the SHREC'13 Sketch Track Benchmark (SHREC13STB), SHREC'12 Generic Track Benchmark (SHREC12GTB), Toyohashi Shape Benchmark (TSB), Engineering Shape Benchmark (ESB), Konstanz 3D Model Benchmark (CCCC), McGill 3D Shape Benchmark (MSB), Watertight Model Benchmark (WMB), and Bonn Architecture Benchmark (BAB). The distribution is a zip file containing 8,987 models classified into 171 classes. Each model is saved in OFF format as a text file. The dataset will be available for download soon.

The Ground Truth
All the sketches and models are already categorized according to the classifications in Eitz et al. [4] and the selected source benchmarks, respectively. In our classification and evaluation, we adopt the class names in Eitz et al. [4].

Evaluation Method
Participants will submit a dissimilarity matrix, and the evaluation will be performed automatically by the organizers. Performance is evaluated with the Precision-Recall (PR) graph, Nearest Neighbor (NN), First Tier (FT), Second Tier (ST), E-Measure (E), Discounted Cumulated Gain (DCG), and Average Precision (AP). We have also developed code to compute these metrics.
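To illustrate how some of these measures follow from the submitted dissimilarity matrix, the sketch below computes NN, FT, and ST for a single query from one matrix row; it is a hedged reimplementation of the standard definitions, not the organizers' evaluation code:

```python
def retrieval_stats(dissim_row, query_class, model_classes):
    """NN, First Tier, and Second Tier for one query.

    dissim_row[i] is the dissimilarity between the query and model i;
    model_classes[i] is model i's ground-truth class.
    """
    # Rank all target models by increasing dissimilarity.
    order = sorted(range(len(dissim_row)), key=lambda i: dissim_row[i])
    relevant = [model_classes[i] == query_class for i in order]
    c = sum(relevant)  # C = number of models relevant to the query
    nn = 1.0 if relevant[0] else 0.0       # is the top result relevant?
    ft = sum(relevant[:c]) / c             # recall within the top C results
    st = sum(relevant[:2 * c]) / c         # recall within the top 2C results
    return nn, ft, st
```

Averaging these per-query values over all queries gives the benchmark-level scores; PR, E-Measure, DCG, and AP are derived from the same ranked lists.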

The Procedural Aspects

January 1 - Call for participation.
January 10 - A few sample sketches and target models will be available online.
January 15 - Please register before this date.
January 17 - Distribution of the database.
February 17 - Submission of results on the complete dataset and a one-page description of the method(s).
February 20 - Distribution of evaluation results.
February 22 - Track is finished, and results are ready for inclusion in a track report.
February 25 - Submit the track paper for review.
March 1 - All reviews due, feedback and notifications.
March 6 - Camera ready track paper submitted for inclusion in the proceedings.
April 6 - Eurographics Workshop on 3D Object Retrieval including SHREC 2014.

Bo Li, Yijuan Lu - Texas State University, USA
Chunyuan Li, Afzal Godil - National Institute of Standards and Technology, USA
Tobias Schreck - University of Konstanz, Germany

We would like to thank Mathias Eitz, James Hays and Marc Alexa, who collected the 250 classes of sketches. We would also like to thank the authors who built the 3D benchmarks listed above. Note: approval for the use of the above data in this track has been obtained.

[1] Anh-Phuong Ta, Christian Wolf, Guillaume Lavoué, and Atilla Baskurt, 3D object detection and viewpoint selection in sketch images using local patch-based Zernike moments, in CBMI, pp. 189-194, 2009
[2] Bo Li, Tobias Schreck, Afzal Godil, Marc Alexa, Tamy Boubekeur, Benjamin Bustos, J. Chen, Mathias Eitz, Takahiko Furuya, Kristian Hildebrand, S. Huang, Henry Johan, Arjan Kuijper, Ryutarou Ohbuchi, Ronald Richter, Jose M. Saavedra, Maximilian Scherer, Tomohiro Yanagimachi, Gang-Joon Yoon, and Sang Min Yoon, SHREC’12 track: Sketch-based 3D shape retrieval, in 3DOR, pp. 109–118, 2012
[3] Bo Li, Yijuan Lu, Afzal Godil, Tobias Schreck, Masaki Aono, Henry Johan, Jose M. Saavedra, Shoki Tashiro: SHREC'13 Track: Large Scale Sketch-Based 3D Shape Retrieval. 3DOR 2013, pp. 89-96, 2013
[4] Mathias Eitz, James Hays, Marc Alexa, How do humans sketch objects? ACM Trans. Graph. 31(4): 44, 2012
[5] Philip Shilane, Patrick Min, Michael M. Kazhdan, Thomas A. Funkhouser, The Princeton Shape Benchmark, SMI 2004, pp. 167-178, 2004
[6] Bo Li, Afzal Godil, Masaki Aono, X. Bai, Takahiko Furuya, L. Li, Roberto Javier López-Sastre, Henry Johan, Ryutarou Ohbuchi, Carolina Redondo-Cabrera, Atsushi Tatsuma, Tomohiro Yanagimachi, S. Zhang: SHREC'12 Track: Generic 3D Shape Retrieval. 3DOR 2012: 119-126, 2012
[7] Atsushi Tatsuma, Hitoshi Koyanagi, Masaki Aono, A Large-Scale Shape Benchmark for 3D Object Retrieval: Toyohashi Shape Benchmark, In Proc. of 2012 Asia Pacific Signal and Information Processing Association (APSIPA2012), Hollywood, California, USA, 2012
[8] Dejan Vranic, 3D model retrieval. PhD thesis, University of Leipzig, 2004
[9] Remco Veltkamp, Frank ter Haar: SHREC 2007 3D Retrieval Contest. Technical Report UU-CS-2007-015, Department of Information and Computing Sciences, Utrecht University, 2007
[10] Kaleem Siddiqi, Juan Zhang, Diego Macrini, Ali Shokoufandeh, Sylvain Bouix, Sven J. Dickinson: Retrieving articulated 3-D models using medial surfaces. Mach. Vis. Appl. 19(4): 261-275, 2008
[11] Raoul Wessel, Ina Blümel, Reinhard Klein: A 3D Shape Benchmark for Retrieval and Automatic Classification of Architectural Data. 3DOR 2009: 53-56, 2009
[12] Subramaniam Jayanti, Yagnanarayanan Kalyanaraman, Natraj Iyer, Karthik Ramani: Developing an engineering shape benchmark for CAD models. Computer-Aided Design 38(9): 939-953, 2006

Please cite the CVIU journal and 3DOR'14 papers:

[1] Bo Li, Yijuan Lu, Chunyuan Li, Afzal Godil, Tobias Schreck, Masaki Aono, Martin Burtscher, Qiang Chen, Nihad Karim Chowdhury, Bin Fang, Hongbo Fu, Takahiko Furuya, Haisheng Li, Jianzhuang Liu, Henry Johan, Ryuichi Kosaka, Hitoshi Koyanagi, Ryutarou Ohbuchi, Atsushi Tatsuma, Yajuan Wan, Chaoli Zhang, Changqing Zou. A Comparison of 3D Shape Retrieval Methods Based on a Large-scale Benchmark Supporting Multimodal Queries. Computer Vision and Image Understanding, November 4, 2014.

[2] Bo Li, Yijuan Lu, Chunyuan Li, Afzal Godil, Tobias Schreck, Masaki Aono, Martin Burtscher, Hongbo Fu, Takahiko Furuya, Henry Johan, Jianzhuang Liu, Ryutarou Ohbuchi, Atsushi Tatsuma, Changqing Zou. SHREC'14 Track: Extended Large Scale Sketch-Based 3D Shape Retrieval. Eurographics Workshop on 3D Object Retrieval 2014 (3DOR 2014): 121-130, 2014.