The common denominator of shape retrieval approaches is the creation of a shape descriptor or signature that, on the one hand, captures the properties distinguishing a shape from shapes belonging to other classes and, on the other hand, is invariant to a certain class of transformations the shape can undergo. In rigid shape analysis, the typical invariances are to rotation and translation. Dealing with nonrigid shapes requires compensating for the additional degrees of freedom introduced by deformations. In scanned shapes, connectivity and topological changes, noise, and holes are common issues. Other transformations typically encountered in shape retrieval problems, especially when dealing with Internet data, include scaling, missing parts, and different sampling and triangulation. The proposed benchmark evaluates the performance of shape retrieval on a large-scale dataset under a wide variety of such transformations.
The task
The SHREC'10 Robustness track simulates a large-scale shape retrieval scenario in which the queries include multiple modifications and transformations of the same shape. The benchmark allows evaluating how algorithms cope with particular classes of transformations and the maximum transformation strength they can handle.
The collection
The dataset used in this benchmark consists of 1184 shapes from three public-domain collections: the TOSCA shapes, Robert Sumner's shapes, and the Princeton shapes.
The shapes in the dataset include simulated transformations of different types and strengths, as detailed in the following.
The distribution is a zip file containing 1184 shapes. Each shape is saved in .OFF format as a text file. The files are named 000n.off, where n is an arbitrary shape number; the shape order is scrambled arbitrarily.
Retrieval dataset (50MB, zipped and password protected; contact michael.bronstein@usi.ch to obtain the password)
.OFF format reader (.m)
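For participants not using MATLAB, a minimal OFF reader along the lines of the provided .m script can be sketched in Python. This is an illustrative sketch assuming ASCII files with the counts on the line after the "OFF" keyword, as produced for this dataset:

```python
# Minimal reader for ASCII .OFF files (vertex list + polygonal faces).
def read_off(path):
    with open(path) as f:
        tokens = f.read().split()
    assert tokens[0] == "OFF", "not an OFF file"
    nv, nf = int(tokens[1]), int(tokens[2])  # tokens[3] is the (unused) edge count
    idx = 4
    vertices = []
    for _ in range(nv):
        vertices.append(tuple(float(t) for t in tokens[idx:idx + 3]))
        idx += 3
    faces = []
    for _ in range(nf):
        k = int(tokens[idx])  # number of vertices in this face (3 for triangle meshes)
        faces.append(tuple(int(t) for t in tokens[idx + 1:idx + 1 + k]))
        idx += 1 + k
    return vertices, faces
```

Tokenizing the whole file at once keeps the reader robust to variations in line breaks between the header and the data.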
Queries
The query set consists of 13 shape classes taken from a subset of the dataset, with simulated transformations applied to them. For each shape, the transformations are split into 11 classes (isometry, topology, small holes, big holes, global scaling, local scaling, noise, shot noise, partial occlusion, sampling, and a combination of all transformations), and each transformation appears in five different strength levels. The total number of transformations per shape is thus 55 plus the null shape, and the total query set size is 728 shapes.
Evaluation method
Participants will submit a distance matrix for the dataset, measuring the dissimilarity between each pair of shapes.
Evaluation will be performed automatically by the organizers.
Performance is evaluated using precision/recall characteristic. Mean average precision (mAP) is used as a single measure of performance.
The performance is evaluated on the entire query set, in each transformation category, and for each transformation strength. This makes it possible to see which algorithms are more sensitive to certain types of transformations, and to determine the transformation strength up to which reasonable performance is maintained.
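For reference, mean average precision over a distance matrix can be sketched as follows. This is a minimal illustration assuming ground-truth class labels are available; the official evaluation is performed automatically by the organizers, not by this code:

```python
import numpy as np

def mean_average_precision(D, labels):
    """Mean average precision (mAP) over all queries.

    D      : (n, n) array of pairwise dissimilarities.
    labels : length-n array of class labels; shapes sharing a label
             are considered relevant matches for each other.
    """
    labels = np.asarray(labels)
    n = len(labels)
    aps = []
    for q in range(n):
        order = np.argsort(D[q])
        order = order[order != q]            # exclude the query itself
        rel = labels[order] == labels[q]     # relevance of each ranked item
        if not rel.any():
            continue
        hits = np.cumsum(rel)                # number of relevant items so far
        # precision at each rank where a relevant item appears (ranks are 1-based)
        precision_at_hits = hits[rel] / (np.nonzero(rel)[0] + 1)
        aps.append(precision_at_hits.mean())
    return float(np.mean(aps))
```

A perfect ranking (every relevant shape ahead of every irrelevant one) yields mAP = 1; mistakes early in the ranking are penalized more than mistakes near the tail.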
Submission format
Participants will submit a 1184x1184 matrix of pairwise shape distances as a zip archive containing a single file in DIST format, i.e., 1184 rows of space-delimited distance values, as follows:
0.23998567 1.24572606 3.51428826 ... 0.156789
The order of rows and columns should be in accordance with the shape numbers in the dataset (i.e., the (m,n) entry of the matrix is the distance between the shapes 000m.off and 000n.off).
.DIST format writer (.m)
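For participants not using the provided .m writer, the DIST format can be produced with a few lines of Python. This is a sketch; the eight-decimal formatting follows the spirit of the example above but a specific precision is not mandated:

```python
def write_dist(D, path):
    """Write a square distance matrix as space-delimited rows (DIST format)."""
    with open(path, "w") as f:
        for row in D:
            f.write(" ".join(f"{d:.8f}" for d in row) + "\n")
```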
Each participant can submit up to three results corresponding to different methods or settings of their algorithms. Name the zip file according to the convention lastname_firstname_algorithm.zip. Participants should also indicate the typical running time of their algorithm.
Training data
A separate set with a total of 624 shapes is optionally provided for training. The set includes 13 shape classes with the same null shapes as in the main dataset, together with different transformed shapes representative of the various transformation classes and strengths.
The distribution is a zip file containing 624 shapes. Each shape is saved in .OFF format as a text file. The files are named 000n.xform.m.off, where n is the shape class number, xform is the transformation name, and m is the transformation strength (1-5). 000n.null.0.off represents the null shape.
Retrieval training dataset (45MB, zipped)
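The training filenames can be decomposed programmatically; a small sketch of the naming convention above (the transformation name "holes" used in the example is illustrative, not guaranteed to match the exact names in the archive):

```python
def parse_training_name(filename):
    """Split a training filename '000n.xform.m.off' into
    (class number, transformation name, strength level)."""
    stem = filename[:-len(".off")]
    num, xform, strength = stem.split(".")
    return int(num), xform, int(strength)
```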
Terms of use
The SHREC benchmark is made available for public use.
Parts of the dataset used in this benchmark are copyrighted by Robert Sumner and
Thomas Funkhouser. Any use of the dataset should comply with the copyright holders' requirements (such as citation of the respective papers). Any use outside the scope of SHREC should be coordinated with the respective copyright holders.
Any use of SHREC retrieval benchmark data or results should cite
A. M. Bronstein, M. M. Bronstein, U. Castellani, B. Falcidieno, A. Fusiello, A. Godil,
L. J. Guibas, I. Kokkinos, Z. Lian, M. Ovsjanikov, G. Patané, M. Spagnuolo, R. Toldo,
"SHREC 2010: robust largescale shape retrieval benchmark",
Proc. EUROGRAPHICS Workshop on 3D Object Retrieval (3DOR), 2010.
Please contact michael.bronstein@usi.ch for additional instructions or to have your results evaluated.
