Shape Retrieval Contest (SHREC'10) Datasets - Feature detection and description benchmark



Feature-based approaches have recently become very popular in computer vision and image analysis applications. In these approaches, an image is described as a collection of local features ("visual words"), resulting in a representation referred to as a "bag of features". The bag-of-features paradigm relies heavily on the choice of the local feature descriptor. A common way to compare different image feature detection and description algorithms is by the stability of the detected features and their invariance under different transformations applied to the image. In shape analysis, feature-based approaches have been introduced more recently and are becoming a promising direction in shape retrieval applications. However, there is currently no benchmark, similar to those in the computer vision literature, for testing the performance of feature detection and description on shapes.


The task

The present benchmark simulates the feature detection and description stage of feature-based shape retrieval algorithms. It tests the performance of shape feature detectors and descriptors under a wide variety of transformations, allowing one to evaluate how algorithms cope with each class of transformations and the maximum transformation strength they can handle.


The collection

The dataset used in this benchmark consists of 138 high-resolution (10K-50K vertices) triangular meshes from the TOSCA dataset (available in the public domain), and is shared with the SHREC 2010 Correspondence benchmark. It includes 3 shape classes (human, dog, horse) with simulated transformations. The transformations are split into 9 classes (isometry, topology, small and big holes, global and local scaling, noise, shot noise, sampling), and each transformation class appears in five different strength levels. The total number of transformations per shape class is 45, plus one null shape (no transformation applied). The distribution is a zip file containing the 138 shapes, each saved in .OFF format as a text file. The naming convention of the files is 000n.xform.m.off, where n is the shape class number, xform is the transformation name, and m is the transformation strength (1-5); 000n.null.0.off represents the null shape, to which the comparison is performed.

Feature detection/description dataset (110MB, zipped)
.OFF format reader (.m)
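
For illustration, the following MATLAB snippet (a hypothetical helper, not part of the distribution) parses a dataset filename into its components according to the naming convention above:

    function [cls, xform, strength] = parse_shrec_name(filename)
    % PARSE_SHREC_NAME  Split a benchmark filename such as '0001.holes.3.off'
    % into shape class number, transformation name, and strength level.
        [~, name, ~] = fileparts(filename);   % strip path and '.off' extension
        tok = regexp(name, '^(\d+)\.(\w+)\.(\d)$', 'tokens', 'once');
        cls      = str2double(tok{1});        % shape class number n
        xform    = tok{2};                    % e.g. 'holes', 'noise', 'null'
        strength = str2double(tok{3});        % strength 1-5, or 0 for the null shape
    end

For example, parse_shrec_name('0002.noise.4.off') returns class 2, transformation 'noise', and strength 4.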



Evaluation method

Evaluation is split into two parts: feature detection and feature description. Participants will submit, for each shape in the dataset, a set of detected feature points and their corresponding descriptors. It is possible to submit only detected points, without descriptors. Participants wishing to test dense descriptors must submit a feature point file containing all the points of the shape. Evaluation will be performed automatically by the organizers. For feature detection, the criterion is repeatability, defined as the percentage of feature points detected in a transformed version of the shape that correctly correspond to feature points detected in the null shape; this criterion is commonly used in computer vision benchmarks for feature detectors. For feature description, the average descriptor similarity at corresponding points is measured. Performance is evaluated separately for each transformation category and each transformation strength. This evaluation shows which algorithms are more sensitive to certain types of transformations, and also determines the transformation strength at which reasonable performance is still achieved.
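
For illustration only, the following MATLAB sketch computes repeatability under simplifying assumptions: the transformed shape's feature points are assumed to have already been mapped into the null shape's coordinates via the ground-truth correspondence, and a match is counted within a Euclidean tolerance radius r. The official evaluation is performed by the organizers and may differ in these details.

    function rep = repeatability(F0, F1, r)
    % REPEATABILITY  Fraction of transformed-shape features having a
    % counterpart among the null-shape features within tolerance radius r.
    % F0: N0-by-3 feature locations detected on the null shape;
    % F1: N1-by-3 feature locations detected on the transformed shape,
    %     mapped into the null shape's coordinates (assumption).
        matched = 0;
        for i = 1:size(F1, 1)
            d = sqrt(sum((F0 - repmat(F1(i,:), size(F0,1), 1)).^2, 2));
            matched = matched + double(any(d <= r));  % feature i is repeated
        end
        rep = matched / size(F1, 1);
    end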


Submission format for feature points

Participants wishing to test feature detection algorithms will submit a set of 138 feature point files in a single zip archive, one file per shape. The naming convention of the files is the same as for the shapes in the dataset: 000n.xform.m.feat, where n is the shape class number, xform is the transformation name as in the dataset, and m is the transformation strength (1-5); 000n.null.0.feat is used for the null shape. Detected points are represented in barycentric coordinates, as text lines of the format ti ui vi wi, where ti is a triangle index and ui, vi, wi are the non-negative barycentric weights in triangle ti, satisfying ui+vi+wi=1, for i=1,...,N detected feature points. Example:

68116 0.23998567 0.24572606 0.51428826

Barycentric coordinates are the most generic representation for points on triangular meshes. If the participating algorithm is designed to work on shape vertices, use the barycentric representation to encode vertices (e.g., 68116 1 0 0).
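
For example, given the mesh vertices X (nv-by-3) and triangles T (nt-by-3), as loaded, e.g., with the provided .OFF reader, a barycentric feature point is converted to a 3D location by standard barycentric interpolation. (Whether triangle indices in the .feat files are 0- or 1-based should be checked against the provided readers; 1-based indexing is assumed below.)

    % Feature point from the example above: triangle index and weights
    t = 68116; u = 0.23998567; v = 0.24572606; w = 0.51428826;
    % 3D location as the weighted combination of the triangle's vertices
    p = u*X(T(t,1),:) + v*X(T(t,2),:) + w*X(T(t,3),:);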

Example of feature point file
.FEAT format writer (.m)

Each participant can submit up to three results corresponding to different methods or settings of their algorithms. Name the zip file following the convention lastname_firstname_algorithm.zip. Participants should also indicate the typical running time of their algorithm.


Submission format for feature descriptors

Participants wishing to test feature descriptor algorithms will submit a set of 138 feature descriptor files in a single zip archive, one file per shape. The naming convention of the files is the same as for the shapes in the dataset: 000n.xform.m.desc, where n is the shape class number, xform is the transformation name as in the dataset, and m is the transformation strength (1-5); 000n.null.0.desc is used for the null shape. Each file contains the descriptors corresponding to the detected feature points, in the same order, represented as lines of the format xi1 xi2 ... xiK, where xik is the kth coordinate of the ith K-dimensional descriptor vector, corresponding to the feature point ti ui vi wi in the feature point file, for i=1,...,N detected feature points. Example:

1.0123 0.0125 0.438 ... 0.1993

Example of feature descriptor file (with descriptor corresponding to points in the above example of feature point file)
.DESC format writer (.m)
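
The provided .DESC format writer can be used to produce such files; for illustration, a minimal MATLAB sketch of the plain-text layout (a hypothetical function, not part of the distribution) is:

    function write_desc(filename, D)
    % WRITE_DESC  Save an N-by-K descriptor matrix D in the plain-text .desc
    % layout: one line per feature point, K space-separated coordinates, in
    % the same order as the corresponding .feat file.
        fid = fopen(filename, 'w');
        for i = 1:size(D, 1)
            fprintf(fid, '%g ', D(i, 1:end-1));
            fprintf(fid, '%g\n', D(i, end));
        end
        fclose(fid);
    end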

Each participant can submit up to three results corresponding to different methods or settings of their algorithms. Name the zip file following the convention lastname_firstname_algorithm.zip. Participants should also indicate the typical running time of their algorithm.

MATLAB code (1MB, zipped) for visualization of feature points and descriptors demonstrating the use of both formats


Terms of use

The SHREC benchmark is made available for public use. Any use of the SHREC feature detection and description benchmark data or results should cite:

A. M. Bronstein, M. M. Bronstein, B. Bustos, U. Castellani, M. Crisani, B. Falcidieno, L. J. Guibas, I. Kokkinos, V. Murino, M. Ovsjanikov, G. Patané, I. Sipiran, M. Spagnuolo, J. Sun, "SHREC 2010: robust feature detection and description benchmark", Proc. EUROGRAPHICS Workshop on 3D Object Retrieval (3DOR), 2010.

Please contact michael.bronstein@usi.ch for additional instructions or to have your results evaluated.