Shape Retrieval Contest (SHREC'10) Datasets - Correspondence benchmark
Correspondence and similarity are two intimately related problems in shape analysis. By defining an optimal correspondence through a structure-preservation criterion, one obtains a measure of shape similarity as the amount of structure distortion. Finding a correspondence between two shapes that is invariant to a wide variety of transformations is thus a cornerstone problem in many approaches to shape similarity and retrieval.
The SHREC'10 Correspondence track simulates one-to-one shape matching, in which the shapes to be matched are modifications of the same shape. The benchmark evaluates how well algorithms cope with particular classes of transformations and the maximum transformation strength they can handle.
The dataset used in this benchmark consists of 138 high-resolution (10K-50K vertices) triangular meshes from the TOSCA dataset (available in the public domain). The dataset is shared with the SHREC 2010 Feature detection/description benchmark.
The dataset includes 3 shape classes (human, dog, horse) with simulated transformations. Transformations are split into 9 classes (isometry, topology, small and big holes, global and local scaling, noise, shot noise, sampling). Each transformation class appears in five different strength levels. The total number of transformations per shape class is 45, plus one null shape (no transformation applied).
The distribution is a zip file containing 138 shapes. Each shape is saved in .OFF format as a text file. The naming convention of the files is 000n.xform.m.off, where n is the shape class number, xform is the transformation name, and m is the transformation strength (1-5). 000n.null.0.off denotes the null shape, against which the comparison is performed.
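As a small sketch of the naming convention (in Python rather than the benchmark's MATLAB utilities), a dataset filename can be split into its three fields with a regular expression; the zero-padded four-digit class field is an assumption read off the 000n pattern above:

```python
import re

# Matches 000n.xform.m.off, e.g. "0001.isometry.3.off" or "0002.null.0.off".
# The four-digit zero-padded class field is an assumption about the padding.
FNAME_RE = re.compile(r"^(\d{4})\.([a-z]+)\.(\d)\.off$")

def parse_name(fname):
    """Split a dataset filename into (class number, transformation, strength)."""
    m = FNAME_RE.match(fname)
    if m is None:
        raise ValueError("not a dataset filename: " + fname)
    return int(m.group(1)), m.group(2), int(m.group(3))
```

The same pattern with the extension changed to .corr applies to the submission files described below.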
Correspondence dataset (110MB, zipped)
.OFF format reader (.m)
Participants will submit, for each transformation, a set of points and their correspondences in the null shape (both sparse and dense correspondences are allowed). Evaluation will be performed automatically by the organizers.
The evaluation criteria are the average and maximum geodesic distance between the ground-truth correspondence and the one established by the tested algorithm (quantified as the geodesic distance between the true and estimated corresponding points). Performance is evaluated per transformation category and per transformation strength. This evaluation shows which algorithms are more sensitive to certain types of transformations, and also determines the transformation strength at which reasonable performance is still attainable.
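Given a precomputed matrix of pairwise geodesic distances on the null shape (e.g. from fast marching), the two error statistics reduce to a lookup. A minimal sketch; the matrix D and the index arrays are hypothetical names, not part of the benchmark's own evaluation code:

```python
import numpy as np

def correspondence_error(D, gt_idx, est_idx):
    """Average and maximum geodesic error of an estimated correspondence.

    D       : (V, V) symmetric matrix of geodesic distances between the
              null shape's vertices (assumed precomputed).
    gt_idx  : ground-truth corresponding vertex indices on the null shape.
    est_idx : vertex indices estimated by the tested algorithm.
    """
    # Look up the geodesic distance between each true/estimated pair.
    errors = D[np.asarray(gt_idx), np.asarray(est_idx)]
    return errors.mean(), errors.max()
```

A perfect correspondence gives zero average and maximum error, since D has a zero diagonal.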
Participants will submit a set of 135 correspondence files (one file per transformation) in a single zip archive. The naming convention of the files is 000n.xform.m.corr, where n is the shape class number, xform is the transformation name as in the dataset, and m is the transformation strength (1-5).
Each file will contain the correspondence to the null shape (000n.null.0), represented as pairs of matching points in barycentric coordinates, one text line per point in the format: tyi uyi vyi wyi txi uxi vxi wxi, where txi is a triangle in the transformed shape and uxi, vxi, wxi are the non-negative barycentric weights in triangle txi (uxi+vxi+wxi=1);
tyi is the corresponding triangle in the null shape and uyi, vyi, wyi are the non-negative barycentric weights in triangle tyi (uyi+vyi+wyi=1); i=1,..., number of corresponding points. Example:
68116 0.23998567 0.24572606 0.51428826 50375 0.33333333 0.33333333 0.33333333
The barycentric coordinates are the most generic representation for correspondences. If the participating algorithm is designed to find vertex-to-vertex correspondences, use the barycentric representation to encode vertices (e.g., 68116 1 0 0, which places the point at the first vertex of triangle 68116).
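To use a .corr entry geometrically, each (triangle, weights) quadruple is mapped to a 3D point on the mesh. A minimal Python sketch, assuming 0-based triangle indices and a mesh given as vertex and face arrays; adjust the index base to match the provided .OFF reader:

```python
import numpy as np

def barycentric_to_point(vertices, faces, tri, u, v, w):
    """Map one (triangle, barycentric weights) quadruple to a 3D point.

    vertices     : (V, 3) array of vertex positions
    faces        : (F, 3) array of 0-based vertex indices per triangle
    tri, u, v, w : one quadruple from a .corr line, with u + v + w == 1
    """
    # Corner positions of the triangle, weighted by the barycentric coords.
    a, b, c = vertices[faces[tri]]
    return u * a + v * b + w * c
```

With weights (1, 0, 0) the function returns the triangle's first vertex, matching the vertex-to-vertex encoding described above.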
Example of correspondence file
MATLAB code (2MB, zipped) for visualization of correspondence demonstrating the use of the format
.CORR format writer (.m)
Each participant can submit up to three results corresponding to different methods or settings of their algorithms. Name the zip file following the convention lastname_firstname_algorithm.zip. Participants should also indicate the typical running time of their algorithm.
The SHREC benchmark is made available for public use. Any use of the SHREC correspondence benchmark data or results should cite:
A. M. Bronstein, M. M. Bronstein, U. Castellani, A. Dubrovina, L. J. Guibas, R. P. Horaud, R. Kimmel,
D. Knossow, E. von Lavante, D. Mateus, M. Ovsjanikov, A. Sharma,
"SHREC 2010: robust correspondence benchmark",
Proc. EUROGRAPHICS Workshop on 3D Object Retrieval (3DOR), 2010.
Please contact email@example.com for additional instructions or to get evaluation of your results.