The self-consistency methodology, a new paradigm for evaluating certain vision problems without relying extensively on ground truth, is introduced in this paper. The idea is inspired by a remarkable property of the human visual system: given a static natural scene, the perceptual inferences the system makes from one viewpoint are almost always consistent with the inferences it makes from a different viewpoint.
For point-correspondence algorithms, the methodology consists of applying the algorithm independently to subsets of images obtained by varying the camera geometry while keeping the 3D scene geometry fixed. Matches that should correspond to the same 3D surface element are collected into statistics that measure the accuracy and reliability of the algorithm, and these statistics can in turn be used to predict its performance on new images of new scenes.
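The procedure above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes the correspondence algorithm has already been run on each image subset and the resulting reconstructions grouped by surface element, and it pools the pairwise disagreements between independent estimates of the same element into a self-consistency distribution.

```python
import math
from itertools import combinations

def self_consistency_distances(estimates_per_point):
    """For each 3D surface element, estimates_per_point holds the 3D
    positions reconstructed independently from different image subsets.
    Returns the pooled pairwise distances between estimates of the same
    element -- the raw material of the self-consistency distribution.
    No ground truth is consulted at any point."""
    distances = []
    for estimates in estimates_per_point:
        for p, q in combinations(estimates, 2):
            distances.append(math.dist(p, q))
    return distances

def summarize(distances, tol=0.05):
    """Simple accuracy/reliability summary: mean scatter, plus the
    fraction of estimate pairs that agree within a (hypothetical)
    tolerance 'tol', expressed in scene units."""
    mean = sum(distances) / len(distances)
    reliable = sum(d <= tol for d in distances) / len(distances)
    return mean, reliable

# Two surface elements, each reconstructed from two independent
# image subsets (synthetic data for illustration only).
estimates = [
    [(0.0, 0.0, 0.0), (0.0, 0.0, 0.01)],   # estimates agree closely
    [(1.0, 1.0, 1.0), (1.0, 1.0, 1.10)],   # estimates disagree
]
dists = self_consistency_distances(estimates)
mean, reliable = summarize(dists)
```

In practice the distribution itself, not just its mean, is the object of interest: its shape characterizes how often the algorithm produces mutually inconsistent matches.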
Although the authors concentrate on the performance evaluation of multi-viewpoint correspondence algorithms, they indicate how the self-consistency methodology can be extended to other computer vision algorithms. The self-consistency distribution is a very simple idea with powerful consequences. It can be used to compare algorithms and scoring functions, evaluate the performance of an algorithm across different classes of scenes, tune algorithm parameters such as window size, and reliably detect changes in a scene.
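One of the uses listed above, parameter tuning, can be sketched as follows. This is a hypothetical illustration, not the paper's procedure: `run_matcher` stands in for applying the correspondence algorithm with a given window size to the image subsets, and the window size whose independent reconstructions scatter least is selected, again without ground truth.

```python
import math
from itertools import combinations

def mean_scatter(estimates_per_point):
    """Mean pairwise distance between independent 3D estimates of the
    same surface element: lower means more self-consistent."""
    ds = [math.dist(p, q)
          for estimates in estimates_per_point
          for p, q in combinations(estimates, 2)]
    return sum(ds) / len(ds)

def tune_window_size(candidate_sizes, run_matcher):
    """Pick the window size whose reconstructions are most
    self-consistent. run_matcher(w) is a hypothetical callable that
    returns grouped 3D estimates produced with window size w."""
    return min(candidate_sizes, key=lambda w: mean_scatter(run_matcher(w)))

# Synthetic stand-in results: window size 7 yields the tightest
# agreement between independent estimates (illustration only).
fake_results = {
    5: [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.20)]],
    7: [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.01)]],
    9: [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.50)]],
}
best = tune_window_size([5, 7, 9], lambda w: fake_results[w])
```

The same scoring function generalizes to the other listed uses: comparing two algorithms reduces to comparing their self-consistency distributions on the same image subsets.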
The paper is clearly written and situates itself with respect to previous work on estimating uncertainty without ground truth. A complete set of references is provided.