Results 1–10 of 52
Randomized trees for real-time keypoint recognition
 In Proc. Int. Conf. on Computer Vision and Pattern Recognition (CVPR 2005)
, 2005
Abstract

Cited by 115 (4 self)
In earlier work, we proposed treating wide baseline matching of feature points as a classification problem, in which each class corresponds to the set of all possible views of such a point. We used a K-means plus Nearest Neighbor classifier to validate our approach, mostly because it was simple to implement. It has proved effective but still too slow for real-time use. In this paper, we advocate instead the use of randomized trees as the classification technique. It is both fast enough for real-time performance and more robust. It also gives us a principled way not only to match keypoints but to select, during a training phase, those that are the most recognizable. This results in a real-time system able to detect and position in 3D planar, non-planar, and even deformable objects. It is robust to illumination changes, scale changes and occlusions.
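The abstract's central idea of classifying a patch by dropping it through an ensemble of randomized trees can be sketched as follows. This is an illustrative toy, not the authors' implementation: the binary pixel-pair tests, tree depth, and all names are assumptions.

```python
# Toy sketch: an ensemble of randomized trees classifying 1-D "patches"
# (flat intensity vectors) with simple binary pixel-comparison tests.
import random

class RandomizedTree:
    def __init__(self, n_classes, depth, patch_size, rng):
        self.n_classes = n_classes
        self.depth = depth
        # Each internal node compares the intensities of two random pixels.
        self.tests = [(rng.randrange(patch_size), rng.randrange(patch_size))
                      for _ in range(2 ** depth - 1)]
        # One class-posterior histogram per leaf.
        self.leaves = [[0] * n_classes for _ in range(2 ** depth)]

    def _leaf_index(self, patch):
        node = 0
        for _ in range(self.depth):
            a, b = self.tests[node]
            node = 2 * node + (1 if patch[a] < patch[b] else 2)
        return node - (2 ** self.depth - 1)

    def train(self, patches, labels):
        # Drop each training view into its leaf and count its class.
        for patch, label in zip(patches, labels):
            self.leaves[self._leaf_index(patch)][label] += 1

    def posterior(self, patch):
        hist = self.leaves[self._leaf_index(patch)]
        total = sum(hist) or 1
        return [c / total for c in hist]

def classify(forest, patch):
    # Average the class posteriors over all trees; pick the best class.
    n_classes = forest[0].n_classes
    scores = [sum(t.posterior(patch)[c] for t in forest)
              for c in range(n_classes)]
    return max(range(n_classes), key=scores.__getitem__)
```

Because each tree only does a handful of pixel comparisons, classification cost is independent of the number of training views, which is the speed advantage the abstract contrasts with nearest-neighbor matching.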
Towards a coherent statistical framework for dense deformable template estimation
 J. R. Statist. Soc. B
, 2006
Abstract

Cited by 50 (8 self)
Abstract. The problem of estimating probabilistic deformable template models in the field of computer vision, or of probabilistic atlases in the field of computational anatomy, has not yet received a coherent statistical formulation and remains a challenge. In this paper, we provide a careful definition and analysis of a well-defined statistical model based on dense deformable templates for gray-level images of deformable objects. We propose a rigorous Bayesian framework from which we derive an iterative algorithm for the effective estimation of the geometric and photometric parameters of the model in a small-sample setting, together with an asymptotic consistency proof. The model is extended to mixtures of a finite number of such components, leading to a fine description of the photometric and geometric variations. We illustrate some of the ideas with images of handwritten digits, and apply the estimated models to classification through maximum likelihood.
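The alternation between geometric and photometric parameters that the abstract describes can be illustrated with a deliberately simplified 1-D toy: observations are shifted, noisy copies of an unknown template, and we alternate between inferring the most likely shift per image and re-estimating the template from the aligned sample. Restricting deformations to cyclic shifts is an assumption made only to keep the sketch short.

```python
# Toy alternating estimation: geometric step (best cyclic shift per
# observation) then photometric step (template = mean of aligned copies).
import numpy as np

def estimate_template(images, n_iter=5):
    template = images[0].copy()          # crude initial template guess
    for _ in range(n_iter):
        aligned = []
        for img in images:
            # Geometric step: pick the cyclic shift minimizing squared error.
            shifts = [np.roll(img, -s) for s in range(len(img))]
            errors = [np.sum((s_img - template) ** 2) for s_img in shifts]
            aligned.append(shifts[int(np.argmin(errors))])
        # Photometric step: update the template from the aligned sample.
        template = np.mean(aligned, axis=0)
    return template
```

The full model in the paper replaces the shift search with inference over dense deformation fields and the plain mean with a Bayesian update, but the alternating structure is the same.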
A Bayesian, exemplar-based approach to hierarchical shape matching
 IEEE Trans. Pattern Anal. Mach. Intell
Abstract

Cited by 46 (7 self)
Abstract—This paper presents a novel probabilistic approach to hierarchical, exemplar-based shape matching. No feature correspondence is needed among exemplars, just a suitable pairwise similarity measure. The approach uses a template tree to efficiently represent and match the variety of shape exemplars. The tree is generated offline by a bottom-up clustering approach using stochastic optimization. Online matching involves a simultaneous coarse-to-fine approach over the template tree and over the transformation parameters. The main contribution of this paper is a Bayesian model to estimate the a posteriori probability of the object class, after a certain match at a node of the tree. This model takes into account object scale and saliency and allows for a principled setting of the matching thresholds such that unpromising paths in the tree traversal process are eliminated early on. The proposed approach was tested in a variety of application domains. Here, results are presented on one of the more challenging domains: real-time pedestrian detection from a moving vehicle. A significant speed-up is obtained when comparing the proposed probabilistic matching approach with a manually tuned non-probabilistic variant, both utilizing the same template tree structure. Index Terms—Hierarchical shape matching, chamfer distance, Bayesian models.
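The pruned coarse-to-fine traversal of a template tree can be sketched minimally as below. The 1-D "shapes" (point sets on a line), the stand-in chamfer distance, and the per-node thresholds are all assumptions for illustration; in the paper the thresholds come from the Bayesian model.

```python
# Minimal template-tree matching: explore a branch only while the
# prototype distance stays below the node threshold.

def chamfer_distance(a, b):
    # Toy stand-in for a chamfer distance between 1-D point sets.
    return sum(min(abs(p - q) for q in b) for p in a) / len(a)

class Node:
    def __init__(self, prototype, threshold, children=(), exemplar_id=None):
        self.prototype = prototype
        self.threshold = threshold
        self.children = list(children)
        self.exemplar_id = exemplar_id   # set only at leaves

def match(node, observation, hits):
    if chamfer_distance(observation, node.prototype) > node.threshold:
        return                           # prune this whole subtree early
    if node.exemplar_id is not None:
        hits.append(node.exemplar_id)    # a leaf exemplar matched
    for child in node.children:
        match(child, observation, hits)
```

A coarse prototype with a loose threshold at the root lets a single failed comparison discard an entire family of exemplars, which is where the reported speed-up comes from.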
Hierarchical testing designs for pattern recognition
, 2003
Abstract

Cited by 38 (8 self)
We explore the theoretical foundations of a “twenty questions” approach to pattern recognition. The object of the analysis is the computational process itself rather than probability distributions (Bayesian inference) or decision boundaries (statistical learning). Our formulation is motivated by applications to scene interpretation in which there are a great many possible explanations for the data, one (“background”) is statistically dominant, and it is imperative to restrict intensive computation to genuinely ambiguous regions. The focus here is then on pattern filtering: given a large set Y of possible patterns or explanations, narrow down the true one Y to a small (random) subset Ŷ ⊂ Y of “detected” patterns to be subjected to further, more intense, processing. To this end, we consider a family of hypothesis tests for Y ∈ A versus the non-specific alternatives Y ∈ A^c. Each test has null type I error, and the candidate sets A ⊂ Y are arranged in a hierarchy of nested partitions. These tests are then
POP: Patchwork of parts models for object recognition
 International Journal of Computer Vision
, 2004
Abstract

Cited by 38 (3 self)
We formulate a deformable template model for objects with a clearly defined mechanism for parameter estimation. A separate model is estimated for each class, and classification is likelihood based; no discrimination boundaries are learned. Nonetheless, high classification rates are achieved with small training samples. The data models are defined on binary oriented edge features that are highly robust to photometric variation and small local deformations. The deformation of an object is defined in terms of the locations of a moderate number of reference points. Each reference point is associated with a part: a probability map assigning a probability for each edge type at each pixel in a window. The likelihood of the edge data on the entire image conditional on the deformation is described as a patchwork of parts (POP) model: the edges are assumed conditionally independent, and the marginal at each pixel is obtained by a patchwork operation, averaging the marginal probabilities contributed by each part covering the pixel. Object classes are modeled as mixtures of POP models that are discovered sequentially as more class data is observed. Experiments are presented on the MNIST database, hundreds of deformed LaTeX shapes, reading zip codes, and face detection.
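The patchwork operation itself is simple enough to show directly: where part windows overlap, the pixel marginal is the average of the probabilities each covering part contributes. The interface below (one probability per pixel rather than per edge type, and a uniform background value) is a simplifying assumption.

```python
# Patchwork averaging of part probability maps into per-pixel marginals.
import numpy as np

def patchwork_marginals(image_shape, parts, background=0.1):
    """parts: list of (row, col, prob_map), prob_map a 2-D array placed
    with its top-left corner at (row, col)."""
    acc = np.zeros(image_shape)
    count = np.zeros(image_shape)
    for r, c, pmap in parts:
        h, w = pmap.shape
        acc[r:r + h, c:c + w] += pmap    # sum contributions per pixel
        count[r:r + h, c:c + w] += 1     # how many parts cover the pixel
    # Average where covered; fall back to a background probability elsewhere.
    return np.where(count > 0, acc / np.maximum(count, 1), background)
```

Under the conditional-independence assumption in the abstract, the image log-likelihood is then just a sum of per-pixel Bernoulli terms using these marginals.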
Learning and using taxonomies for fast visual categorization
 In CVPR
Abstract

Cited by 31 (2 self)
The computational complexity of current visual categorization algorithms scales linearly at best with the number of categories. The goal of classifying simultaneously Ncat = 10^4–10^5 visual categories requires sublinear classification costs. We explore algorithms for automatically building classification trees which have, in principle, log Ncat complexity. We find that a greedy algorithm that recursively splits the set of categories into the two minimally confused subsets achieves 5- to 20-fold speed-ups at a small cost in classification performance. Our approach is independent of the specific classification algorithm used. A welcome by-product of our algorithm is a very reasonable taxonomy of the Caltech-256 dataset.
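The recursive splitting of categories into two minimally confused subsets can be sketched from a confusion matrix alone. The exhaustive bipartition search below is an assumption that only works at toy scale; at Ncat = 10^4 one would need a heuristic (e.g. spectral) split, which the abstract leaves unspecified.

```python
# Build a binary category tree by recursively minimizing the confusion
# mass between the two halves of each split.
import itertools

def cross_confusion(conf, A, B):
    return sum(conf[i][j] + conf[j][i] for i in A for j in B)

def best_split(conf, cats):
    # Small-scale exhaustive search over bipartitions (sketch only).
    best = None
    for r in range(1, len(cats) // 2 + 1):
        for A in itertools.combinations(cats, r):
            B = tuple(c for c in cats if c not in A)
            cost = cross_confusion(conf, A, B)
            if best is None or cost < best[0]:
                best = (cost, A, B)
    return best[1], best[2]

def build_tree(conf, cats):
    if len(cats) == 1:
        return cats[0]
    A, B = best_split(conf, cats)
    return (build_tree(conf, A), build_tree(conf, B))
```

At test time, a classifier at each internal node only has to decide which subset the image belongs to, giving the log Ncat depth the abstract targets.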
Near-optimal detection of geometric objects by fast multiscale methods
 IEEE Trans. Inform. Theory
, 2005
Abstract

Cited by 23 (7 self)
Abstract—We construct detectors for “geometric” objects in noisy data. Examples include a detector for the presence of a line segment of unknown length, position, and orientation in two-dimensional image data with additive white Gaussian noise. We focus on the following two issues: i) the optimal detection threshold, i.e., the signal strength below which no method of detection can be successful for large dataset size; ii) the optimal computational complexity of a near-optimal detector, i.e., the complexity required to detect signals slightly exceeding the detection threshold. We describe a general approach to such problems which covers several classes of geometrically defined signals: for example, with one-dimensional data, signals having elevated mean on an interval, and, in d-dimensional data, signals with elevated mean on a rectangle, a ball, or an ellipsoid. In all these problems, we show that a naive or straightforward approach leads to detector thresholds and algorithms which are asymptotically far from optimal. At the same time, a multiscale geometric analysis of these classes of objects allows us to derive asymptotically optimal detection thresholds and fast algorithms for near-optimal detectors. Index Terms—Beamlets, detecting hot spots, detecting line segments, Hough transform, image processing, maxima of Gaussian processes, multiscale geometric analysis, Radon transform.
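The one-dimensional case mentioned in the abstract (elevated mean on an interval) makes a compact illustration of the multiscale idea: instead of testing all O(n^2) intervals, test only the dyadic ones with a normalized sum statistic. The threshold sqrt(2 log n) and the restriction to a non-overlapping dyadic grid are simplifications; the paper's detectors are more refined.

```python
# Multiscale interval detection in 1-D Gaussian noise: scan dyadic
# intervals only, flagging those whose normalized sum exceeds the
# universal threshold sqrt(2 log n).
import math
import numpy as np

def detect_intervals(x, sigma=1.0):
    n = len(x)
    thresh = math.sqrt(2 * math.log(max(n, 2)))
    hits = []
    length = 1
    while length <= n:
        for start in range(0, n - length + 1, length):   # dyadic grid
            stat = x[start:start + length].sum() / (sigma * math.sqrt(length))
            if stat > thresh:
                hits.append((start, start + length))
        length *= 2
    return hits
```

The dyadic family has only O(n) members here (O(n log n) with overlaps), which is the source of the "fast" in the title: near-optimal sensitivity at far below brute-force cost.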
Part-Based Statistical Models for Object Classification and Detection
 Proc. IEEE Conf. Computer Vision and Pattern Recognition
, 2005
Abstract

Cited by 19 (2 self)
We propose using simple mixture models to define a set of mid-level binary local features based on binary oriented edge input. The features capture natural local structures in the data and yield very high classification rates when used with a variety of classifiers trained on small training sets, exhibiting robustness to degradation with clutter. Of particular interest is the use of the features as variables in simple statistical models for the objects, thus enabling likelihood-based classification. Pre-training decision boundaries between classes, a necessary component of non-parametric techniques, is thus avoided. Class models are trained separately with no need to access data of other classes. Experimental results are presented for handwritten character recognition, classification of deformed LaTeX symbols involving hundreds of classes, and side-view car detection.
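One simple way to turn a mixture model over binary edge patches into mid-level features is a k-modes clustering loop (Hamming distance, majority-vote centroids), with each patch encoded by its nearest centroid. This is a stand-in sketch; the paper's mixture models and feature definitions differ in detail.

```python
# k-modes clustering of binary patches; a patch's feature is the index
# of its nearest cluster centroid.
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def kmodes(patches, k, n_iter=10, seed=0):
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(patches, k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in patches:
            j = min(range(k), key=lambda m: hamming(p, centroids[m]))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:   # majority vote per bit position
                centroids[j] = [int(sum(bits) * 2 >= len(members))
                                for bits in zip(*members)]
    return centroids

def encode(patch, centroids):
    return min(range(len(centroids)),
               key=lambda j: hamming(patch, centroids[j]))
```

The resulting discrete feature values can serve directly as the variables of a per-class likelihood model, which is what lets each class be trained without access to the others' data.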
Hierarchical Learning of Curves: Application to Guidewire Localization in Fluoroscopy
Abstract

Cited by 17 (8 self)
In this paper we present a method for learning a curve model for detection and segmentation by closely integrating a hierarchical curve representation, using generative and discriminative models, with a hierarchical inference algorithm. We apply this method to the problem of automatic localization of the guidewire in fluoroscopic sequences. In fluoroscopic sequences, the guidewire appears as a barely visible, non-rigid, one-dimensional curve. Our paper has three main contributions. Firstly, we present a novel method to learn the complex shape and appearance of a free-form curve using a hierarchical model of curves of increasing degrees of complexity and a database of manual annotations. Secondly, we present a novel computational paradigm in the context of Marginal Space Learning, in which the algorithm is closely integrated with the hierarchical representation to obtain fast parameter inference. Thirdly, to our knowledge this is the first full system which robustly localizes the whole guidewire and has extensive validation on hundreds of frames. We present very good quantitative and qualitative results on real fluoroscopic video sequences, obtained in just one second per frame.