Results 1–10 of 10
Recognition Using Region Correspondences
 International Journal of Computer Vision
, 1995
Abstract
Cited by 34 (7 self)
A central problem in object recognition is to determine the transformation that relates the model to the image, given some partial correspondence between the two. This is useful in determining whether an object is present in an image, and if so, determining where the object is. We present a novel method of solving this problem that uses region information. In our approach the model is divided into volumes, and the image is divided into regions. Given a match between subsets of volumes and regions (without any explicit correspondence between different pieces of the regions) the alignment transformation is computed. The method applies to planar objects under similarity, affine, and projective transformations and to projections of 3D objects undergoing affine and projective transformations.

1 Introduction

A fundamental problem in recognition is pose estimation. Given a correspondence between some portions of an object model and some portions of an image, determine the transformation th...
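As a rough illustration of how an alignment transformation can be recovered from region information alone (with no point-to-point correspondences), the sketch below matches the first and second moments of two planar regions. This is a standard moment-matching construction, not the paper's algorithm, and it fixes the linear part only up to a residual rotation; all function names are made up for this example.

```python
import numpy as np

def region_moments(points):
    # centroid and second-moment (covariance) matrix of a region's points
    c = points.mean(axis=0)
    d = points - c
    return c, d.T @ d / len(points)

def sqrtm_spd(M):
    # symmetric square root of a symmetric positive-definite matrix
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def affine_from_moments(src, dst):
    # one affine map (unique up to rotation) aligning src's moments to dst's:
    # A satisfies A @ M1 @ A.T == M2, and t carries centroid to centroid
    c1, M1 = region_moments(src)
    c2, M2 = region_moments(dst)
    A = sqrtm_spd(M2) @ np.linalg.inv(sqrtm_spd(M1))
    return A, c2 - A @ c1
```

Mapping a region through the recovered (A, t) reproduces the target region's centroid and covariance exactly, even though no individual point was ever placed in correspondence.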
Space Efficient 3D Model Indexing
 In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
, 1992
Abstract
Cited by 28 (4 self)
We show that the set of 2D images produced by the point features of a rigid 3D model can be represented with two lines in two high-dimensional spaces. These lines are the lowest-dimensional representation possible. We use this result to build a system for representing, in a hash table at compile time, all the images that groups of model features can produce. Then at run time a group of image features can access the table and find all model groups that could match it. This table is efficient in terms of space, and is built and accessed through analytic methods that account for the effect of sensing error. In real images, it reduces the set of potential matches by a factor of several thousand. We also use this representation of a model's images to analyze two other approaches to recognition: invariants and non-accidental properties. These are properties of images that some models always produce, and all other models either never produce (invariants) or almost never produce (non-accidental properties). In several domains we determine when invariants exist. In general we show that there is an infinite set of non-accidental properties that are qualitatively similar.
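The flavor of this kind of indexing can be illustrated in a simpler setting than the paper's: for planar point groups under affine maps, the coordinates of a point in the frame of three basis points are affine-invariant, so they can serve as a hash-table key computed at compile time. The function name below is illustrative, and no sensing-error analysis is included.

```python
import numpy as np

def affine_invariant_coords(p, b0, b1, b2):
    # coordinates of p in the affine frame of basis points b0, b1, b2;
    # unchanged when all four points undergo the same affine map,
    # since the map cancels: (A @ M)^-1 @ A @ (p - b0) == M^-1 @ (p - b0)
    M = np.column_stack([b1 - b0, b2 - b0])
    return np.linalg.solve(M, p - b0)
```

At run time, a group of image features can be expressed in the same kind of frame and used to look up matching model groups.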
Grouping for Recognition
, 1989
Abstract
Cited by 14 (1 self)
This paper presents a new method of grouping edges in order to recognize objects. This grouping method succeeds on images of both two- and three-dimensional objects. We order groups of edges based on the likelihood that a single object produced them. This allows the recognition system to consider first the collections of edges most likely to lead to the correct recognition of objects. The grouping module estimates this likelihood using the distance that separates edges and their relative orientation. This ordering greatly reduces the amount of computation required to locate objects. Surprisingly, in some circumstances grouping can also improve the accuracy of a recognition system. We test the grouping system in two ways. First, we use it in a recognition system that handles libraries of two-dimensional, polygonal objects. Second, we show comparable performance of the grouping system on images of two- and three-dimensional objects. This demonstrates that the grouping system could produce...
What makes viewpoint invariant properties perceptually salient?: A computational perspective
 in Perceptual Organization for Artificial Vision Systems, K.L. Boyer, Ed
, 2000
Abstract
Cited by 12 (0 self)
It has been noted that many of the perceptually salient image properties identified by the Gestalt psychologists, such as collinearity, parallelism, and good continuation, are invariant to changes in viewpoint. However, we show that viewpoint invariance is not sufficient to distinguish these Gestalt properties; one can define an infinite number of viewpoint-invariant properties that are not perceptually salient. We then show that, generally, the perceptually salient viewpoint-invariant properties are minimal, in the sense that they can be derived using less image information than non-salient properties. This provides support for the hypothesis that the biological relevance of an image property is determined both by the extent to which it provides information about the world and by the ease with which this property can be computed.
3D to 2D Pose Determination with Regions
 International Journal of Computer Vision
, 1999
Abstract
Cited by 7 (0 self)
This paper presents a novel approach to parts-based object recognition in the presence of occlusion. We focus on the problem of determining the pose of a 3D object from a single 2D image when convex parts of the object have been matched to corresponding regions in the image. We consider three types of occlusion: self-occlusion, occlusions whose locus is identified in the image, and completely arbitrary occlusions. We show that in the first two cases this is a convex optimization problem, derive efficient algorithms, and characterize their performance. For the last case, we prove that the problem of finding valid poses is computationally hard, but provide an efficient, approximate algorithm. This work generalizes our previous work on region-based object recognition, which focused on the case of planar models. This research was supported by the United States-Israel Binational Science Foundation, Grant No. 94100. The vision group at the Weizmann Inst. is supported in part by...
3D to 2D Recognition with Regions
 IEEE Conference on Computer Vision and Pattern Recognition
, 1997
Abstract
Cited by 3 (0 self)
This paper presents a novel approach to parts-based object recognition in the presence of occlusion. We focus on the problem of determining the pose of a 3D object from a single 2D image when convex parts of the object have been matched to corresponding regions in the image. We consider three types of occlusion: self-occlusion, occlusions whose locus is identified in the image, and completely arbitrary occlusions. We derive efficient algorithms for the first two cases, and characterize their performance. For the last case, we prove that the problem of finding valid poses is computationally hard, but provide an efficient, approximate algorithm. This work generalizes our previous work on region-based object recognition, which focused on the case of planar models. A preliminary version of this paper has appeared in [29]. A brief overview of these and related results has appeared in [8]. This research was supported by the United States-Israel Binational Science Foundation, Gr...
When Is It Possible to Identify 3D Objects from Single Images Using Class Constraints?
 J. of Comp. Vision
, 1999
Abstract
Cited by 3 (2 self)
One approach to recognizing objects seen from an arbitrary viewpoint is to extract invariant properties of the objects from single images. Such properties are found in images of 3D objects only when the objects are constrained to belong to certain classes (e.g., bilaterally symmetric objects). Existing studies that follow this approach propose how to compute invariant representations for a handful of classes of objects. A fundamental question regarding the invariance approach is whether it can be applied to a wide range of classes. To answer this question it is essential to study the set of classes for which invariance exists. This paper introduces a new method for determining the existence of invariance for classes of objects, together with the set of images from which these invariants can be computed. We develop algebraic tests that, given a class of objects undergoing affine projection, determine whether the objects in the class can be identified from single images. In addition, thes...
Robust and Efficient 3D Recognition by Alignment
, 1992
Abstract
Cited by 2 (1 self)
Alignment is a prevalent approach for recognizing three-dimensional objects in two-dimensional images. Current implementations handle errors that are inherent in images in ad hoc ways. This thesis shows that these errors can propagate and magnify through the alignment computations, such that the ad hoc approaches may not work. In addition, a technique is given for tightly bounding the propagated error, which can be used to make the recognition robust while still being efficient. Further, the error bounds can be used to formally compute the likelihood that a set of hypothesized matches between model and image features is correct. The technique for bounding the propagated error makes use of a new solution to a fundamental problem in computer recognition, namely, the solution for 3D pose from three corresponding points under weak-perspective projection. The new solution is intended to provide a fast means of computing the error bounds. In deriving the new solution, this thesis gives a ge...
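For reference, the weak-perspective (scaled-orthographic) camera model mentioned above can be written in a few lines; `R`, `s`, and `t` are generic names for the rotation, scale, and in-plane translation, not the thesis's notation.

```python
import numpy as np

def weak_perspective_project(X, R, s, t):
    # weak perspective: rotate the 3D points (n x 3 array), drop depth
    # (orthographic projection), then apply a uniform scale and a 2D shift
    Xc = X @ R.T
    return s * Xc[:, :2] + t
```

Pose-from-three-points, as used in the thesis, inverts this map: given three model points and their three image locations, solve for R, s, and t.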
Error Propagation in 3D-from-2D Recognition: Scaled-Orthographic and Perspective Projections
 In Proceedings: ARPA Image Understanding Workshop
, 1994
Abstract
Cited by 1 (0 self)
Robust recognition systems require a careful understanding of the effects of error in sensed features. Error in these image features results in uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty when model poses are based on matching three image and model points. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute this uncertainty when poses are based on any number of initial matches.

1 Introduction

Object recognition systems frequently hypothesize a known object's pose based on matching a small number of the object's features to features in the image (e.g., [7, 14, 17, 10]). To confirm the hypothesis, they commonly use the pose to look for additional matches. A fundamental question in building robust recognition systems is how noise in the matched image feat...
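The linear-programming idea in the abstract can be sketched in a simplified setting: a 2D affine pose model (6 parameters) instead of the paper's scaled-orthographic 3D poses, with all names (`pose_rows`, `predicted_range`, `eps`) illustrative. Each match with error bound `eps` yields linear inequalities on the pose, and the extremal predicted coordinates of a further model point are found by minimizing and maximizing a linear objective.

```python
import numpy as np
from scipy.optimize import linprog

def pose_rows(m):
    # the image location of model point m = (mx, my) is linear in the
    # affine pose vector p = (a11, a12, tx, a21, a22, ty)
    mx, my = m
    return np.array([[mx, my, 1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, mx, my, 1.0]])

def predicted_range(model_pts, image_pts, eps, query_pt):
    """Min/max image coordinates of query_pt over all affine poses that map
    each matched model point within eps (per coordinate) of its image point."""
    A_ub, b_ub = [], []
    for m, img in zip(model_pts, image_pts):
        Rm, img = pose_rows(m), np.asarray(img, dtype=float)
        A_ub.append(Rm);  b_ub.extend(img + eps)   # Rm @ p <= img + eps
        A_ub.append(-Rm); b_ub.extend(eps - img)   # Rm @ p >= img - eps
    A_ub, b_ub = np.vstack(A_ub), np.array(b_ub)
    free = [(None, None)] * 6                      # pose parameters unbounded
    ranges = []
    for row in pose_rows(query_pt):
        lo = linprog(row, A_ub=A_ub, b_ub=b_ub, bounds=free)
        hi = linprog(-row, A_ub=A_ub, b_ub=b_ub, bounds=free)
        ranges.append((lo.fun, -hi.fun))
    return ranges   # [(u_min, u_max), (v_min, v_max)]
```

With three exact-basis matches at (0, 0), (1, 0), (0, 1) and eps = 0.1, the predicted location of model point (1, 1) is confined to a small box around its true image position.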