Results 1–10 of 38
Algebraic Functions For Recognition
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1994
Abstract

Cited by 147 (29 self)
In the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct reprojection method that cuts through the computations of camera transformation, scene structure and epipolar geometry. Moreover, the direct method is linear and sets a new lower theoretical bound on the minimal number of points required for a linear solution to the reprojection task. The proof of the central result may be of further interest, as it demonstrates certain regularities across homographies of the plane and introduces new view invariants. Experiments on simulated and real image data were conducted, including a comparative analysis with the epipolar intersection and linear combination methods, with results indicating a greater degree of robustness in practice and a higher level of performance in reprojection tasks. Keywords: Visual Recognition, Al...
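The trilinear relationship the abstract refers to can be sketched numerically. The snippet below (a minimal illustration in numpy, with our own variable names) builds the trifocal tensor from three canonical perspective cameras and verifies that corresponding image points satisfy the point-point-point trilinearity:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(0)

# Three perspective cameras in canonical form: P1 = [I | 0], P2 = [A | a4], P3 = [B | b4].
A, a4 = rng.standard_normal((3, 3)), rng.standard_normal(3)
B, b4 = rng.standard_normal((3, 3)), rng.standard_normal(3)

# Trifocal tensor slices T_i = a_i b4^T - a4 b_i^T (a_i, b_i = i-th columns of A, B).
T = [np.outer(A[:, i], b4) - np.outer(a4, B[:, i]) for i in range(3)]

# Project an arbitrary 3D point into the three views (homogeneous, unnormalized).
X = rng.standard_normal(3)
x1 = X
x2 = A @ X + a4
x3 = B @ X + b4

# Trilinear constraint: [x2]_x (sum_i x1_i T_i) [x3]_x is the 3x3 zero matrix.
M = skew(x2) @ sum(x1[i] * T[i] for i in range(3)) @ skew(x3)
print(np.abs(M).max())   # numerically zero
```

The nine entries of `M` are exactly the trilinear forms in the point coordinates; four of them are linearly independent, which is what makes the linear reprojection method possible.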
Relative 3D Reconstruction Using Multiple Uncalibrated Images
, 1995
Abstract

Cited by 90 (14 self)
In this paper, we show how relative 3D reconstruction from point correspondences of multiple uncalibrated images can be achieved through reference points. The original contributions with respect to related work in the field are mainly a direct global method for relative 3D reconstruction, and a geometrical method to select a correct set of reference points among all image points. Experimental results from both simulated and real image sequences are presented, and the robustness of the method and the reconstruction precision of the results are discussed.

Key words: relative reconstruction, projective geometry, uncalibration, geometric interpretation

1 Introduction

1.1 Relative positioning

From a single image, no depth can be computed without a priori information. Moreover, no invariant can be computed from a general set of points, as shown by Burns, Weiss and Riseman (1990). The problem becomes feasible using multiple images. The process is composed of two major steps. First, image feature...
Geometry and Photometry in 3D Visual Recognition
 PhD thesis, M.I.T. Artificial Intelligence Laboratory
, 1992
Abstract

Cited by 73 (9 self)
This thesis addresses the problem of visual recognition under two sources of variability: geometric and photometric. The geometric deals with the relation between 3D objects and their views under parallel, perspective, and central projection. The photometric deals with the relation between 3D matte objects and their images under changing illumination conditions. Taken together, an alignment-based method is presented for recognizing objects viewed from arbitrary viewing positions and illuminated by arbitrary settings of light sources.
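The photometric side of the thesis rests on a linear model for matte surfaces: ignoring attached shadows, a Lambertian image is linear in the light-source direction, so any novel-illumination image is a linear combination of three basis images. A minimal sketch of that property (our own synthetic setup, not the thesis's full method):

```python
import numpy as np

rng = np.random.default_rng(1)

# A matte (Lambertian, shadow-free) surface: each pixel has an albedo-scaled normal.
n_pixels = 500
R = rng.standard_normal((n_pixels, 3))   # rows: albedo * surface normal

# Three images of the same surface under three independent light-source directions.
S_basis = rng.standard_normal((3, 3))    # columns = lighting directions
I_basis = R @ S_basis                    # n_pixels x 3 matrix of basis images

# A fourth image under a novel light source ...
s_new = rng.standard_normal(3)
I_new = R @ s_new

# ... is an exact linear combination of the three basis images.
coeffs, *_ = np.linalg.lstsq(I_basis, I_new, rcond=None)
residual = np.linalg.norm(I_basis @ coeffs - I_new)
print(residual)   # numerically zero
```

With real images the fit is only approximate (shadows, specularities, noise), but the low residual is what makes alignment over illumination tractable.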
Projective Structure from Uncalibrated Images: Structure from Motion and Recognition
, 1994
Abstract

Cited by 62 (14 self)
We address the problem of reconstructing 3D space in a projective framework from two or more views, and the problem of artificially generating novel views of the scene from two given views (reprojection). We describe an invariance relation that provides a new description of structure, which we call projective depth; it is captured by a single equation relating image point correspondences across two or more views and the homographies of two arbitrary virtual planes. The framework is based on knowledge of feature correspondences across views, is linear and extremely simple, and the computation of structure readily extends to overdetermination using multiple views. Experimental results demonstrate a high degree of accuracy in both tasks: reconstruction and reprojection.

Keywords: Visual Recognition, 3D Reconstruction from 2D Views, Projective Geometry, Algebraic and Geometric Invariants.

I. Introduction

The geometric relation between objects (or scenes) in the world and their imag...
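The single equation behind projective depth can be illustrated in its one-plane special case (the paper's full relation uses two virtual planes; the sketch below, with our own variable names, fixes one reference plane and shows the plane-plus-parallax decomposition it is built on):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two cameras in canonical form: P = [I | 0], P' = [A | a4].
A, a4 = rng.standard_normal((3, 3)), rng.standard_normal(3)

# A reference plane n.x + d = 0 induces the homography H = A - a4 n^T / d
# between the two views; a4 is the epipole e' (image of the first camera center).
n, d = rng.standard_normal(3), 1.7
H = A - np.outer(a4, n) / d

# For an arbitrary 3D point X the two projections satisfy x' = H x + k e',
# where k = (n.X + d) / d plays the role of the point's projective depth.
X = rng.standard_normal(3)
x = X                # image in view 1 (homogeneous)
xp = A @ X + a4      # image in view 2
k = (n @ X + d) / d
print(np.linalg.norm(xp - (H @ x + k * a4)))   # numerically zero
```

Because the equation is linear in the unknown `k` once `H` and the epipole are known, structure recovery reduces to linear computations and extends directly to more views.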
Relative Affine Structure: Canonical Model for 3D from 2D Geometry and Applications
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1996
Abstract

Cited by 59 (9 self)
We propose an affine framework for perspective views, captured by a single extremely simple equation based on a viewer-centered invariant we call relative affine structure. Via a number of corollaries of our main results we show that our framework unifies previous work, including Euclidean, projective and affine approaches, in a natural and simple way, and introduces new, extremely simple algorithms for the tasks of reconstruction from multiple views, recognition by alignment, and certain image coding applications.
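The "viewer-centered invariant" claim can be checked numerically: with a reference plane fixed in the first view, the scalar coordinate of a scene point in the relation x' = H x + k e' does not depend on which second camera is used. A small sketch (our own fixed-scale setup; with real, independently scaled image measurements the invariance additionally requires normalizing by a reference point):

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference plane n.x + d = 0, first camera P = [I | 0], scene point X.
n, d = rng.standard_normal(3), 2.3
X = rng.standard_normal(3)

def affine_coordinate(A, a4):
    """Project X with a second camera [A | a4], then recover k from
    x' = H x + k e', where H is the plane homography and e' = a4."""
    H = A - np.outer(a4, n) / d
    xp = A @ X + a4
    r = xp - H @ X          # the parallax term k * e'
    return (r @ a4) / (a4 @ a4)

# The coordinate k comes out the same for any choice of second view:
ks = [affine_coordinate(rng.standard_normal((3, 3)), rng.standard_normal(3))
      for _ in range(4)]
print(ks)   # four numerically identical values
```

The value depends only on the point and the reference plane, which is why a single equation serves reconstruction, alignment, and coding alike.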
Relative Affine Structure: Theory and Application to 3D Reconstruction From Perspective Views
 In IEEE Conference on Computer Vision and Pattern Recognition
, 1994
Abstract

Cited by 58 (12 self)
We propose an affine framework for perspective views, captured by a single extremely simple equation based on a viewer-centered invariant we call relative affine structure. Via a number of corollaries of our main results we show that our framework unifies previous work, including Euclidean, projective and affine approaches, in a natural and simple way. Finally, the main results were applied to a real image sequence for the purpose of 3D reconstruction from 2D views.

1 Introduction

The introduction of affine and projective tools into the field of computer vision has brought increased activity in the fields of structure from motion and recognition by alignment in recent years. The emerging realization is that non-metric information, although weaker than the information provided by depth maps and rigid camera geometries, is nonetheless useful in the sense that the framework may provide simpler algorithms, camera calibration is not required, and more freedom in picture-taking is allowed ...
Linear subspace methods for recovering translation direction. Spatial Vision in Humans and Robots
, 1993
Abstract

Cited by 39 (1 self)
The image motion field for an observer moving through a static environment depends on the observer's translational and rotational velocities along with the distances to surface points. Given such a motion field as input, we have recently introduced subspace methods for the recovery of the observer's motion and the depth structure of the scene. This class of methods involves splitting the equations describing the motion field into separate equations for the observer's translational direction, the rotational velocity, and the relative depths. The resulting equations can then be solved successively, beginning with the equations for the translational direction. Here we concentrate on this first step. In earlier work, a linear method was shown to provide a biased estimate of the translational direction. We discuss the source of this bias and show how it can be effectively removed. The consequence is that the observer's velocity and the relative depths to points in the scene can all be recovered by successively solving three linear problems.
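The splitting idea can be made concrete on a synthetic motion field: for a candidate translation direction, the rotation and inverse depths enter the motion-field equations linearly, so they can be eliminated by least squares; the residual vanishes only at the true direction. This is a minimal illustration of that elimination (our own setup, not the paper's bias-corrected estimator):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic motion field (focal length 1): known translation t, rotation w, depths.
N = 40
x, y = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
p = rng.uniform(0.2, 1.0, N)          # inverse depths 1/Z
t_true = np.array([0.3, -0.2, 1.0])
w_true = np.array([0.02, -0.01, 0.03])

def flow(t, w, p):
    """Standard perspective motion-field equations."""
    u = p * (-t[0] + x * t[2]) + x*y*w[0] - (1 + x**2)*w[1] + y*w[2]
    v = p * (-t[1] + y * t[2]) + (1 + y**2)*w[0] - x*y*w[1] - x*w[2]
    return np.concatenate([u, v])

U = flow(t_true, w_true, p)

def residual(t):
    """Least-squares residual for candidate translation t, after eliminating
    the linear unknowns: the rotation w and the inverse depths 1/Z_i."""
    M = np.zeros((2 * N, N + 3))
    for i in range(N):
        M[i, i]     = -t[0] + x[i] * t[2]      # u_i coefficient of 1/Z_i
        M[N + i, i] = -t[1] + y[i] * t[2]      # v_i coefficient of 1/Z_i
        M[i, N:]     = [x[i]*y[i], -(1 + x[i]**2),  y[i]]
        M[N + i, N:] = [1 + y[i]**2, -x[i]*y[i],   -x[i]]
    sol, *_ = np.linalg.lstsq(M, U, rcond=None)
    return np.linalg.norm(M @ sol - U)

print(residual(t_true))                      # ~0 at the true direction
print(residual(np.array([1.0, 0.5, 1.0])))   # nonzero elsewhere
```

Minimizing this residual over unit directions is the "first step" the abstract discusses; the subsequent linear solves recover the rotation and the relative depths.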
Euclidean constraints for uncalibrated reconstruction
 In Proceedings Fourth International Conference on Computer Vision
, 1993
Abstract

Cited by 39 (4 self)
It is possible to recover the three-dimensional structure of a scene using images taken with uncalibrated cameras and pixel correspondences between these images. But such a reconstruction can only be performed up to a projective transformation of the 3D space. Therefore, constraints have to be put on the reconstructed data in order to get the reconstruction in Euclidean space. Such constraints arise from knowledge of the scene: location of points, geometrical constraints on lines, etc. We discuss here the kinds of constraints that have to be added and show how they can be fed into a general framework. Experimental results on real data prove the feasibility, and experiments on simulated data address the accuracy of the results.
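The simplest instance of such a constraint is knowing the Euclidean coordinates of enough scene points: a projective reconstruction differs from the Euclidean one by an unknown 4x4 collineation, which five or more known points pin down linearly. A sketch of that upgrade step (our own DLT-style formulation, not the paper's general framework):

```python
import numpy as np

rng = np.random.default_rng(5)

# Ground truth: known Euclidean points, and a projective reconstruction that
# differs by an unknown 4x4 collineation plus an arbitrary scale per point.
X_e = np.vstack([rng.standard_normal((3, 6)), np.ones((1, 6))])  # 6 known points
H_true = rng.standard_normal((4, 4))
X_p = H_true @ X_e
X_p *= rng.uniform(0.5, 2.0, 6)          # per-point projective scales

def dlt_rows(Xp, Y):
    """Linear constraints on G from the correspondence Y ~ G Xp:
    Y[i]*(G Xp)[j] - Y[j]*(G Xp)[i] = 0 for all pairs i < j."""
    rows = []
    for i in range(4):
        for j in range(i + 1, 4):
            r = np.zeros((4, 4))
            r[j] = Y[i] * Xp
            r[i] -= Y[j] * Xp
            rows.append(r.ravel())
    return rows

A = np.vstack([dlt_rows(X_p[:, k], X_e[:, k]) for k in range(6)])
_, _, Vt = np.linalg.svd(A)
G = Vt[-1].reshape(4, 4)                 # one-dimensional nullspace -> G

# G maps the projective frame back to the Euclidean one (up to global scale):
Y = G @ X_p
Y /= Y[3]                                # dehomogenize
print(np.abs(Y - X_e).max())             # numerically zero
```

Other scene knowledge (collinearity, coplanarity, known lengths) contributes constraints of the same linear or polynomial flavor, which is what the general framework collects.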
Recognition by Prototypes
 International Journal of Computer Vision
, 1992
Abstract

Cited by 30 (1 self)
A scheme for recognizing 3D objects from single 2D images is introduced. The scheme proceeds in two stages. In the first stage, the categorization stage, the image is compared to prototype objects. For each prototype, the view that most resembles the image is recovered, and, if the view is found to be similar to the image, the class identity of the object is determined. In the second stage, the identification stage, the observed object is compared to the individual models of its class, where classes are expected to contain objects with relatively similar shapes. For each model, a view that matches the image is sought.
Accurate Projective Reconstruction
, 1993
Abstract

Cited by 26 (2 self)
It is possible to recover the three-dimensional structure of a scene using images taken with uncalibrated cameras and pixel correspondences. But such a reconstruction can only be computed up to a projective transformation of the 3D space. Therefore, constraints have to be added to the reconstructed data in order to get the reconstruction in Euclidean space. Such constraints arise from knowledge of the scene: location of points, geometrical constraints on lines, etc. We first discuss the types of constraints that have to be added, then show how they can be fed into a general framework. Experiments prove that the accuracy needed for industrial applications is reachable when measurements in the image have subpixel accuracy. Therefore, we show how a real camera can be mapped into an accurate projective camera and how accurate point detection improves the reconstruction results.

1 Introduction

One of the principal goals of research in computer vision is to enable machines to per...
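The role of subpixel point detection can be illustrated with the standard parabola-fit refinement, one common way to push a detector below integer-pixel accuracy (a generic sketch; the paper's own detector may differ):

```python
import numpy as np

def subpixel_peak(f, i):
    """Quadratic interpolation around an integer argmax i: fit a parabola to
    f[i-1], f[i], f[i+1] and return the offset of its vertex in (-0.5, 0.5)."""
    denom = f[i - 1] - 2 * f[i] + f[i + 1]
    return 0.5 * (f[i - 1] - f[i + 1]) / denom

# A detector response whose true maximum sits at x = 3.3, sampled at integers.
xs = np.arange(7.0)
f = -(xs - 3.3) ** 2
i = int(np.argmax(f))           # integer argmax: 3
print(i + subpixel_peak(f, i))  # refined location: 3.3 (exact for a parabola)
```

For real corner or edge responses the refinement is approximate rather than exact, but it is precisely this kind of subpixel measurement that the abstract identifies as necessary for industrial-grade reconstruction accuracy.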