Results 1–10 of 42
Recursive estimation of motion, structure, and focal length
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1995
Abstract

Cited by 288 (11 self)
We present a formulation for recursive recovery of motion, pointwise structure, and focal length from feature correspondences tracked through an image sequence. In addition to adding focal length to the state vector, several representational improvements are made over earlier structure from motion formulations, yielding a stable and accurate estimation framework which applies uniformly to both true perspective and orthographic projection. Results on synthetic and real imagery illustrate the performance of the estimator.
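The recursive estimator this abstract describes is Kalman-filter style: each new frame's measurements refine the running state estimate. As a loose toy illustration of that recursive update pattern only (the paper's actual state vector covers motion, pointwise structure, and focal length, with nonlinear measurement models), here is a scalar measurement update with made-up numbers:

```python
# Toy scalar Kalman measurement update, illustrating the recursive-estimation
# pattern only; the paper's state and models are far richer. All values are
# made up for illustration.

def kf_update(x, P, z, R):
    """Blend prior estimate x (variance P) with measurement z (variance R)."""
    K = P / (P + R)            # Kalman gain: how much to trust z over x
    x_new = x + K * (z - x)    # corrected estimate
    P_new = (1.0 - K) * P      # uncertainty shrinks after each update
    return x_new, P_new

x, P = 0.0, 1.0                 # vague initial estimate
for z in [1.2, 0.9, 1.1, 1.0]:  # noisy measurements of a value near 1.0
    x, P = kf_update(x, P, z, 0.5)
print(round(x, 3), round(P, 3))
```

Each call blends the prior estimate with a new measurement in proportion to their variances, so the estimate converges and the uncertainty shrinks as frames arrive.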
The Fundamental matrix: theory, algorithms, and stability analysis
 International Journal of Computer Vision
, 1995
Abstract

Cited by 263 (13 self)
In this paper we analyze in some detail the geometry of a pair of cameras, i.e. a stereo rig. Contrary to what has been done in the past and is still done currently, for example in stereo or motion analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of the principal points, pixel aspect ratio, and focal lengths). This is important for two reasons. First, it is more realistic in applications where these parameters may vary according to the task (active vision). Second, the general case considered here captures all the relevant information that is necessary for establishing correspondences between two pairs of images. This information is fundamentally projective and is hidden in a confusing manner in the commonly used formalism of the Essential matrix introduced by Longuet-Higgins [40]. This paper clarifies the projective nature of the correspondence problem in stereo and shows that the epipolar geometry can be summarized in one 3 × 3 ma...
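The epipolar geometry the paper summarizes in a single 3 × 3 matrix F is the constraint x2ᵀ F x1 = 0 on corresponding image points. A minimal sketch, using the special case of a purely translating camera, where F reduces to the skew-symmetric matrix of the epipole (all values below are illustrative, not estimated from data):

```python
# Sketch of the epipolar constraint x2^T F x1 = 0 encoded by the fundamental
# matrix. For a purely translating camera F is the skew-symmetric matrix
# [e]_x of the epipole e; the epipole and points here are made-up examples.

def skew(e):
    """Skew-symmetric matrix [e]_x, so that [e]_x v = e x v."""
    return [[0.0, -e[2], e[1]],
            [e[2], 0.0, -e[0]],
            [-e[1], e[0], 0.0]]

def epipolar_residual(F, x1, x2):
    """x2^T F x1; zero (up to noise) for a true correspondence."""
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))

e = [1.0, 0.0, 0.0]   # epipole at infinity: horizontal camera translation
F = skew(e)
x1 = [2.0, 3.0, 1.0]  # point in image 1 (homogeneous coordinates)
x2 = [3.5, 3.0, 1.0]  # shifted along its horizontal epipolar line
print(epipolar_residual(F, x1, x2))
```

A point moved off its epipolar line (e.g. a vertical shift) yields a nonzero residual, which is what makes the constraint usable for rejecting false correspondences.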
An Image-Based Approach to Three-Dimensional Computer Graphics
, 1997
Abstract

Cited by 195 (4 self)
The conventional approach to three-dimensional computer graphics produces images from geometric scene descriptions by simulating the interaction of light with matter. My research explores an alternative approach that replaces the geometric scene description with perspective images and replaces the simulation process with data interpolation. I derive an image-warping equation that maps the visible points in a reference image to their correct positions in any desired view. This mapping from reference image to desired image is determined by the center-of-projection and pinhole-camera model of the two images and by a generalized disparity value associated with each point in the reference image. This generalized disparity value, which represents the structure of the scene, can be determined from point correspondences between multiple reference images. The image-warping equation alone is insufficient to synthesize desired images because multiple reference-image points may map to a single point. I derive a new visibility algorithm that determines a drawing order for the image warp. This algorithm results in correct visibility for the desired image independent of the reference image’s contents. The utility of the image-based approach can be enhanced with a more general pinhole-camera
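One common algebraic form of such a warp, given here only as a hedged sketch (the dissertation's own derivation differs in detail), maps a reference pixel by a homography plus a parallax term scaled by its generalized disparity. H, e, and delta below are made-up toy values:

```python
# Sketch of a homography-plus-parallax warp: a reference pixel x1 (homogeneous)
# maps to x2 ~ H x1 + delta * e, where H is a homography, e an epipole
# direction, and delta the pixel's generalized disparity. All values are
# illustrative, not derived from a real camera pair.

def warp(H, e, delta, x1):
    Hx = [sum(H[i][j] * x1[j] for j in range(3)) for i in range(3)]
    x2 = [Hx[i] + delta * e[i] for i in range(3)]
    return (x2[0] / x2[2], x2[1] / x2[2])  # dehomogenize to pixel coords

H = [[1.0, 0.0, 5.0],   # toy homography: pure image translation
     [0.0, 1.0, -3.0],
     [0.0, 0.0, 1.0]]
e = [2.0, 1.0, 0.0]     # toy epipole direction (at infinity)
print(warp(H, e, 0.0, [10.0, 20.0, 1.0]))  # delta = 0: pure homography
```

Points with delta = 0 (infinitely far away) move only by the homography; nearer points (larger delta) pick up an extra parallax shift along the epipole direction, which is the structure-dependent part of the warp.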
Canonical representations for the geometries of multiple projective views
 Computer Vision and Image Understanding 64 (2
, 1996
Algebraic Functions For Recognition
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1994
Abstract

Cited by 157 (29 self)
In the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct reprojection method that cuts through the computations of camera transformation, scene structure, and epipolar geometry. Moreover, the direct method is linear and sets a new lower theoretical bound on the minimal number of points that are required for a linear solution for the task of reprojection. The proof of the central result may be of further interest as it demonstrates certain regularities across homographies of the plane and introduces new view invariants. Experiments on simulated and real image data were conducted, including a comparative analysis with epipolar intersection and the linear combination methods, with results indicating a greater degree of robustness in practice and a higher level of performance in reprojection tasks. Keywords: Visual Recognition, Al...
What Can Two Images Tell Us About a Third One?
 International Journal of Computer Vision
, 1996
Abstract

Cited by 115 (5 self)
This paper discusses the problem of predicting image features in an image from image features in two other images and the epipolar geometry between the three images. We adopt the most general camera model of perspective projection and show that a point can be predicted in the third image as a bilinear function of its images in the first two cameras, that the tangents to three corresponding curves are related by a trilinear function, and that the curvature of a curve in the third image is a linear function of the curvatures at the corresponding points in the other two images. Our analysis relies heavily on the use of the fundamental matrix, which has been recently introduced [7], and on the properties of a special plane which we call the trifocal plane. We thus answer completely the following question: given two views of an object, what would a third view look like? The question and its answer bear upon several areas of computer vision: stereo, motion analysis, and model-based object re...
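The point prediction the abstract describes can be sketched, under simplifying assumptions, as epipolar transfer: intersect the two epipolar lines that views 1 and 2 induce in view 3. The fundamental matrices below are illustrative rank-2 placeholders, not matrices estimated from real images:

```python
# Hedged sketch of epipolar transfer: predict a point in view 3 as the
# intersection of the epipolar lines induced by its positions in views 1
# and 2. F13 and F23 are illustrative placeholders only.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    """Homogeneous cross product: the point where two lines meet."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def transfer(F13, F23, x1, x2):
    l1 = mat_vec(F13, x1)   # epipolar line of x1 in view 3
    l2 = mat_vec(F23, x2)   # epipolar line of x2 in view 3
    x3 = cross(l1, l2)      # their intersection, in homogeneous coords
    return (x3[0] / x3[2], x3[1] / x3[2])

F13 = [[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]
F23 = [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, -1.0, 0.0]]
x1 = [3.0, 4.0, 1.0]
x2 = [5.0, 4.0, 1.0]
print(transfer(F13, F23, x1, x2))
```

The construction degenerates when the two lines coincide, i.e. for points on the trifocal plane, which is exactly the special plane the paper analyzes.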
3D Scene Representation as a Collection of Images and Fundamental Matrices
, 1994
Abstract

Cited by 77 (0 self)
In this report, we address the problem of the prediction of new views of a given scene from existing weakly or fully calibrated views called reference views. Our method does not make use of a three-dimensional model of the scene, but of the existing relations between the images. The new views are represented in the reference views by a viewpoint and a retinal plane, i.e. by four points which can be chosen interactively. From this representation and from the constraints between the images, we derive an algorithm to predict the new views. We discuss the advantages of this method compared to the commonly used scheme: 3D reconstruction-projection. We show some experimental results with synthetic and real data. Keywords: 3D scene representation, multi-view stereo, image synthesis. This work was partially supported by DRET contract No 91815/DRET/EAR and by the EEC under Esprit project 6448, Viva.
Recovery of Ego-Motion Using Image Stabilization
, 1994
Abstract

Cited by 72 (9 self)
A method for computing the 3D camera motion (the ego-motion) in a static scene is introduced, which is based on computing the 2D image motion of a single image region directly from image intensities. The computed image motion of this image region is used to register the images so that the detected image region appears stationary. The resulting displacement field for the entire scene between the registered frames is affected only by the 3D translation of the camera. After canceling the effects of the camera rotation by using such 2D image registration, the 3D camera translation is computed by finding the focus-of-expansion in the translation-only set of registered frames. This step is followed by computing the camera rotation to complete the computation of the ego-motion.
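Once rotation is cancelled, the residual displacements radiate from the focus of expansion, so in the noise-free case two flow vectors already pin it down as a line intersection. A toy sketch with synthetic flows generated from a known FOE (not the paper's estimation procedure, which fits many noisy measurements):

```python
# Sketch: with rotation cancelled by 2D registration, residual displacements
# form a radial field about the focus of expansion (FOE). In this noise-free
# toy example, two flow vectors fix the FOE as a 2x2 line intersection.

def foe_from_two_flows(p1, d1, p2, d2):
    """Intersect the lines through p_i along direction d_i."""
    # Each line: d_y * x - d_x * y = d_y * p_x - d_x * p_y
    a11, a12, b1 = d1[1], -d1[0], d1[1] * p1[0] - d1[0] * p1[1]
    a21, a22, b2 = d2[1], -d2[0], d2[1] * p2[0] - d2[0] * p2[1]
    det = a11 * a22 - a12 * a21          # zero if the flows are parallel
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return (x, y)

foe_true = (4.0, 2.0)                    # synthetic ground truth
p1, p2 = (7.0, 6.0), (1.0, 5.0)
d1 = (p1[0] - foe_true[0], p1[1] - foe_true[1])  # flow points away from FOE
d2 = (p2[0] - foe_true[0], p2[1] - foe_true[1])
print(foe_from_two_flows(p1, d1, p2, d2))
```

With real, noisy flow fields one would instead solve the stacked line constraints from many points in a least-squares (or robust) sense rather than intersecting just two.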
Projective Structure from Uncalibrated Images: Structure from Motion and Recognition
, 1994
Abstract

Cited by 67 (14 self)
We address the problem of reconstructing 3D space in a projective framework from two or more views, and the problem of artificially generating novel views of the scene from two given views (reprojection). We describe an invariance relation which provides a new description of structure, which we call projective depth, and which is captured by a single equation relating image point correspondences across two or more views and the homographies of two arbitrary virtual planes. The framework is based on knowledge of correspondence of features across views, is linear, extremely simple, and the computation of structure readily extends to overdetermination using multiple views. Experimental results demonstrate a high degree of accuracy in both tasks: reconstruction and reprojection. Keywords: Visual Recognition, 3D Reconstruction from 2D Views, Projective Geometry, Algebraic and Geometric Invariants.

I. Introduction

The geometric relation between objects (or scenes) in the world and their imag...
Recovery of Ego-Motion Using Region Alignment
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Abstract

Cited by 65 (8 self)
A method for computing the 3D camera motion (the ego-motion) in a static scene is described, where initially a detected 2D motion between two frames is used to align corresponding image regions. We prove that such a 2D registration removes all effects of camera rotation, even for those image regions that remain misaligned. The resulting residual parallax displacement field between the two region-aligned images is an epipolar field centered at the FOE (Focus-of-Expansion). The 3D camera translation is recovered from the epipolar field. The 3D camera rotation is recovered from the computed 3D translation and the detected 2D motion. The decomposition of image motion into a 2D parametric motion and residual epipolar parallax displacements avoids many of the inherent ambiguities and instabilities associated with decomposing the image motion into its rotational and translational components, and hence makes the computation of ego-motion or 3D structure estimation more robust.