Results 1–10 of 201
Bundle adjustment – a modern synthesis
 Vision Algorithms: Theory and Practice, LNCS, 2000
Abstract

Cited by 386 (12 self)
This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
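As a toy illustration of the refinement step the survey describes, the sketch below refines noisy 3D point estimates against two cameras with a robust (Huber) cost via `scipy.optimize.least_squares`. Note the simplification: full bundle adjustment optimizes structure *and* viewing parameters jointly, typically with the sparse Newton methods the paper covers; here the cameras are held fixed to keep the sketch small, and the whole synthetic setup is illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Two fixed projection matrices (full bundle adjustment would refine
# these jointly with the structure; held fixed here for brevity).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])

def project(P, X):
    Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous coordinates
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

rng = np.random.default_rng(0)
X_true = rng.uniform([-1, -1, 4], [1, 1, 6], size=(5, 3))   # 5 scene points
obs = [project(P, X_true) + 1e-3 * rng.standard_normal((5, 2))
       for P in (P1, P2)]                        # noisy image observations

def residuals(params):
    X = params.reshape(-1, 3)
    return np.concatenate([(project(P, X) - o).ravel()
                           for P, o in zip((P1, P2), obs)])

X0 = X_true + 0.1 * rng.standard_normal(X_true.shape)   # perturbed start
sol = least_squares(residuals, X0.ravel(), loss="huber", f_scale=0.01)
X_ref = sol.x.reshape(-1, 3)
```

The `loss="huber"` argument is one concrete instance of the general robust cost functions the survey develops, in place of plain nonlinear least squares.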
Determining the Epipolar Geometry and its Uncertainty: A Review
 International Journal of Computer Vision, 1998
Abstract

Cited by 320 (7 self)
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
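One of the linear estimation techniques such reviews cover is the normalized 8-point method; the sketch below estimates F from noise-free synthetic correspondences. It is a minimal illustration, not the review's own software, and the function names and synthetic setup are assumptions.

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized 8-point estimate of F from n >= 8 correspondences (sketch)."""
    def normalize(x):
        c = x.mean(axis=0)
        s = np.sqrt(2) / np.linalg.norm(x - c, axis=1).mean()
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        xh = np.hstack([x, np.ones((len(x), 1))])
        return (T @ xh.T).T, T
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # Each correspondence gives one linear equation p2^T F p1 = 0.
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(n1, n2)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # smallest singular vector
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt     # enforce the rank-2 (singularity) constraint
    return T2.T @ F @ T1                        # undo the normalization

# Synthetic check: a camera translating along x, exact correspondences.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (12, 3)) + [0.0, 0.0, 5.0]
x1 = X[:, :2] / X[:, 2:3]
x2 = (X + [1.0, 0.0, 0.0])[:, :2] / X[:, 2:3]
F = eight_point(x1, x2)
F /= np.linalg.norm(F)
```

With exact data the estimated F satisfies the epipolar constraint x2ᵀ F x1 = 0 to machine precision; with noise, this linear estimate is what the more refined (iterative, robust) techniques the review compares would improve upon.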
Euclidean reconstruction from uncalibrated views
 Applications of Invariance in Computer Vision, 1993
Abstract

Cited by 233 (14 self)
The possibility of calibrating a camera from image data alone, based on matched points identified in a series of images by a moving camera, was suggested by Maybank and Faugeras. This result implies the possibility of Euclidean reconstruction from a series of images with a moving camera, or equivalently, Euclidean structure from motion with an uncalibrated camera. No tractable algorithm for implementing their methods for more than three images has been previously reported. This paper gives a practical algorithm for Euclidean reconstruction from several views with the same camera. The algorithm is demonstrated on synthetic and real data and is shown to behave very robustly in the presence of noise, giving excellent calibration and reconstruction results.
A Factorization Based Algorithm for Multi-Image Projective Structure and Motion
, 1996
Abstract

Cited by 212 (15 self)
We propose a method for the recovery of projective shape and motion from multiple images of a scene by the factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of this paper is a practical method for the recovery of these scalings, using only fundamental matrices and epipoles estimated from the image data. The resulting projective reconstruction algorithm runs quickly and provides accurate reconstructions. Results are presented for simulated and real images.
1 Introduction
In the last few years, the geometric and algebraic relations between uncalibrated views have found lively interest in the computer vision community. A first key result states that, from two uncalibrated views, one can recover the 3D structure of a scene up to an unknown projective transformation [Fau92, HGC92]. The information one needs to do so is entirely contained in the fundam...
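The factorization step itself can be sketched in a few lines, assuming the projective depths (the "correct scalings" whose recovery is this paper's contribution) are already known: the scaled measurement matrix then has rank 4 and a truncated SVD splits it into motion and shape. The synthetic setup below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 10                                # views, points
# Random projective cameras and homogeneous scene points.
Ps = [rng.standard_normal((3, 4)) for _ in range(m)]
X = np.vstack([rng.uniform(-1, 1, (3, n)), np.ones((1, n))])

# With correct projective depths lambda_ij, the stacked measurement
# matrix W = [lambda_ij * x_ij] equals [P_i X_j] and has rank 4.
W = np.vstack([P @ X for P in Ps])          # shape (3m, n)

U, S, Vt = np.linalg.svd(W, full_matrices=False)
P_hat = U[:, :4] * S[:4]                    # recovered motion, 3m x 4
X_hat = Vt[:4]                              # recovered projective shape, 4 x n
# P_hat and X_hat reproduce W exactly, but agree with Ps and X only up
# to a common 4x4 projective transformation H: (P H)(H^-1 X) = P X.
```

With noisy or miscalculated scalings W is only approximately rank 4, and the truncated SVD gives the best rank-4 fit in the least-squares sense.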
Canonic Representations for the Geometries of Multiple Projective Views
 Computer Vision and Image Understanding, 1994
Abstract

Cited by 180 (8 self)
This work is in the context of motion and stereo analysis. It presents a new unified representation which will be useful when dealing with multiple views in the case of uncalibrated cameras. Several levels of information might be considered, depending on the availability of information. Among other things, an algebraic description of the epipolar geometry of N views is introduced, as well as a framework for camera self-calibration, calibration updating, and structure from motion in an image sequence taken by a camera which is zooming and moving at the same time. We show how a special decomposition of a set of two or three general projection matrices, called canonical, enables us to build geometric descriptions for a system of cameras which are invariant with respect to a given group of transformations. These representations are minimal and capture completely the properties of each level of description considered: Euclidean (in the context of calibration, and in the context of structure from motion, which we distinguish clearly), affine, and projective, which we also relate to each other. In the last case, a new decomposition of the well-known fundamental matrix is obtained. Dependencies, which appear when three or more views are available, are studied in the context of the canonic decomposition, and new composition formulas are established. The theory is illustrated by tutorial examples with real images.
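At the projective level, one standard canonical camera pair consistent with a given fundamental matrix F is P = [I | 0], P' = [[e']ₓF | e'], where e' is the left epipole. The sketch below verifies this choice numerically; it is one representative of the family of decompositions, not the paper's full stratified (Euclidean/affine/projective) construction, and the setup is illustrative.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix [v]_x such that [v]_x w = v x w."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def canonical_pair(F):
    """One projective-canonical pair consistent with F:
    P = [I | 0], P' = [[e']_x F | e'], e' the left epipole (e'^T F = 0)."""
    e2 = np.linalg.svd(F)[0][:, -1]           # left null vector of F
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2[:, None]])
    return P1, P2

# A generic rank-2 matrix plays the role of F.
rng = np.random.default_rng(3)
U, S, Vt = np.linalg.svd(rng.standard_normal((3, 3)))
F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
P1, P2 = canonical_pair(F)
# For P1 = [I | 0] and P2 = [M | e'], the fundamental matrix of the pair
# is [e']_x M; since [e']_x [e']_x F = -F (with unit e' and e'^T F = 0),
# this recovers F up to scale.
F_rec = skew(P2[:, 3]) @ P2[:, :3]
```

Any other member of the family, P' = [[e']ₓF + e'aᵀ | λe'], yields the same F, which is exactly the projective ambiguity the canonical representations quotient out.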
An Image-Based Approach to Three-Dimensional Computer Graphics
, 1997
Abstract

Cited by 167 (4 self)
The conventional approach to three-dimensional computer graphics produces images from geometric scene descriptions by simulating the interaction of light with matter. My research explores an alternative approach that replaces the geometric scene description with perspective images and replaces the simulation process with data interpolation. I derive an image-warping equation that maps the visible points in a reference image to their correct positions in any desired view. This mapping from reference image to desired image is determined by the center-of-projection and pinhole-camera model of the two images and by a generalized disparity value associated with each point in the reference image. This generalized disparity value, which represents the structure of the scene, can be determined from point correspondences between multiple reference images. The image-warping equation alone is insufficient to synthesize desired images because multiple reference-image points may map to a single point. I derive a new visibility algorithm that determines a drawing order for the image warp. This algorithm results in correct visibility for the desired image independent of the reference image’s contents. The utility of the image-based approach can be enhanced with a more general pinhole-camera
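For a calibrated pinhole pair, a warping equation of this kind can be written in plane-plus-parallax form: with intrinsics K, relative pose (R, t), and generalized disparity δ = 1/depth, the desired-view position is x₂ ~ K R K⁻¹ x₁ + δ K t. The sketch below cross-checks that form against direct reprojection; the specific matrices and symbols are assumptions for illustration, not the dissertation's notation.

```python
import numpy as np

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # intrinsics
R = np.eye(3)                                # rotation to the desired view
t = np.array([0.2, 0.0, 0.0])                # translation to the desired view

def warp(x1, delta):
    """Map a reference pixel to the desired view using its generalized
    disparity delta = 1/depth (plane-plus-parallax form of the warp)."""
    p = K @ R @ np.linalg.inv(K) @ np.append(x1, 1.0) + delta * (K @ t)
    return p[:2] / p[2]

# Cross-check: back-project the pixel to 3D and reproject directly.
x1 = np.array([400.0, 260.0])
depth = 5.0
X = depth * np.linalg.inv(K) @ np.append(x1, 1.0)
x2_direct = K @ (R @ X + t)
x2_direct = x2_direct[:2] / x2_direct[2]
x2_warp = warp(x1, 1.0 / depth)
```

The key property the abstract exploits is visible here: the warp needs only the per-pixel scalar δ, never an explicit 3D model of the scene.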
On the geometry and algebra of the point and line correspondences between N images
, 1995
Abstract

Cited by 149 (6 self)
We explore the geometric and algebraic relations that exist between correspondences of points and lines in an arbitrary number of images. We propose to use the formalism of the Grassmann-Cayley algebra as the simplest way to make both geometric and algebraic statements in a very synthetic and effective way (i.e. allowing actual computation if needed). We have a fairly complete picture of the situation in the case of points: there are only three types of algebraic relations which are satisfied by the coordinates of the images of a 3D point: bilinear relations arising when we consider pairs of images among the N, which are the well-known epipolar constraints; trilinear relations arising when we consider triples of images among the N; and quadrilinear relations arising when we consider four-tuples of images among the N. In the case of lines, we show how the traditional perspective projection equation can be suitably generalized and that in the case of three images there exist two in...
Algebraic Functions For Recognition
 IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994
Abstract

Cited by 147 (29 self)
In the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct reprojection method that cuts through the computations of camera transformation, scene structure and epipolar geometry. Moreover, the direct method is linear and sets a new lower theoretical bound on the minimal number of points that are required for a linear solution for the task of reprojection. The proof of the central result may be of further interest as it demonstrates certain regularities across homographies of the plane and introduces new view invariants. Experiments on simulated and real image data were conducted, including a comparative analysis with epipolar intersection and the linear combination methods, with results indicating a greater degree of robustness in practice and a higher level of performance in reprojection tasks. Keywords: Visual Recognition, Al...
Sequential updating of projective and affine structure from motion
 International Journal of Computer Vision, 1997
Abstract

Cited by 141 (4 self)
A structure from motion algorithm is described which recovers structure and camera position, modulo a projective ambiguity. Camera calibration is not required, and camera parameters such as focal length can be altered freely during motion. The structure is updated sequentially over an image sequence, in contrast to schemes which employ a batch process. A specialisation of the algorithm to recover structure and camera position modulo an affine transformation is described, together with a method to periodically update the affine coordinate frame to prevent drift over time. We describe the constraint used to obtain this specialisation. Structure is recovered from image corners detected and matched automatically and reliably in real image sequences. Results are shown for reference objects and indoor environments, and accuracy of recovered structure is fully evaluated and compared for a number of reconstruction schemes. A specific application of the work is demonstrated: affine structure is used to compute free space maps enabling navigation through unstructured environments and avoidance of obstacles. The path planning involves only affine constructions.
Linear Pushbroom Cameras
 IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994
Abstract

Cited by 140 (6 self)
Modelling the push broom sensors commonly used in satellite imagery is quite difficult and computationally intensive due to the complicated motion of the orbiting satellite with respect to the rotating earth. In addition, the mathematical model is quite complex, involving orbital dynamics, and hence is difficult to analyze. In this paper, a simplified model of a push broom sensor (the linear push broom model) is introduced. It has the advantage of computational simplicity while at the same time giving very accurate results compared with the full orbiting push broom model. Methods are given for solving the major standard photogrammetric problems for the linear push broom sensor. Simple non-iterative solutions are given for the following problems: computation of the model parameters from ground-control points; determination of relative model parameters from image correspondences between two images; scene reconstruction given image correspondences and ground-control points. In addition, the linear push broom model leads to theoretical insights that will be approximately valid for the full model as well. The epipolar geometry of linear push broom cameras is investigated and shown to be totally different from that of a perspective camera. Nevertheless, a matrix analogous to the essential matrix of perspective cameras is shown to exist for linear push broom sensors. From this it is shown that a scene is determined up to an affine transformation from two views with linear push broom cameras. Keywords: push broom sensor, satellite image, essential matrix, photogrammetry, camera model. The research described in this paper has been supported by DARPA Contract #MDA972-91-C-0053. Real push broom sensors are commonly used in satellite cameras, notably the SPOT satellite, for the generatio...
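The structural difference from a perspective camera that the abstract describes can be sketched as follows: under the linear model, one image coordinate (the scan time) is an affine function of the scene point, with no perspective division, while the other coordinate remains perspective within each scan line. The particular matrix and coordinate ordering below are assumptions for illustration, not the paper's parameterization.

```python
import numpy as np

# A 3x4 "camera" matrix for the linear push broom model (illustrative values).
M = np.array([[1.0, 0.2, 0.0, 5.0],
              [0.0, 1.0, 0.1, 0.0],
              [0.0, 0.0, 1.0, 2.0]])

def pushbroom_project(M, X):
    """Linear push broom projection (sketch): the sensor sweeps at constant
    velocity, so the first image coordinate (time of crossing the view
    plane) is affine in X, while the second is perspective within the row."""
    w = M @ np.append(X, 1.0)
    u = w[0]            # affine: no division by depth
    v = w[1] / w[2]     # perspective along the scan line
    return np.array([u, v])
```

This mixed affine/perspective form is why the model's epipolar geometry differs from the perspective case while still admitting an essential-matrix analogue.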