Results 1-10 of 162
Determining the Epipolar Geometry and its Uncertainty: A Review
 International Journal of Computer Vision
, 1998
"... Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two i ..."
Abstract

Cited by 400 (9 self)
 Add to MetaCart
(Show Context)
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if the images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
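The estimation techniques reviewed in this paper all enforce the epipolar constraint x2ᵀ F x1 = 0 on corresponding points. As an illustrative sketch (not the authors' released software), the linear eight-point estimate with Hartley normalization can be written in NumPy; the function names here are ours:

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: center at the centroid, scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def eight_point(x1, x2):
    """Linear estimate of F from n >= 8 correspondences, enforcing x2^T F x1 = 0."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the homogeneous system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 (singularity) constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1  # undo the normalizing transforms
    return F / np.linalg.norm(F)
```

The first SVD extracts the least-squares null vector of the stacked constraint matrix; the second enforces the singularity constraint the abstract refers to.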
A Factorization Based Algorithm for Multi-Image Projective Structure and Motion
, 1996
"... . We propose a method for the recovery of projective shape and motion from multiple images of a scene by the factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of ..."
Abstract

Cited by 268 (16 self)
 Add to MetaCart
We propose a method for the recovery of projective shape and motion from multiple images of a scene by the factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of this paper is a practical method for the recovery of these scalings, using only fundamental matrices and epipoles estimated from the image data. The resulting projective reconstruction algorithm runs quickly and provides accurate reconstructions. Results are presented for simulated and real images.

1 Introduction

In the last few years, the geometric and algebraic relations between uncalibrated views have found lively interest in the computer vision community. A first key result states that, from two uncalibrated views, one can recover the 3D structure of a scene up to an unknown projective transformation [Fau92, HGC92]. The information one needs to do so is entirely contained in the fundam...
Autocalibration and the absolute quadric
 in Proc. IEEE Conf. Computer Vision, Pattern Recognition
, 1997
"... We describe a new method for camera autocalibration and scaled Euclidean structure and motion, from three or more views taken by a moving camera with fixed but unknown intrinsic parameters. The motion constancy of these is used to rectify an initial projective reconstruction. Euclidean scene structu ..."
Abstract

Cited by 254 (7 self)
 Add to MetaCart
(Show Context)
We describe a new method for camera autocalibration and scaled Euclidean structure and motion, from three or more views taken by a moving camera with fixed but unknown intrinsic parameters. The motion constancy of these is used to rectify an initial projective reconstruction. Euclidean scene structure is formulated in terms of the absolute quadric: the singular dual 3D quadric (rank-3 matrix) giving the Euclidean dot-product between plane normals. This is equivalent to the traditional absolute conic but simpler to use. It encodes both affine and Euclidean structure, and projects very simply to the dual absolute image conic, which encodes camera calibration. Requiring the projection to be constant gives a bilinear constraint between the absolute quadric and image conic, from which both can be recovered either nonlinearly or quasi-linearly from the images. Calibration and Euclidean structure follow easily. The nonlinear method is stabler, faster, more accurate and more general than the quasi-linear one. It is based on a general constrained optimization technique, sequential quadratic programming, that may well be useful in other vision problems.
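The projection relation underlying this bilinear constraint can be stated compactly in standard notation (a sketch from the abstract, not the paper's full derivation): with K_i the intrinsics of view i, ω*_i the dual image of the absolute conic, and Ω*∞ the absolute dual quadric,

```latex
\omega^{*}_{i} \;=\; K_i K_i^{\top} \;\simeq\; P_i \,\Omega^{*}_{\infty}\, P_i^{\top},
\qquad \operatorname{rank}\!\left(\Omega^{*}_{\infty}\right) = 3 .
```

With constant intrinsics, ω*_i is the same matrix in every view, so each view contributes bilinear constraints linking the entries of Ω*∞ and ω*.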
3D Model Acquisition from Extended Image Sequences
, 1995
"... This paper describes the extraction of 3D geometrical data from image sequences, for the purpose of creating 3D models of objects in the world. The approach is uncalibrated  camera internal parameters and camera motion are not known or required. Processing an image sequence is underpinned by token ..."
Abstract

Cited by 239 (29 self)
 Add to MetaCart
This paper describes the extraction of 3D geometrical data from image sequences, for the purpose of creating 3D models of objects in the world. The approach is uncalibrated: camera internal parameters and camera motion are not known or required. Processing an image sequence is underpinned by token correspondences between images. We utilise matching techniques which are both robust (detecting and discarding mismatches) and fully automatic. The matched tokens are used to compute 3D structure, which is initialised as it appears and then recursively updated over time. We describe a novel robust estimator of the trifocal tensor, based on a minimum number of token correspondences across an image triplet; and a novel tracking algorithm in which corners and line segments are matched over image triplets in an integrated framework. Experimental results are provided for a variety of scenes, including outdoor scenes taken with a hand-held camcorder. Quantitative statistics are included to assess...
Linear Pushbroom Cameras
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1994
"... Modelling th# push broom sensors commonly used in satellite imagery is quite di#cult and computationally intensive due to th# complicated motion ofth# orbiting satellite with respect to th# rotating earth# In addition, th# math#46 tical model is quite complex, involving orbital dynamics, andh#(0k is ..."
Abstract

Cited by 172 (6 self)
 Add to MetaCart
(Show Context)
Modelling the pushbroom sensors commonly used in satellite imagery is quite difficult and computationally intensive due to the complicated motion of the orbiting satellite with respect to the rotating earth. In addition, the mathematical model is quite complex, involving orbital dynamics, and hence is difficult to analyze. In this paper, a simplified model of a pushbroom sensor (the linear pushbroom model) is introduced. It has the advantage of computational simplicity while at the same time giving very accurate results compared with the full orbiting pushbroom model. Methods are given for solving the major standard photogrammetric problems for the linear pushbroom sensor. Simple non-iterative solutions are given for the following problems: computation of the model parameters from ground-control points; determination of relative model parameters from image correspondences between two images; scene reconstruction given image correspondences and ground-control points. In addition, the linear pushbroom model leads to theoretical insights that will be approximately valid for the full model as well. The epipolar geometry of linear pushbroom cameras is investigated and shown to be totally different from that of a perspective camera. Nevertheless, a matrix analogous to the essential matrix of perspective cameras is shown to exist for linear pushbroom sensors. From this it is shown that a scene is determined up to an affine transformation from two views with linear pushbroom cameras. Keywords: pushbroom sensor, satellite image, essential matrix, photogrammetry, camera model. The research described in this paper has been supported by DARPA Contract #MDA972-91-C-0053.

1 Real pushbroom sensors are commonly used in satellite cameras, notably the SPOT satellite, for the generatio...
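The linear pushbroom model can be sketched as follows (notation ours, inferred from the abstract: M a 3×4 matrix, X a homogeneous world point): the along-track image coordinate u is obtained linearly, without perspective division, while the across-track coordinate v is obtained perspectively,

```latex
\begin{pmatrix} u \\ w\,v \\ w \end{pmatrix}
\;=\; M \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}.
```

This mixed orthographic/perspective structure is what makes the model's epipolar geometry differ from that of an ordinary perspective camera.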
Robust Parameterization and Computation of the Trifocal Tensor
 Image and Vision Computing
, 1997
"... The constraint that rigid motion places on the image positions of points and lines over three views is captured by the trifocal tensor. This paper demonstrates a novel robust estimator of the trifocal tensor, based on a minimum number of correspondences across an image triplet. In addition, it i ..."
Abstract

Cited by 126 (25 self)
 Add to MetaCart
(Show Context)
The constraint that rigid motion places on the image positions of points and lines over three views is captured by the trifocal tensor. This paper demonstrates a novel robust estimator of the trifocal tensor, based on a minimum number of correspondences across an image triplet. In addition, it is shown how the robust estimate can be used to find a minimal parameterization that enforces the constraints between the elements of the tensor. The matching techniques used to estimate the tensor are both robust (detecting and discarding mismatches) and fully automatic. Results are given for real image sequences.

1 Introduction

The trifocal tensor plays a similar role for three views to that played by the fundamental matrix for two. It encapsulates all the (projective) geometric constraints between three views that are independent of scene structure. The tensor only depends on the motion between views and the internal parameters of the cameras, but it can be computed from image corre...
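Robust estimation from a minimal number of correspondences follows the general RANSAC pattern: draw a minimal sample, fit a hypothesis, count the consensus set, and refit on the best set. Since a trifocal minimal solver is lengthy, the loop is illustrated here on the simpler problem of 2D line fitting; the sampling logic is the same and all names are ours:

```python
import numpy as np

def ransac_line(pts, n_iters=200, thresh=0.05, rng=None):
    """RANSAC: fit a 2D line from minimal (2-point) samples, keep the largest consensus set."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        n = np.array([-d[1], d[0]])       # normal to the hypothesised line
        norm = np.linalg.norm(n)
        if norm < 1e-12:                  # degenerate (coincident) sample
            continue
        n = n / norm
        resid = np.abs((pts - p) @ n)     # perpendicular point-to-line distances
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final total-least-squares refit on the consensus set.
    in_pts = pts[best_inliers]
    c = in_pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(in_pts - c)
    return c, Vt[0], best_inliers
```

Mismatches (outliers) never enter the final fit, which is the "detecting and discarding mismatches" property the abstract describes.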
Factorization methods for projective structure and motion
 In IEEE Conf. Computer Vision & Pattern Recognition
, 1996
"... This paper describes a family of factorizationbased algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the TomasiKanade algorithm from affine to fully perspective cameras, and fro ..."
Abstract

Cited by 116 (5 self)
 Add to MetaCart
(Show Context)
This paper describes a family of factorization-based algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the Tomasi-Kanade algorithm from affine to fully perspective cameras, and from points to lines. They make no restrictive assumptions about scene or camera geometry, and unlike most existing reconstruction methods they do not rely on ‘privileged’ points or images. All of the available image data is used, and each feature in each image is treated uniformly. The key to projective factorization is the recovery of a consistent set of projective depths (scale factors) for the image points: this is done using fundamental matrices and epipoles estimated from the image data. We compare the performance of the new techniques with several existing ones, and also describe an approximate factorization method that gives similar results to SVD-based factorization, but runs much more quickly for large problems.
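Once consistent projective depths have been recovered, the rescaled measurement matrix has rank 4 and factors into stacked camera matrices and homogeneous points, each defined up to a common projectivity. A minimal sketch of that final step (assuming the depths are already known; the depth-recovery step via fundamental matrices is omitted):

```python
import numpy as np

def projective_factorize(W):
    """Factor a correctly rescaled measurement matrix W (3m x n) into cameras and points.

    Assumes projective depths have already been applied, so W has rank 4.
    Returns P (3m x 4 stacked cameras) and X (4 x n homogeneous points),
    each defined only up to a common 4x4 projective transformation.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    P = U[:, :4] * S[:4]   # absorb the singular values into the camera factor
    X = Vt[:4]
    return P, X
```

Truncating the SVD to rank 4 is also the natural place to suppress noise when the depths are only approximately consistent.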
Novel View Synthesis in Tensor Space
 In Proc. of IEEE Conference on Computer Vision and Pattern Recognition
, 1997
"... We present a new method for synthesizing novel views of a 3D scene from few model images in full correspondence. The core of this work is the derivation of a tensorial operator that describes the transformation from a given tensor of three views to a novel tensor of a new configuration of three view ..."
Abstract

Cited by 115 (8 self)
 Add to MetaCart
(Show Context)
We present a new method for synthesizing novel views of a 3D scene from few model images in full correspondence. The core of this work is the derivation of a tensorial operator that describes the transformation from a given tensor of three views to a novel tensor of a new configuration of three views. By repeated application of the operator on a seed tensor with a sequence of desired virtual camera positions we obtain a chain of warping functions (tensors) from the set of model images to create the desired virtual views.

1. Introduction

This paper addresses the problem of synthesizing a novel image, from an arbitrary viewing position, given a small number of model images (registered by means of an optic-flow engine) of the 3D scene. The most significant aspect of our approach is the ability to synthesize images that are far away from the viewing positions of the sample model images without ever computing explicitly any 3D information about the scene. This property provides a multi-imag...
Lines and Points in Three Views and the Trifocal Tensor
, 1997
"... This paper disc#274# the basic role of the trifoc al tensor insc#37 rec# nstr uc#r# n from three views. This 3 3 tensor plays a role in the analysis of sc#422 from three views analogous to the role played by the fundamental matrix in the twoviewc ase. In partic ular, the trifoc al tensor may ..."
Abstract

Cited by 91 (2 self)
 Add to MetaCart
This paper discusses the basic role of the trifocal tensor in scene reconstruction from three views. This 3×3×3 tensor plays a role in the analysis of scenes from three views analogous to the role played by the fundamental matrix in the two-view case. In particular, the trifocal tensor may be computed by a linear algorithm from a set of 13 line correspondences in three views. It is further shown in this paper that the trifocal tensor is essentially identical to a set of coefficients introduced by Shashua to effect point transfer in the three-view case. This observation means that the 13-line algorithm may be extended to allow for the computation of the trifocal tensor given any mixture of sufficiently many line and point correspondences. From the trifocal tensor the camera matrices of the images may be computed, and the scene may be reconstructed. For unrelated uncalibrated cameras, this reconstruction will be unique up to projectivity. Thus, projective reconstruction of a set of lines and points may be carried out linearly from three views.
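The transfer relations effected by these coefficients can be written in standard tensor notation (a sketch in the usual conventions, not reproduced from this paper): with T_i^{jk} the trifocal tensor, x^i a point in the first view, and l'_j, l''_k lines in the second and third views,

```latex
x''^{\,k} \;=\; x^{i}\, l'_{j}\, T_i^{\;jk}
\qquad\text{(point transfer to view three)},
\qquad
l_{i} \;=\; l'_{j}\, l''_{k}\, T_i^{\;jk}
\qquad\text{(line transfer to view one)},
```

with summation over repeated indices. Each line correspondence across the triplet thus gives linear constraints on the 27 tensor entries, which is what makes the 13-line algorithm linear.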
Affine Structure from Line Correspondences with Uncalibrated Affine Cameras
 IEEE Trans. Pattern Analysis and Machine Intelligence
, 1997
"... This paper presents a linear algorithm for recovering 3D affine shape and motion from line correspondences with uncalibrated affine cameras. The algorithm requires a minimum of seven line correspondences over three views. The key idea is the introduction of a onedimensional projective camera. This ..."
Abstract

Cited by 83 (9 self)
 Add to MetaCart
(Show Context)
This paper presents a linear algorithm for recovering 3D affine shape and motion from line correspondences with uncalibrated affine cameras. The algorithm requires a minimum of seven line correspondences over three views. The key idea is the introduction of a one-dimensional projective camera. This converts 3D affine reconstruction of "line directions" into 2D projective reconstruction of "points". In addition, a line-based factorisation method is also proposed to handle redundant views. Experimental results both on simulated and real image sequences validate the robustness and the accuracy of the algorithm.
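The one-dimensional projective camera referred to here maps 3D directions (points of P²) to image directions (points of P¹). A sketch in our notation: for an affine camera whose linear part is a 2×3 matrix M, a 3D line direction d projects, up to scale, as

```latex
\lambda \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} \;=\; M_{2\times 3}\, d ,
\qquad d \in \mathbb{P}^{2},\;\; (u_1, u_2)^{\top} \in \mathbb{P}^{1},
```

so each observed line direction is literally the image of a "point" under a 1D projective camera, which is what lets 2D projective reconstruction machinery be reused.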