Results 1–10 of 36
Determining the Epipolar Geometry and its Uncertainty: A Review
 International Journal of Computer Vision
, 1998
Cited by 319 (7 self)
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if the images' internal parameters are known, or the fundamental matrix otherwise. It captures all the geometric information contained in the two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
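As a concrete illustration of the constraint this survey is organized around, the following sketch (synthetic data, not code from the reviewed software) checks the two defining properties of a fundamental matrix F: it is singular, and F x1 is the epipolar line in the second image on which any match of x1 must lie.

```python
import numpy as np

# A fundamental matrix is any rank-2 (singular) 3x3 matrix; here we
# fabricate one by zeroing the smallest singular value of a random matrix.
rng = np.random.default_rng(0)
U, s, Vt = np.linalg.svd(rng.standard_normal((3, 3)))
F = U @ np.diag([s[0], s[1], 0.0]) @ Vt          # det(F) = 0 by construction

x1 = np.array([100.0, 50.0, 1.0])                # homogeneous point, image 1
l2 = F @ x1                                      # epipolar line, image 2
# Any correct match x2 lies on l2, i.e. x2 . l2 = 0.  As a check, take the
# point where l2 meets the line at infinity (0, 0, 1):
x2 = np.cross(l2, np.array([0.0, 0.0, 1.0]))
epipolar_residual = x2 @ F @ x1                  # ~0: the epipolar constraint
```

In practice F is of course estimated from noisy point matches rather than fabricated; the review compares exactly those estimation techniques.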
The Fundamental matrix: theory, algorithms, and stability analysis
 International Journal of Computer Vision
, 1995
Cited by 232 (14 self)
In this paper we analyze in some detail the geometry of a pair of cameras, i.e. a stereo rig. Contrary to what has been done in the past and is still done currently, for example in stereo or motion analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of the principal points, pixel aspect ratios and focal lengths). This is important for two reasons. First, it is more realistic in applications where these parameters may vary according to the task (active vision). Second, the general case considered here captures all the relevant information that is necessary for establishing correspondences between two pairs of images. This information is fundamentally projective and is hidden in a confusing manner in the commonly used formalism of the essential matrix introduced by Longuet-Higgins [40]. This paper clarifies the projective nature of the correspondence problem in stereo and shows that the epipolar geometry can be summarized in one 3×3 ma...
Canonic Representations for the Geometries of Multiple Projective Views
 Computer Vision and Image Understanding
, 1994
Cited by 178 (8 self)
This work is in the context of motion and stereo analysis. It presents a new unified representation which will be useful when dealing with multiple views in the case of uncalibrated cameras. Several levels of information might be considered, depending on the availability of information. Among other things, an algebraic description of the epipolar geometry of N views is introduced, as well as a framework for camera self-calibration, calibration updating, and structure from motion in an image sequence taken by a camera which is zooming and moving at the same time. We show how a special decomposition of a set of two or three general projection matrices, called canonical, enables us to build geometric descriptions for a system of cameras which are invariant with respect to a given group of transformations. These representations are minimal and capture completely the properties of each level of description considered: Euclidean (in the context of calibration and in the context of structure from motion, which we distinguish clearly), affine, and projective, which we also relate to each other. In the last case, a new decomposition of the well-known fundamental matrix is obtained. Dependencies, which appear when three or more views are available, are studied in the context of the canonic decomposition, and new composition formulas are established. The theory is illustrated by tutorial examples with real images.
In Defense of the Eight-Point Algorithm
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Cited by 132 (1 self)
Abstract—The fundamental matrix is a basic tool in the analysis of scenes taken with two uncalibrated cameras, and the eight-point algorithm is a frequently cited method for computing the fundamental matrix from a set of eight or more point matches. It has the advantage of simplicity of implementation. The prevailing view is, however, that it is extremely susceptible to noise and hence virtually useless for most purposes. This paper challenges that view, by showing that by preceding the algorithm with a very simple normalization (translation and scaling) of the coordinates of the matched points, results are obtained comparable with the best iterative algorithms. This improved performance is justified by theory and verified by extensive experiments on real images. Index Terms—Fundamental matrix, eight-point algorithm, condition number, epipolar structure, stereo vision.
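The normalization the paper advocates is simple enough to sketch directly. The code below is a compact reading of the method (a sketch, not the author's implementation): translate each point set to its centroid, scale so the average distance from the origin is sqrt(2), solve the homogeneous linear system by SVD, enforce the rank-2 constraint, and undo the normalization.

```python
import numpy as np

def normalize(pts):
    """Translate points (N x 2) so their centroid is at the origin and scale
    so the mean distance from the origin is sqrt(2); return the homogeneous
    normalized points and the 3x3 transform T."""
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def eight_point(p1, p2):
    """Normalized eight-point estimate of F from matches p1 <-> p2 (N x 2)."""
    q1, T1 = normalize(p1)
    q2, T2 = normalize(p2)
    # Each match contributes one row of the linear system A f = 0,
    # where f is the row-major vectorization of F.
    A = np.array([[x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1.0]
                  for (x1, y1, _), (x2, y2, _) in zip(q1, q2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the singularity (rank-2) constraint.
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0.0]) @ Vt
    # Undo the normalization: q2^T F q1 = 0  =>  p2^T (T2^T F T1) p1 = 0.
    return T2.T @ F @ T1
```

The point of the normalization is conditioning: with raw pixel coordinates the columns of A differ by several orders of magnitude, which is precisely the numerical defect the paper shows the simple rescaling removes.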
In Defence of the 8-point Algorithm
Cited by 131 (3 self)
The fundamental matrix is a basic tool in the analysis of scenes taken with two uncalibrated cameras, and the 8-point algorithm is a frequently cited method for computing the fundamental matrix from a set of 8 or more point matches. It has the advantage of simplicity of implementation. The prevailing view is, however, that it is extremely susceptible to noise and hence virtually useless for most purposes. This paper challenges that view, by showing that by preceding the algorithm with a very simple normalization (translation and scaling) of the coordinates of the matched points, results are obtained comparable with the best iterative algorithms. This improved performance is justified by theory and verified by extensive experiments on real images.
Self-calibration of stationary cameras
 International Journal of Computer Vision
, 1997
Cited by 100 (1 self)
A new practical method is given for the self-calibration of a camera. In this method, at least three images are taken from the same point in space with different orientations of the camera, and calibration is computed from an analysis of point matches between the images. The method requires no knowledge of the orientations of the camera. Calibration is based on the image correspondences only. This method differs fundamentally from previous results by Maybank and Faugeras on self-calibration using the epipolar structure of image pairs. In the method of this paper, there is no epipolar structure since all images are taken from the same point in space, and so Maybank and Faugeras's method does not apply. Since the images are all taken from the same point in space, determination of point matches is considerably easier than for images taken with a moving camera, since problems of occlusion or change of aspect or illumination do not occur. A non-iterative calibration algorithm is given that works with any number of images. An iterative refinement method that may be used with noisy data is also described. The algorithm is implemented and validated on several sets of synthetic and real image data.
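The core constraint behind this class of method can be sketched as follows. For a camera rotating about its centre, image pairs are related by homographies H_i = K R_i K^{-1}; after scaling each so det H_i = 1, the matrix C = K K^T satisfies H_i C H_i^T = C, which is linear in the entries of C. The sketch below (synthetic data, a simplified reading of the linear algorithm, not the paper's implementation) solves that system for at least two rotations about different axes and recovers K by a triangular factorization.

```python
import numpy as np

def calibrate_from_rotations(Hs):
    """Recover the calibration K of a purely rotating camera from inter-image
    homographies H_i = K R_i K^{-1} (at least two, about different axes).
    Each det-normalized H_i constrains C = K K^T via H_i C H_i^T = C."""
    rows = []
    for H in Hs:
        H = H / np.cbrt(np.linalg.det(H))      # scale so det(H) = 1
        # Row-major vec: vec(H C H^T) = kron(H, H) vec(C).
        rows.append(np.kron(H, H) - np.eye(9))
    _, _, Vt = np.linalg.svd(np.vstack(rows))  # 1-D null space, generically
    C = Vt[-1].reshape(3, 3)
    C = 0.5 * (C + C.T)                        # symmetrize
    if C[2, 2] < 0:                            # fix the overall sign
        C = -C
    C = C / C[2, 2]
    # Upper-triangular K with K K^T = C via a "reversed" Cholesky:
    J = np.eye(3)[::-1]                        # exchange (flip) matrix
    L = np.linalg.cholesky(J @ C @ J)          # J C J = L L^T, L lower
    K = J @ L @ J                              # upper triangular, K K^T = C
    return K / K[2, 2]
```

With noisy homographies the same null vector is taken in a least-squares sense, which is where the paper's iterative refinement takes over.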
A unifying framework for structure and motion recovery from image sequences
 In Proc. 5th Int'l Conf. on Computer Vision
, 1995
Cited by 73 (10 self)
This paper proposes a statistical framework that enables 3D structure and motion to be computed optimally from an image sequence, on the assumption that feature measurement errors are independent and Gaussian distributed. The analysis and results demonstrate that computing both camera/scene motion and 3D structure is essential to computing either with any accuracy. Having computed optimal estimates of structure and motion over a small number of initial images, a recursive version of the algorithm (previously reported) recomputes suboptimal estimates given new image data. The algorithm is designed explicitly for real-time implementation, and the complexity is proportional to the number of tracked features. 3D projective, affine and Euclidean models of structure and motion recovery have been implemented, incorporating both point and line features into the computation. The framework can handle any feature type and camera model that may be encapsulated as a projection equation from scene to image.
Mobile Robot Navigation Using Active Vision
, 1999
Cited by 60 (6 self)
Active cameras provide a navigating vehicle with the ability to fixate and track features over extended periods of time, and wide fields of view. While it is relatively straightforward to apply fixating vision to tactical, short-term navigation tasks, using serial fixation on a succession of features to provide global information for strategic navigation is more involved. However, active vision is seemingly well-suited to this task: the ability to measure features over such a wide range means that the same ones can be used as a robot makes a wide range of movements. This has advantages for map-building and localisation. The core work of this thesis concerns simultaneous localisation and map-building for a robot with a stereo active head, operating in an unknown environment and using point features in the world as visual landmarks. Importance has been attached to producing maps which are useful for extended periods of navigation. Many map-building methods fail on extended runs because ...
3D Structure from 2D Motion
 IEEE Signal Processing Magazine
, 1999
Cited by 51 (1 self)
this paper to delve into this formalism, further reading can be found in [41] [45]. In the following, we shall discuss its practical implementation and implications in the SfM techniques that have adopted it.
Goaldirected Video Metrology
 Proc. 4th European Conf. on Computer Vision
, 1996
Cited by 40 (5 self)
We investigate the general problem of accurate metrology from uncalibrated video sequences where only partial information is available. We show, via a specific example (plotting the position of a goal-bound soccer ball), that accurate measurements can be obtained, and that both qualitative and quantitative questions about the data can be answered. From two video sequences of an incident captured from different viewpoints, we compute a novel (overhead) view using pairs of corresponding images. Using projective constructs we determine the point at which the vertical line through the ball pierces the ground plane in each frame. Throughout we take care to consider possible sources of error and show how these may be eliminated or neglected, or we derive appropriate uncertainty measures which are propagated via a first-order analysis. 1 Introduction The 1966 World Cup Final at Wembley Stadium, between England and West Germany, produced what is arguably the best known and most controversi...
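The overhead-view construction rests on a standard projective device: a homography between the image and the ground plane, estimated from known ground-plane correspondences (e.g. pitch markings). The sketch below, with invented coordinates, estimates such a homography by the direct linear transform and transfers an image point into plan-view coordinates.

```python
import numpy as np

# Hypothetical data: four pitch landmarks in plan-view coordinates (metres)
# and their observed image positions (pixels).
plan = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 60.0], [0.0, 60.0]])
img = np.array([[50.0, 400.0], [600.0, 420.0], [560.0, 100.0], [80.0, 90.0]])

def homography_from_points(p, q):
    """DLT estimate of H with q ~ H p (both N x 2, N >= 4, no 3 collinear)."""
    rows = []
    for (x, y), (u, v) in zip(p, q):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u*x, u*y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v*x, v*y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 3)

H = homography_from_points(plan, img)   # plan -> image
H_inv = np.linalg.inv(H)                # image -> plan (overhead view)
ball_img = np.array([300.0, 250.0, 1.0])  # observed ball base, homogeneous
ball_plan = H_inv @ ball_img
ball_plan = ball_plan / ball_plan[2]    # plan-view position of the point
```

In the paper's setting the transferred point is the piercing point of the vertical through the ball, and a first-order error propagation through this same mapping yields the uncertainty on the plan-view position.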