Results 1–10 of 53
Determining the Epipolar Geometry and its Uncertainty: A Review
International Journal of Computer Vision, 1998
"... Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, an ..."
Abstract

Cited by 320 (7 self)
 Add to MetaCart
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if the images' internal parameters are known, or the fundamental matrix otherwise. It captures all the geometric information contained in the two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry and provides a complete review of current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed for comparing these techniques. Projective reconstruction is also reviewed. The software we have developed for this review is available on the Internet.
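Among the linear estimation techniques such a review covers, the classical one is the (normalized) 8-point algorithm. A minimal sketch under the usual assumptions (at least eight point correspondences, Hartley-style normalization; the helper names are illustrative, not the paper's):

```python
import numpy as np

def normalize(pts):
    """Translate points to zero mean and scale so the mean distance
    from the origin is sqrt(2); returns homogeneous points and T."""
    centroid = pts.mean(axis=0)
    d = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1]])
    homog = np.column_stack([pts, np.ones(len(pts))])
    return (T @ homog.T).T, T

def eight_point(x1, x2):
    """Estimate F such that x2_h^T F x1_h = 0 for all correspondences."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 (singularity) constraint on F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1   # undo the normalizing transforms
```

With exact correspondences the residuals x2ᵀFx1 are numerically zero; the interesting questions the review addresses are how such estimates degrade under noise and how to quantify their uncertainty.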
A Factorization Based Algorithm for Multi-Image Projective Structure and Motion
1996
"... . We propose a method for the recovery of projective shape and motion from multiple images of a scene by the factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of ..."
Abstract

Cited by 212 (15 self)
 Add to MetaCart
We propose a method for the recovery of projective shape and motion from multiple images of a scene by factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of this paper is a practical method for the recovery of these scalings, using only fundamental matrices and epipoles estimated from the image data. The resulting projective reconstruction algorithm runs quickly and provides accurate reconstructions. Results are presented for simulated and real images.
1 Introduction
In the last few years, the geometric and algebraic relations between uncalibrated views have found lively interest in the computer vision community. A first key result states that, from two uncalibrated views, one can recover the 3D structure of a scene up to an unknown projective transformation [Fau92, HGC92]. The information one needs to do so is entirely contained in the fundam...
An Image-Based Approach to Three-Dimensional Computer Graphics
1997
"... The conventional approach to threedimensional computer graphics produces images from geometric scene descriptions by simulating the interaction of light with matter. My research explores an alternative approach that replaces the geometric scene description with perspective images and replaces the s ..."
Abstract

Cited by 167 (4 self)
 Add to MetaCart
The conventional approach to three-dimensional computer graphics produces images from geometric scene descriptions by simulating the interaction of light with matter. My research explores an alternative approach that replaces the geometric scene description with perspective images and replaces the simulation process with data interpolation. I derive an image-warping equation that maps the visible points in a reference image to their correct positions in any desired view. This mapping from reference image to desired image is determined by the center-of-projection and pinhole-camera model of the two images and by a generalized disparity value associated with each point in the reference image. This generalized disparity value, which represents the structure of the scene, can be determined from point correspondences between multiple reference images. The image-warping equation alone is insufficient to synthesize desired images because multiple reference-image points may map to a single point. I derive a new visibility algorithm that determines a drawing order for the image warp. This algorithm results in correct visibility for the desired image independent of the reference image's contents. The utility of the image-based approach can be enhanced with a more general pinhole-camera ...
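The warping equation itself is not reproduced in the abstract; in the usual homography-plus-parallax form, a reference pixel is mapped by a fixed 3×3 matrix plus a generalized-disparity-scaled epipole term. A hedged sketch of that form (the names H, e and the exact parameterization are illustrative assumptions, not the thesis's notation):

```python
import numpy as np

def warp_point(x_ref, disparity, H, e):
    """Forward-warp a homogeneous reference pixel to the desired view:
    x_des ~ H @ x_ref + disparity * e, then dehomogenize.
    H plays the role of the inter-view homography, e of the epipole
    term, and the per-pixel disparity encodes scene structure."""
    x = H @ x_ref + disparity * e
    return x[:2] / x[2]
```

Because several reference pixels can land on the same desired pixel, a warp like this must be combined with a drawing order (the visibility algorithm the abstract describes) to resolve occlusion.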
On the geometry and algebra of the point and line correspondences between N images
1995
"... We explore the geometric and algebraic relations that exist between correspondences of points and lines in an arbitrary number of images. We propose to use the formalism of the GrassmannCayley algebra as the simplest way to make both geometric and algebraic statements in a very synthetic and effect ..."
Abstract

Cited by 149 (6 self)
 Add to MetaCart
We explore the geometric and algebraic relations that exist between correspondences of points and lines in an arbitrary number of images. We propose to use the formalism of the Grassmann-Cayley algebra as the simplest way to make both geometric and algebraic statements in a very synthetic and effective way (i.e. allowing actual computation if needed). We have a fairly complete picture of the situation in the case of points: there are only three types of algebraic relations which are satisfied by the coordinates of the images of a 3D point: bilinear relations, arising when we consider pairs of images among the N, which are the well-known epipolar constraints; trilinear relations, arising when we consider triples of images among the N; and quadrilinear relations, arising when we consider four-tuples of images among the N. In the case of lines, we show how the traditional perspective projection equation can be suitably generalized and that in the case of three images there exist two in...
Algebraic Functions For Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994
"... In the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignment  yielding a direct reprojection method that cuts through the computations of camera transformation, sce ..."
Abstract

Cited by 147 (29 self)
 Add to MetaCart
In the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct reprojection method that cuts through the computations of camera transformation, scene structure and epipolar geometry. Moreover, the direct method is linear and sets a new lower theoretical bound on the minimal number of points that are required for a linear solution for the task of reprojection. The proof of the central result may be of further interest as it demonstrates certain regularities across homographies of the plane and introduces new view invariants. Experiments on simulated and real image data were conducted, including a comparative analysis with epipolar intersection and the linear combination methods, with results indicating a greater degree of robustness in practice and a higher level of performance in reprojection tasks.
Keywords: Visual Recognition, Al...
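The trilinear relationship can be packaged as a 3×3×3 tensor. A sketch using the now-standard trifocal-tensor construction (a canonical first camera P1 = [I | 0], P2 = [A | a4], P3 = [B | b4]; this follows the later standard formulation rather than the paper's original notation):

```python
import numpy as np

def trifocal_tensor(P2, P3):
    """Build T_i^{jk} = A[j,i]*b4[k] - a4[j]*B[k,i] from the second
    and third camera matrices, assuming the first camera is [I | 0]."""
    A, a4 = P2[:, :3], P2[:, 3]
    B, b4 = P3[:, :3], P3[:, 3]
    T = np.empty((3, 3, 3))
    for i in range(3):
        T[i] = np.outer(A[:, i], b4) - np.outer(a4, B[:, i])
    return T
```

For corresponding data, the point-line-line contraction x^i l'_j l''_k T_i^{jk} vanishes for any lines l', l'' through the matching points in the second and third views; such trilinearities are what make linear reprojection possible.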
3D Scene Data Recovery using Omnidirectional Multibaseline Stereo
1995
"... A traditional approach to extracting geometric information from a large scene is to compute multiple 3D depth maps from stereo pairs or direct range finders, and then to merge the 3D data This is not only computationally intensive, but the resulting merged depth maps may be subject to merging erro ..."
Abstract

Cited by 121 (19 self)
 Add to MetaCart
A traditional approach to extracting geometric information from a large scene is to compute multiple 3D depth maps from stereo pairs or direct range finders, and then to merge the 3D data. This is not only computationally intensive, but the resulting merged depth maps may be subject to merging errors, especially if the relative poses between depth maps are not known exactly. The 3D data may also have to be resampled before merging, which adds complexity and potential sources of error. This paper provides a means of directly extracting 3D data covering a very wide field of view, thus bypassing the need to merge numerous depth maps. In our work, cylindrical images are first composited from sequences of images taken while the camera is rotated 360° about a vertical axis. By taking such image panoramas at different camera locations, we can recover 3D data of the scene using a set of simple techniques: feature tracking, an 8-point structure-from-motion algorithm, and...
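Compositing a panorama from a camera rotating about a vertical axis rests on reprojecting each image onto a cylinder. A minimal sketch of that standard cylindrical mapping (focal length f in pixels, image coordinates measured from the principal point; the paper's own compositing pipeline may differ in details):

```python
import math

def to_cylindrical(x, y, f):
    """Map an image-plane point (x, y) to cylindrical coordinates
    (f*theta, v): theta is the angle around the cylinder axis and
    v the scaled height on the cylinder of radius f."""
    theta = math.atan2(x, f)
    v = f * y / math.hypot(x, f)
    return f * theta, v
```

After this mapping, images taken under pure rotation differ only by a horizontal translation, which is what makes the 360° compositing straightforward.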
On Photometric Issues in 3D Visual Recognition From A Single 2D Image
International Journal of Computer Vision, 1997
"... . We describe the problem of recognition under changing illumination conditions and changing viewing positions from a computational and human vision perspective. On the computational side we focus on the mathematical problems of creating an equivalence class for images of the same 3D object undergo ..."
Abstract

Cited by 108 (6 self)
 Add to MetaCart
We describe the problem of recognition under changing illumination conditions and changing viewing positions from a computational and human vision perspective. On the computational side we focus on the mathematical problems of creating an equivalence class for images of the same 3D object undergoing certain groups of transformations, mostly those due to changing illumination, and briefly discuss those due to changing viewing positions. The computational treatment culminates in proposing a simple scheme for recognizing, via alignment, an image of a familiar object taken from a novel viewing position and a novel illumination condition. On the human vision side, the paper is motivated by empirical evidence, inspired by Mooney images of faces, suggesting that a relatively high level of visual processing is involved in compensating for photometric sources of variability, and furthermore, that certain limitations on the admissible representations of image information may exist. The psycho...
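The illumination equivalence class has a simple linear core: for a shadow-free Lambertian surface, pixel intensity is albedo times (normal · light), so images under all point light sources span a three-dimensional linear space, and a novel-light image is a linear combination of three basis images. A synthetic sketch of that subspace property (attached shadows, i.e. the max(·, 0) clipping the paper actually has to contend with, are deliberately ignored here):

```python
import numpy as np

# Synthetic shadow-free Lambertian scene: 100 surface points.
rng = np.random.default_rng(0)
normals = rng.standard_normal((100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = rng.uniform(0.2, 1.0, size=100)

def render(light):
    """Linear Lambertian image: albedo * (normal . light), no clipping."""
    return albedo * (normals @ light)

# Three basis images under the coordinate-axis light directions.
basis = np.column_stack([render(e) for e in np.eye(3)])
# An image under a novel light lies in the span of the basis images.
novel = render(np.array([0.3, 0.5, 0.8]))
coeffs, *_ = np.linalg.lstsq(basis, novel, rcond=None)
reconstructed = basis @ coeffs   # reproduces the novel-light image
```

With axis-aligned basis lights the recovered coefficients are exactly the novel light direction; real images break this clean linearity through shadows, which is where the paper's analysis begins.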
Factorization methods for projective structure and motion
In IEEE Conf. Computer Vision & Pattern Recognition, 1996
"... This paper describes a family of factorizationbased algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the TomasiKanade algorithm from affine to fully perspective cameras, and fro ..."
Abstract

Cited by 106 (5 self)
 Add to MetaCart
This paper describes a family of factorization-based algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the Tomasi-Kanade algorithm from affine to fully perspective cameras, and from points to lines. They make no restrictive assumptions about scene or camera geometry, and unlike most existing reconstruction methods they do not rely on ‘privileged’ points or images. All of the available image data are used, and each feature in each image is treated uniformly. The key to projective factorization is the recovery of a consistent set of projective depths (scale factors) for the image points: this is done using fundamental matrices and epipoles estimated from the image data. We compare the performance of the new techniques with several existing ones, and also describe an approximate factorization method that gives similar results to SVD-based factorization but runs much more quickly for large problems.
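Once consistent projective depths λ_ip are available, the rescaled measurements λ_ip·x_ip stack into a 3m×n matrix of rank 4, and motion and shape fall out of a truncated SVD. A minimal numpy sketch of that final factorization step (the depth recovery itself, done via fundamental matrices and epipoles, is omitted):

```python
import numpy as np

def projective_factorize(W):
    """Factor the rescaled measurement matrix W (3m x n), whose columns
    stack lambda_ip * x_ip over m views, into projective motion
    (3m x 4) and shape (4 x n) by rank-4 SVD truncation."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    motion = U[:, :4] * S[:4]   # scale the first four left vectors
    shape = Vt[:4]
    return motion, shape
```

With correct depths, W has rank exactly 4 (up to noise) and motion @ shape reproduces it; the factors are defined only up to a common 4×4 projective transformation, matching the projective ambiguity of the reconstruction.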
Novel View Synthesis in Tensor Space
In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 1997
"... We present a new method for synthesizing novel views of a 3D scene from few model images in full correspondence. The core of this work is the derivation of a tensorial operator that describes the transformation from a given tensor of three views to a novel tensor of a new configuration of three view ..."
Abstract

Cited by 96 (8 self)
 Add to MetaCart
We present a new method for synthesizing novel views of a 3D scene from a few model images in full correspondence. The core of this work is the derivation of a tensorial operator that describes the transformation from a given tensor of three views to a novel tensor of a new configuration of three views. By repeated application of the operator on a seed tensor with a sequence of desired virtual camera positions, we obtain a chain of warping functions (tensors) from the set of model images to create the desired virtual views.
1. Introduction
This paper addresses the problem of synthesizing a novel image, from an arbitrary viewing position, given a small number of model images (registered by means of an optic-flow engine) of the 3D scene. The most significant aspect of our approach is the ability to synthesize images that are far away from the viewing positions of the sample model images without ever computing explicitly any 3D information about the scene. This property provides a multi-imag...
Trilinearity of Three Perspective Views and its Associated Tensor
In Proceedings of the International Conference on Computer Vision, 1995
"... It has been established that certain trilinear froms of three perspective views give rise to a tensor of 27 intrinsic coefficients [11]. We show in this paper that a permutation of the the trilinear coefficients produces three homography matrices (projective transformations of planes) of three disti ..."
Abstract

Cited by 66 (15 self)
 Add to MetaCart
It has been established that certain trilinear forms of three perspective views give rise to a tensor of 27 intrinsic coefficients [11]. We show in this paper that a permutation of the trilinear coefficients produces three homography matrices (projective transformations of planes) of three distinct intrinsic planes, respectively. This, in turn, yields the result that 3D invariants are recovered directly, simply by appropriate arrangement of the tensor's coefficients. On a secondary level, we show new relations between the fundamental matrix, epipoles, Euclidean structure and the trilinear tensor. On the practical side, the new results extend the existing envelope of methods of 3D recovery from 2D views; for example, new linear methods that cut through the epipolar geometry, and new methods for computing epipolar geometry using redundancy available across many views.
1 Introduction
Given that three-dimensional (3D) objects in the world are modeled by point sets, their proje...