Results 1–10 of 157
A Factorization Based Algorithm for Multi-Image Projective Structure and Motion
1996
Cited by 260 (16 self)
Abstract: We propose a method for the recovery of projective shape and motion from multiple images of a scene by the factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of this paper is a practical method for the recovery of these scalings, using only fundamental matrices and epipoles estimated from the image data. The resulting projective reconstruction algorithm runs quickly and provides accurate reconstructions. Results are presented for simulated and real images. 1 Introduction In the last few years, the geometric and algebraic relations between uncalibrated views have found lively interest in the computer vision community. A first key result states that, from two uncalibrated views, one can recover the 3D structure of a scene up to an unknown projective transformation [Fau92, HGC92]. The information one needs to do so is entirely contained in the fundam...
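The depth-recovery step this abstract describes can be illustrated numerically. Under one common convention (x1^T F12 x2 = 0, with e12 the epipole of view 2's centre in view 1), the projective depths satisfy F12 (λ2 x2) = e12 × (λ1 x1), which gives λ1 in closed form from λ2. A minimal sketch with synthetic cameras, not the paper's exact implementation (all names hypothetical):

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Synthetic pair of projective cameras and a 3D point (homogeneous).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])             # [I | 0]
P2 = np.hstack([np.eye(3), np.array([[1.0], [0], [0]])])  # [I | t]
X = np.array([0.0, 0.0, 5.0, 1.0])

x1 = P1 @ X      # true depth lambda1 = 1 for this representative
x2 = P2 @ X      # true depth lambda2 = 1

# Fundamental matrix with the convention x1^T F12 x2 = 0, where
# e12 = image of camera 2's centre in view 1.
C2 = np.array([-1.0, 0.0, 0.0, 1.0])   # null vector of P2
e12 = P1 @ C2
F12 = skew(e12) @ P1 @ np.linalg.pinv(P2)

# Depth propagation: given lambda2, recover lambda1 in closed form.
lam2 = 1.0
num = np.cross(e12, x1) @ (F12 @ (lam2 * x2))
den = np.cross(e12, x1) @ np.cross(e12, x1)
lam1 = num / den
print(lam1)   # recovers the true depth, 1.0
```

With consistent depths for every point in every view, the rescaled measurement matrix factorizes as in the abstract.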
A survey of image-based rendering techniques
In Videometrics, SPIE, 1999
Cited by 169 (11 self)
Abstract: In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space. Keywords: Image-based rendering, survey.
On the geometry and algebra of the point and line correspondences between N images
1995
Cited by 162 (6 self)
Abstract: We explore the geometric and algebraic relations that exist between correspondences of points and lines in an arbitrary number of images. We propose to use the formalism of the Grassmann-Cayley algebra as the simplest way to make both geometric and algebraic statements in a very synthetic and effective way (i.e. allowing actual computation if needed). We have a fairly complete picture of the situation in the case of points: there are only three types of algebraic relations which are satisfied by the coordinates of the images of a 3D point: bilinear relations arising when we consider pairs of images among the N, and which are the well-known epipolar constraints, trilinear relations arising when we consider triples of images among the N, and quadrilinear relations arising when we consider four-tuples of images among the N. In the case of lines, we show how the traditional perspective projection equation can be suitably generalized and that in the case of three images there exist two in...
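The bilinear relation mentioned in this abstract is easy to verify numerically: corresponding points x1, x2 in two views satisfy the epipolar constraint x2^T F x1 = 0, with F computable directly from the camera matrices. A small sketch with synthetic cameras (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(v):
    """Cross-product matrix: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Two synthetic projective cameras.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = rng.standard_normal((3, 4))

# Standard construction F = [e']_x P2 P1^+, where e' = P2 C1 is the
# epipole (image of camera 1's centre in view 2).
C1 = np.array([0.0, 0.0, 0.0, 1.0])          # null vector of P1
F = skew(P2 @ C1) @ (P2 @ np.linalg.pinv(P1))

# Random 3D points: the bilinear constraint x2^T F x1 = 0 holds for all.
X = np.vstack([rng.standard_normal((3, 10)), np.ones((1, 10))])
x1, x2 = P1 @ X, P2 @ X
residuals = np.einsum('ip,ij,jp->p', x2, F, x1)
print(np.max(np.abs(residuals)))   # ~ 0 up to floating-point error
```

The trilinear and quadrilinear relations the abstract refers to are constructed analogously over triples and four-tuples of views.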
On Photometric Issues in 3D Visual Recognition From A Single 2D Image
International Journal of Computer Vision, 1997
Cited by 120 (6 self)
Abstract: We describe the problem of recognition under changing illumination conditions and changing viewing positions from a computational and human vision perspective. On the computational side we focus on the mathematical problems of creating an equivalence class for images of the same 3D object undergoing certain groups of transformations, mostly those due to changing illumination, and briefly discuss those due to changing viewing positions. The computational treatment culminates in proposing a simple scheme for recognizing, via alignment, an image of a familiar object taken from a novel viewing position and a novel illumination condition. On the human vision aspect, the paper is motivated by empirical evidence inspired by Mooney images of faces that suggest a relatively high level of visual processing is involved in compensating for photometric sources of variability, and furthermore, that certain limitations on the admissible representations of image information may exist. The psycho...
Factorization methods for projective structure and motion
In IEEE Conf. Computer Vision & Pattern Recognition, 1996
Cited by 113 (5 self)
Abstract: This paper describes a family of factorization-based algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the Tomasi-Kanade algorithm from affine to fully perspective cameras, and from points to lines. They make no restrictive assumptions about scene or camera geometry, and unlike most existing reconstruction methods they do not rely on ‘privileged’ points or images. All of the available image data is used, and each feature in each image is treated uniformly. The key to projective factorization is the recovery of a consistent set of projective depths (scale factors) for the image points: this is done using fundamental matrices and epipoles estimated from the image data. We compare the performance of the new techniques with several existing ones, and also describe an approximate factorization method that gives similar results to SVD-based factorization, but runs much more quickly for large problems.
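Once consistent projective depths are in hand, the factorization step itself is a rank-4 truncated SVD of the scaled measurement matrix, in the spirit of Tomasi-Kanade. A hedged sketch with synthetic data, using the true depths in place of estimated ones (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 12                      # number of views, number of points

# Synthetic projective cameras and homogeneous 3D points.
P = rng.standard_normal((m, 3, 4))
X = np.vstack([rng.standard_normal((3, n)), np.ones((1, n))])

# Scaled measurement matrix W (3m x n): stacks lambda_ip * x_ip = P_i X_p.
W = np.vstack([P[i] @ X for i in range(m)])

# W has rank <= 4; SVD recovers motion (3m x 4) and shape (4 x n),
# each determined only up to an unknown 4x4 projective transformation.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
motion = U[:, :4] * s[:4]
shape = Vt[:4]
print(s[4] / s[0])                           # ~ 0: rank is 4
print(np.linalg.norm(W - motion @ shape))    # ~ 0: exact factorization
```

With noisy depths, W is only approximately rank 4 and the truncated SVD gives the least-squares rank-4 fit.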
Novel View Synthesis in Tensor Space
In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 1997
Cited by 113 (8 self)
Abstract: We present a new method for synthesizing novel views of a 3D scene from few model images in full correspondence. The core of this work is the derivation of a tensorial operator that describes the transformation from a given tensor of three views to a novel tensor of a new configuration of three views. By repeated application of the operator on a seed tensor with a sequence of desired virtual camera positions we obtain a chain of warping functions (tensors) from the set of model images to create the desired virtual views. 1. Introduction This paper addresses the problem of synthesizing a novel image, from an arbitrary viewing position, given a small number of model images (registered by means of an optic-flow engine) of the 3D scene. The most significant aspect of our approach is the ability to synthesize images that are far away from the viewing positions of the sample model images without ever computing explicitly any 3D information about the scene. This property provides a multi-imag...
Lines and Points in Three Views and the Trifocal Tensor
1997
Cited by 89 (2 self)
Abstract: This paper discusses the basic role of the trifocal tensor in scene reconstruction from three views. This 3 × 3 × 3 tensor plays a role in the analysis of scenes from three views analogous to the role played by the fundamental matrix in the two-view case. In particular, the trifocal tensor may be computed by a linear algorithm from a set of 13 line correspondences in three views. It is further shown in this paper that the trifocal tensor is essentially identical to a set of coefficients introduced by Shashua to effect point transfer in the three-view case. This observation means that the 13-line algorithm may be extended to allow for the computation of the trifocal tensor given any mixture of sufficiently many line and point correspondences. From the trifocal tensor the camera matrices of the images may be computed, and the scene may be reconstructed. For unrelated uncalibrated cameras, this reconstruction will be unique up to projectivity. Thus, projective reconstruction of a set of lines and points may be carried out linearly from three views.
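The point transfer mentioned in this abstract can be illustrated directly. Under one standard convention, for canonical cameras P1 = [I | 0] the tensor slices are T_i = a_i b4^T - a4 b_i^T (a_i, b_i the columns of P2, P3), and a point transfers to the third view via x3_k ~ x1_i l2_j T[i, j, k] for any line l2 through x2 other than the epipolar line. A sketch under those conventions, with a deliberately simple configuration:

```python
import numpy as np

# Canonical cameras: P1 = [I | 0]; P2, P3 differ by pure translations.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
P3 = np.hstack([np.eye(3), np.array([[0.0], [1.0], [0.0]])])

# Trifocal tensor slices T_i = a_i b4^T - a4 b_i^T (valid when P1 = [I|0]).
T = np.stack([np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
              for i in range(3)])

# Project a 3D point into the three views.
X = np.array([0.0, 0.0, 5.0, 1.0])
x1, x2, x3 = P1 @ X, P2 @ X, P3 @ X

# Any line through x2 other than the epipolar line works; take the
# vertical line l2 = (x2_3, 0, -x2_1), which satisfies l2 . x2 = 0.
l2 = np.array([x2[2], 0.0, -x2[0]])

# Point transfer: x3_k ~ sum_ij x1_i * l2_j * T[i, j, k].
x3_transfer = np.einsum('i,j,ijk->k', x1, l2, T)

# Check equality up to scale (homogeneous coordinates): cross product ~ 0.
print(np.cross(x3_transfer, x3))   # ~ (0, 0, 0)
```

The 13-line (and mixed line/point) computation of T in the paper is a linear least-squares problem over the 27 tensor entries.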
Affine Structure from Line Correspondences with Uncalibrated Affine Cameras
IEEE Trans. Pattern Analysis and Machine Intelligence, 1997
Cited by 81 (9 self)
Abstract: This paper presents a linear algorithm for recovering 3D affine shape and motion from line correspondences with uncalibrated affine cameras. The algorithm requires a minimum of seven line correspondences over three views. The key idea is the introduction of a one-dimensional projective camera. This converts 3D affine reconstruction of "line directions" into 2D projective reconstruction of "points". In addition, a line-based factorisation method is also proposed to handle redundant views. Experimental results both on simulated and real image sequences validate the robustness and the accuracy of the algorithm.
Lens Distortion Calibration Using Point Correspondences
In Proc. CVPR, 1996
Cited by 76 (3 self)
Abstract: This paper describes a new method for lens distortion calibration using only point correspondences in multiple views, without the need to know either the 3D location of the points or the camera locations. The standard lens distortion model is a model of the deviations of a real camera from the ideal pinhole or projective camera model. Given multiple views of a set of corresponding points taken by ideal pinhole cameras there exist epipolar and trilinear constraints among pairs and triplets of these views. In practice, due to noise in the feature detection and due to lens distortion these constraints do not hold exactly and we get some error. The calibration is a search for the lens distortion parameters that minimize this error. Using simulation and experimental results with real images we explore the properties of this method. We describe the use of this method with the standard lens distortion model, radial and decentering, but it could also be used with any other parametric distortio...
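The radial part of the standard model this abstract refers to maps an ideal (pinhole) point to a distorted one as x_d = c + (1 + k1 r² + k2 r⁴)(x_u − c), with r the radius from the distortion centre c; calibration then searches over k1, k2 (and optionally decentering terms) to minimize the epipolar/trilinear residual. A minimal sketch of the model and its iterative inverse, with arbitrarily chosen coefficients (not the paper's code):

```python
import numpy as np

def distort(x, center, k1, k2):
    """Apply the standard radial lens-distortion model to 2D points."""
    d = x - center
    r2 = np.sum(d**2, axis=-1, keepdims=True)
    return center + d * (1.0 + k1 * r2 + k2 * r2**2)

def undistort(x, center, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration (small distortion)."""
    u = x.copy()
    for _ in range(iters):
        d = u - center
        r2 = np.sum(d**2, axis=-1, keepdims=True)
        u = center + (x - center) / (1.0 + k1 * r2 + k2 * r2**2)
    return u

center = np.array([0.0, 0.0])
k1, k2 = -0.1, 0.01            # hypothetical coefficients
pts = np.array([[0.3, 0.2], [-0.5, 0.4], [0.1, -0.6]])

roundtrip = undistort(distort(pts, center, k1, k2), center, k1, k2)
print(np.max(np.abs(roundtrip - pts)))   # ~ 0: inverse recovers the points
```

In the calibration loop, `undistort` would be applied to detected features before evaluating the multi-view constraints, and k1, k2 adjusted to shrink the residual.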
The Geometry of Projective Reconstruction I: Matching Constraints and the Joint Image
1995
Cited by 73 (9 self)
Abstract: This is a paper on the geometry of vision so there will be ‘too many equations, no algorithms and no real images’. However it also represents a powerful new way to think about projective vision and that does have practical consequences. To understand this paper you will need to be comfortable with the tensorial approach to projective geometry: appendix A sketches the necessary background. This approach will be unfamiliar to many vision researchers, although a mathe...