Results 1–10 of 159
Flexible camera calibration by viewing a plane from unknown orientations
In ICCV, 1999
Abstract

Cited by 317 (6 self)
We propose a flexible new technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one step from laboratory environments to real-world use. The corresponding software is available from the author’s Web page.
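The closed-form stage of such plane-based calibration rests on estimating a homography between the model plane and each view. As a hedged sketch (illustrative numpy code, not the software mentioned in the abstract), the standard direct linear transform (DLT) estimate of that homography is:

```python
import numpy as np

def homography_dlt(model_pts, image_pts):
    """Estimate the 3x3 homography mapping planar model points (Z = 0)
    to image points via the direct linear transform (DLT).
    model_pts, image_pts: (N, 2) arrays with N >= 4, no 3 collinear."""
    A = []
    for (X, Y), (u, v) in zip(model_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The homography (up to scale) is the right singular vector of A
    # belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With such homographies from at least two orientations in hand, the intrinsics follow from linear constraints on the image of the absolute conic, before the maximum-likelihood refinement the abstract describes.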
A Factorization Based Algorithm for Multi-Image Projective Structure and Motion
1996
Abstract

Cited by 210 (15 self)
We propose a method for the recovery of projective shape and motion from multiple images of a scene by the factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of this paper is a practical method for the recovery of these scalings, using only fundamental matrices and epipoles estimated from the image data. The resulting projective reconstruction algorithm runs quickly and provides accurate reconstructions. Results are presented for simulated and real images.

1 Introduction

In the last few years, the geometric and algebraic relations between uncalibrated views have found lively interest in the computer vision community. A first key result states that, from two uncalibrated views, one can recover the 3D structure of a scene up to an unknown projective transformation [Fau92, HGC92]. The information one needs to do so is entirely contained in the fundam...
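Once the projective depths (the scalings the paper recovers) are known, the factorization step itself is a rank-4 truncation of the rescaled measurement matrix. A minimal numpy sketch under that assumption (function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def factor_projective(W):
    """Factor a 3m x n matrix W = [lambda_ij * x_ij] of correctly
    rescaled homogeneous image points into stacked 3x4 camera
    matrices P (3m x 4) and homogeneous scene points X (4 x n),
    via a rank-4 SVD truncation.  The pair is recovered only up
    to a common 4x4 projective transformation."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    P = U[:, :4] * s[:4]   # absorb singular values into the cameras
    X = Vt[:4]
    return P, X
```

The recovered P and X reproduce W exactly when the depths are consistent; with noisy depths the truncation gives the best rank-4 approximation in the least-squares sense.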
A survey of image-based rendering techniques
In Videometrics, SPIE, 1999
Abstract

Cited by 136 (8 self)
In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics, in which the 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering and traditional 3D graphics can be united in a joint image and geometry space. Keywords: image-based rendering, survey.
Robust parameter estimation in computer vision
SIAM Review, 1999
Abstract

Cited by 129 (10 self)
Estimation techniques in computer vision applications must estimate accurate model parameters despite small-scale noise in the data, occasional large-scale measurement errors (outliers), and measurements from multiple populations in the same data set. Increasingly, robust estimation techniques, some borrowed from the statistics literature and others described in the computer vision literature, have been used in solving these parameter estimation problems. Ideally, these techniques should effectively ignore the outliers and measurements from other populations, treating them as outliers, when estimating the parameters of a single population. Two frequently used techniques are least-median of ...
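Least-median of squares, the first technique the truncated sentence names, replaces the sum of squared residuals with their median, giving a breakdown point of 50%. A minimal illustrative sketch for robust line fitting (the sampling scheme and parameters are assumptions for the example, not the survey's code):

```python
import numpy as np

def lmeds_line(x, y, trials=500, rng=None):
    """Least-median-of-squares fit of a line y = a*x + b.
    Repeatedly fits a line through a random 2-point sample and keeps
    the candidate whose squared residuals have the smallest median."""
    rng = rng or np.random.default_rng(0)
    best, best_med = None, np.inf
    for _ in range(trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue  # vertical sample, skip
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best_med:
            best_med, best = med, (a, b)
    return best
```

Because the median ignores up to half of the residuals, gross outliers and contaminating populations do not drag the fit, unlike ordinary least squares.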
3D Scene Data Recovery using Omnidirectional Multibaseline Stereo
1995
Abstract

Cited by 121 (18 self)
A traditional approach to extracting geometric information from a large scene is to compute multiple 3D depth maps from stereo pairs or direct range finders, and then to merge the 3D data. This is not only computationally intensive, but the resulting merged depth maps may be subject to merging errors, especially if the relative poses between depth maps are not known exactly. The 3D data may also have to be resampled before merging, which adds additional complexity and potential sources of errors. This paper provides a means of directly extracting 3D data covering a very wide field of view, thus bypassing the need to merge numerous depth maps. In our work, cylindrical images are first composited from sequences of images taken while the camera is rotated 360° about a vertical axis. By taking such image panoramas at different camera locations, we can recover 3D data of the scene using a set of simple techniques: feature tracking, an 8-point structure-from-motion algorithm, and ...
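The 8-point structure-from-motion step referred to above is the classical linear estimate of the fundamental matrix from point correspondences. A hedged numpy sketch of the normalized variant (interface illustrative, not this paper's implementation):

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix F such
    that x2_h^T F x1_h = 0, from (N, 2) correspondences, N >= 8."""
    def normalize(p):
        # Translate centroid to origin, scale mean distance to sqrt(2).
        c = p.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.c_[p, np.ones(len(p))] @ T.T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.stack([np.kron(b, a) for a, b in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint, then undo the normalization.
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0.0]) @ Vt
    return T2.T @ F @ T1
```

The normalization step is what keeps the linear system well-conditioned; without it the raw 8-point algorithm is notoriously sensitive to noise.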
Factorization methods for projective structure and motion
In IEEE Conf. Computer Vision &amp; Pattern Recognition, 1996
Abstract

Cited by 106 (5 self)
This paper describes a family of factorization-based algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the Tomasi-Kanade algorithm from affine to fully perspective cameras, and from points to lines. They make no restrictive assumptions about scene or camera geometry, and unlike most existing reconstruction methods they do not rely on ‘privileged’ points or images. All of the available image data is used, and each feature in each image is treated uniformly. The key to projective factorization is the recovery of a consistent set of projective depths (scale factors) for the image points: this is done using fundamental matrices and epipoles estimated from the image data. We compare the performance of the new techniques with several existing ones, and also describe an approximate factorization method that gives similar results to SVD-based factorization, but runs much more quickly for large problems.
Simultaneous Linear Estimation of Multiple View Geometry and Lens Distortion
2001
Abstract

Cited by 86 (1 self)
A bugbear of uncalibrated stereo reconstruction is that cameras which deviate from the pinhole model have to be pre-calibrated in order to correct for nonlinear lens distortion. If they are not, and point correspondence is attempted using the uncorrected images, the matching constraints provided by the fundamental matrix must be set so loose that point matching is significantly hampered. This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion. This is achieved by (1) changing from the standard radial-lens model to another which (as we show) has equivalent power, but which takes a simpler form in homogeneous coordinates, and (2) expressing fundamental matrix estimation as a Quadratic Eigenvalue Problem (QEP), for which efficient algorithms are well known. We derive the new estimator, and compare its performance against bundle-adjusted calibration-grid data. The new estimator is fast enough to be included in a RANSAC-based matching loop, and we show cases of matching being rendered possible by its use. We show how the same lens can be calibrated in a natural scene where the lack of straight lines precludes most previous techniques. The modification when the multi-view relation is a planar homography or trifocal tensor is described.
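The estimator hinges on posing the problem as a Quadratic Eigenvalue Problem; the generic machinery can be sketched via the standard companion linearization to an ordinary eigenproblem. This is an illustrative solver only, not the paper's specific F-plus-distortion formulation, and it assumes the leading matrix A is invertible:

```python
import numpy as np

def solve_qep(A, B, C):
    """Solve the quadratic eigenvalue problem (l^2 A + l B + C) x = 0
    by companion linearization to a 2n x 2n standard eigenproblem.
    Assumes A is invertible.  Returns eigenvalues and an (n, 2n)
    array whose columns are the corresponding eigenvectors x."""
    n = A.shape[0]
    Ai = np.linalg.inv(A)
    # M acts on z = [l*x; x]: M z = l z encodes the QEP.
    M = np.vstack([np.hstack([-Ai @ B, -Ai @ C]),
                   np.hstack([np.eye(n), np.zeros((n, n))])])
    evals, evecs = np.linalg.eig(M)
    return evals, evecs[n:]   # bottom block of each z carries x
```

Production-quality QEP solvers avoid the explicit inverse and handle singular A, but the linearization idea is the same.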
Modelling and interpretation of architecture from several images
Abstract

Cited by 82 (6 self)
The modelling of three-dimensional (3D) environments has become a requirement for many applications in engineering design, virtual reality, visualisation and entertainment. However, the scale and complexity demanded from such models has risen to the point where the acquisition of 3D models can require a vast amount of specialist time and equipment. Because of this, much research has been undertaken in the computer vision community into automating all or part of the process of acquiring a 3D model from a sequence of images. This thesis focuses specifically on the automatic acquisition of architectural models from short image sequences. An architectural model is defined as a set of planes corresponding to walls which contain a variety of labelled primitives such as doors and windows. As well as a label defining its type, each primitive contains parameters defining its shape and texture. The key advantage of this representation is that the model defines not only geometry and texture, but also an interpretation of the scene. This is crucial as it enables reasoning about the scene; for instance, structure and texture can be inferred in areas of the model which are unseen in any ...
Lines and Points in Three Views and the Trifocal Tensor
1997
Abstract

Cited by 72 (2 self)
This paper discusses the basic role of the trifocal tensor in scene reconstruction from three views. This 3 × 3 × 3 tensor plays a role in the analysis of scenes from three views analogous to the role played by the fundamental matrix in the two-view case. In particular, the trifocal tensor may be computed by a linear algorithm from a set of 13 line correspondences in three views. It is further shown in this paper that the trifocal tensor is essentially identical to a set of coefficients introduced by Shashua to effect point transfer in the three-view case. This observation means that the 13-line algorithm may be extended to allow for the computation of the trifocal tensor given any mixture of sufficiently many line and point correspondences. From the trifocal tensor the camera matrices of the images may be computed, and the scene may be reconstructed. For unrelated uncalibrated cameras, this reconstruction will be unique up to projectivity. Thus, projective reconstruction of a set of lines and points may be carried out linearly from three views.
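The point-transfer property described above can be made concrete for canonical cameras with P1 = [I | 0]: the tensor is built from the second and third camera matrices, and a point in view 3 follows from its matches in views 1 and 2 via x3^k = x1^i l2_j T_i^{jk}, where l2 is any line through x2 other than the epipolar line. A sketch under those assumptions (function names are illustrative):

```python
import numpy as np

def trifocal_from_cameras(P2, P3):
    """Trifocal tensor slices T_i = a_i b4^T - a4 b_i^T for the
    canonical setup P1 = [I | 0], where a_i, b_i are the columns
    of the 3x4 camera matrices P2 and P3."""
    return np.stack([np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
                     for i in range(3)])

def transfer_point(T, x1, x2):
    """Transfer homogeneous point x1 (view 1) into view 3 using a line
    l2 through the matching point x2 in view 2.  Returns x3 up to scale;
    degenerate only if l2 happens to be the epipolar line of x1."""
    l2 = np.cross(x2, [1.0, 0.0, 0.0])   # some line through x2
    return sum(x1[i] * (T[i].T @ l2) for i in range(3))
```

This is the Shashua-style coefficient view of the tensor the abstract mentions; line transfer and the 13-line linear estimation use the same slices.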
Epipolar Geometry for Panoramic Cameras
1998
Abstract

Cited by 63 (10 self)
This paper presents fundamental theory and design of central panoramic cameras. Panoramic cameras combine a convex hyperbolic or parabolic mirror with a perspective camera to obtain a large field of view. We show how to design a panoramic camera with a tractable geometry and we propose a simple calibration method. We derive the image formation function for such a camera. The main contribution of the paper is the derivation of the epipolar geometry between a pair of panoramic cameras. We show that the mathematical model of a central panoramic camera can be decomposed into two central projections and therefore allows an epipolar geometry formulation. It is shown that epipolar curves are conics and their equations are derived. The theory is tested in experiments with real data. Keywords: omnidirectional vision, epipolar geometry, panoramic cameras, hyperbolic mirror, stereo, catadioptric sensors.

1 Introduction

It is well known that ego-motion estimation algorithms in some cases cannot ...