Results 1–10 of 41
Canonic Representations for the Geometries of Multiple Projective Views
 Computer Vision and Image Understanding
, 1994
Abstract

Cited by 180 (8 self)
This work is in the context of motion and stereo analysis. It presents a new unified representation which will be useful when dealing with multiple views in the case of uncalibrated cameras. Several levels of information might be considered, depending on the availability of information. Among other things, an algebraic description of the epipolar geometry of N views is introduced, as well as a framework for camera self-calibration, calibration updating, and structure from motion in an image sequence taken by a camera which is zooming and moving at the same time. We show how a special decomposition of a set of two or three general projection matrices, called canonical, enables us to build geometric descriptions for a system of cameras which are invariant with respect to a given group of transformations. These representations are minimal and capture completely the properties of each level of description considered: Euclidean (in the context of calibration, and in the context of structure from motion, which we distinguish clearly), affine, and projective, which we also relate to each other. In the last case, a new decomposition of the well-known fundamental matrix is obtained. Dependencies, which appear when three or more views are available, are studied in the context of the canonic decomposition, and new composition formulas are established. The theory is illustrated by tutorial examples with real images.
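As a concrete illustration of the projective level, here is a minimal sketch (ours, not the paper's code; the function names are hypothetical) of the standard canonical camera pair obtained from a fundamental matrix F, namely P = [I | 0] and P' = [[e']× F | e']:

```python
import numpy as np

def skew(v):
    """3x3 cross-product matrix [v]_x such that skew(v) @ w = cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def canonical_pair(F):
    """Canonical projective camera pair for a rank-2 fundamental matrix F:
    P = [I | 0],  P' = [[e']_x F | e'],  where e' is the left null vector
    of F (the second epipole, F^T e' = 0)."""
    U, _, _ = np.linalg.svd(F)
    e2 = U[:, 2]                                   # unit-norm left null vector
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2[:, None]])
    return P1, P2
```

For a unit epipole, [e']×[e']× = e'e'ᵀ − I and Fᵀe' = 0, so the pair reproduces the input geometry up to scale: [e']× P'[:, :3] = −F.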
Self-calibration from multiple views with a rotating camera
, 1994
Abstract

Cited by 148 (1 self)
A new practical method is given for the self-calibration of a camera. In this method, at least three images are taken from the same point in space with different orientations of the camera, and calibration is computed from an analysis of point matches between the images. The method requires no knowledge of the orientations of the camera. Calibration is based on the image correspondences only. This method differs fundamentally from previous results by Maybank and Faugeras on self-calibration using the epipolar structure of image pairs. In the method of this paper, there is no epipolar structure, since all images are taken from the same point in space. Since the images are all taken from the same point in space, determination of point matches is considerably easier than for images taken with a moving camera, since problems of occlusion or change of aspect or illumination do not occur. The calibration method is evaluated on several sets of synthetic and real image data.
Critical motion sequences for monocular self-calibration and uncalibrated Euclidean reconstruction
, 1997
Abstract

Cited by 102 (5 self)
In this paper, sequences of camera motions that lead to inherent ambiguities in uncalibrated Euclidean reconstruction or self-calibration are studied. Our main contribution is a complete, detailed classification of these critical motion sequences (CMS). The practically important classes are identified and their degrees of ambiguity are derived. We also discuss some practical issues, especially concerning the reduction of the ambiguity of a reconstruction.
Self-calibration of stationary cameras
 International Journal of Computer Vision
, 1997
Abstract

Cited by 101 (1 self)
A new practical method is given for the self-calibration of a camera. In this method, at least three images are taken from the same point in space with different orientations of the camera, and calibration is computed from an analysis of point matches between the images. The method requires no knowledge of the orientations of the camera. Calibration is based on the image correspondences only. This method differs fundamentally from previous results by Maybank and Faugeras on self-calibration using the epipolar structure of image pairs. In the method of this paper, there is no epipolar structure, since all images are taken from the same point in space, and so Maybank and Faugeras’s method does not apply. Since the images are all taken from the same point in space, determination of point matches is considerably easier than for images taken with a moving camera, since problems of occlusion or change of aspect or illumination do not occur. A non-iterative calibration algorithm is given that works with any number of images. An iterative refinement method that may be used with noisy data is also described. The algorithm is implemented and validated on several sets of synthetic and real image data.
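A hedged numerical sketch of the underlying linear system (our simplification, not the paper's exact implementation; `calibrate_from_rotations` is a hypothetical name): each homography between images taken from the same point has the form H_i = K R_i K⁻¹, so the symmetric matrix w = K Kᵀ satisfies H_i w H_iᵀ = w, which is linear in the six entries of w:

```python
import numpy as np

def calibrate_from_rotations(Hs):
    """Recover the calibration K (up to scale) from inter-image homographies
    H_i = K R_i K^{-1} of a camera rotating about its centre.
    Solves the linear constraints H w H^T = w for the symmetric w = K K^T."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    rows = []
    for H in Hs:
        H = H / np.linalg.det(H) ** (1.0 / 3.0)     # normalise so det H = 1
        basis = []
        for (i, j) in idx:                          # symmetric basis matrices
            B = np.zeros((3, 3)); B[i, j] = B[j, i] = 1.0
            basis.append(H @ B @ H.T - B)           # residual H B H^T - B
        for (r, c) in idx:                          # one equation per entry
            rows.append([M[r, c] for M in basis])
    w = np.linalg.svd(np.array(rows))[2][-1]        # null vector gives w
    W = np.array([[w[0], w[1], w[2]],
                  [w[1], w[3], w[4]],
                  [w[2], w[4], w[5]]])
    if np.trace(W) < 0:
        W = -W                                      # fix the overall sign
    # factor W = K K^T with K upper triangular (Cholesky on reversed axes)
    P = np.eye(3)[::-1]
    L = np.linalg.cholesky(P @ W @ P)
    K = P @ L @ P
    return K / K[2, 2]
```

With two or more generic rotations the null space is one-dimensional, so w is determined up to scale and K follows by the triangular factorization.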
Self-Calibration of a Moving Camera From Point Correspondences and Fundamental Matrices
, 1997
Abstract

Cited by 99 (2 self)
We address the problem of estimating three-dimensional motion and structure from motion with an uncalibrated moving camera. We show that point correspondences between three images, and the fundamental matrices computed from these point correspondences, are sufficient to recover the internal orientation of the camera (its calibration) and the motion parameters, and to compute coherent perspective projection matrices which enable us to reconstruct 3D structure up to a similarity. In contrast with other methods, no calibration object with a known 3D shape is needed, and no limitations are put upon the unknown motions to be performed or the parameters to be recovered, as long as they define a projective camera. The theory of the method, which is based on the constraint that the observed points are part of a static scene, thus allowing us to link the intrinsic parameters and the fundamental matrix via the absolute conic, is first detailed. Several algorithms are then presented, and their ...
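The link between the intrinsic parameters and the fundamental matrix via the absolute conic that the abstract mentions is commonly written, in textbook notation (not necessarily the paper's), as the Kruppa equations, where ω* = KKᵀ is the dual image of the absolute conic and e′ the second epipole:

```latex
F \,\omega^{*}\, F^{\top} \;\simeq\;
[e']_{\times}\, \omega^{*}\, [e']_{\times}^{\top},
\qquad \omega^{*} = K K^{\top}
```

Here ≃ denotes equality up to a nonzero scale; both sides are symmetric rank-2 matrices, so each fundamental matrix yields two independent constraints on the intrinsic parameters.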
Simultaneous Linear Estimation of Multiple View Geometry and Lens Distortion
, 2001
Abstract

Cited by 85 (1 self)
A bugbear of uncalibrated stereo reconstruction is that cameras which deviate from the pinhole model have to be precalibrated in order to correct for nonlinear lens distortion. If they are not, and point correspondence is attempted using the uncorrected images, the matching constraints provided by the fundamental matrix must be set so loose that point matching is significantly hampered. This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion. This is achieved by (1) changing from the standard radial-lens model to another which (as we show) has equivalent power, but which takes a simpler form in homogeneous coordinates, and (2) expressing fundamental matrix estimation as a Quadratic Eigenvalue Problem (QEP), for which efficient algorithms are well known. We derive the new estimator and compare its performance against bundle-adjusted calibration-grid data. The new estimator is fast enough to be included in a RANSAC-based matching loop, and we show cases of matching being rendered possible by its use. We show how the same lens can be calibrated in a natural scene where the lack of straight lines precludes most previous techniques. The modification when the multi-view relation is a planar homography or trifocal tensor is also described.
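A sketch of the core construction under the one-parameter division model (our reconstruction of the general technique, with hypothetical names; the paper's model and solver may differ in detail): writing the undistorted point as x + λ·(0, 0, r²), each correspondence makes the epipolar constraint quadratic in λ, and stacking nine of them gives a QEP (D₁ + λD₂ + λ²D₃)f = 0, linearized here in μ = 1/λ so that the leading block is invertible:

```python
import numpy as np

def qep_lambda_candidates(x1, x2):
    """Candidate division-model distortion values from nine correspondences.

    x1, x2: (9, 3) distorted homogeneous points (third coordinate 1).
    Undistorted point: x + lam * (0, 0, x^2 + y^2), so the epipolar
    constraint (x2 + lam z2)^T F (x1 + lam z1) = 0 is quadratic in lam.
    Stacking gives (D1 + lam D2 + lam^2 D3) f = 0 with f = vec(F);
    we linearize in mu = 1/lam so the leading block D1 is invertible."""
    assert x1.shape == (9, 3) and x2.shape == (9, 3)
    z1 = np.zeros_like(x1); z1[:, 2] = x1[:, 0] ** 2 + x1[:, 1] ** 2
    z2 = np.zeros_like(x2); z2[:, 2] = x2[:, 0] ** 2 + x2[:, 1] ** 2
    D1 = np.stack([np.kron(a, b) for a, b in zip(x2, x1)])
    D2 = np.stack([np.kron(a, b) + np.kron(c, d)
                   for a, b, c, d in zip(x2, z1, z2, x1)])
    D3 = np.stack([np.kron(a, b) for a, b in zip(z2, z1)])
    D1i = np.linalg.inv(D1)
    C = np.block([[np.zeros((9, 9)), np.eye(9)],   # companion linearization of
                  [-D1i @ D3, -D1i @ D2]])         # mu^2 D1 + mu D2 + D3
    mu = np.linalg.eigvals(C)
    mu = mu[np.abs(mu.imag) < 1e-6].real           # keep real eigenvalues
    mu = mu[np.abs(mu) > 1e-8]                     # drop mu = 0 (lam = inf)
    return 1.0 / mu
```

In practice one of the real candidates is the physically plausible distortion, and the associated eigenvector gives vec(F).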
Lens Distortion Calibration Using Point Correspondences
 In Proc. CVPR
, 1996
Abstract

Cited by 69 (3 self)
This paper describes a new method for lens distortion calibration using only point correspondences in multiple views, without the need to know either the 3D location of the points or the camera locations. The standard lens distortion model is a model of the deviations of a real camera from the ideal pinhole or projective camera model. Given multiple views of a set of corresponding points taken by ideal pinhole cameras there exist epipolar and trilinear constraints among pairs and triplets of these views. In practice, due to noise in the feature detection and due to lens distortion these constraints do not hold exactly and we get some error. The calibration is a search for the lens distortion parameters that minimize this error. Using simulation and experimental results with real images we explore the properties of this method. We describe the use of this method with the standard lens distortion model, radial and decentering, but it could also be used with any other parametric distortio...
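As a toy version of this search (ours; one radial term of the division model rather than the paper's radial-plus-decentering model, and a grid search instead of a proper optimizer): for each candidate distortion value, undistort the points and score how well a single fundamental matrix explains them; the score is minimized near the true distortion:

```python
import numpy as np

def epipolar_residual(x1, x2, lam):
    """Smallest singular value of the 8-point design matrix after undistorting
    both point sets with division-model parameter lam; (near) zero iff some
    fundamental matrix exactly explains the undistorted correspondences.
    x1, x2: (n, 3) distorted homogeneous points, n >= 9."""
    def undistort(x):
        w = 1.0 + lam * (x[:, 0] ** 2 + x[:, 1] ** 2)
        u = x.copy()
        u[:, 2] = w
        return u / w[:, None]                      # (x/w, y/w, 1)
    u1, u2 = undistort(x1), undistort(x2)
    A = np.stack([np.kron(a, b) for a, b in zip(u2, u1)])
    return np.linalg.svd(A, compute_uv=False)[-1]

def calibrate_distortion(x1, x2, grid):
    """Pick the candidate lam with the smallest epipolar residual."""
    scores = [epipolar_residual(x1, x2, lam) for lam in grid]
    return float(grid[int(np.argmin(scores))])
```

The same scoring idea extends to trilinear constraints over triplets of views, as the abstract describes.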
Accurate Internal Camera Calibration using Rotation, with Analysis of Sources of Error
 In Fifth International Conference on Computer Vision (ICCV’95)
, 1995
Abstract

Cited by 68 (1 self)
This paper describes a simple and accurate method for internal camera calibration based on tracking image features through a sequence of images while the camera undergoes pure rotation. A special calibration object is not required, and the method can therefore be used both for laboratory calibration and for self-calibration in autonomous robots. Experimental results with real images show that focal length and aspect ratio can be found to within 0.15 percent, and lens distortion error can be reduced to a fraction of a pixel. The location of the principal point and the location of the center of radial distortion can each be found to within a few pixels. We perform a simple analysis to show to what extent the various technical details affect the accuracy of the results. We show that having pure rotation is important if the features are derived from objects close to the camera. In the basic method accurate angle measurement is important. The need to accurately measure the angles can be...
Catadioptric Self-Calibration
, 2000
Abstract

Cited by 61 (0 self)
We have assembled a stand-alone movable system that can capture long sequences of omnidirectional images (up to 1,500 images at 6.7 Hz and a resolution of 1140 × 1030). The goal of this system is to reconstruct complex large environments, such as an entire floor of a building, from the captured images only. In this paper, we address the important issue of how to calibrate such a system. Our method uses images of the environment to calibrate the camera, without the use of any special calibration pattern, knowledge of camera motion, or knowledge of scene geometry. It uses the consistency of pairwise tracked point features across a sequence, based on the characteristics of catadioptric imaging. We also show how the projection equation for this catadioptric camera can be formulated to be equivalent to that of a typical rectilinear perspective camera with just a simple transformation.
Self-calibration of an Affine Camera from Multiple Views
 International Journal of Computer Vision
, 1994
Abstract

Cited by 49 (6 self)
A key limitation of all existing algorithms for shape and motion from image sequences under orthographic, weak perspective and paraperspective projection is that they require the calibration parameters of the camera. We present in this paper a new approach that allows the shape and motion to be computed from image sequences without having to know the calibration parameters. This approach is derived with the affine camera model, introduced by Mundy and Zisserman [18], which is a more general class of projections including the orthographic, weak perspective and paraperspective projection models. The concept of self-calibration, introduced by Maybank and Faugeras in [16] for the perspective camera and by Hartley for the rotating camera in [10], is then applied to the affine camera. This paper introduces the 3 intrinsic parameters that the affine camera can have at most. The intrinsic parameters of the affine camera are closely related to the usual intrinsic parameters of the pinhole persp...