Results 1–10 of 143
Flexible camera calibration by viewing a plane from unknown orientations
, 1999
Cited by 446 (6 self)
Abstract: We propose a flexible new technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques, which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one step from laboratory environments to real-world use. The corresponding software is available from the author’s Web page.
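The closed-form step of this technique can be made concrete in a few lines: each homography H between the model plane and its image contributes two linear constraints on B ∝ K^{-T}K^{-1} (the image of the absolute conic), and the intrinsics K are then read off from inv(B) ∝ K K^T. A minimal sketch in Python with NumPy, using synthetic noise-free homographies; the function names and set-up are illustrative, not the author's released software:

```python
import numpy as np

def v_ij(H, i, j):
    """Constraint row built from columns i and j of a plane-to-image homography H."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0]*hj[0],
                     hi[0]*hj[1] + hi[1]*hj[0],
                     hi[1]*hj[1],
                     hi[2]*hj[0] + hi[0]*hj[2],
                     hi[2]*hj[1] + hi[1]*hj[2],
                     hi[2]*hj[2]])

def intrinsics_from_homographies(Hs):
    """Closed-form intrinsics from >= 3 plane-to-image homographies.

    Each H gives two linear constraints on b = (B11,B12,B22,B13,B23,B33),
    where B is proportional to K^{-T} K^{-1}; b is the null vector of the
    stacked system, and K is recovered from inv(B), which is proportional
    to K K^T.
    """
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))                  # h1^T B h2 = 0
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))  # h1^T B h1 = h2^T B h2
    b = np.linalg.svd(np.array(V))[2][-1]
    B = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if B[0, 0] < 0:                              # null-vector sign is arbitrary
        B = -B
    A = np.linalg.inv(B)
    A /= A[2, 2]                                 # A is now K K^T (up to noise)
    u0, v0 = A[0, 2], A[1, 2]
    beta = np.sqrt(A[1, 1] - v0**2)
    gamma = (A[0, 1] - u0*v0) / beta
    alpha = np.sqrt(A[0, 0] - gamma**2 - u0**2)
    return np.array([[alpha, gamma, u0],
                     [0.0,  beta,  v0],
                     [0.0,  0.0,  1.0]])
```

In the paper this closed-form estimate only seeds the maximum-likelihood refinement; with real, noisy corner detections the nonlinear step does most of the work.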
Zisserman A.: Metric rectification for perspective images of planes
 In Proc. International Conference on Computer Vision and Pattern Recognition
, 1998
Cited by 162 (9 self)
Abstract: We describe the geometry, constraints and algorithmic implementation for metric rectification of planes. The rectification allows metric properties, such as angles and length ratios, to be measured on the world plane from a perspective image. The novel contributions are: first, that in a stratified context the various forms of providing metric information, which include a known angle, two equal though unknown angles, and a known length ratio, can all be represented as circular constraints on the parameters of an affine transformation of the plane, which provides a simple and uniform framework for integrating constraints; second, direct rectification from right angles in the plane; third, it is shown that metric rectification enables calibration of the internal camera parameters; fourth, vanishing points are estimated using a Maximum Likelihood estimator; fifth, an algorithm for automatic rectification. Examples are given for a number of images, and applications are demonstrated for texture-map acquisition and metric measurements.
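The "right angles in the plane" contribution is easy to make concrete: in an affinely rectified image, every pair of image lines known to be orthogonal on the world plane gives one linear constraint on the remaining 2-d.o.f. metric component S ∝ A A^T. A hedged sketch of just that step (the helper names are mine, translation is ignored since it does not affect line directions, and this covers only the orthogonal-lines case, not the full stratified framework):

```python
import numpy as np

def metric_from_orthogonal_lines(pairs):
    """Metric-rectifying homography from an affinely rectified image.

    pairs: list of (l, m) image lines (homogeneous 3-vectors), each pair
    orthogonal on the world plane. Each pair gives one linear constraint
        l1*m1*s11 + (l1*m2 + l2*m1)*s12 + l2*m2 = 0
    on S = [[s11, s12], [s12, 1]], which is proportional to A A^T for the
    unknown affine distortion A. Two pairs suffice.
    """
    M, rhs = [], []
    for l, m in pairs:
        M.append([l[0]*m[0], l[0]*m[1] + l[1]*m[0]])
        rhs.append(-l[1]*m[1])
    s11, s12 = np.linalg.lstsq(np.array(M), np.array(rhs), rcond=None)[0]
    S = np.array([[s11, s12], [s12, 1.0]])   # S ~ A A^T, up to scale
    A = np.linalg.cholesky(S)                # lower-triangular factor of S
    H = np.eye(3)
    H[:2, :2] = np.linalg.inv(A)             # rectifier, up to a similarity
    return H
```

The recovered rectification is only defined up to a similarity (rotation and scale), which is exactly the stratified ambiguity the abstract refers to.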
I. Reid. Self-calibration of rotating and zooming cameras
 International Journal of Computer Vision (IJCV)
The problem of degeneracy in structure and motion recovery from uncalibrated image sequences
 Int. J. Comput. Vis.
, 1999
Camera Calibration with One-Dimensional Objects
, 2004
Cited by 57 (1 self)
Abstract: Camera calibration has been studied extensively in computer vision and photogrammetry, and the proposed techniques in the literature include those using 3D apparatus (two or three planes orthogonal to each other, or a plane undergoing a pure translation, etc.), 2D objects (planar patterns undergoing unknown motions), and 0D features (self-calibration using unknown scene points). Yet, this paper proposes a new calibration technique using 1D objects (points aligned on a line), thus filling the missing dimension in calibration. In particular, we show that camera calibration is not possible with free-moving 1D objects, but can be solved if one point is fixed. A closed-form solution is developed if six or more observations of such a 1D object are made. For higher accuracy, a nonlinear technique based on the maximum likelihood criterion is then used to refine the estimate. Singularities have also been studied. Besides the theoretical aspect, the proposed technique is also important in practice, especially when calibrating multiple cameras mounted apart from each other, where the calibration objects are required to be visible simultaneously.
Multibody Structure and Motion: 3D Reconstruction of Independently Moving Objects
 In European Conference on Computer Vision
, 2000
Cited by 49 (0 self)
Abstract: This paper extends the recovery of structure and motion to image sequences with several independently moving objects. The motion, structure, and camera calibration are all a priori unknown. The fundamental constraint that we introduce is that multiple motions must share the same camera parameters. Existing work on independent motions has not employed this constraint, and therefore has not gained over independent static-scene reconstructions. We show how this constraint leads to several new results in structure and motion recovery, where Euclidean reconstruction becomes possible in the multibody case, when it was underconstrained for a static scene. We show how to combine motions of high-relief, low-relief and planar objects. Additionally, we show that structure and motion can be recovered from just 4 points in the uncalibrated, fixed-camera case. Experiments on real and synthetic imagery demonstrate the validity of the theory and the improvement in accuracy obtained usin...
Camera pose and calibration from 4 or 5 known 3D points
 In Proc. 7th Int. Conf. on Computer Vision
, 1999
Cited by 42 (0 self)
Abstract: We describe two direct quasi-linear methods for camera pose (absolute orientation) and calibration from a single image of 4 or 5 known 3D points. They generalize the 6-point ‘Direct Linear Transform’ method by incorporating partial prior camera knowledge, while still allowing some unknown calibration parameters to be recovered. Only linear algebra is required, the solution is unique in non-degenerate cases, and additional points can be included for improved stability. Both methods fail for coplanar points, but we give an experimental eigendecomposition-based method that handles both planar and non-planar cases. Our methods use recent polynomial-solving technology, and we give a brief summary of this. One of our aims was to try to understand the numerical behaviour of modern polynomial solvers on some relatively simple test cases, with a view to other vision applications.
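For reference, the 6-point ‘Direct Linear Transform’ that these methods generalize is itself a short linear algorithm: each 3D-to-2D correspondence x_i ~ P X_i contributes two rows of a homogeneous system in the 12 entries of P, solved by SVD. A minimal sketch on synthetic, noise-free data (the function name is illustrative, and this is the baseline method, not the paper's 4/5-point variants):

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Classic DLT: recover the 3x4 projection P (up to scale) from
    n >= 6 non-coplanar 3D points X (n,3) and their images x (n,2).
    Each correspondence x_i ~ P X_i gives two rows of A p = 0."""
    rows = []
    for Xw, u in zip(X, x):
        Xh = np.append(Xw, 1.0)                    # homogeneous world point
        rows.append(np.concatenate([Xh, np.zeros(4), -u[0] * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -u[1] * Xh]))
    p = np.linalg.svd(np.array(rows))[2][-1]       # null vector of A
    return p.reshape(3, 4)
```

The abstract's point is precisely that this baseline needs 6 points and no prior knowledge; folding in partial calibration is what brings the count down to 4 or 5.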
Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles
, 2008
Cited by 39 (7 self)
Abstract: In this paper, we describe a real-time algorithm for computing the ego-motion of a vehicle relative to the road. The algorithm uses as input only the images provided by a single omnidirectional camera mounted on the roof of the vehicle. The front end of the system consists of two different trackers. The first is a homography-based tracker that detects and matches robust scale-invariant features that most likely belong to the ground plane. The second uses an appearance-based approach and gives high-resolution estimates of the rotation of the vehicle. This planar pose-estimation method has been successfully applied to videos from an automotive platform. We give an example of a camera trajectory estimated purely from omnidirectional images over a distance of 400 m. For performance evaluation, the estimated path is superimposed onto a satellite image. Finally, we use image mosaicing to obtain a textured 2D reconstruction of the estimated path.
Multiview Constraints on Homographies
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2002
Cited by 38 (0 self)
Abstract: The image motion of a planar surface between two camera views is captured by a homography (a 2D projective transformation). The homography depends on the intrinsic and extrinsic camera parameters, as well as on the 3D plane parameters.
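The dependence mentioned in this abstract has a compact closed form: for a world plane n^T X = d expressed in the first camera's frame, the induced map from view-1 pixels to view-2 pixels is H ∝ K2 (R + t n^T / d) K1^{-1}. A small sketch under that sign convention (symbols follow the usual two-view notation, not necessarily this paper's):

```python
import numpy as np

def plane_induced_homography(K1, K2, R, t, n, d):
    """Homography induced by the world plane n^T X = d (first-camera frame),
    mapping view-1 pixels to view-2 pixels; defined only up to scale."""
    return K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)
```

Because every view pair and every plane shares the same K, R, t and plane parameters, collections of such homographies are coupled, which is the source of the multi-view constraints the paper studies.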