Results 1–10 of 19
Liebowitz D., Zisserman A.: Metric rectification for perspective images of planes
 In Proc. of IEEE Conference on Computer Vision and Pattern Recognition
, 1998
Abstract

Cited by 125 (8 self)
We describe the geometry, constraints and algorithmic implementation for metric rectification of planes. The rectification allows metric properties, such as angles and length ratios, to be measured on the world plane from a perspective image. The novel contributions are: first, that in a stratified context the various forms of providing metric information, which include a known angle, two equal though unknown angles, and a known length ratio, can all be represented as circular constraints on the parameters of an affine transformation of the plane — this provides a simple and uniform framework for integrating constraints; second, direct rectification from right angles in the plane; third, it is shown that metric rectification enables calibration of the internal camera parameters; fourth, vanishing points are estimated using a Maximum Likelihood estimator; fifth, an algorithm for automatic rectification. Examples are given for a number of images, and applications demonstrated for texture map acquisition and metric measurements.
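As a concrete illustration of the metric stage described above, the following sketch (not the paper's own algorithm, which intersects circular constraints in parameter space) uses the textbook formulation: in an affinely rectified image, each pair of world-orthogonal lines gives one linear constraint on a 2x2 symmetric matrix S, and a Cholesky factor of S yields the residual affine distortion. The line coordinates below are hypothetical.

```python
import numpy as np

# Hypothetical image lines (l1, l2, l3), assumed affinely rectified already;
# each tuple is a pair of lines that are orthogonal in the world plane.
pairs = [(np.array([1.0, 0.3, -5.0]), np.array([-0.25, 1.0, 2.0])),
         (np.array([1.0, -0.8, 1.0]), np.array([0.7, 1.0, -3.0]))]

# Orthogonality of world lines gives (l1, l2) S (m1, m2)^T = 0 with S
# symmetric 2x2; fixing S[1,1] = 1 uses its two degrees of freedom.
A, b = [], []
for l, m in pairs:
    A.append([l[0] * m[0], l[0] * m[1] + l[1] * m[0]])
    b.append(-l[1] * m[1])
s11, s12 = np.linalg.solve(np.array(A), np.array(b))
S = np.array([[s11, s12], [s12, 1.0]])

# S = A_aff A_aff^T: a Cholesky factor is the residual affine distortion,
# and its inverse metrically rectifies points of the plane.
A_aff = np.linalg.cholesky(S)
H_rect = np.eye(3)
H_rect[:2, :2] = np.linalg.inv(A_aff)
```

Applying `H_rect` to points (lines transform by its inverse transpose) makes the two constraint pairs exactly orthogonal.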
The design and implementation of a generic sparse bundle adjustment software package based on the Levenberg-Marquardt algorithm
, 2004
Abstract

Cited by 88 (4 self)
The most recent revision of this document will always be found at
Simultaneous Linear Estimation of Multiple View Geometry and Lens Distortion
, 2001
Abstract

Cited by 85 (1 self)
A bugbear of uncalibrated stereo reconstruction is that cameras which deviate from the pinhole model have to be precalibrated in order to correct for nonlinear lens distortion. If they are not, and point correspondence is attempted using the uncorrected images, the matching constraints provided by the fundamental matrix must be set so loose that point matching is significantly hampered. This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion. This is achieved by (1) changing from the standard radial-lens model to another which (as we show) has equivalent power, but which takes a simpler form in homogeneous coordinates, and (2) expressing fundamental matrix estimation as a Quadratic Eigenvalue Problem (QEP), for which efficient algorithms are well known. I derive the new estimator, and compare its performance against bundle-adjusted calibration-grid data. The new estimator is fast enough to be included in a RANSAC-based matching loop, and we show cases of matching being rendered possible by its use. I show how the same lens can be calibrated in a natural scene where the lack of straight lines precludes most previous techniques. The modification when the multi-view relation is a planar homography or trifocal tensor is described.
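The "simpler form in homogeneous coordinates" referred to here is commonly known as the division model, in which undistortion only rescales the homogeneous coordinate. A minimal sketch, with a hypothetical distortion parameter `lam` and coordinates assumed centred on the distortion centre:

```python
import numpy as np

def undistort_division(points_d, lam):
    """One-parameter division model: a distorted pixel (x_d, y_d) maps to
    the undistorted homogeneous point (x_d, y_d, 1 + lam * r_d^2).
    lam and the centring are assumptions of this sketch; the paper
    estimates lam jointly with F by solving a QEP."""
    pts = np.atleast_2d(points_d).astype(float)
    r2 = (pts ** 2).sum(axis=1)
    return np.column_stack([pts, 1.0 + lam * r2])

# Such points enter the epipolar constraint as p2^T F p1 = 0, which is
# quadratic in lam — hence the quadratic eigenvalue formulation.
pts = np.array([[100.0, 50.0], [-30.0, 80.0]])
hom = undistort_division(pts, lam=-1e-6)
```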
Propagating covariance in computer vision
 In Proc. Workshop on Performance Characteristics of Vision Algorithms
, 1994
Abstract

Cited by 68 (10 self)
This paper describes how to propagate approximately additive random perturbations through any kind of vision algorithm step in which the appropriate random perturbation model for the estimated quantity produced by the vision step is also an additive random perturbation. We assume that the vision algorithm step can be modeled as a calculation (linear or nonlinear) that produces an estimate that minimizes an implicit scalar function of the input quantity and the calculated estimate. The only assumptions are that the scalar function is nonnegative, has finite first and second partial derivatives, that its value is zero for ideal data, and that the random perturbations are small enough that the relationship between the scalar function evaluated at the ideal but unknown input and output quantities and evaluated at the observed input quantity and perturbed output quantity can be approximated sufficiently well by a first-order Taylor series expansion. The paper finally discusses the issues of verifying that the derived statistical behavior agrees with the experimentally observed statistical behavior.
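A generic sketch of the first-order propagation the abstract describes, using Sigma_out ≈ J Sigma_in J^T with a numerically estimated Jacobian (the paper works with implicit minimisation steps; the function and covariances below are hypothetical):

```python
import numpy as np

def propagate_covariance(f, x, Sigma_x, eps=1e-6):
    """First-order (Taylor) covariance propagation through y = f(x):
    Sigma_y ≈ J Sigma_x J^T, with the Jacobian J estimated by central
    differences. A generic sketch, not the paper's implicit formulation."""
    x = np.asarray(x, float)
    y0 = np.asarray(f(x), float)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J @ Sigma_x @ J.T

# Example: polar -> Cartesian conversion with small noise on (r, theta).
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
Sigma = propagate_covariance(f, [2.0, 0.0], np.diag([0.01, 0.0004]))
```

At (r, theta) = (2, 0) the Jacobian is diag(1, 2), so the angular variance is scaled by r².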
Image Mosaicing and Superresolution
, 2004
Abstract

Cited by 49 (4 self)
The thesis investigates the problem of how information contained in multiple, overlapping images of the same scene may be combined to produce images of superior quality. This area, generically titled frame fusion, offers the possibility of reducing noise, extending the field of view, removing moving objects, removing blur, increasing spatial resolution and improving dynamic range. As such, this research has many applications in fields as diverse as forensic image restoration, computer generated special effects, video image compression, and digital video editing. An essential enabling step prior to performing frame fusion is image registration, by which an accurate estimate of the point-to-point mapping between views is computed. A robust and efficient algorithm is described to automatically register multiple images using only information contained within the images themselves. The accuracy of this method, and the statistical assumptions upon which it relies, are investigated empirically. Two forms of frame fusion are investigated. The first is image mosaicing, which is the alignment of multiple images into a single composition representing part of a 3D scene.
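For mosaicing, the point-to-point mapping between views is a plane projective transformation (homography). A minimal, non-robust sketch of the estimation step via the Direct Linear Transform, with hypothetical correspondences (a real registration pipeline would add coordinate normalisation and RANSAC):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    dst ~ H src from >= 4 point correspondences. Each correspondence
    contributes two linear equations on the 9 entries of H."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    H = Vt[-1].reshape(3, 3)     # null vector = stacked entries of H
    return H / H[2, 2]

# Hypothetical correspondences: scale by 2, translate by (2, 1).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 1), (4, 1), (4, 3), (2, 3)]
H = homography_dlt(src, dst)
```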
Combining Scene and Autocalibration Constraints
, 1999
Abstract

Cited by 48 (0 self)
We present a simple approach to combining scene and autocalibration constraints for the calibration of cameras from single views and stereo pairs. Calibration constraints are provided by imaged scene structure, such as vanishing points of orthogonal directions, or rectified planes. In addition, constraints are available from the nature of the cameras and the motion between views. We formulate these constraints in terms of the geometry of the imaged absolute conic and its relationship to pole-polar pairs and the imaged circular points of planes. Three significant advantages result: first, constraints from scene features, camera characteristics and autocalibration constraints provide linear equations in the elements of the image of the absolute conic. This means that constraints may easily be combined, and their solution is straightforward. Second, the degeneracies that occur when constraints are not independent may be easily identified. Lastly, the constraints from scene planes and i...
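One such linear constraint can be sketched directly: vanishing points v_i, v_j of orthogonal directions satisfy v_i^T w v_j = 0, which is linear in the image of the absolute conic w. The sketch below assumes square pixels and zero skew, and uses a hypothetical camera rather than data from the paper:

```python
import numpy as np

def calibrate_from_orthogonal_vps(v1, v2, v3):
    """Recover K from the vanishing points of three mutually orthogonal
    directions via the linear constraints v_i^T w v_j = 0 on the image of
    the absolute conic. Assumes square pixels and zero skew, so that
    w = [[a, 0, b], [0, a, c], [b, c, d]]."""
    vps = [v / np.linalg.norm(v) for v in (v1, v2, v3)]
    def row(u, v):
        return [u[0] * v[0] + u[1] * v[1],
                u[0] * v[2] + u[2] * v[0],
                u[1] * v[2] + u[2] * v[1],
                u[2] * v[2]]
    A = np.array([row(vps[0], vps[1]), row(vps[0], vps[2]), row(vps[1], vps[2])])
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d = Vt[-1]
    w = np.array([[a, 0, b], [0, a, c], [b, c, d]])
    if a < 0:                      # null-vector sign is arbitrary
        w = -w
    # w = K^-T K^-1, so the Cholesky factor of w is K^-T.
    K = np.linalg.inv(np.linalg.cholesky(w)).T
    return K / K[2, 2]

# Hypothetical camera and orthogonal directions (not data from the paper).
K_true = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
dirs = np.array([[1, 2, 2], [2, 1, -2], [2, -2, 1]], float)  # mutually orthogonal
K = calibrate_from_orthogonal_vps(*(K_true @ d for d in dirs))
```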
Computer Vision Applied to Superresolution
 IEEE Signal Processing Magazine
, 2003
Abstract

Cited by 43 (0 self)
The approach of this article is outlined in figure 1. The input images are first mutually aligned onto a common reference frame. This alignment involves not only a geometric component, but also a photometric component, modelling illumination, gain or colour balance variations among the images. After alignment a composite image mosaic may be rendered and superresolution restoration may be applied to any chosen region of interest.
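The photometric component mentioned above is often modelled, in the simplest case, as a global gain and bias between overlapping images. A minimal sketch under that assumption (synthetic data; real pipelines fit per colour channel and robustly):

```python
import numpy as np

def photometric_align(src, dst):
    """Least-squares fit of a gain/bias model dst ≈ alpha * src + beta
    over the overlap region — a minimal sketch of the photometric part
    of the alignment, not the article's full model."""
    A = np.column_stack([src.ravel(), np.ones(src.size)])
    (alpha, beta), *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    return alpha, beta

rng = np.random.default_rng(0)
src = rng.uniform(0, 255, (32, 32))
dst = 1.2 * src + 10.0            # synthetic exposure/gain change
alpha, beta = photometric_align(src, dst)
```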
Automatic 3D Model Acquisition And Generation Of New Images From Video Sequences
 In Proceedings of European Signal Processing Conference
, 1998
Abstract

Cited by 41 (0 self)
We describe a method to completely automatically recover 3D scene structure together with 3D camera positions from a sequence of images acquired by an unknown camera undergoing unknown movement. Unlike "tuned" systems which use calibration objects or markers to recover this information, and are therefore often limited to a particular scale, the approach of this paper is more general and can be applied to a large class of scenes. It is demonstrated here for interior and exterior sequences using both controlled-motion and hand-held cameras. The paper reviews Computer Vision research into structure and motion recovery, providing a tutorial introduction to the geometry of multiple views, estimation and correspondence in video streams. The core method, which simultaneously extracts the 3D scene structure and camera positions, is applied to the automated recovery of VRML 3D textured models from a video sequence.
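The structure-recovery step can be illustrated by linear triangulation: each image point x ~ P X contributes two linear equations on the homogeneous scene point X. A minimal two-view sketch with hypothetical cameras (the paper's full system also handles correspondence, robust estimation and self-calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views: stack the
    cross-product equations x ~ P X from both images and take the null
    vector. A minimal sketch of the structure-recovery step."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical cameras: identity view and a unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                    # projection in view 1
x2 = (X_true - [1, 0, 0])[:2] / X_true[2]      # projection in view 2
X = triangulate(P1, P2, x1, x2)
```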
VHS to VRML: 3D Graphical Models from Video Sequences
 In IEEE International Conference on Multimedia and Systems
, 1999
Abstract

Cited by 27 (0 self)
We describe a method to completely automatically recover 3D scene structure together with a camera for each frame from a sequence of images acquired by an unknown camera undergoing unknown movement. Previous approaches have used calibration objects or landmarks to recover this information, and are therefore often limited to a particular scale. The approach of this paper is far more general, since the "landmarks" are derived directly from the imaged scene texture. The method can be applied to a large class of scenes and motions, and is demonstrated here for sequences of interior and exterior scenes using both controlled-motion and hand-held cameras. We demonstrate two applications of this technology. The first is the construction of 3D graphical models of the scene; the second is the insertion of virtual objects into the original image sequence. Other applications include image compression and frame interpolation.
Self-Calibration from Image Sequences
, 1996
Abstract

Cited by 16 (1 self)
This thesis develops new algorithms to obtain the calibration parameters of a camera using only information contained in an image sequence, with the objective of using the camera calibration to compute a Euclidean reconstruction. This problem is known as self-calibration. The motivation for this work is to allow the Euclidean reconstruction of a scene using only a pre-recorded image sequence where no information is available on the camera or the objects in the scene. The approach used is to utilise known motion constraints, which are common for cameras mounted on mobile vehicles or robotic arms, to simplify the algebraic complexity of the self-calibration problem. The algorithms are designed to be easily extendible to use multiple images rather than the minimum number of three required for self-calibration. The uncertainty of the parameters is also computed to give a measure of confidence in the camera calibration. The first method uses a pure camera translation to allow the problem to be stratified by