Results 1–5 of 5
Simultaneous Linear Estimation of Multiple View Geometry and Lens Distortion
, 2001
Abstract

Cited by 128 (1 self)
A bugbear of uncalibrated stereo reconstruction is that cameras which deviate from the pinhole model have to be pre-calibrated in order to correct for nonlinear lens distortion. If they are not, and point correspondence is attempted using the uncorrected images, the matching constraints provided by the fundamental matrix must be set so loose that point matching is significantly hampered. This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion. This is achieved by (1) changing from the standard radial-lens model to another which (as we show) has equivalent power, but which takes a simpler form in homogeneous coordinates, and (2) expressing fundamental matrix estimation as a Quadratic Eigenvalue Problem (QEP), for which efficient algorithms are well known. We derive the new estimator and compare its performance against bundle-adjusted calibration-grid data. The new estimator is fast enough to be included in a RANSAC-based matching loop, and we show cases of matching being rendered possible by its use. We show how the same lens can be calibrated in a natural scene where the lack of straight lines precludes most previous techniques. The modification when the multi-view relation is a planar homography or trifocal tensor is described.
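The computational core the abstract names — solving a Quadratic Eigenvalue Problem — can be sketched with the standard companion linearization, which reduces the QEP to an ordinary generalized eigenproblem. This is a generic illustration, not the paper's estimator; the function name, matrix shapes, and the assumption that the leading matrix is nonsingular are all mine:

```python
import numpy as np

def solve_qep(K, C, M):
    """Solve the quadratic eigenvalue problem (K + lam*C + lam^2*M) x = 0
    by companion linearization to A z = lam*B z with z = [x; lam*x].
    Assumes M is nonsingular so B can be inverted."""
    n = K.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    lam, V = np.linalg.eig(np.linalg.solve(B, A))
    return lam, V[:n, :]   # first n rows of each eigenvector recover x
```

In the paper's setting, K, C, and M would be data matrices built from the point correspondences, lam the radial-distortion parameter, and x the stacked fundamental-matrix entries, with the real eigenvalue of smallest residual selected.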
Tracking from Multiple View Points: Self-calibration of Space and Time
 In DARPA IU Workshop
, 1998
Abstract

Cited by 90 (1 self)
This paper tackles the problem of self-calibration of multiple cameras which are very far apart. Given a set of feature correspondences one can determine the camera geometry. The key problem we address is finding such correspondences. Since the camera geometry (location and orientation) and photometric characteristics vary considerably between images, one cannot use brightness and/or proximity constraints. Instead we propose a three-step approach: first we use moving objects in the scene to determine a rough planar alignment, next we use static features to improve the alignment, and finally we use off-plane features to determine the epipolar geometry and the horizon line. We do not assume synchronized cameras, and we show that enforcing the geometric constraints enables us to align the tracking data in time. We present results on challenging outdoor scenes using real-time tracking data.
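The "rough planar alignment" in the first step amounts to fitting a planar homography to point correspondences. A minimal Direct Linear Transform sketch follows; the paper derives its correspondences from moving-object tracks, whereas here the correspondences, the function name, and the scale normalization are assumptions for illustration:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (dst ~ H @ [x, y, 1])
    from >= 4 point correspondences via the Direct Linear Transform:
    each correspondence contributes two linear rows, and h is the
    null vector of the stacked system, taken from the SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the free scale so H[2, 2] == 1
```

With the planar alignment in hand, residual disparities of off-plane features constrain the epipolar geometry, as the abstract's third step describes.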
Model-based Brightness Constraints: On Direct Estimation of Structure and Motion
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Abstract

Cited by 43 (0 self)
We describe a new direct method for estimating structure and motion from image intensities of multiple views. We extend the direct methods of [9] to three views. Adding the third view enables us to solve for motion, and compute a dense depth map of the scene, directly from image spatio-temporal derivatives in a linear manner without first having to find point correspondences or compute optical flow. We describe the advantages and limitations of this method, which are then verified with experiments using real images.

1. Introduction. We present a new method for computing ego-motion and dense structure from three views. This method can be viewed as an extension of the 'direct methods' of Horn & Weldon [9] from two views (one motion) to three views (two motions). These methods are dubbed 'direct methods' because they do not require prior computation of optical flow. Within a coarse-to-fine implementation our method can handle displacements averaging up to 50 pixels for 640×480 resolut...
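The brightness-constancy machinery behind such direct methods can be illustrated with the simplest possible case: a single global 2D translation recovered in least squares from spatio-temporal derivatives, with no point correspondences or optical flow computed first. This is a toy sketch of the general idea, not the paper's three-view formulation; the function name and the pure-translation model are assumptions:

```python
import numpy as np

def global_translation(I1, I2):
    """Estimate one global 2D translation (u, v) between two images directly
    from spatio-temporal derivatives: stack the brightness-constancy
    equation Ix*u + Iy*v + It = 0 over all pixels, solve in least squares."""
    Ix = np.gradient(I1, axis=1)   # horizontal spatial derivative
    Iy = np.gradient(I1, axis=0)   # vertical spatial derivative
    It = I2 - I1                   # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v
```

The linearization only holds for small displacements, which is why, as the abstract notes, a coarse-to-fine pyramid is needed to reach displacements of tens of pixels.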
A Robust Method for Computing Vehicle Ego-motion
 In IEEE Intelligent Vehicles Symposium (IV2000)
, 2000
Abstract

Cited by 33 (3 self)
We describe a robust method for computing the ego-motion of the vehicle relative to the road using input from a single camera mounted next to the rear-view mirror. Since feature points are unreliable in cluttered scenes, we use direct methods where image values in the two images are combined in a global probability function. Combined with the use of probability distribution matrices, this enables the formulation of a robust method that can ignore a large number of outliers, as one would encounter in real traffic situations. The method has been tested in real-world environments and has been shown to be robust to glare, rain and moving objects in the scene.

1. Introduction. Accurate estimation of the ego-motion of the vehicle relative to the road is a key component for autonomous driving and computer-vision-based driving assistance. We describe a robust method for computing the ego-motion of the vehicle relative to the road using input from a single camera rigidly mounted next to the rear vi...
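The outlier-resilient fitting the abstract describes is commonly realized with robust reweighting. The sketch below shows the general pattern — iteratively reweighted least squares with Huber weights — purely as an illustration; it is not the paper's probability-distribution-matrix method, and the function name, tuning constant, and iteration count are assumptions:

```python
import numpy as np

def irls_huber(A, b, k=1.345, iters=30):
    """Robust least squares via iteratively reweighted LS with Huber weights:
    rows with large residuals (outliers) are progressively downweighted,
    so gross errors barely influence the final fit."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    for _ in range(iters):
        r = b - A @ x
        s = 1.4826 * np.median(np.abs(r)) + 1e-12         # robust scale (MAD)
        w = np.minimum(1.0, k * s / (np.abs(r) + 1e-12))  # Huber weights
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)
    return x
```

The same downweighting idea lets a direct ego-motion estimator discount image regions occupied by other moving vehicles rather than letting them corrupt the global fit.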
A Custom Computing Framework for Orientation and Photogrammetry
 MIT EECS
, 2000
Abstract

Cited by 2 (0 self)
There is great demand today for real-time computer vision systems, with applications including image enhancement, target detection and surveillance, autonomous navigation, and scene reconstruction. These operations generally require extensive computing power; when multiple conventional processors and custom gate arrays are inappropriate, due to either excessive cost or risk, a class of devices known as Field-Programmable Gate Arrays (FPGAs) can be employed. FPGAs offer the flexibility of a programmable solution and nearly the performance of a custom gate array. When implementing a custom algorithm in an FPGA, one must be more efficient than with a gate-array technology. By tailoring the algorithms, architectures, and precisions, the gate count of an algorithm may be sufficiently reduced to fit into an FPGA. The challenge is to perform this customization of the algorithm while still maintaining the required performance. The techniques required to perform algorithmic optimization for FPGAs are scattered across many fields; what is currently lacking is a framework for utilizing all these well-known and developing techniques. The purpose of this thesis is to develop