Shape and motion from image streams under orthography: a factorization method
International Journal of Computer Vision, 1992
Cited by 1082 (38 self)
Inferring scene geometry and camera motion from a stream of images is possible in principle, but is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that can overcome this difficulty by recovering shape and motion under orthography without computing depth as an intermediate step. An image stream can be represented by the 2F×P measurement matrix of the image coordinates of P points tracked through F frames. We show that under orthographic projection this matrix is of rank 3. Based on this observation, the factorization method uses the singular-value decomposition technique to factor the measurement matrix into two matrices which represent object shape and camera rotation respectively. Two of the three translation components are computed in a preprocessing stage. The method can also handle and obtain a full solution from a partially filled-in measurement matrix that may result from occlusions or tracking failures. The method gives accurate results, and does not introduce smoothing in either shape or motion. We demonstrate this with a series of experiments on laboratory and outdoor image streams, with and without occlusions.
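The rank-3 observation at the heart of this abstract is easy to verify numerically. The sketch below (illustrative NumPy, not the authors' implementation; all names are ours) builds a registered measurement matrix under orthography and factors it by SVD into motion and shape terms:

```python
import numpy as np

rng = np.random.default_rng(0)
F_frames, P = 12, 30

# Random 3-D shape (3 x P) and orthographic projection rows (2F x 3).
S = rng.standard_normal((3, P))
M = rng.standard_normal((2 * F_frames, 3))

# Registered measurement matrix (translations already removed): rank at most 3.
W = M @ S

# Truncated SVD factors W into motion and shape terms.
U, d, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(d[:3])
S_hat = np.sqrt(d[:3])[:, None] * Vt[:3]

print(np.linalg.matrix_rank(W))          # 3
print(np.allclose(M_hat @ S_hat, W))     # True
```

The recovered factors equal the true ones only up to an invertible 3×3 ambiguity, which the full method resolves with metric (orthonormality) constraints on the rotation rows.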
A Robust Technique for Matching Two Uncalibrated Images Through the Recovery of the Unknown Epipolar Geometry
1994
An Efficient Solution to the Five-Point Relative Pose Problem
2004
Cited by 472 (12 self)
An efficient algorithmic solution to the classical five-point relative pose problem is presented. The problem is to find the possible solutions for relative camera pose between two calibrated views given five corresponding points. The algorithm consists of computing the coefficients of a tenth-degree polynomial in closed form and subsequently finding its roots. It is the first algorithm well suited for numerical implementation that also corresponds to the inherent complexity of the problem. We investigate the numerical precision of the algorithm. We also study its performance under noise in minimal as well as overdetermined cases. The performance is compared to that of the well-known 8- and 7-point methods and a 6-point scheme. The algorithm is used in a robust hypothesize-and-test framework to estimate structure and motion in real time with low delay. The real-time system uses solely visual input and has been demonstrated at major conferences.
Determining the Epipolar Geometry and its Uncertainty: A Review
International Journal of Computer Vision, 1998
Cited by 400 (9 self)
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if the images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
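One of the estimators such reviews cover, the normalized eight-point algorithm, can be sketched as follows (illustrative code on noise-free synthetic correspondences; the function and variable names are ours, not the paper's):

```python
import numpy as np

def normalize(pts):
    # Hartley normalization: center points, scale mean distance to sqrt(2).
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return (pts - c) * s, T

def eight_point(x1, x2):
    # Linear estimate of F from N >= 8 correspondences (x2^T F x1 = 0).
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, d, Vt = np.linalg.svd(F)
    F = U @ np.diag([d[0], d[1], 0.0]) @ Vt   # enforce the rank-2 constraint
    F = T2.T @ F @ T1                         # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic correspondences from two calibrated views of random 3-D points.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (50, 3)) + [0.0, 0.0, 5.0]
a = 0.1
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.2, 0.1])
x1 = X[:, :2] / X[:, 2:]
Xc = X @ R.T + t
x2 = Xc[:, :2] / Xc[:, 2:]

F = eight_point(x1, x2)
h1 = np.column_stack([x1, np.ones(50)])
h2 = np.column_stack([x2, np.ones(50)])
residual = np.abs(np.sum((h2 @ F) * h1, axis=1)).max()
print(residual < 1e-6)   # True: exact data, so the epipolar constraint holds
```

On real data, robust and nonlinear refinements of the kind the review surveys matter; this linear step is usually only the initialization.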
The Computation of Optical Flow
1995
Cited by 293 (10 self)
Two-dimensional image motion is the projection of the three-dimensional motion of objects, relative to a visual sensor, onto its image plane. Sequences of time-ordered images allow the estimation of projected two-dimensional image motion as either instantaneous image velocities or discrete image displacements. These are usually called the optical flow field or the image velocity field. Provided that optical flow is a reliable approximation to two-dimensional image motion, it may then be used to recover the three-dimensional motion of the visual sensor (to within a scale factor) and the three-dimensional surface structure (shape or relative depth) through assumptions concerning the structure of the optical flow field, the three-dimensional environment and the motion of the sensor. Optical flow may also be used to perform motion detection, object segmentation, time-to-collision and focus-of-expansion calculations, motion-compensated encoding and stereo disparity measurement. We investigate ...
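As a toy illustration of one application named in this abstract, the focus of expansion and time-to-collision can be read off a purely radial flow field by linear least squares (synthetic flow and illustrative names, not the survey's code):

```python
import numpy as np

# For a pure approach (forward translation), the image flow is radial about
# the focus of expansion (FOE) and scales with inverse time-to-collision:
#   u(x) = (x - foe_x) / ttc
foe_x, ttc = 40.0, 20.0
ys, xs = np.mgrid[0:60:5, 0:80:5].astype(float)
u = (xs - foe_x) / ttc

# Recover 1/ttc (slope) and the FOE from the horizontal flow component.
A = np.column_stack([xs.ravel(), np.ones(xs.size)])
s, c = np.linalg.lstsq(A, u.ravel(), rcond=None)[0]
print(round(1 / s, 1), round(-c / s, 1))   # 20.0 40.0
```

With noisy real flow the same fit would be done robustly over both components, but the exact synthetic field makes the relationship plain.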
A Paraperspective Factorization Method for Shape and Motion Recovery
1997
Cited by 288 (13 self)
The factorization method, first developed by Tomasi and Kanade, recovers both the shape of an object and its motion from a sequence of images, using many images and tracking many feature points to obtain highly redundant feature position information. The method robustly processes the feature trajectory information using singular value decomposition (SVD), taking advantage of the linear algebraic properties of orthographic projection. However, an orthographic formulation limits the range of motions the method can accommodate. Paraperspective projection, first introduced by Ohta, is a projection model that closely approximates perspective projection by modeling several effects not modeled under orthographic projection, while retaining linear algebraic properties. Our paraperspective factorization method can be applied to a much wider range of motion scenarios, including image sequences containing motion toward the camera and aerial image sequences of terrain taken from a low-altitude airplane. Index Terms: Motion analysis, shape recovery, factorization method, three-dimensional vision, image sequence analysis, singular value decomposition. 1 INTRODUCTION. Recovering the geometry of a scene and the motion of the camera from a stream of images is an important task in a variety of applications, including navigation, robotic manipulation, and aerial cartography. While this is possible in principle, traditional methods have failed to produce reliable results in many situations [2]. Tomasi and Kanade [13], [14] developed a robust and efficient method for accurately recovering the shape and motion of an object from a sequence of images, called the factorization method. It achieves its accuracy and robustness by ...
The Fundamental matrix: theory, algorithms, and stability analysis
International Journal of Computer Vision, 1995
Cited by 271 (13 self)
In this paper we analyze in some detail the geometry of a pair of cameras, i.e. a stereo rig. Contrary to what has been done in the past and is still done currently, for example in stereo or motion analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of the principal points, pixel aspect ratio and focal lengths). This is important for two reasons. First, it is more realistic in applications where these parameters may vary according to the task (active vision). Second, the general case considered here captures all the relevant information that is necessary for establishing correspondences between two pairs of images. This information is fundamentally projective and is hidden in a confusing manner in the commonly used formalism of the Essential matrix introduced by Longuet-Higgins [40]. This paper clarifies the projective nature of the correspondence problem in stereo and shows that the epipolar geometry can be summarized in one 3×3 matrix ...
The development and comparison of robust methods for estimating the fundamental matrix
International Journal of Computer Vision, 1997
Cited by 263 (10 self)
Abstract. This paper has two goals. The first is to develop a variety of robust methods for the computation of the Fundamental Matrix, the calibration-free representation of camera motion. The methods are drawn from the principal categories of robust estimators, viz. case deletion diagnostics, M-estimators and random sampling, and the paper develops the theory required to apply them to nonlinear orthogonal regression problems. Although a considerable amount of interest has focussed on the application of robust estimation in computer vision, the relative merits of the many individual methods are unknown, leaving the potential practitioner to guess at their value. The second goal is therefore to compare and judge the methods. Comparative tests are carried out using correspondences generated both synthetically in a statistically controlled fashion and from feature matching in real imagery. In contrast with previously reported methods, the goodness of fit to the synthetic observations is judged not in terms of the fit to the observations per se but in terms of fit to the ground truth. A variety of error measures are examined. The experiments allow a statistically satisfying and quasi-optimal method to be synthesized, which is shown to be stable with up to 50 percent outlier contamination, and may still be used if there are more than 50 percent outliers. Performance bounds are established for the method, and a variety of robust methods to estimate the standard deviation of the error and covariance matrix of the parameters are examined. The results of the comparison have broad applicability to vision algorithms where the input data are corrupted not only by noise but also by gross outliers.
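The random-sampling category this paper examines can be sketched in RANSAC style on a toy line-fitting problem with 50 percent gross outliers (illustrative only, not the paper's fundamental-matrix estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + 1 with small noise, plus 50% gross outliers.
n = 200
x = rng.uniform(-5, 5, n)
y = 2 * x + 1 + rng.normal(0, 0.05, n)
out = rng.random(n) < 0.5
y[out] = rng.uniform(-20, 20, out.sum())

best = None
for _ in range(200):
    i, j = rng.choice(n, 2, replace=False)     # minimal sample: 2 points
    if x[i] == x[j]:
        continue
    a = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - a * x[i]
    inliers = np.abs(y - (a * x + b)) < 0.2    # residual threshold
    if best is None or inliers.sum() > best.sum():
        best = inliers

# Final estimate: refit by least squares on the consensus set.
a, b = np.polyfit(x[best], y[best], 1)
print(abs(a - 2) < 0.1 and abs(b - 1) < 0.1)   # True
```

The final refit over the consensus set mirrors the paper's observation that a quasi-optimal method combines random sampling with a conventional estimator on the surviving data.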
Epipolar-plane image analysis: An approach to determining structure from motion
International Journal of Computer Vision, 1987
Cited by 251 (3 self)
We present a technique for building a three-dimensional description of a static scene from a dense sequence of images. These images are taken in such rapid succession that they form a solid block of data in which the temporal continuity from image to image is approximately equal to the spatial continuity in an individual image. The technique utilizes knowledge of the camera motion to form and analyze slices of this solid. These slices directly encode not only the three-dimensional positions of objects, but also such spatiotemporal events as the occlusion of one object by another. For straight-line camera motions, these slices have a simple linear structure that makes them easier to analyze. The analysis computes the three-dimensional positions of object features, marks occlusion boundaries on the objects, and builds a three-dimensional map of "free space." In our article, we first describe the application of this technique to a simple camera motion, and then show how projective duality is used to extend the analysis to a wider class of camera motions and object types that include curved and moving objects.
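A minimal sketch of the slice construction (synthetic data, illustrative names, not the authors' code): for lateral camera motion a point at depth Z traces a straight line in the (t, x) slice, and the line's slope recovers the depth:

```python
import numpy as np

# A camera translating laterally past a point at depth Z: the point's image
# velocity is vx = f * speed / Z pixels/frame, so it traces a straight line
# in the (t, x) slice of the spatiotemporal image block (the EPI).
f, speed, Z = 100.0, 1.0, 50.0
T, H, W = 16, 8, 64
vx = f * speed / Z                    # 2 pixels per frame
volume = np.zeros((T, H, W))
y0, x0 = 4, 2
for t in range(T):
    volume[t, y0, int(round(x0 + vx * t))] = 1.0

epi = volume[:, y0, :]                # epipolar-plane image: time vs. x
ts, xs = np.nonzero(epi)
slope = np.polyfit(ts, xs, 1)[0]      # pixels/frame along the EPI line
print(round(f * speed / slope, 1))    # 50.0: depth read off the line slope
```

Occlusions appear in such slices as one line terminating against another, which is what lets the analysis mark occlusion boundaries directly.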
Shape and motion from image streams: a factorization method, Part 3. Detection and Tracking ...
International Journal of Computer Vision, 1991
Cited by 217 (13 self)
The factorization method described in this series of reports requires an algorithm to track the motion of features in an image stream. Given the small interframe displacement made possible by the factorization approach, the best tracking method turns out to be the one proposed by Lucas and Kanade in 1981. The method defines the measure of match between fixed-size feature windows in the past and current frame as the sum of squared intensity differences over the windows. The displacement is then defined as the one that minimizes this sum. For small motions, a linearization of the image intensities leads to a Newton-Raphson style minimization. In this report, after rederiving the method in a physically intuitive way, we answer the crucial question of how to choose the feature windows that are best suited for tracking. Our selection criterion is based directly on the definition of the tracking algorithm, and expresses how well a feature can be tracked. As a result, the criterion is optimal by construction. We show by experiment that the performance of both the selection and the tracking algorithms is adequate for our factorization method, and we address the issue of how to detect occlusions. In the conclusion, we point out specific open questions for future research.
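The tracking step described here can be sketched as one linearized-SSD (Newton-Raphson) update on a synthetic sub-pixel shift, with a minimum-eigenvalue trackability check in the spirit of the report's selection criterion (illustrative code and names, not the authors'):

```python
import numpy as np

# Smooth analytic intensity surface so a sub-pixel shift is well defined.
def intensity(x, y, dx=0.0, dy=0.0):
    return np.sin(0.3 * (x + dx)) * np.cos(0.2 * (y + dy))

ys, xs = np.mgrid[0:15, 0:15].astype(float)
d_true = np.array([0.4, -0.3])         # true (dx, dy) window displacement
I = intensity(xs, ys)                  # feature window in the first frame
J = intensity(xs, ys, *d_true)         # same window in the next frame

# Central-difference gradients of I (unit spacing).
Ix = intensity(xs + 0.5, ys) - intensity(xs - 0.5, ys)
Iy = intensity(xs, ys + 0.5) - intensity(xs, ys - 0.5)

# One Newton-Raphson step of the linearized SSD: solve G d = e.
G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
e = np.array([np.sum(Ix * (J - I)), np.sum(Iy * (J - I))])
d = np.linalg.solve(G, e)

# Trackability: a window is well suited for tracking when G's smaller
# eigenvalue is large (the linear system above is then well conditioned).
lam_min = np.linalg.eigvalsh(G)[0]
print(np.allclose(d, d_true, atol=0.05), lam_min > 1.0)   # True True
```

In practice the update is iterated, warping the window by the current estimate each time, until the displacement converges.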