Results 1 - 10 of 29
Structure from motion causally integrated over time
IEEE Trans. Pattern Analysis & Machine Intelligence
"... AbstractÐWe describe an algorithm for reconstructing threedimensional structure and motion causally, in real time from monocular sequences of images. We prove that the algorithm is minimal and stable, in the sense that the estimation error remains bounded with probability one throughout a sequence ..."
Abstract

Cited by 79 (4 self)
 Add to MetaCart
Abstract—We describe an algorithm for reconstructing three-dimensional structure and motion causally, in real time from monocular sequences of images. We prove that the algorithm is minimal and stable, in the sense that the estimation error remains bounded with probability one throughout a sequence of arbitrary length. We discuss a scheme for handling occlusions (point features appearing and disappearing) and drift in the scale factor. These issues are crucial for the algorithm to operate in real time on real scenes. We describe in detail the implementation of the algorithm, which runs on a personal computer and has been made available to the community. We report the performance of our implementation on a few representative long sequences of real and synthetic images. The algorithm, which has been tested extensively over the course of the past few years, exhibits honest performance when the scene contains at least 20-40 points with high contrast, when the relative motion is "slow" compared to the sampling frequency of the frame grabber (30Hz), and the lens aperture is "large enough" (typically more than 30° of visual field). Index Terms—Structure from motion, real-time vision, shape, geometry.
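The "causal integration over time" described above is a recursive predict/update cycle. As a minimal sketch of that idea, here is a generic linear Kalman filter tracking a noisy constant measurement; the paper's algorithm is a nonlinear filter over structure and motion states, so the scalar model below is purely illustrative, not the authors' method.

```python
import numpy as np

# One predict/update step of a linear Kalman filter: the causal
# (recursive) estimation pattern the abstract refers to, in its
# simplest linear form.
def kalman_step(x, P, z, F, H, Q, R):
    # Predict: propagate state and covariance through the dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy model: a constant scalar observed with noise; the estimate
# converges toward the true value and its covariance stays bounded.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[1e-6]]); R = np.array([[0.1]])
x = np.array([0.0]); P = np.array([[1.0]])
rng = np.random.default_rng(0)
for _ in range(200):
    z = np.array([5.0]) + rng.normal(0.0, 0.1, 1)
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(float(x[0]))
```

The bounded-error guarantee the abstract proves concerns the full nonlinear filter; the sketch only shows the shared recursive structure.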
3D photography on your desk
, 1998
"... A simple and inexpensive approach for extracting the threedimensional shape of objects is presented. It is based on `weak structured lighting'; it differs from other conventional structured lighting approaches in that it requires very little hardware besides the camera: a desklamp, a pencil and a ..."
Abstract

Cited by 67 (4 self)
 Add to MetaCart
A simple and inexpensive approach for extracting the three-dimensional shape of objects is presented. It is based on `weak structured lighting'; it differs from other conventional structured lighting approaches in that it requires very little hardware besides the camera: a desk lamp, a pencil, and a checkerboard. The camera faces the object, which is illuminated by the desk lamp. The user moves a pencil in front of the light source, casting a moving shadow on the object. The 3D shape of the object is extracted from the spatial and temporal location of the observed shadow. Experimental results are presented on three different scenes, demonstrating that the error in reconstructing the surface is less than 1%. 1 Introduction and Motivation. One of the most valuable functions of our visual system is informing us about the shape of the objects that surround us. Manipulation, recognition, and navigation are amongst the tasks that we can better accomplish by seeing shape. Ever-faster computers, ...
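The geometric core of the weak-structured-lighting method is a ray-plane intersection: each shadow-edge pixel back-projects to a camera ray, and the pencil's shadow defines a plane in space. A toy version, with a made-up plane and ray (not the paper's calibration procedure):

```python
import numpy as np

# Triangulation by ray-plane intersection: with the camera at the
# origin, a pixel back-projects to the ray t * ray_dir, and the shadow
# plane satisfies n . X = d; the 3D point is their intersection.
def ray_plane_intersect(ray_dir, plane_n, plane_d):
    t = plane_d / (plane_n @ ray_dir)
    return t * ray_dir

n = np.array([0.0, 0.0, 1.0])     # hypothetical shadow-plane normal
d = 2.0                           # hypothetical plane offset: z = 2
ray = np.array([0.1, -0.2, 1.0])  # a pixel back-projected to a ray
P = ray_plane_intersect(ray, n, d)
print(P)  # the surface point at depth z = 2 along the ray
```

In the actual system the shadow plane is recovered per frame from the lamp position and the shadow's trace on the desk; only the intersection step is shown here.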
Stereoscopic Segmentation
, 2001
"... We cast the problem of multiframe stereo reconstruction of a smooth shape as the global region segmentation of a collection of images of the scene. Dually, the problem of segmenting multiple calibrated images of an object becomes that of estimating the solid shape that gives rise to such images. We ..."
Abstract

Cited by 66 (17 self)
 Add to MetaCart
We cast the problem of multi-frame stereo reconstruction of a smooth shape as the global region segmentation of a collection of images of the scene. Dually, the problem of segmenting multiple calibrated images of an object becomes that of estimating the solid shape that gives rise to such images. We assume that the radiance has smooth statistics. This assumption covers Lambertian scenes with smooth or constant albedo as well as fine homogeneous textures, which are known challenges to stereo algorithms based on local correspondence. We pose the segmentation problem within a variational framework and use fast level set methods to approximate the optimal solution numerically. Our algorithm does not work in the presence of strong textures, where traditional reconstruction algorithms do; it enjoys significant robustness to noise under the assumptions it is designed for.
A geometric approach to shape from defocus
 IEEE Trans. Pattern Anal. Mach. Intell
, 2005
"... Abstract—We introduce a novel approach to shape from defocus, i.e., the problem of inferring the threedimensional (3D) geometry of a scene from a collection of defocused images. Typically, in shape from defocus, the task of extracting geometry also requires deblurring the given images. A common app ..."
Abstract

Cited by 43 (1 self)
 Add to MetaCart
Abstract—We introduce a novel approach to shape from defocus, i.e., the problem of inferring the three-dimensional (3D) geometry of a scene from a collection of defocused images. Typically, in shape from defocus, the task of extracting geometry also requires deblurring the given images. A common approach to bypass this task relies on approximating the scene locally by a plane parallel to the image (the so-called equifocal assumption). We show that this approximation is indeed not necessary, as one can estimate 3D geometry while avoiding deblurring without strong assumptions on the scene. Solving the problem of shape from defocus requires modeling how light interacts with the optics before reaching the imaging surface. This interaction is described by the so-called point spread function (PSF). When the form of the PSF is known, we propose an optimal method to infer 3D geometry from defocused images that involves computing orthogonal operators which are regularized via functional singular value decomposition. When the form of the PSF is unknown, we propose a simple and efficient method that first learns a set of projection operators from blurred images and then uses these operators to estimate the 3D geometry of the scene from novel blurred images. Our experiments on both real and synthetic images show that the performance of the algorithm is relatively insensitive to the form of the PSF. Our general approach is to minimize the Euclidean norm of the difference between the estimated images and the observed images. The method is geometric in that we reduce the minimization to performing projections onto linear subspaces, by using inner product structures on both infinite- and finite-dimensional Hilbert spaces. Both proposed algorithms involve only simple matrix-vector multiplications which can be implemented in real time.
Index Terms—Shape from defocus, depth from defocus, blind deconvolution, image processing, deblurring, shape, 3D reconstruction, shape estimation, image restoration, learning subspaces.
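The "projections onto linear subspaces" idea above can be illustrated with a finite-dimensional toy: each candidate depth is associated with a subspace that defocused measurements should lie in, and the estimated depth is the one minimizing the orthogonal residual. The bases here are random stand-ins, not the paper's learned or SVD-derived operators.

```python
import numpy as np

rng = np.random.default_rng(1)
depths = [1.0, 2.0, 3.0]
# Hypothetical per-depth orthonormal bases (in the paper these come
# from a functional SVD of the PSF, or are learned from blurred images).
bases = {d: np.linalg.qr(rng.normal(size=(20, 5)))[0] for d in depths}

def estimate_depth(y):
    # Residual ||y - B B^T y|| of the observation against each
    # depth-indexed subspace; pick the depth with the smallest residual.
    residuals = {d: np.linalg.norm(y - B @ (B.T @ y))
                 for d, B in bases.items()}
    return min(residuals, key=residuals.get)

# A measurement generated from the depth-2 subspace is classified
# correctly, with (numerically) zero residual there.
y = bases[2.0] @ rng.normal(size=5)
print(estimate_depth(y))
```

Only matrix-vector products are involved, matching the abstract's claim that the per-depth test is cheap enough for real-time use.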
Reducing "Structure From Motion": a General Framework for Dynamic Vision  Part 2: Experimental Evaluation
IEEE Trans. PAMI
, 1998
"... A number of methods have been proposed in the literature for estimating scenestructure and egomotion from a sequence of images using dynamical models. Despite the fact that all methods may be derived from a "natural " dynamical model within a unified framework, from an engineering perspec ..."
Abstract

Cited by 33 (2 self)
 Add to MetaCart
A number of methods have been proposed in the literature for estimating scene structure and ego-motion from a sequence of images using dynamical models. Despite the fact that all methods may be derived from a "natural" dynamical model within a unified framework, from an engineering perspective there are a number of trade-offs that lead to different strategies depending upon the applications and the goals one is targeting. We want to characterize and compare the properties of each model so that the engineer may choose the one best suited to the specific application. We analyze the properties of filters derived from each dynamical model under a variety of experimental conditions, assessing the accuracy of the estimates, their robustness to measurement noise, their sensitivity to initial conditions and visual angle, the effects of the bas-relief ambiguity and occlusions, and their dependence upon the number of image measurements and their sampling rate.
Optimal Structure From Motion: Local Ambiguities and Global Estimates
, 1998
"... "Structure From Motion" (SFM) refers to the problem of estimating threedimensional information about the environment from the motion of its twodimensional projection onto a surface (for instance the retina). We present an analysis of SFM from the point of view of noise. This analysis results in al ..."
Abstract

Cited by 33 (1 self)
 Add to MetaCart
"Structure From Motion" (SFM) refers to the problem of estimating threedimensional information about the environment from the motion of its twodimensional projection onto a surface (for instance the retina). We present an analysis of SFM from the point of view of noise. This analysis results in algorithms that are provably convergent and provably optimal with respect to a chosen norm. In particular, we cast SFM as a nonlinear optimization problem and define a bilinear projection iteration that converges to fixed points of a certain costfunction. We then show that such fixed points are "fundamental", i.e. intrinsic to the problem of SFM and not an artifact introduced by our algorithms. We classify and interpret geometrically local extrema, and we argue that they correspond to phenomena observed in visual psychophysics. Finally, we show under what conditions it is possible  given convergence to a local extremum  to "jump" to the valley containing the optimum; this leads us to sugges...
Estimation of 3D surface shape and smooth radiance from 2D images: A level set approach
Journal of Scientific Computing
, 2003
"... ..."
Optimal Structure from Motion: Local Ambiguities and Global Estimates
, 2000
"... “Structure From Motion” (SFM) refers to the problem of estimating spatial properties of a threedimensional scene from the motion of its projection onto a twodimensional surface, such as the retina. We present an analysis of SFM which results in algorithms that are provably convergent and provably o ..."
Abstract

Cited by 23 (5 self)
 Add to MetaCart
“Structure From Motion” (SFM) refers to the problem of estimating spatial properties of a three-dimensional scene from the motion of its projection onto a two-dimensional surface, such as the retina. We present an analysis of SFM which results in algorithms that are provably convergent and provably optimal with respect to a chosen norm. In particular, we cast SFM as the minimization of a high-dimensional quadratic cost function, and show how it is possible to reduce it to the minimization of a two-dimensional function whose stationary points are in one-to-one correspondence with those of the original cost function. As a consequence, we can plot the reduced cost function and characterize the configurations of structure and motion that result in local minima. As an example, we discuss two local minima that are associated with well-known visual illusions. Knowledge of the topology of the residual in the presence of such local minima allows us to formulate minimization algorithms that, in addition to provably converging to stationary points of the original cost function, can switch between different local extrema in order to converge to the global minimum, under suitable conditions. We also offer an experimental study of the distribution of the estimation error in the presence of noise in the measurements, and characterize the sensitivity of the algorithm using the structure of Fisher’s Information matrix.
3D Motion and Structure from 2D Motion Causally Integrated over Time: Implementation
 In IEEE Trans. Robotics and Automation
, 2000
"... The causal estimation of threedimensional motion from a sequence of twodimensional images can be posed as a nonlinear filtering problem. We describe the implementation of an algorithm whose uniform observability, minimal realization and stability have been proven analytically in [5]. We discuss a ..."
Abstract

Cited by 20 (1 self)
 Add to MetaCart
The causal estimation of three-dimensional motion from a sequence of two-dimensional images can be posed as a nonlinear filtering problem. We describe the implementation of an algorithm whose uniform observability, minimal realization, and stability have been proven analytically in [5]. We discuss a scheme for handling occlusions, drift in the scale factor, and tuning of the filter. We also present an extension to partially calibrated camera models and prove its observability. We report the performance of our implementation on a few long sequences of real images. More importantly, however, we have made our real-time implementation, which runs on a personal computer, available to the public for first-hand testing.
Recursive 3D Visual Motion Estimation Using Subspace Constraints
"... The 3D motion of a camera within a static environment produces a sequence of timevarying images that can be used for reconstructing the relative motion between the scene and the viewer. The problem of reconstructing rigid motion from a sequence of perspective images may be characterized as the est ..."
Abstract

Cited by 19 (2 self)
 Add to MetaCart
The 3D motion of a camera within a static environment produces a sequence of time-varying images that can be used for reconstructing the relative motion between the scene and the viewer. The problem of reconstructing rigid motion from a sequence of perspective images may be characterized as the estimation of the state of a nonlinear dynamical system, which is defined by the rigidity constraint and the perspective measurement map. The time-derivative of the measured output of such a system, which is called the "2D motion field" and is approximated by the "optical flow", is bilinear in the motion parameters, and may be used to specify a subspace constraint on the direction of heading independent of rotation and depth, and a pseudo-measurement for the rotational velocity as a function of the estimated heading. The subspace constraint may be viewed as an implicit dynamical model with parameters on a differentiable manifold, and the visual motion estimation problem may be cast in a system...
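The bilinearity mentioned above can be made concrete with the standard instantaneous perspective model, in which the motion field at image point (x, y) is u = (1/Z) A(x, y) v + B(x, y) w for translation v, rotation w, and depth Z. Once the heading v (and depths) are fixed, the field is linear in w, which is exactly what yields a pseudo-measurement for rotation. The synthetic data below illustrates this structure; it is not the paper's recursive estimator.

```python
import numpy as np

# Standard instantaneous-motion interaction matrices for a point (x, y)
# in normalized image coordinates: translational and rotational parts.
def A(x, y):
    return np.array([[-1.0, 0.0, x], [0.0, -1.0, y]])

def B(x, y):
    return np.array([[x * y, -(1 + x * x), y],
                     [1 + y * y, -x * y, -x]])

rng = np.random.default_rng(2)
pts = rng.uniform(-0.5, 0.5, size=(50, 2))
Z = rng.uniform(2.0, 5.0, size=50)
v = np.array([0.1, 0.0, 1.0])       # true translation (heading)
w = np.array([0.01, -0.02, 0.005])  # true rotational velocity

# Synthesize the (noiseless) 2D motion field, bilinear in (v, w).
flow = np.concatenate([A(x, y) @ v / z + B(x, y) @ w
                       for (x, y), z in zip(pts, Z)])

# Given the heading and depths, rotation follows from linear least
# squares: the pseudo-measurement for w the abstract refers to.
M = np.vstack([B(x, y) for x, y in pts])
b = flow - np.concatenate([A(x, y) @ v / z for (x, y), z in zip(pts, Z)])
w_est, *_ = np.linalg.lstsq(M, b, rcond=None)
print(np.allclose(w_est, w))
```

With noiseless synthetic flow the recovery is exact; the paper's contribution is handling this structure recursively and on the appropriate manifold, which the sketch does not attempt.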