Results 1–10 of 25
Epipolar-plane image analysis: An approach to determining structure from motion
International Journal of Computer Vision, 1987
"... We present a technique for building a threedimensional description of a static scene from a dense sequence of images. These images are taken in such rapid succession that they form a solid block of data in which the temporal continuity from image to image is approximately equal to the spatial conti ..."
Cited by 224 (3 self)
Abstract:
We present a technique for building a three-dimensional description of a static scene from a dense sequence of images. These images are taken in such rapid succession that they form a solid block of data in which the temporal continuity from image to image is approximately equal to the spatial continuity in an individual image. The technique utilizes knowledge of the camera motion to form and analyze slices of this solid. These slices directly encode not only the three-dimensional positions of objects, but also such spatiotemporal events as the occlusion of one object by another. For straight-line camera motions, these slices have a simple linear structure that makes them easier to analyze. The analysis computes the three-dimensional positions of object features, marks occlusion boundaries on the objects, and builds a three-dimensional map of "free space." In our article, we first describe the application of this technique to a simple camera motion, and then show how projective duality is used to extend the analysis to a wider class of camera motions and object types that include curved and moving objects.
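The linear structure mentioned in the abstract can be illustrated with a toy sketch (not the authors' implementation): for a camera translating laterally at constant speed, a static point traces a straight line in an epipolar-plane image slice, and the line's slope encodes the point's depth. The focal length `f` and camera speed `v` below are assumed values for illustration only.

```python
import numpy as np

# Assumed parameters for the sketch: focal length in pixels and lateral
# camera speed in metres per frame. A static point at depth Z projects to
# x(t) = f * (X0 - v*t) / Z, a straight EPI line with slope dx/dt = -f*v/Z.
f = 500.0
v = 0.1

def depth_from_epi_slope(xs, ts, f, v):
    """Fit a line to an EPI trajectory; its slope gives depth Z = f*v/|dx/dt|."""
    slope = np.polyfit(ts, xs, 1)[0]  # dx/dt in pixels per frame
    return f * v / abs(slope)

# Synthetic EPI trajectory of a point at depth 2.0 m, lateral offset 0.4 m.
Z_true, X0 = 2.0, 0.4
ts = np.arange(10, dtype=float)
xs = f * (X0 - v * ts) / Z_true

print(depth_from_epi_slope(xs, ts, f, v))  # ≈ 2.0
```

In the noise-free case the fit is exact; with real trajectories the line fit averages measurement noise over all frames in the slice.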
Linear and incremental acquisition of invariant shape models from image sequences
Proc. Fourth Int'l Conf. on Computer Vision, 1993
"... We show how to automatically acquire a Euclidean shape representations of objects from noisy image sequences under weak perspective. The proposed method is linear and incremental, requiring no more than pseudoinverse. A nonlinear but numerically sound preprocessing stage is added to improve the a ..."
Cited by 55 (8 self)
Abstract:
We show how to automatically acquire Euclidean shape representations of objects from noisy image sequences under weak perspective. The proposed method is linear and incremental, requiring no more than a pseudoinverse. A nonlinear but numerically sound preprocessing stage is added to improve the accuracy of the results even further. Experiments show that attention to noise and computational techniques improves the shape results substantially with respect to previous methods proposed for ideal images.
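The abstract gives no algorithmic detail; as a hedged illustration of linear shape recovery under weak perspective in the same spirit, here is a minimal rank-3 factorization sketch (affine structure-from-motion style, not necessarily the authors' exact method) that recovers shape up to an affine ambiguity from noise-free orthographic projections:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N zero-mean 3-D points seen by F orthographic
# (weak-perspective) cameras; image coordinates are already centred.
N, F = 20, 6
S = rng.standard_normal((3, N))              # true shape
W = np.zeros((2 * F, N))                     # 2F x N measurement matrix
for i in range(F):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    W[2 * i:2 * i + 2] = Q[:2] @ S           # orthographic projection

# Linear step: W has rank 3, so an SVD factorization recovers the shape
# up to an (unknown) affine transform; a metric upgrade would follow.
U, s, Vt = np.linalg.svd(W)
S_affine = np.diag(s[:3]) @ Vt[:3]
W_hat = U[:, :3] @ S_affine

print(np.allclose(W, W_hat))  # True in the noise-free case
```

With noisy measurements the rank-3 truncation acts as a least-squares fit, which is where the paper's attention to noise and preprocessing becomes important.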
Recovering Heading for Visually-Guided Navigation
Vision Research, 1991
"... We present a model for recovering the direction of heading of an observer who is moving relative to a scene that may contain selfmoving objects. The model builds upon an algorithm proposed by Rieger and Lawton (1985), which is based on earlier work by LonguetHiggins and Prazdny (1981). The algo ..."
Cited by 23 (0 self)
Abstract:
We present a model for recovering the direction of heading of an observer who is moving relative to a scene that may contain self-moving objects. The model builds upon an algorithm proposed by Rieger and Lawton (1985), which is based on earlier work by Longuet-Higgins and Prazdny (1981). The algorithm uses velocity differences computed in regions of high depth variation to estimate the location of the focus of expansion, which indicates the observer's heading direction. We relate the behavior of the proposed model to psychophysical observations regarding the ability of human observers to judge their heading direction, and show how the model can cope with self-moving objects in the environment. We also discuss this model in the broader context of a navigational system that performs tasks requiring rapid sensing and response through the interaction of simple task-specific routines.
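A minimal sketch of the velocity-difference idea (a simplification, not the published algorithm): because the rotational flow component is independent of depth, the difference of flows across a depth discontinuity at (nearly) the same image point is purely translational and points along the ray through the focus of expansion; several such differences constrain the FOE by least squares. All numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

foe = np.array([1.5, -0.5])  # assumed true focus of expansion (image units)

def flow(p, Z, omega=0.05):
    """Toy flow: translational part depends on depth, rotational part does not."""
    return (p - foe) / Z + omega * np.array([-p[1], p[0]])

pts = rng.uniform(-5.0, 5.0, (8, 2))
# Flow differences across a depth discontinuity (depths 1.0 vs 3.0): the
# rotational component cancels, leaving a vector parallel to (p - foe).
diffs = np.array([flow(p, 1.0) - flow(p, 3.0) for p in pts])

# Each difference d at point p constrains cross(p - foe, d) = 0, i.e.
# d_y * foe_x - d_x * foe_y = d_y * p_x - d_x * p_y  -- linear in the FOE.
A = np.column_stack([diffs[:, 1], -diffs[:, 0]])
b = diffs[:, 1] * pts[:, 0] - diffs[:, 0] * pts[:, 1]
foe_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print(foe_est)  # ≈ [1.5, -0.5]
```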
Extracting Structure from Optical Flow Using the Fast Error Search Technique
International Journal of Computer Vision, 1998
"... In this paper, we present a robust and computationally efficient technique for estimating the focus of expansion (FOE) of an optical flow field, using fast partial search. For each candidate location on a discrete sampling of the image area, we generate a linear system of equations for determining ..."
Cited by 21 (0 self)
Abstract:
In this paper, we present a robust and computationally efficient technique for estimating the focus of expansion (FOE) of an optical flow field, using fast partial search. For each candidate location on a discrete sampling of the image area, we generate a linear system of equations for determining the remaining unknowns, viz. rotation and inverse depth. We compute the least squares error of the system without actually solving the equations, to generate an error surface that describes the goodness of fit across the hypotheses. Using Fourier techniques, we prove that given an N × N flow field, the FOE can be estimated in O(N² log N) operations. Since the resulting system is linear, bounded perturbations in the data lead to bounded errors. We support the theoretical development and proof of our algorithm with experiments on synthetic and real data. Through a series of experiments on synthetic data, we prove the correctness, robustness and operating envelope of our algorithm. We d...
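A much-simplified sketch of the candidate search (assuming pure translation, whereas the paper additionally solves for rotation and inverse depth at each candidate and accelerates the error evaluation with Fourier techniques): translational flow diverges radially from the FOE, so the flow component perpendicular to the ray from a candidate FOE yields an error surface whose minimum marks the estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic translational flow field: vectors diverge radially from the FOE,
# scaled by inverse depth (rotation is omitted in this simplified sketch).
foe_true = np.array([3.0, -2.0])
pts = rng.uniform(-10.0, 10.0, (100, 2))
depths = rng.uniform(1.0, 5.0, 100)
flow = (pts - foe_true) / depths[:, None]

def foe_error(c):
    """Error surface value: squared flow component perpendicular to rays from c."""
    r = pts - c
    cross = r[:, 0] * flow[:, 1] - r[:, 1] * flow[:, 0]
    return np.sum(cross**2 / np.maximum(np.sum(r**2, axis=1), 1e-9))

# Exhaustive evaluation over a discrete candidate grid, as in the paper's
# partial-search framing (but without the fast Fourier-based evaluation).
candidates = [np.array([x, y], dtype=float)
              for x in range(-10, 11) for y in range(-10, 11)]
best = min(candidates, key=foe_error)

print(best)  # recovers foe_true, [3., -2.]
```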
The Feasibility of Motion and Structure from Noisy Time-Varying Image Velocity Information
International Journal of Computer Vision, 1990
"... This research addresses the problem of noise sensitivity inherent in motion and structure algorithms. The motion and structure paradigm is a twostep rocess. First, we measure image velocities and, perhaps, their spatial and temporal derivatives, are obtained from timevarying image intensity data a ..."
Cited by 18 (8 self)
Abstract:
This research addresses the problem of noise sensitivity inherent in motion and structure algorithms. The motion and structure paradigm is a two-step process. First, image velocities and, perhaps, their spatial and temporal derivatives are measured from time-varying image intensity data; second, these data are used to compute the motion of a moving monocular observer in a stationary environment under perspective projection, relative to a single 3D planar surface. The first contribution of this article is an algorithm that uses time-varying image velocity information to compute the observer's translation and rotation and the normalized surface gradient of the 3D planar surface. The use of time-varying image velocity information is an important tool in obtaining a more robust motion and structure calculation. The second contribution of this article is an extensive error analysis of the motion and structure problem. Any motion and structure algorithm that uses image velocity information as its input should exhibit error sensitivity behavior compatible with the results reported here. We perform an average and worst case error analysis for four types of image velocity information: full and normal image velocities and full and normal sets of image velocity and its derivatives. (These derivatives are simply the coefficients of a truncated Taylor series expansion about some point in space and time.) The main issues we address here are: just how sensitive is a motion and structure computation in the presence of noisy input, or alternately, how accu...
The perceptual buildup of three-dimensional structure from motion
Perception & Psychophysics, 1990
"... This report describes research done within the Artificial Intelligence Laboratory and the Center for Biological Information Processing (Whitaker College) at the Massachusetts Institute of Technology. Support for the A.I. Laboratory 's artificial intelligence research is provided in part by the ..."
Cited by 18 (1 self)
Abstract:
This report describes research done within the Artificial Intelligence Laboratory and the Center for Biological Information Processing (Whitaker College) at the Massachusetts Institute of Technology. Support for the A.I. Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124. Support for this research is also provided by the Alfred P. Sloan Foundation, the Office of Naval Research, Cognitive and Neural Systems Division, the National Science Foundation and the McDonnell Foundation.
Segmentation from Motion: Combining Gabor- and Mallat-Wavelets to Overcome the Aperture and Correspondence Problems
Pattern Recognition, 1999
"... A new method for segmentation from motion is presented, which is designed to be part of a general objectrecognition system. The key idea is to integrate information from Gabor and Mallatwavelet transforms of an image sequence to overcome the aperture and the correspondence problem. It is assumed ..."
Cited by 15 (0 self)
Abstract:
A new method for segmentation from motion is presented, which is designed to be part of a general object-recognition system. The key idea is to integrate information from Gabor- and Mallat-wavelet transforms of an image sequence to overcome the aperture and the correspondence problem. It is assumed that objects move fronto-parallel. Gabor-wavelet responses allow accurate estimation of image flow vectors with low spatial resolution. A histogram over this image flow field is evaluated and its local maxima provide a set of motion hypotheses. These serve to reduce the correspondence problem occurring in utilizing the Mallat-wavelet transform, which provides the required high spatial resolution in segmentation. Segmentation reliability is improved by integration over time. The system can segment several small, disconnected, and open-worked objects, such as dot patterns. Several examples demonstrate the performance of the system and show that the algorithm behaves reasonably well, even if the as...
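The histogram step can be sketched as follows (details such as bin sizes and mode-finding are assumed, since the abstract does not fix them): build a 2-D histogram over a flow field and take its most populated bins as fronto-parallel motion hypotheses.

```python
import numpy as np

# Synthetic flow field: two fronto-parallel moving regions on a static background.
flow = np.zeros((60, 60, 2))
flow[10:25, 10:25] = (3.0, 0.0)   # object 1 moves right
flow[35:55, 30:50] = (0.0, -2.0)  # object 2 moves down

u, v = flow[..., 0].ravel(), flow[..., 1].ravel()
hist, uedges, vedges = np.histogram2d(u, v, bins=9,
                                      range=[[-4.5, 4.5], [-4.5, 4.5]])

# Motion hypotheses: centres of the most populated histogram bins
# (here simply the top 3; a real system would detect local maxima).
hypotheses = []
for flat in np.argsort(hist, axis=None)[::-1][:3]:
    i, j = np.unravel_index(flat, hist.shape)
    hypotheses.append((0.5 * (uedges[i] + uedges[i + 1]),
                       0.5 * (vedges[j] + vedges[j + 1])))

print(hypotheses)  # background (0, 0) plus the two object motions
```

Each hypothesis then seeds a correspondence search at high spatial resolution, which is the role the Mallat-wavelet transform plays in the paper.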
Basic Visual Capabilities
1993
"... tive Vision and especially Prof. Ruzena Bajcsy, Henrik Christianssen, Prof. Jim Crowley, Prof. Randal Nelson and Prof. Giulio Sandini were most useful in the development of my ideas. The help of Kourosh Pahlavan and Prof. JanOlof Eklundh in gathering image data with the KTHhead is highly appreciat ..."
Cited by 6 (1 self)
Abstract:
tive Vision and especially Prof. Ruzena Bajcsy, Henrik Christianssen, Prof. Jim Crowley, Prof. Randal Nelson and Prof. Giulio Sandini were most useful in the development of my ideas. The help of Kourosh Pahlavan and Prof. Jan-Olof Eklundh in gathering image data with the KTH-head is highly appreciated. Especially I would like to thank my family, Willibald and Dietlinde, Barbara, Elke, Wolfgang and Magdalena for their love and support throughout the years. This work would not have been possible without the generous support of the Österreichisches Bundesministerium für Wissenschaft und Forschung, the Österreichische Bundeskammer der Gewerblichen Wirtschaft and the Directorate of Robotics and Machine Intelligence of the National Science Foundation.
Three-Dimensional Ego-Motion Estimation From Motion Fields Observed With Multiple Cameras
"... In this paper, we present a robust method to estimate the threedimensional egomotion of an observer moving in a static environment. This method combines the optical flow fields observed with multiple cameras to avoid the ambiguity of 3D motion recovery due to small field of view and small depth var ..."
Cited by 5 (0 self)
Abstract:
In this paper, we present a robust method to estimate the three-dimensional ego-motion of an observer moving in a static environment. This method combines the optical flow fields observed with multiple cameras to avoid the ambiguity of 3D motion recovery due to small field of view and small depth variation in the field of view. Two residual functions are proposed to estimate the ego-motion for different situations. In the non-degenerate case, both the direction and the scale of the three-dimensional rotation and translation can be obtained. In the degenerate case, rotation can still be obtained but translation can only be obtained up to a scale factor. Both the number of cameras and the camera placement affect the accuracy of the estimated ego-motion. We compare different camera configurations through simulation. Some results of real-world experiments are also given to demonstrate the benefits of our method. Key words: Ego-motion estimation; Multiple sensors; Optical flow