Results 1–10 of 30
Determining Optical Flow
 Artificial Intelligence
, 1981
Abstract

Cited by 1736 (7 self)
Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.
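The scheme this abstract describes can be sketched in a few lines. The following is a minimal NumPy illustration of the Horn-Schunck idea (brightness constancy plus a smoothness constraint, solved by Jacobi-style iteration), not the authors' implementation; the periodic boundary handling, parameters, and toy input are simplifications chosen for brevity.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Minimal sketch of the Horn-Schunck scheme.

    Brightness constancy, Ix*u + Iy*v + It = 0, gives one equation per pixel
    for the two unknowns (u, v); the smoothness term, weighted by alpha,
    supplies the second constraint.
    """
    I1, I2 = I1.astype(float), I2.astype(float)
    Ix = np.gradient(I1, axis=1)          # spatial derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                          # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # 4-neighbour average of the current flow (periodic boundaries for brevity)
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # closed-form update balancing the data and smoothness terms
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v

# toy input: a smooth blob shifted one pixel to the right
y, x = np.mgrid[0:32, 0:32]
I1 = np.exp(-((x - 16)**2 + (y - 16)**2) / 40.0)
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
```

The recovered flow underestimates the true shift in low-gradient regions, which is the expected behaviour of the smoothness prior on a coarse toy example.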
Epipolar-plane image analysis: An approach to determining structure from motion
 International Journal of Computer Vision
, 1987
Abstract

Cited by 208 (3 self)
We present a technique for building a three-dimensional description of a static scene from a dense sequence of images. These images are taken in such rapid succession that they form a solid block of data in which the temporal continuity from image to image is approximately equal to the spatial continuity in an individual image. The technique utilizes knowledge of the camera motion to form and analyze slices of this solid. These slices directly encode not only the three-dimensional positions of objects, but also such spatiotemporal events as the occlusion of one object by another. For straight-line camera motions, these slices have a simple linear structure that makes them easier to analyze. The analysis computes the three-dimensional positions of object features, marks occlusion boundaries on the objects, and builds a three-dimensional map of "free space." In our article, we first describe the application of this technique to a simple camera motion, and then show how projective duality is used to extend the analysis to a wider class of camera motions and object types that include curved and moving objects.
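The linear structure of these slices can be illustrated numerically: for a camera translating sideways, each scene point traces a straight line in the epipolar-plane image whose slope is inversely proportional to its depth. All numbers below (focal length, speed, point positions) are invented for illustration, not taken from the paper.

```python
import numpy as np

f, V = 500.0, 0.1                     # focal length (pixels), camera speed (m/frame)
t = np.arange(30)                     # frame indices
true_Z = np.array([2.0, 5.0, 10.0])   # depths of three scene points (m)
X0 = np.array([0.5, -0.3, 1.0])       # lateral positions at t = 0 (m)

recovered_Z = []
for X, Z in zip(X0, true_Z):
    xt = f * (X - V * t) / Z          # image trajectory: a straight line in (t, x)
    slope = np.polyfit(t, xt, 1)[0]   # slope = -f*V/Z
    recovered_Z.append(-f * V / slope)
recovered_Z = np.array(recovered_Z)
```

Fitting the slope of each trajectory and inverting it recovers the depths exactly in this noise-free setting, which is the core of the EPI analysis for straight-line motion.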
Maximizing Rigidity: The Incremental Recovery Of 3D Structure From Rigid And ...
 Perception
, 1983
Abstract

Cited by 77 (1 self)
The human visual system can extract 3D shape information of unfamiliar moving objects from their projected transformations. Computational studies of this capacity have established that 3D shape can be extracted correctly from a brief presentation, provided that the moving objects are rigid. The human visual system requires a longer temporal extension, but it can cope with considerable deviations from rigidity. It is shown how the 3D structure of rigid and nonrigid objects can be recovered by maintaining an internal model of the viewed object and modifying it at each instant by the minimal nonrigid change that is sufficient to account for the observed transformation. The results of applying this incremental rigidity scheme to rigid and nonrigid objects in motion are described and compared with human perceptions.
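One update of such an incremental scheme can be sketched as follows: given new orthographic image coordinates, revise the model's depths by the change that best restores its pairwise 3D distances. This is a toy sketch using plain gradient descent; the cost function, step size, and test configuration are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def incremental_rigidity_step(model, observed_xy, lr=0.1, n_iter=5000):
    """Revise depths so pairwise 3D distances match the internal model.

    The observed (x, y) are fixed (orthographic projection); only the depths
    z are adjusted, by gradient descent on the distance-preservation cost.
    """
    D = np.linalg.norm(model[:, None] - model[None, :], axis=-1)  # model distances
    z = model[:, 2].copy()                                        # start from old depths
    for _ in range(n_iter):
        P = np.column_stack([observed_xy, z])
        d = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
        np.fill_diagonal(d, 1.0)                                  # avoid divide-by-zero
        err = d - D
        np.fill_diagonal(err, 0.0)
        # gradient of sum_{i<j} (d_ij - D_ij)^2 with respect to each depth z_i
        grad = 2.0 * np.sum(err / d * (z[:, None] - z[None, :]), axis=1)
        z -= lr * grad
    return z

rng = np.random.default_rng(0)
model = rng.standard_normal((5, 3))               # current internal 3D model
theta = np.deg2rad(5.0)                           # small rigid rotation about y
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
moved = model @ R.T                               # object moves rigidly
z_new = incremental_rigidity_step(model, moved[:, :2])
```

Because the true motion is rigid, depths that restore the model's distances exist, and starting from the previous depths keeps the descent near the correct solution rather than its mirror reflection.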
Comparison of Approaches to Egomotion Computation
 In CVPR
, 1996
Abstract

Cited by 59 (0 self)
We evaluated six algorithms for computing egomotion from image velocities. We established benchmarks for quantifying bias and sensitivity to noise, and for quantifying the convergence properties of those algorithms that require numerical search. Our simulations reveal some interesting and surprising findings. First, it is often written in the literature that the egomotion problem is difficult because translation (e.g., along the X-axis) and rotation (e.g., about the Y-axis) produce similar image velocities. We found, to the contrary, that the bias and sensitivity of our six algorithms are totally invariant with respect to the axis of rotation. Second, it is also believed by some that fixating helps to make the egomotion problem easier. We found, to the contrary, that fixating does not help when the noise is independent of the image velocities. Fixation does help if the noise is proportional to speed, but this is only for the trivial reason that the speeds are slower under fixatio...
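The classical translation/rotation confusion the paper re-examines can be seen in a few lines: for a narrow field of view over a constant-depth scene, X-translation and Y-rotation produce nearly parallel horizontal velocity fields. All numbers here are illustrative, not the paper's benchmark settings.

```python
import numpy as np

f = 500.0                                  # focal length in pixels
x = np.linspace(-50, 50, 101)              # narrow field of view (about 11 degrees)
Z = 10.0                                   # constant scene depth
u_trans = -f * 1.0 / Z * np.ones_like(x)   # horizontal flow from X-translation
u_rot = -(f + x**2 / f) * 1.0              # horizontal flow from Y-rotation

# cosine similarity between the two fields: close to 1 means nearly parallel
cos_sim = np.dot(u_trans, u_rot) / (np.linalg.norm(u_trans) * np.linalg.norm(u_rot))
```

The similarity approaches 1 as the field of view shrinks, because the quadratic term x**2/f in the rotational flow becomes negligible relative to f.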
Entropy-based Gaze Planning
 Image and Vision Computing
, 1999
Abstract

Cited by 26 (0 self)
This paper describes an algorithm for recognizing known objects in an unstructured environment (e.g. landmarks) from measurements acquired with a single monochrome television camera mounted on a mobile observer. The approach is based on the concept of an entropy map, which is used to guide the mobile observer along an optimal trajectory that minimizes the ambiguity of recognition as well as the amount of data that must be gathered. Recognition itself is based on the optical flow signatures that result from the camera motion, signatures that are inherently ambiguous due to the confounding of motion, structure and imaging parameters. We show how gaze planning partially alleviates this problem by generating trajectories that maximize discriminability. A sequential Bayes approach is used to handle the remaining ambiguity by accumulating evidence for different object hypotheses over time until a clear assertion can be made. Results from an experimental recognition system using a gantry-mounted television camera are presented to show the effectiveness of the algorithm on a large class of common objects.
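The sequential Bayes accumulation described above can be sketched as a recursive posterior update: each frame's evidence is ambiguous on its own, but repeated observations drive the posterior toward the correct hypothesis. The Gaussian "signature" likelihoods and all parameters below are hypothetical stand-ins for the paper's flow signatures.

```python
import numpy as np

rng = np.random.default_rng(1)
means = np.array([0.0, 1.0])          # flow-signature means for objects A and B
sigma = 1.0                           # heavy overlap: a single frame is ambiguous
posterior = np.array([0.5, 0.5])      # uniform prior over the two hypotheses

for _ in range(50):
    obs = rng.normal(means[0], sigma)                   # frames come from object A
    like = np.exp(-(obs - means)**2 / (2 * sigma**2))   # per-hypothesis likelihood
    posterior = posterior * like                        # Bayes update
    posterior /= posterior.sum()                        # renormalise each step
```

A recognition system would stop as soon as the leading posterior clears a decision threshold, which is the "clear assertion" criterion the abstract mentions.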
Observability of 3D Motion
 International Journal of Computer Vision
, 2000
Abstract

Cited by 21 (13 self)
This paper examines the inherent difficulties in observing 3D rigid motion from image sequences. It does so without considering a particular estimator. Instead, it presents a statistical analysis of all the possible computational models which can be used for estimating 3D motion from an image sequence. These computational models are classified according to the mathematical constraints that they employ and the characteristics of the imaging sensor (restricted field of view and full field of view). Regarding the mathematical constraints, there exist two principles relating a sequence of images taken by a moving camera. One is the "epipolar constraint," applied to motion fields, and the other the "positive depth" constraint, applied to normal flow fields. 3D motion estimation amounts to optimizing these constraints over the image. A statistical modeling of these constraints leads to functions which are studied with regard to their topographic structure, specifically as regards the errors ...
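The epipolar constraint mentioned above can be checked numerically: for a rigid motion (R, t), corresponding normalised image points satisfy x2' E x1 = 0 with E = [t]x R. The scene, motion, and values below are randomly generated for illustration; the positive-depth condition is also verifiable on the same data.

```python
import numpy as np

rng = np.random.default_rng(2)
X1 = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))   # points in front of the camera

theta = np.deg2rad(3.0)                                  # small rotation about y
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.2, -0.1, 0.05])
X2 = X1 @ R.T + t                                        # rigidly moved points

# normalised homogeneous image coordinates in both frames
x1 = np.column_stack([X1[:, 0] / X1[:, 2], X1[:, 1] / X1[:, 2], np.ones(20)])
x2 = np.column_stack([X2[:, 0] / X2[:, 2], X2[:, 1] / X2[:, 2], np.ones(20)])

tx = np.array([[0, -t[2], t[1]],
               [t[2], 0, -t[0]],
               [-t[1], t[0], 0]])                        # [t]x, cross-product matrix
E = tx @ R                                               # essential matrix
residuals = np.einsum('ni,ij,nj->n', x2, E, x1)          # should vanish
```

The residuals vanish because X2 = R X1 + t lies in the plane spanned by t and R X1, which is the geometric content of the constraint the paper analyses statistically.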
Correspondence and affine shape from two orthographic views: Motion and Recognition
 Artificial Intelligence Laboratory, Massachusetts Institute of Technology
, 1991
Abstract

Cited by 19 (8 self)
The paper presents a simple model for recovering affine shape and correspondence from two orthographic views of a three-dimensional object. The paper has two parts. In the first part it is shown that four corresponding points along two orthographic views, taken under similar illumination conditions, determine affine shape and correspondence for all other points. In the second part it is shown that the scheme is useful for purposes of visual recognition by generating novel views of an object given two model views in full correspondence and four corresponding points between the model views and the novel view. It is also shown that the scheme can handle objects with smooth boundaries, to a good approximation, without introducing any modifications or additional model views.
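The role of the four points can be illustrated numerically: under orthography every image coordinate is an affine function of (X, Y, Z), so a novel view's coordinates are a linear combination of coordinates measured in two model views, and four corresponding points fix the coefficients. The random views below are illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.standard_normal((12, 3))                    # 3D points on the object

def ortho_view(P, rng):
    """A generic orthographic view: random 2x3 projection plus 2D offset."""
    A = rng.standard_normal((2, 3))
    b = rng.standard_normal(2)
    return P @ A.T + b

v1, v2, v3 = (ortho_view(P, rng) for _ in range(3))  # two model views + novel view

# basis measurements (x1, y1, x2, 1) for each point span the affine functions of 3D
M = np.column_stack([v1[:, 0], v1[:, 1], v2[:, 0], np.ones(len(P))])
# four corresponding points determine the transfer coefficients
coeff_x = np.linalg.solve(M[:4], v3[:4, 0])
coeff_y = np.linalg.solve(M[:4], v3[:4, 1])
pred = np.column_stack([M @ coeff_x, M @ coeff_y])   # re-projected novel view
```

Once the coefficients are fixed by the four correspondences, every remaining point of the novel view is predicted exactly, which is the view-synthesis use described in the second part of the abstract.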
On The Geometry Of Visual Correspondence
 International Journal of Computer Vision
, 1994
Abstract

Cited by 19 (12 self)
Image displacement fields (optical flow fields, stereo disparity fields, normal flow fields) due to rigid motion possess a global geometric structure which is independent of the scene in view. Motion vectors of certain lengths and directions are constrained to lie on the imaging surface at particular loci whose location and form depend solely on the 3D motion parameters. If optical flow fields or stereo disparity fields are considered, then equal vectors are shown to lie on conic sections. Similarly, for normal motion fields, equal vectors lie within regions whose boundaries also constitute conics. By studying various properties of these curves and regions and their relationships, a characterization of the structure of rigid motion fields is given. The goal of this paper is to introduce a concept underlying the global structure of image displacement fields. This concept gives rise to various constraints that could form the basis of algorithms for the recovery of visual information f...
Recovering Heading for Visually-Guided Navigation
 Vision Research
, 1991
Abstract

Cited by 18 (0 self)
We present a model for recovering the direction of heading of an observer who is moving relative to a scene that may contain self-moving objects. The model builds upon an algorithm proposed by Rieger and Lawton (1985), which is based on earlier work by Longuet-Higgins and Prazdny (1981). The algorithm uses velocity differences computed in regions of high depth variation to estimate the location of the focus of expansion, which indicates the observer's heading direction. We relate the behavior of the proposed model to psychophysical observations regarding the ability of human observers to judge their heading direction, and show how the model can cope with self-moving objects in the environment. We also discuss this model in the broader context of a navigational system that performs tasks requiring rapid sensing and response through the interaction of simple task-specific routines.
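The geometric fact behind the velocity-difference step can be verified in a few lines: at an image point where two depths are visible, the difference of the two flow vectors cancels the rotational component exactly and points along the line through the focus of expansion (FOE). Units are normalised (focal length 1) and the motion values are illustrative.

```python
import numpy as np

t = np.array([0.3, -0.2, 1.0])          # observer translation (tz != 0)
w = np.array([0.02, -0.01, 0.03])       # observer rotation
foe = t[:2] / t[2]                      # focus of expansion

def flow(x, y, Z):
    """Image velocity at (x, y) for depth Z under rigid motion (t, w), f = 1."""
    u = (x * t[2] - t[0]) / Z + x * y * w[0] - (1 + x**2) * w[1] + y * w[2]
    v = (y * t[2] - t[1]) / Z + (1 + y**2) * w[0] - x * y * w[1] - x * w[2]
    return np.array([u, v])

x, y = 0.4, -0.1
du, dv = flow(x, y, 2.0) - flow(x, y, 6.0)   # rotation cancels in the difference
# the difference vector is radial: its 2D cross product with (p - FOE) vanishes
cross = du * (y - foe[1]) - dv * (x - foe[0])
```

Since the rotational part of the flow does not depend on depth, only the translational part survives the subtraction, and that part is radial about the FOE, which is why depth variation lets the model locate the heading direction.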
A Self-Organizing Neural Network Architecture for Navigation Using Optic Flow
, 1998
Abstract

Cited by 18 (6 self)
This article describes a self-organizing neural network architecture that transforms optic flow and eye position information into representations of heading, scene depth, and moving object locations. These representations are used to navigate reactively in simulations involving obstacle avoidance and pursuit of a moving target. The network's weights are trained during an action-perception cycle in which self-generated eye and body movements produce optic flow information, thus allowing the network to tune itself without requiring explicit knowledge of sensor geometry. The confounding effect of eye movement during translation is suppressed by learning the relationship between eye movement outflow commands and the optic flow signals that they induce. The remaining optic flow field is due only to observer translation and independent motion of objects in the scene. A self-organizing feature map categorizes normalized translational flow patterns, thereby creating a map of cells that code heading directions. Heading information is then recombined with translational flow patterns in two different ways to form maps of scene depth and moving object locations. Most of the learning processes take place concurrently and evolve through unsupervised learning. Mapping the learned heading representations onto heading labels or motor commands requires additional structure. Simulations of the network verify its performance using both noise-free and noisy optic flow information.
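The suppression step described above can be sketched as follows: because rotational flow is linear in the eye-rotation command, the command-to-flow mapping can be learned from self-generated movements and then subtracted from the observed flow, leaving the translational component. Least squares here stands in for the network's learning; the grid, motions, and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 5), np.linspace(-0.5, 0.5, 5))
x, y = xs.ravel(), ys.ravel()                    # sample image locations (f = 1)

def rot_flow(w):
    """Rotational flow field on the grid for eye-rotation command w."""
    u = x * y * w[0] - (1 + x**2) * w[1] + y * w[2]
    v = (1 + y**2) * w[0] - x * y * w[1] - x * w[2]
    return np.concatenate([u, v])

# training phase: purely self-generated eye rotations and the flow they induce
W = rng.standard_normal((50, 3))                 # sampled outflow commands
F = np.stack([rot_flow(w) for w in W])           # induced flow fields
B, *_ = np.linalg.lstsq(W, F, rcond=None)        # learned command-to-flow map

# test phase: observer translation plus a known eye rotation
Z = 4.0
t = np.array([0.1, 0.05, 0.8])
trans_flow = np.concatenate([(x * t[2] - t[0]) / Z, (y * t[2] - t[1]) / Z])
w_cmd = np.array([0.03, -0.02, 0.01])
observed = trans_flow + rot_flow(w_cmd)
residual = observed - w_cmd @ B                  # subtract predicted rotational flow
```

Because the rotational flow is exactly linear in the command and the training data are noise-free, the learned map cancels the eye-movement component exactly, leaving the translational field the heading map would then categorize.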