Results 1 – 10 of 16
The Computation of Optical Flow
, 1995
Abstract

Cited by 216 (10 self)
Two-dimensional image motion is the projection of the three-dimensional motion of objects, relative to a visual sensor, onto its image plane. Sequences of time-ordered images allow the estimation of projected two-dimensional image motion as either instantaneous image velocities or discrete image displacements. These are usually called the optical flow field or the image velocity field. Provided that optical flow is a reliable approximation to two-dimensional image motion, it may then be used to recover the three-dimensional motion of the visual sensor (to within a scale factor) and the three-dimensional surface structure (shape or relative depth) through assumptions concerning the structure of the optical flow field, the three-dimensional environment and the motion of the sensor. Optical flow may also be used to perform motion detection, object segmentation, time-to-collision and focus-of-expansion calculations, motion-compensated encoding and stereo disparity measurement. We investiga...
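The gradient-based estimation of image velocities described above rests on the brightness-constancy constraint Ix·u + Iy·v + It = 0. A minimal windowed least-squares sketch in the Lucas–Kanade style (illustrative only, not this survey's own method; all names are made up here):

```python
import numpy as np

def lucas_kanade_point(I1, I2, x, y, win=7):
    """Least-squares flow (u, v) at pixel (x, y) from the
    brightness-constancy constraint Ix*u + Iy*v + It = 0 over a small
    window (a Lucas-Kanade-style sketch)."""
    Iy_, Ix_ = np.gradient(I1.astype(float))      # spatial gradients
    It_ = I2.astype(float) - I1.astype(float)     # temporal difference
    r = win // 2
    w = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix_[w].ravel(), Iy_[w].ravel()], axis=1)
    b = -It_[w].ravel()
    return np.linalg.lstsq(A, b, rcond=None)[0]   # (u, v) in pixels/frame

# Synthetic check: a smooth pattern shifted one pixel to the right.
xs = np.arange(64)
I1 = np.sin(2 * np.pi * xs / 16)[None, :] * np.ones((64, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = lucas_kanade_point(I1, I2, 32, 32)
```

On this synthetic pair the recovered horizontal velocity is close to the true one-pixel shift.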
Motion from Color
, 1995
Abstract

Cited by 27 (1 self)
The use of color images for motion estimation is investigated in this work. Beyond the straightforward approach of using the color components as separate images of the same scene, a new method, based on exploiting color invariance under motion, is discussed. Two different sets of color-related, locally computable motion `invariants' are analyzed and tested in this paper, and the results of motion estimation based on them are compared to the direct use of the RGB brightness functions.

1 Introduction

Optical or image flow estimation is considered by many researchers to be an important low-level stage of spatial motion recovery from a sequence of images. It is supposed to yield an estimate of the 2D projection of the velocity field on the image plane, which is submitted to further analysis aimed at inferring high-level, 3D motion descriptions. It is well known that the image flow cannot be completely determined from a single sequence of black-and-white images without introducing additional a...
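The "straightforward approach" of treating the color components as separate images can be sketched as follows: each channel supplies its own brightness-constancy equation, so the flow at a single pixel becomes an overdetermined system (a sketch of that baseline, not the paper's color invariants):

```python
import numpy as np

def flow_from_color(I1, I2, x, y):
    """Pointwise flow from an RGB pair: each channel contributes one
    brightness-constancy equation Ix*u + Iy*v = -It, giving an
    overdetermined 3x2 system at a single pixel."""
    A, b = [], []
    for c in range(3):                       # one constraint per channel
        Iy_, Ix_ = np.gradient(I1[..., c].astype(float))
        A.append([Ix_[y, x], Iy_[y, x]])
        b.append(float(I1[y, x, c]) - float(I2[y, x, c]))   # -It
    return np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)[0]

# Channels with independent gradients, shifted one pixel to the right:
yy, xx = np.mgrid[0:32, 0:32].astype(float)
I1 = np.stack([xx, yy, xx + 2 * yy], axis=-1)
I2 = np.roll(I1, 1, axis=1)
u, v = flow_from_color(I1, I2, 10, 10)
```

When the channel gradients are linearly independent, the aperture problem disappears at a single pixel, which is exactly what makes color attractive for flow estimation.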
A General Motion Model and Spatio-Temporal Filters for Computing Optical Flow
 University of Maryland TR3365
, 1995
Abstract

Cited by 21 (9 self)
Traditional optical flow algorithms assume local image translational motion and apply simple image filtering. Recent studies have taken two separate approaches toward improving the accuracy of computed flow: the application of spatio-temporal filtering schemes and the use of generalized motion models such as the affine model. Each has achieved some improvement over traditional algorithms in specialized situations. In this paper, we analyze the interdependency between them and propose a unified approach. The general motion model we adopt characterizes arbitrary 3D steady motion. Under perspective projection, we derive an image motion equation that describes the spatio-temporal relation of gray-scale intensity in an image sequence, thus making the utilization of 3D filtering possible. However, to accommodate this complex motion, we need to extend the filter design to derive additional motion constraint equations. Using Hermite polynomials, we design differentiation filters whose orthogonality and Gaussian-derivative properties ensure numerical stability. The resulting algorithm produces accurate optical flow and other useful motion parameters. It is evaluated quantitatively using the scheme established by Barron et al. [4] and qualitatively with real images.
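Differentiation filters built from Hermite polynomials are, up to normalization, sampled Gaussian derivatives: G⁽ⁿ⁾(x) = (−1)ⁿ σ⁻ⁿ Heₙ(x/σ) G(x), with Heₙ the probabilists' Hermite polynomial. A sketch of such a filter (the σ and support radius are illustrative choices, not the paper's):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gaussian_derivative_filter(order, sigma=1.5):
    """Sampled n-th Gaussian derivative via the probabilists' Hermite
    polynomial He_n:  G^(n)(x) = (-1)^n * sigma**-n * He_n(x/sigma) * G(x)."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    c = np.zeros(order + 1)
    c[order] = 1.0                     # select He_n
    f = (-1.0)**order * sigma**(-order) * hermeval(x / sigma, c) * g
    if order == 0:
        f /= f.sum()                   # smoothing kernel: unit DC gain
    else:
        f -= f.mean()                  # derivative kernel: zero DC response
    return f

f1 = gaussian_derivative_filter(1)     # first-derivative (edge) filter
```

The resulting kernels inherit the orthogonality of the Hermite family, which is what the abstract credits for the numerical stability of the scheme.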
Optical Flow: A Curve Evolution Approach
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 1996
Abstract

Cited by 19 (0 self)
A novel approach for the computation of optical flow based on an L-type minimization is presented. It is shown that the approach has inherent advantages, since it does not smooth the flow velocity across edges and hence preserves edge information. A numerical approach based on the computation of evolving curves is proposed for computing the optical flow field. Computations are carried out on a number of synthetic and real image sequences in order to illustrate the theory as well as the numerical approach.
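The edge-preserving property of an L1-type penalty, as opposed to a quadratic one, can be seen on a 1-D "flow" profile with a discontinuity. This is a gradient-descent toy with a smoothed absolute value (Charbonnier) penalty, not the paper's curve-evolution scheme; all parameters are illustrative:

```python
import numpy as np

def smooth_flow(d, penalty="l1", lam=2.0, eps=0.05, iters=10000, step=0.005):
    """Gradient descent on E(u) = sum (u-d)^2 + lam * sum rho(u[i+1]-u[i]),
    where rho is quadratic ("l2") or a smoothed absolute value ("l1",
    Charbonnier sqrt(t^2 + eps^2)).  Toy 1-D illustration only."""
    u = d.astype(float).copy()
    for _ in range(iters):
        t = np.diff(u)
        r = 2 * t if penalty == "l2" else t / np.sqrt(t**2 + eps**2)
        g = 2 * (u - d)                # data term
        g[:-1] -= lam * r              # -rho'(t_i) acts on u_i
        g[1:] += lam * r               # +rho'(t_{i-1}) acts on u_{i+1}
        u -= step * g
    return u

# A flow profile with a motion boundary: L1 keeps the jump, L2 blurs it.
d = np.r_[np.zeros(20), np.ones(20)]
u_l1 = smooth_flow(d, "l1")
u_l2 = smooth_flow(d, "l2")
```

Comparing the two results at the boundary shows the L1-type regularizer retaining a sharp discontinuity where the quadratic one spreads it over several samples.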
Determining the 3D Motion of a Rigid Surface Patch, without Correspondence, under Perspective Projection
, 1985
Abstract

Cited by 6 (0 self)
A method is presented for the recovery of the 3D motion parameters of a rigidly moving textured surface. The novelty of the method is based on the following two facts: 1) no point-to-point correspondences are used, and 2) “stereo” and “motion” are combined in such a way that no correspondence between the left and the right stereo pairs is required.
The "Orthogonal Algorithm" For Optical Flow Detection Using Dynamic Programming
Abstract

Cited by 5 (2 self)
This paper introduces a new and original algorithm for optical flow detection. It is based on an iterative search for a displacement field that minimizes the L1 or L2 distance between two images. Both images are sliced into parallel and overlapping strips. Corresponding strips are aligned using dynamic programming, exactly as 2D representations of speech signals are with the DTW algorithm. Two passes are performed using orthogonal slicing directions. This process is iterated in a pyramidal fashion by reducing the spacing and width of the strips. The algorithm provides very high-quality matching both on calibrated patterns and in terms of human visual sensation. The results appear to be at least as good as those obtained with classical optical flow detection methods.

1. INTRODUCTION

Optical flow detection is an essential and generic procedure in computer vision systems. It is necessary in a wide range of applications such as image matching for stereo...
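The strip alignment by dynamic programming is the classical DTW recurrence D[i,j] = cost(i,j) + min(D[i−1,j], D[i,j−1], D[i−1,j−1]). A minimal sketch of that 1-D alignment step only; the strip slicing, orthogonal passes and pyramid are omitted:

```python
import numpy as np

def dtw_align(a, b):
    """Align two 1-D intensity profiles (e.g. corresponding image strips)
    by dynamic programming, minimizing summed absolute difference.
    Returns the total cost and the warping path as (i, j) pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(a[i - 1] - b[j - 1])
            D[i, j] = c + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:                      # backtrack from the corner
        path.append((i - 1, j - 1))
        k = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if k == 0:
            i, j = i - 1, j - 1
        elif k == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

cost, path = dtw_align([0, 0, 1, 2, 3], [0, 1, 2, 3, 3])
```

Along the returned path, j − i gives the per-pixel displacement of one strip relative to the other.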
Image Matching using Dynamic Programming: Application to Stereovision and Image Interpolation
 Image Communication
, 1996
Abstract

Cited by 5 (3 self)
This paper presents an original algorithm called the “Orthogonal Algorithm” for image matching using dynamic programming, along with experimental results from its application to stereovision and image interpolation. The algorithm provides a dense, continuous and differentiable field of bidimensional displacements, like classical optical flow detection algorithms. It is based on an iterative search for a displacement field that minimizes the L1 or L2 distance between two images. Both images are sliced into parallel and overlapping strips. Corresponding strips are aligned using dynamic programming, exactly as 2D representations of speech signals are with the DTW algorithm. Two passes are performed using orthogonal slicing directions. This process is iterated in a pyramidal fashion while reducing the spacing and width of the strips. Very good results have been obtained for stereovision and image interpolation.
Navigation by Tracking Vanishing Points
, 1989
Abstract

Cited by 5 (0 self)
Many indoor scenes, like offices and corridors, can be modelled as block worlds. 3D visual information can be extracted from images of these scenes by using vanishing points, which are the points of the image plane where parallel lines in space appear to intersect. In this paper, it is shown that a priori information on the mutual direction of straight lines in space allows the extraction of vanishing points from images of indoor scenes even in the presence of curved lines. Experiments on real images are presented in which a simple method based upon the location of vanishing points is used to recover the rotational component of robot motion. The recovery of reliable visual information from images taken by one or more cameras mounted on a moving robot can be used to control passive navigation of the robot itself. Along with the reconstruction and interpretation of the 3D environment, methods for motion analysis (Fennema & Thompson, 1979; Haralick & Lee, 1983; Hildreth, 1984; Horn & Schunc...
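A standard way to locate the vanishing point of a pencil of image segments, consistent with the idea described above (though not necessarily the paper's exact procedure): represent each segment as a homogeneous line via a cross product and take the least-squares intersection from an SVD:

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares vanishing point of image line segments.  Each
    segment ((x1,y1),(x2,y2)) yields a homogeneous line l = p1 x p2;
    the unit point p minimizing sum (l_i . p)^2 is the smallest right
    singular vector of the stacked line matrix."""
    L = []
    for (x1, y1), (x2, y2) in segments:
        l = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
        L.append(l / np.linalg.norm(l))
    p = np.linalg.svd(np.asarray(L))[2][-1]
    return p[:2] / p[2]            # back to inhomogeneous coordinates

# Three segments lying on lines that all pass through (100, 50):
segs = [((0, 0), (50, 25)), ((0, 100), (60, 70)), ((0, 25), (40, 35))]
vp = vanishing_point(segs)
```

Because the formulation is projective, the same code returns points at or beyond the image border, where vanishing points of oblique line bundles typically fall.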
Bandpass Optical Flow for Tagged MR Imaging
, 1997
Abstract

Cited by 3 (2 self)
MR tagging has shown great promise for detailed noninvasive cardiac motion imaging. We consider here the use of low-frequency tags coupled with gradient-based optical flow estimation to compute cardiac motion. A multiple-constraint optical flow method for tagged MRI is formulated by exploiting the Fourier content of the tagged images. The method is validated on simulated tagged data.

Keywords: optical flow, motion estimation, cardiac tagging.

1. INTRODUCTION

Myocardial motion analysis is playing an increasingly important role in the diagnosis of abnormal heart function. MR tagging methods [1] have shown great promise for detailed noninvasive cardiac motion imaging. Most tagging methods use line or grid tags to yield crisp, but sparse, features which are ideally suited for feature-based motion analyses [27]. Our research focuses on the use of low-frequency tag patterns, well-modeled as sinusoidal brightness patterns, coupled with gradient-based optical flow (OF) motion estimation t...
Computation of Optical Flow Using Basis Functions
 IEEE Trans. Image Process
, 1997
Abstract

Cited by 2 (0 self)
The issues governing the computation of optical flow in image sequences are addressed in this paper. The trade-off between accuracy and computation cost is shown to depend on the redundancy of the image representation. This dependency is highlighted by reformulating Horn's algorithm, making explicit use of the approximations to the continuous basis functions underlying the discrete representation. The computational cost of estimating optical flow, for a fixed error tolerance, is shown to be minimal for images resampled at twice the Nyquist rate. The issues of derivative calculation and multiresolution representation are also briefly discussed in terms of basis functions and information encoding. A multiresolution basis-function formulation of Horn's algorithm is shown to lead to large improvements in dealing with high frequencies and large displacements.

Keywords: basis functions, multiresolution, optical flow, motion estimation, derivative estimation, oversampling.

I. Introd...
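The claimed benefit of oversampling for derivative estimation is easy to reproduce numerically: a central difference on a band-limited signal incurs a sin(ωh)/(ωh) attenuation that shrinks with the sampling step h. A small check (illustrative, not the paper's experiment):

```python
import numpy as np

def deriv_error(k=90, n=256, upsample=1):
    """Max relative error of np.gradient's central difference on the
    sinusoid sin(2*pi*k*x/n), optionally band-limited-upsampled first
    by zero-padding its FFT.  Parameter choices are illustrative."""
    x = np.arange(n, dtype=float)
    f = np.sin(2 * np.pi * k * x / n)
    if upsample > 1:
        F = np.fft.rfft(f)
        F2 = np.zeros(upsample * n // 2 + 1, dtype=complex)
        F2[:F.size] = F
        f = np.fft.irfft(F2, upsample * n) * upsample   # exact interpolation
        x = np.arange(upsample * n, dtype=float) / upsample
    exact = (2 * np.pi * k / n) * np.cos(2 * np.pi * k * x / n)
    est = np.gradient(f, x)
    return np.abs(est - exact)[8:-8].max() / np.abs(exact).max()

e1 = deriv_error(upsample=1)   # large error near the Nyquist rate
e2 = deriv_error(upsample=2)   # error drops substantially after 2x resampling
```

This is the mechanism behind the abstract's observation that gradient-based flow degrades on high-frequency content unless the image is suitably oversampled.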