Results 1–10 of 95
Determining Optical Flow
 ARTIFICIAL INTELLIGENCE
, 1981
"... Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent veloc ..."
Abstract

Cited by 2379 (9 self)
Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.
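The iterative scheme this abstract describes (the Horn–Schunck method) can be sketched in a few lines. This is a minimal illustration assuming NumPy, periodic boundaries, and a simple 4-neighbor average; the smoothness weight `alpha`, the grid size, and the iteration count are illustrative choices, not values from the paper:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Estimate optical flow (u, v) between two frames.

    At each pixel the flow is pulled toward the local neighborhood
    average (smoothness constraint) and corrected along the brightness
    gradient (brightness-constancy constraint).
    """
    I1, I2 = I1.astype(float), I2.astype(float)
    avg = 0.5 * (I1 + I2)
    Iy, Ix = np.gradient(avg)   # spatial derivatives
    It = I2 - I1                # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # 4-neighbor average with periodic boundaries for simplicity
        u_bar = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                        + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        v_bar = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                        + np.roll(v, 1, 1) + np.roll(v, -1, 1))
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Toy check: a vertical sinusoidal pattern shifted one pixel to the right,
# so the true flow is roughly u = 1, v = 0 everywhere.
x = np.arange(32)
I1 = np.tile(np.sin(2 * np.pi * x / 16), (32, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
```

Because the pattern varies only horizontally, the vertical component stays exactly zero while the horizontal component converges near the true one-pixel shift.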
Epipolar-plane image analysis: An approach to determining structure from motion
 INTERN. J. COMPUTER VISION
, 1987
"... We present a technique for building a threedimensional description of a static scene from a dense sequence of images. These images are taken in such rapid succession that they form a solid block of data in which the temporal continuity from image to image is approximately equal to the spatial conti ..."
Abstract

Cited by 255 (3 self)
We present a technique for building a three-dimensional description of a static scene from a dense sequence of images. These images are taken in such rapid succession that they form a solid block of data in which the temporal continuity from image to image is approximately equal to the spatial continuity in an individual image. The technique utilizes knowledge of the camera motion to form and analyze slices of this solid. These slices directly encode not only the three-dimensional positions of objects, but also such spatiotemporal events as the occlusion of one object by another. For straight-line camera motions, these slices have a simple linear structure that makes them easier to analyze. The analysis computes the three-dimensional positions of object features, marks occlusion boundaries on the objects, and builds a three-dimensional map of "free space." In our article, we first describe the application of this technique to a simple camera motion, and then show how projective duality is used to extend the analysis to a wider class of camera motions and object types that include curved and moving objects.
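For the straight-line lateral camera motion case described above, the slope of a feature's trace in an epipolar-plane image slice determines its depth directly. A toy illustration under a pinhole model; the focal length `f`, camera speed `V`, and the world point are made-up values for illustration, not from the paper:

```python
# A pinhole camera translating laterally with speed V sees a world point
# at lateral offset X0 and depth Z at image position x(t) = f*(X0 - V*t)/Z.
# Its trace in the epipolar-plane image is therefore a straight line with
# slope dx/dt = -f*V/Z, so depth follows from the slope: Z = -f*V/slope.
f, V = 500.0, 0.1          # focal length (px) and camera speed (m/frame)
X0, Z = 2.0, 4.0           # world point: lateral offset and depth (m)

def x_of_t(t):
    """Image x-coordinate of the point at time t (frames)."""
    return f * (X0 - V * t) / Z

# Estimate the EPI-trace slope from two frames, then recover depth from it.
slope = x_of_t(1.0) - x_of_t(0.0)      # px per frame
Z_est = -f * V / slope
```

Closer points produce steeper traces, which is why these slices "directly encode" three-dimensional position for this class of motions.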
Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion
 International Journal of Computer Vision
, 1997
"... This paper explores the use of local parametrized models of image motion for recovering and recognizing the nonrigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space an ..."
Abstract

Cited by 190 (11 self)
This paper explores the use of local parametrized models of image motion for recovering and recognizing the nonrigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space and time, such models not only accurately model nonrigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions and we show how expressions such as anger, happiness, surprise, fear, disgust, and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performed with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences.
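A parametric (affine) flow model of the kind mentioned here describes the motion of an image region with six numbers. A minimal least-squares fit, assuming dense flow samples (u, v) at coordinates (x, y) are already available (the sampling and parameter values below are illustrative, not from the paper):

```python
import numpy as np

def fit_affine_flow(x, y, u, v):
    """Fit u = a0 + a1*x + a2*y and v = a3 + a4*x + a5*y by least squares."""
    A = np.column_stack([np.ones_like(x), x, y])
    ax, *_ = np.linalg.lstsq(A, u, rcond=None)
    ay, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.concatenate([ax, ay])   # [a0, a1, a2, a3, a4, a5]

# Toy check: synthesize a flow field from known parameters and recover them.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100)
true = np.array([0.1, 0.5, -0.2, 0.0, 0.3, 0.7])
u = true[0] + true[1] * x + true[2] * y
v = true[3] + true[4] * x + true[5] * y
params = fit_affine_flow(x, y, u, v)
```

The appeal noted in the abstract is that a whole region's motion collapses to this small parameter vector, whose entries correspond to translation, dilation, rotation, and shear of the region.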
Maximizing Rigidity: The Incremental Recovery Of 3D Structure From Rigid And . . .
 Perception
, 1983
"... The human visual system can extract 3D shape information of unfamiliar moving objects from their projected transformations. Computational studies of this capacity have established that 3D shape, can be extracted correctly from a brief presentation, provided that the moving objects are rigid. The ..."
Abstract

Cited by 101 (2 self)
The human visual system can extract 3D shape information of unfamiliar moving objects from their projected transformations. Computational studies of this capacity have established that 3D shape can be extracted correctly from a brief presentation, provided that the moving objects are rigid. The human visual system requires a longer temporal extension, but it can cope with considerable deviations from rigidity. It is shown how the 3D structure of rigid and nonrigid objects can be recovered by maintaining an internal model of the viewed object and modifying it at each instant by the minimal nonrigid change that is sufficient to account for the observed transformation. The results of applying this incremental rigidity scheme to rigid and nonrigid objects in motion are described and compared with human perceptions.
A Theoretical Framework for Convex Regularizers in PDE-Based Computation of Image Motion
, 2000
"... Many differential methods for the recovery of the optic flow field from an image sequence can be expressed in terms of a variational problem where the optic flow minimizes some energy. Typically, these energy functionals consist of two terms: a data term, which requires e.g. that a brightness consta ..."
Abstract

Cited by 99 (25 self)
Many differential methods for the recovery of the optic flow field from an image sequence can be expressed in terms of a variational problem where the optic flow minimizes some energy. Typically, these energy functionals consist of two terms: a data term, which requires e.g. that a brightness constancy assumption holds, and a regularizer that encourages global or piecewise smoothness of the flow field. In this paper we present a systematic classification of rotation invariant convex regularizers by exploring their connection to diffusion filters for multichannel images. This taxonomy provides a unifying framework for data-driven and flow-driven, isotropic and anisotropic, as well as spatial and spatiotemporal regularizers. While some of these techniques are classic methods from the literature, others are derived here for the first time. We prove that all these methods are well-posed: they possess a unique solution that depends in a continuous way on the initial data. An interesting structural relation between isotropic and anisotropic flow-driven regularizers is identified, and a design criterion is proposed for constructing anisotropic flow-driven regularizers in a simple and direct way from isotropic ones. Its use is illustrated by several examples.
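The two-term energy functional described in this abstract is commonly written as follows (the notation below is the standard one for this class of methods, not copied from the paper):

```latex
E(u, v) = \int_{\Omega} \underbrace{\left( I_x u + I_y v + I_t \right)^2}_{\text{data term}}
        \;+\; \alpha \, \underbrace{V(\nabla u, \nabla v)}_{\text{regularizer}} \; dx \, dy
```

Choosing V(∇u, ∇v) = |∇u|² + |∇v|² gives the homogeneous (Horn–Schunck) regularizer, while image- or flow-dependent choices of V yield the data-driven and flow-driven, isotropic and anisotropic variants that the paper classifies.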
Computing differential properties of 3D shapes from stereoscopic images without 3D models
, 1994
"... We are considering the problem of recovering the threedimensional geometry of a scene from binoculor stereo disparity. Once a dense disparity map has been computed from a stereo pair of images, one often needs to calculate some local diferential properties of the cowesponding 30 surface such as ..."
Abstract

Cited by 82 (9 self)
We are considering the problem of recovering the three-dimensional geometry of a scene from binocular stereo disparity. Once a dense disparity map has been computed from a stereo pair of images, one often needs to calculate some local differential properties of the corresponding 3D surface such as orientation or curvatures. The usual approach is to build a 3D reconstruction of the surface(s) from which all shape properties will then be derived without ever going back to the original images. In this paper, we depart from this paradigm and propose to use the images directly to compute the shape properties. We thus propose a new method extending the classical correlation method to estimate accurately both the disparity and its derivatives directly from the image data. We then relate those derivatives to differential properties of the surface such as orientation and curvatures.
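The link between disparity derivatives and surface orientation can be seen from ordinary triangulation. This sketch uses the standard rectified-stereo depth formula Z = fB/d, not the paper's extended correlation method; the focal length, baseline, and disparity values are made-up for illustration:

```python
# For a rectified stereo pair with focal length f (px) and baseline B (m),
# triangulation gives depth Z = f*B/d from disparity d. Differentiating,
# dZ/dx = -(f*B/d**2) * dd/dx, so the x-derivative of disparity already
# carries surface-orientation information, without an explicit 3D
# reconstruction step.
f, B = 700.0, 0.12

def depth(d):
    """Depth (m) from disparity (px)."""
    return f * B / d

def depth_slope(d, dd_dx):
    """dZ/dx from disparity and its x-derivative, by the chain rule."""
    return -(f * B / d**2) * dd_dx

d0, dd_dx = 42.0, -0.5       # illustrative disparity and its x-derivative
Z0 = depth(d0)
Zx = depth_slope(d0, dd_dx)
```

A disparity that decreases to the right (negative derivative) corresponds to a surface receding to the right (positive depth slope), which is the kind of orientation cue the paper extracts directly from the images.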
A Tensor Framework for Multidimensional Signal Processing
 Linköping University, Sweden
, 1994
"... ii About the cover The figure on the cover shows a visualization of a symmetric tensor in three dimensions, G = λ1ê1ê T 1 + λ2ê2ê T 2 + λ3ê3ê T 3 The object in the figure is the sum of a spear, a plate and a sphere. The spear describes the principal direction of the tensor λ1ê1ê T 1, where the lengt ..."
Abstract

Cited by 66 (8 self)
About the cover: The figure on the cover shows a visualization of a symmetric tensor in three dimensions, G = λ1 ê1 ê1ᵀ + λ2 ê2 ê2ᵀ + λ3 ê3 ê3ᵀ. The object in the figure is the sum of a spear, a plate and a sphere. The spear describes the principal direction of the tensor, λ1 ê1 ê1ᵀ, where the length is proportional to the largest eigenvalue, λ1. The plate describes the plane spanned by the eigenvectors corresponding to the two largest eigenvalues, λ2 (ê1 ê1ᵀ + ê2 ê2ᵀ). The sphere, with a radius proportional to the smallest eigenvalue, shows how isotropic the tensor is, λ3 (ê1 ê1ᵀ + ê2 ê2ᵀ + ê3 ê3ᵀ). The visualization is done using AVS [WWW94]. I am very grateful to Johan Wiklund for implementing the tensor viewer module used. This thesis deals with filtering of multidimensional signals. A large part of the thesis is devoted to a novel filtering method termed "Normalized convolution". The method performs local expansion of a signal in a chosen filter basis which
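The spear/plate/sphere description above follows directly from the eigendecomposition of the symmetric tensor. A minimal sketch; the example tensor is made up for illustration:

```python
import numpy as np

def tensor_shape_parts(G):
    """Split a symmetric 3x3 tensor into its rank-one eigen-components
    lambda_i * e_i e_i^T, sorted by decreasing eigenvalue."""
    lam, E = np.linalg.eigh(G)           # eigenvalues in ascending order
    lam, E = lam[::-1], E[:, ::-1]       # re-sort descending: lam1 >= lam2 >= lam3
    parts = [lam[i] * np.outer(E[:, i], E[:, i]) for i in range(3)]
    return lam, parts

# Toy check: an anisotropic tensor dominated by one direction ("spear-like").
G = np.diag([3.0, 1.0, 0.5])
lam, parts = tensor_shape_parts(G)
```

The relative magnitudes of λ1, λ2, λ3 then control the sizes of the spear, plate, and sphere in the visualization: one dominant eigenvalue means spear-like (a single strong orientation), two means plate-like, and three roughly equal eigenvalues mean an isotropic, sphere-like tensor.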
Driving saccade to pursuit using image motion
 International Journal of Computer Vision
, 1995
"... ..."
Contour evolution, neighborhood deformation and local image flow: Curved surfaces in motion. Tech. rept. in preparation
, 1985
"... In the kinematic analysis of timevarying imagery, where the goal is to recover object surface structure and space motion from image flow, an appropriate representation for the flow field consists of a set of deformation parameters that describe the rate of change of an image neighborhood. In this p ..."
Abstract

Cited by 54 (1 self)
In the kinematic analysis of time-varying imagery, where the goal is to recover object surface structure and space motion from image flow, an appropriate representation for the flow field consists of a set of deformation parameters that describe the rate of change of an image neighborhood. In this paper we develop methods for extracting these deformation parameters from evolving contours in an image sequence, the image contours being manifestations of surface texture seen in perspective projection. Our results follow directly from the analytic structure of the underlying image flow; no heuristics are imposed. The deformation parameters we seek are actually linear combinations of the Taylor series coefficients (through second derivatives) of the local image flow field. Thus, a byproduct of our approach is a second-order polynomial approximation to the image flow in the neighborhood of a contour. For curved surfaces this approximation is only locally valid, but for planar surfaces it is globally valid (i.e., it is exact). Our analysis reveals an "aperture problem in the large" in which insufficient contour structure leaves the set of 12 deformation parameters underdetermined. We also assess the sensitivity of our method to the simulated effects of noise in the "normal flow" around contours as well as the angular field of view subtended by contours. The sensitivity analysis is carried out in the context of planar surfaces executing general rigid-body motions in space. Future work will address the additional considerations relevant to curved surface patches.
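The count of 12 deformation parameters mentioned in the abstract comes from expanding each flow component in a Taylor series through second order about an image point; in standard notation (not copied from the report), with Δx and Δy the offsets from the expansion point:

```latex
u(x, y) \approx u_0 + u_x \,\Delta x + u_y \,\Delta y
       + \tfrac{1}{2} u_{xx} \,\Delta x^2 + u_{xy} \,\Delta x \,\Delta y
       + \tfrac{1}{2} u_{yy} \,\Delta y^2
```

and similarly for v(x, y): six coefficients per flow component, hence 12 parameters in total for the second-order polynomial approximation the paper constructs.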
Direction of Heading from Image Deformations
, 1993
"... We propose a method to compute the direction of heading from the differential changes in the angles between the projection rays of pairs of point features. These angles, the image deformations, do not depend on viewer rotation, so the key problem of separating the effects of rotation from those of t ..."
Abstract

Cited by 27 (3 self)
We propose a method to compute the direction of heading from the differential changes in the angles between the projection rays of pairs of point features. These angles, the image deformations, do not depend on viewer rotation, so the key problem of separating the effects of rotation from those of translation is solved at the input. Experiments show both the feasibility of the method on real images and the advantages of using deformations rather than optical flow.

1 Introduction

As we walk down a hallway, the moving images on our retinas convey enough information to determine our direction of heading. Several researchers have investigated how this direction could be computed, either in the human visual system or by a computer processing images from a moving camera. The main difficulty of this computation is to separate the effects of viewer rotation from those of viewer translation. In fact, with only translation the task would be quite simple: features in the image move toward or awa...