Results 1-10 of 60
Determining Optical Flow
 ARTIFICIAL INTELLIGENCE
, 1981
Abstract
Cited by 1747 (7 self)
Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.
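The smoothness-constrained iteration this abstract describes can be sketched in a few lines. This is a minimal Horn-Schunck-style implementation, not a transcription of the paper's algorithm: the averaging kernel and the weighting constant `alpha` are the standard textbook choices.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Dense optical flow under a global smoothness constraint (sketch)."""
    I1 = I1.astype(float); I2 = I2.astype(float)
    # Spatial and temporal brightness derivatives (simple finite differences).
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    # Kernel approximating the local mean of the flow field.
    k = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], float) / 12.0
    for _ in range(n_iter):
        u_bar = convolve(u, k); v_bar = convolve(v, k)
        # Smoothness pulls the flow toward its local average; the correction
        # along the brightness gradient enforces the constancy equation
        # Ix*u + Iy*v + It = 0, which alone leaves one degree of freedom.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

On a smooth pattern translated one pixel to the right, the recovered `u` converges toward 1 and `v` toward 0 in the interior.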
Epipolar-plane image analysis: An approach to determining structure from motion
 International Journal of Computer Vision
, 1987
Abstract
Cited by 210 (3 self)
We present a technique for building a three-dimensional description of a static scene from a dense sequence of images. These images are taken in such rapid succession that they form a solid block of data in which the temporal continuity from image to image is approximately equal to the spatial continuity in an individual image. The technique utilizes knowledge of the camera motion to form and analyze slices of this solid. These slices directly encode not only the three-dimensional positions of objects, but also such spatiotemporal events as the occlusion of one object by another. For straight-line camera motions, these slices have a simple linear structure that makes them easier to analyze. The analysis computes the three-dimensional positions of object features, marks occlusion boundaries on the objects, and builds a three-dimensional map of "free space." In our article, we first describe the application of this technique to a simple camera motion, and then show how projective duality is used to extend the analysis to a wider class of camera motions and object types that include curved and moving objects.
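The "simple linear structure" for straight-line motion admits a one-line depth reading: for lateral camera motion with speed V and focal length f, a point at depth Z traces a line of slope f*V/Z in the (x, t) slice. The symbols below are illustrative, not taken from the paper.

```python
def depth_from_epi_slope(slope, focal_length, cam_speed):
    """Depth of a point from the slope (pixels per frame) of the line it
    traces in an epipolar-plane image, assuming lateral straight-line
    camera motion. Illustrative sketch, not the paper's full analysis.
    """
    # Perspective projection: x(t) = f * X(t) / Z, with X(t) = X0 - V*t,
    # so dx/dt = -f*V/Z and |slope| encodes inverse depth.
    return focal_length * cam_speed / abs(slope)
```

Nearby points trace steep lines, distant points nearly horizontal ones, which is why the slices can be analyzed with ordinary line-finding.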
Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion
 International Journal of Computer Vision
, 1997
Abstract
Cited by 156 (11 self)
This paper explores the use of local parametrized models of image motion for recovering and recognizing the nonrigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space and time, such models not only accurately model nonrigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions and we show how expressions such as anger, happiness, surprise, fear, disgust, and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performed with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences.
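The "small number of parameters" for the affine case is six: u = a0 + a1*x + a2*y and v = a3 + a4*x + a5*y. A least-squares fit over a region can be sketched as below; the parameter names are generic, not the paper's notation.

```python
import numpy as np

def fit_affine_flow(x, y, u, v):
    """Least-squares fit of a six-parameter affine motion model
        u = a0 + a1*x + a2*y,   v = a3 + a4*x + a5*y
    to sampled flow vectors (x, y, u, v as flat float arrays).
    A sketch of the kind of parametric model the abstract refers to.
    """
    A = np.column_stack([np.ones_like(x), x, y])
    a_u, *_ = np.linalg.lstsq(A, u, rcond=None)  # [a0, a1, a2]
    a_v, *_ = np.linalg.lstsq(A, v, rcond=None)  # [a3, a4, a5]
    return np.concatenate([a_u, a_v])
```

The fitted parameters summarize the region's motion compactly, e.g. a1 + a5 measures divergence and a4 - a2 measures curl.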
Maximizing Rigidity: The Incremental Recovery Of 3D Structure From Rigid And . . .
 Perception
, 1983
Abstract
Cited by 77 (1 self)
The human visual system can extract 3D shape information of unfamiliar moving objects from their projected transformations. Computational studies of this capacity have established that 3D shape can be extracted correctly from a brief presentation, provided that the moving objects are rigid. The human visual system requires a longer temporal extension, but it can nevertheless cope with considerable deviations from rigidity. It is shown how the 3D structure of rigid and nonrigid objects can be recovered by maintaining an internal model of the viewed object and modifying it at each instant by the minimal nonrigid change that is sufficient to account for the observed transformation. The results of applying this incremental rigidity scheme to rigid and nonrigid objects in motion are described and compared with human perceptions.
Computing differential properties of 3D shapes from stereoscopic images without 3D models
, 1994
Abstract
Cited by 72 (9 self)
We are considering the problem of recovering the three-dimensional geometry of a scene from binocular stereo disparity. Once a dense disparity map has been computed from a stereo pair of images, one often needs to calculate some local differential properties of the corresponding 3D surface such as orientation or curvatures. The usual approach is to build a 3D reconstruction of the surface(s) from which all shape properties will then be derived without ever going back to the original images. In this paper, we depart from this paradigm and propose to use the images directly to compute the shape properties. We thus propose a new method extending the classical correlation method to estimate accurately both the disparity and its derivatives directly from the image data. We then relate those derivatives to differential properties of the surface such as orientation and curvatures.
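For contrast with the authors' direct-from-images method, the "usual approach" they depart from can be sketched: differentiate an already-computed disparity map, here by fitting a local quadratic. Window size and variable names are illustrative.

```python
import numpy as np

def local_disparity_derivatives(D, x0, y0, win=3):
    """First and second derivatives of a disparity map D at pixel (x0, y0),
    from a least-squares quadratic fit over a (2*win+1)^2 window.
    This is the conventional map-differentiation route the abstract
    contrasts with, not the authors' correlation-based estimator.
    """
    ys, xs = np.mgrid[-win:win + 1, -win:win + 1]
    patch = D[y0 - win:y0 + win + 1, x0 - win:x0 + win + 1].ravel()
    x = xs.ravel().astype(float); y = ys.ravel().astype(float)
    # d(x, y) ~ c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    c, *_ = np.linalg.lstsq(A, patch, rcond=None)
    dx, dy = c[1], c[2]
    dxx, dxy, dyy = 2 * c[3], c[4], 2 * c[5]
    return dx, dy, dxx, dxy, dyy
```

The first derivatives relate to surface orientation (slant/tilt) and the second derivatives to curvature, which is why accurate derivative estimation matters.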
Driving Saccade to Pursuit using Image Motion
 International Journal of Computer Vision
, 1995
Abstract
Cited by 60 (7 self)
Within the context of active vision, scant attention has been paid to the execution of motion saccades - rapid readjustments of the direction of gaze to attend to moving objects. In this paper we first develop a methodology for, and give real-time demonstrations of, the use of motion detection and segmentation processes to initiate "capture saccades" towards a moving object. The saccade is driven by both position and velocity of the moving target under the assumption of constant target velocity, using prediction to overcome the delay introduced by visual processing. We next demonstrate the use of a first-order approximation to the segmented motion field to compute bounds on the time-to-contact in the presence of looming motion. If the bound falls below a safe limit, a "panic saccade" is fired, moving the camera away from the approaching object. We then describe the use of image motion to realize smooth pursuit, tracking using velocity information alone, where the camera is moved...
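The time-to-contact bound from a first-order flow approximation has a simple closed form: for a fronto-parallel surface approaching the camera, the divergence of the motion field equals 2/tau. The factor of 2 and the threshold below are the standard textbook values, not necessarily the paper's.

```python
def time_to_contact(div):
    """Time-to-contact bound (in frames) from the divergence of a
    first-order (affine) motion-field approximation. Sketch only.
    """
    if div <= 0:
        return float('inf')  # receding or non-looming motion: no contact
    # Looming fronto-parallel surface: divergence = 2 / tau.
    return 2.0 / div

def panic_saccade_needed(div, safe_limit=10.0):
    """Fire a 'panic saccade' when the bound falls below a safe limit
    (the limit here is an illustrative placeholder)."""
    return time_to_contact(div) < safe_limit
```

Because divergence is one of the directly fitted affine parameters, this check needs no depth reconstruction at all.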
A Tensor Framework for Multidimensional Signal Processing
 Linköping University, Sweden
, 1994
Abstract
Cited by 56 (8 self)
ii About the cover The figure on the cover shows a visualization of a symmetric tensor in three dimensions, G = λ1 ê1ê1^T + λ2 ê2ê2^T + λ3 ê3ê3^T. The object in the figure is the sum of a spear, a plate and a sphere. The spear describes the principal direction of the tensor, λ1 ê1ê1^T, where the length is proportional to the largest eigenvalue, λ1. The plate describes the plane spanned by the eigenvectors corresponding to the two largest eigenvalues, λ2(ê1ê1^T + ê2ê2^T). The sphere, with a radius proportional to the smallest eigenvalue, shows how isotropic the tensor is, λ3(ê1ê1^T + ê2ê2^T + ê3ê3^T). The visualization is done using AVS [WWW94]. I am very grateful to Johan Wiklund for implementing the tensor viewer module used. This thesis deals with filtering of multidimensional signals. A large part of the thesis is devoted to a novel filtering method termed "Normalized convolution". The method performs local expansion of a signal in a chosen filter basis which...
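The spear/plate/sphere picture can be computed from any symmetric tensor. The sketch below uses the common formulation G = (λ1-λ2) ê1ê1^T + (λ2-λ3)(ê1ê1^T + ê2ê2^T) + λ3 I, in which the three parts sum exactly to G; the cover text instead scales each shape by its eigenvalue, so this is a closely related variant rather than a transcription of the thesis.

```python
import numpy as np

def spear_plate_sphere(G):
    """Split a symmetric 3x3 tensor into 'spear' (rank-1 direction),
    'plate' (dominant plane) and 'sphere' (isotropic) parts that sum
    exactly to G. Sketch of a standard tensor-shape decomposition."""
    w, V = np.linalg.eigh(G)      # eigenvalues in ascending order
    l3, l2, l1 = w                # relabel so l1 >= l2 >= l3
    e1, e2 = V[:, 2], V[:, 1]
    spear = (l1 - l2) * np.outer(e1, e1)
    plate = (l2 - l3) * (np.outer(e1, e1) + np.outer(e2, e2))
    sphere = l3 * np.eye(3)
    return spear, plate, sphere
```

The relative magnitudes λ1-λ2, λ2-λ3 and λ3 then say at a glance whether the local signal structure is line-like, plane-like or isotropic.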
Direction of Heading from Image Deformations
, 1993
Abstract
Cited by 24 (3 self)
We propose a method to compute the direction of heading from the differential changes in the angles between the projection rays of pairs of point features. These angles, the image deformations, do not depend on viewer rotation, so the key problem of separating the effects of rotation from those of translation is solved at the input. Experiments show both the feasibility of the method on real images and the advantages of using deformations rather than optical flow.

1 Introduction

As we walk down a hallway, the moving images on our retinas convey enough information to determine our direction of heading. Several researchers have investigated how this direction could be computed, either in the human visual system or by a computer processing images from a moving camera. The main difficulty of this computation is to separate the effects of viewer rotation from those of viewer translation. In fact, with only translation the task would be quite simple: features in the image move toward or awa...
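The rotation invariance the abstract relies on is elementary: a pure camera rotation R maps each projection ray r to R r, and (R r1)·(R r2) = r1·r2, so the angle between two rays is unchanged. A minimal sketch (symbols are generic, not the paper's notation):

```python
import numpy as np

def ray_angle(p1, p2, f):
    """Angle between the projection rays of two image points (x, y)
    for focal length f. Because dot products are rotation-invariant,
    this quantity depends only on the viewer's translation -- the
    property the method exploits. Illustrative sketch."""
    r1 = np.array([p1[0], p1[1], f], float)
    r2 = np.array([p2[0], p2[1], f], float)
    c = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(c, -1.0, 1.0))
```

Rotating the camera and reprojecting the same scene points leaves every pairwise ray angle unchanged, so differential changes in these angles isolate the translational component of motion.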
The Intrinsic Structure of Optic Flow Incorporating Measurement Duality
 International Journal of Computer Vision
, 1997
Abstract
Cited by 20 (13 self)
The purpose of this report is to define optic flow for scalar and density images without using a priori knowledge other than its defining conservation principle, and to incorporate measurement duality, notably the scale-space paradigm. It is argued that the design of optic-flow-based applications may benefit from a manifest separation between factual image structure on the one hand, and goal-specific details and hypotheses about image flow formation on the other. The approach is based on a physical symmetry principle known as gauge invariance. Data-independent models can be incorporated by means of admissible gauge conditions, each of which may single out a distinct solution, but all of which must be compatible with the evidence supported by the image data. The theory is illustrated by examples and verified by simulations, and performance is compared to several techniques reported in the literature.

1 Introduction

The conventional "space-time" representation of a movie as...
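The defining conservation principle differs for the two image types. In standard notation (symbols here are the conventional ones, not necessarily the report's):

```latex
% Scalar images: brightness L is preserved along the flow v
\frac{dL}{dt} \;=\; \frac{\partial L}{\partial t} + \mathbf{v}\cdot\nabla L \;=\; 0
% Density images: total "mass" of rho is preserved, so the flow may
% locally compress or dilate (an extra divergence term appears)
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) \;=\; 0
```

The two laws coincide exactly where the flow is divergence-free, which is one reason the scalar/density distinction is easy to overlook.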
Computational approaches to image understanding
 Computing Surveys
, 1982
Abstract
Cited by 19 (2 self)
Recent theoretical developments in Image Understanding are surveyed. Among the issues discussed are edge finding, region finding, texture, shape from shading, shape from texture, shape from contour, and the representations of surfaces and objects. Much of the work described was developed in the DARPA Image Understanding project.