Results 1–5 of 5
Hierarchical Structure and Nonrigid Motion Recovery from 2D Monocular Views
2000
Abstract

Cited by 8 (2 self)
Inferring both 3D structure and motion of nonrigid objects from monocular images is an important problem in computational vision. The challenges stem not only from the absence of point correspondences but also from the structure ambiguity. In this paper, a hierarchical method which integrates both local patch analysis and global shape descriptions is devised to solve the dual problem of structure and nonrigid motion recovery by using an elastic geometric model: extended superquadrics. The nonrigid object of interest is segmented into many small areas and local analysis is performed to recover small details for each small area, assuming that each small area is undergoing similar nonrigid motion. Then, a recursive algorithm is proposed to guide and regularize local analysis with global information by using an appropriate global shape model. This local-global hierarchy enables us to capture both local and global deformations accurately and robustly. Experimental results on both simulation and real data are presented to validate and evaluate the effectiveness and robustness of the proposed approach.
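The global shape model named in this abstract is the extended superquadric. As a minimal sketch, the basic superquadric inside-outside function below shows the kind of implicit model involved; the "extended" variants add tapering and bending deformations, which are omitted here, and the function and parameter names are illustrative, not the paper's notation.

```python
import numpy as np

def superquadric_f(p, a=(1.0, 1.0, 1.0), e=(1.0, 1.0)):
    """Inside-outside function of a basic superquadric.

    F < 1: point inside the surface; F == 1: on it; F > 1: outside.
    a = (a1, a2, a3) are the axis scales, e = (e1, e2) the shape
    exponents (e1 = e2 = 1 gives an ordinary ellipsoid).
    """
    x, y, z = p
    a1, a2, a3 = a
    e1, e2 = e
    xy = (np.abs(x / a1) ** (2.0 / e2) + np.abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + np.abs(z / a3) ** (2.0 / e1)

# A point on the x-axis at distance a1 lies exactly on the surface (F = 1).
print(superquadric_f((1.0, 0.0, 0.0), a=(1.0, 2.0, 3.0)))
```

Fitting such a model to the segmented patches gives the global term that regularizes the local nonrigid-motion analysis.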
Optical Flow and Deformable Objects
Proceedings of the 5th ICCV, 1995
Abstract

Cited by 2 (0 self)
When a plane undergoes a deformation that can be represented by a planar linear vector field, the projected vector field on the image plane of an optical device is at most quadratic. This 2D motion field has one singular point, with eigenvalues identical to those of the singular point describing the deformation. As a consequence, the nature of the singular point of the deformation is a projective invariant. When the plane moves and experiences a linear deformation at the same time, the associated 2D motion field is still quadratic with at most 3 singular points. In the case of a normal roto-translation, i.e. when the angular velocity is normal to the plane, and of a linear deformation, the 2D motion field has at most one singular point and substantial information on the rigid motion and on the deformation can be recovered from it. Experiments with simulated deformations and real deformable objects show that the proposed analysis can provide accurate results and information on more gener...
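The invariant in this abstract is the qualitative type of the singular point, which is determined by the eigenvalues of the 2x2 matrix of the linear field. A minimal sketch of that classification (using the standard trace/determinant criterion, with illustrative function names, not the paper's code):

```python
import numpy as np

def classify_singular_point(A, tol=1e-12):
    """Classify the singular point of the linear field v(x) = A @ x
    from the eigenvalue structure of the 2x2 matrix A."""
    tr = np.trace(A)
    det = np.linalg.det(A)
    disc = tr * tr - 4.0 * det          # discriminant of the eigenvalues
    if det < -tol:
        return "saddle"                 # real eigenvalues of opposite sign
    if abs(tr) <= tol and det > tol:
        return "center"                 # purely imaginary eigenvalues
    if disc < -tol:
        return "focus"                  # complex eigenvalues: spiral
    return "node"                       # real eigenvalues, same sign

print(classify_singular_point(np.array([[1.0, 0.0], [0.0, 2.0]])))   # expansion: node
print(classify_singular_point(np.array([[0.0, -1.0], [1.0, 0.0]])))  # rotation: center
print(classify_singular_point(np.array([[1.0, 0.0], [0.0, -1.0]])))  # shear-like: saddle
```

Because the projected 2D field's singular point has the same eigenvalues as the deformation's, this classification can be read off directly from image motion.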
Dynamic Motion Analysis Using Wavelet Flow Surface Images
Pattern Recognition Letters, 1999
Abstract

Cited by 1 (0 self)
We have developed a motion analysis method that combines the wavelet transform with the flow surface image technique. In dynamic navigation environments, whenever objects move, the projections of the objects onto the image plane also move. These projections build over time spatiotemporal surfaces of their movement and volumes created by the surfaces. This paper presents a new method for the interpretation of optical flow for moving objects from a sequence of images. Flow surface images of the moving objects are created within the wavelet-derived space, chosen from seven different directionally sensitive detail images using the 3D wavelet decomposition. The motion estimation algorithm concentrates on the integration of information from the flow surface images, followed by a quadratic patch parameterization and determination of flow paths of the end points of edges on the flow surface images. The results of two experimental studies with an object exhibiting out-of-plane translation and ...
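The "seven directionally sensitive detail images" come from a single-level 3D wavelet decomposition of the spatiotemporal volume: splitting each of the three axes into lowpass/highpass halves yields 8 subbands, of which all but the pure-lowpass band are detail images. A minimal sketch with a plain Haar filter (illustrative code, not the paper's implementation; the paper's wavelet choice may differ):

```python
import numpy as np

def haar_step(v, axis):
    """One-level Haar split along one axis: (lowpass, highpass) halves."""
    even = np.take(v, range(0, v.shape[axis], 2), axis=axis)
    odd = np.take(v, range(1, v.shape[axis], 2), axis=axis)
    return (even + odd) / 2.0, (even - odd) / 2.0

def haar3d(volume):
    """Single-level 3D Haar decomposition into 8 subbands keyed by
    'lll' ... 'hhh' (l = lowpass, h = highpass, one letter per axis)."""
    bands = {"": volume}
    for axis in range(3):
        new = {}
        for key, v in bands.items():
            lo, hi = haar_step(v, axis)
            new[key + "l"] = lo
            new[key + "h"] = hi
        bands = new
    return bands

vol = np.random.default_rng(0).normal(size=(8, 8, 8))  # toy x-y-t volume
bands = haar3d(vol)
detail = [k for k in bands if "h" in k]
print(len(bands), len(detail))  # 8 subbands, 7 directional detail images
```

The flow surface images are then selected from these seven detail subbands according to the dominant motion direction.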
D.L.: Image interpolation technique for measurement of egomotion in 6 degrees of freedom
Journal of the Optical Society of America A: Optics, Image Science, and Vision, 1997
Abstract

Cited by 1 (0 self)
The motion of an imaging device relative to the environment can, theoretically, be determined from the spatiotemporal intensity changes induced on the image plane of the device. We present a non-iterative method for computing the six parameters of egomotion (three translatory and three rotational) from this visual input. The scheme is initially tested in a raytraced environment to show proof of concept and to explore factors that influence its performance. We then demonstrate its performance on a multilobed camera, which is moved by arbitrary amounts in space. We also discuss and describe some practical implementations. © 1997 Optical Society of America [S0740-3232(97)00812-0]
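The non-iterative character of image interpolation is easiest to see in one dimension: the observed frame is modeled as a linear interpolation between references pre-shifted by a known amount, so the unknown shift drops out of a closed-form least-squares solve. The sketch below is a 1D toy version of that idea, not the paper's full six-degree-of-freedom method; the test signal and all parameter values are illustrative.

```python
import numpy as np

def signal(x):
    # Smooth synthetic test pattern (sum of Gaussians); any smooth row works.
    return np.exp(-((x - 40.0) ** 2) / 50.0) + 0.5 * np.exp(-((x - 60.0) ** 2) / 80.0)

x = np.arange(0.0, 100.0, 0.5)
delta = 1.0                     # known reference shift
d_true = 0.3                    # unknown motion to recover

f0 = signal(x)                  # reference frame
f_minus = signal(x - delta)     # reference with content shifted by +delta
f_plus = signal(x + delta)      # reference with content shifted by -delta
g = signal(x - d_true)          # observed frame

# Model: g ≈ f0 + d * (f_minus - f_plus) / (2*delta).
# Solving for d is a single closed-form least-squares step: no iteration.
h = (f_minus - f_plus) / (2.0 * delta)
d_est = np.dot(g - f0, h) / np.dot(h, h)
print(round(d_est, 3))          # close to the true shift 0.3
```

The full method applies the same interpolation model with six reference motions, one per degree of freedom, and solves a small linear system for all six parameters at once.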
An Intensity-Based Method for the 3D Motion and Structure Estimation from Binocular Image Sequences
Abstract
This paper presents a new algorithm for estimating 3D motion and structure from stereo image sequences. To overcome the inherent ambiguities of motion and structure estimation from a monocular image sequence and to improve the performance of binocular stereo methods, both intensity-based methods are directly integrated. Unlike other methods that compute structure from stereo and motion, this method does not need to match separate monocular optical flows, so the high complexity of such algorithms is avoided. Instead of simplified models that are often unrealistic, new models for 3D piecewise-smooth structure and occlusion are put forward for Bayesian estimation. The experiments show that the algorithm is effective and robust in improving the 3D motion and structure estimation.
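"Intensity-based" (direct) methods work on image brightness rather than on matched features or precomputed flow. The simplest monocular instance of the idea is a brightness-constancy least squares over a patch (Lucas-Kanade style); the sketch below illustrates that building block on synthetic data and is not the paper's Bayesian stereo-motion formulation.

```python
import numpy as np

def gaussian(xx, yy, cx, cy, s=25.0):
    # Smooth synthetic image: a single Gaussian blob centered at (cx, cy).
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * s))

yy, xx = np.mgrid[0:64, 0:64].astype(float)
d = np.array([0.4, 0.2])        # true subpixel displacement (u, v)

I1 = gaussian(xx, yy, 32.0, 32.0)
I2 = gaussian(xx, yy, 32.0 + d[0], 32.0 + d[1])   # same blob, moved by d

# Brightness constancy: Ix*u + Iy*v + It ≈ 0, solved directly on
# intensities by least squares over the whole patch.
Iy, Ix = np.gradient(I1)        # np.gradient returns axis-0 (y) first
It = I2 - I1
A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
b = -It.ravel()
(u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
print(round(u, 2), round(v, 2))  # close to (0.4, 0.2)
```

The paper's contribution is to couple two such intensity-based constraints, across time and across the stereo pair, inside one Bayesian estimator instead of matching separate monocular flows.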