Results 1–10 of 23
Aloimonos, Ambiguity in structure from motion: Sphere versus plane
Internat. J. Comput. Vision, 1998
Abstract

Cited by 24 (6 self)
Abstract. If 3D rigid motion can be correctly estimated from image sequences, the structure of the scene can be correctly derived using the equations for image formation. However, an error in the estimation of 3D motion will result in the computation of a distorted version of the scene structure. Of computational interest are those regions in space where the distortions are such that the depths become negative, because for the scene to be visible it has to lie in front of the image, and thus the corresponding depth estimates have to be positive. The stability analysis for the structure from motion problem presented in this paper investigates the optimal relationship between the errors in the estimated translational and rotational parameters of a rigid motion that results in the estimation of a minimum number of negative depth values. The input used is the value of the flow along some direction, which is more general than optic flow or correspondence. For a planar retina it is shown that the optimal configuration is achieved when the projections of the translational and rotational errors on the image plane are perpendicular. Furthermore, the projections of the actual and the estimated translation lie on a line through the image center. For a spherical retina, given a rotational error, the optimal translation is the correct one; given a translational error, the optimal rotational error depends in both direction and magnitude on the actual and estimated translation as well as the scene in view. The proofs, besides illuminating the confounding of translation and rotation in structure from motion, have an important application to ecological optics. The same analysis provides a computational explanation of why it is ...
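The negative-depth criterion described in this abstract can be illustrated with a small numerical sketch (ours, not the paper's code), assuming a unit-focal-length pinhole camera and one common sign convention for the instantaneous flow equations; with the correct motion every recovered depth is positive, while a wrong translation produces negative-depth points:

```python
import numpy as np

def rotational_flow(points, w):
    """Rotational (depth-independent) image motion under one common sign
    convention for a unit-focal-length pinhole camera (an assumption here)."""
    x, y = points[:, 0], points[:, 1]
    u = x * y * w[0] - (1 + x**2) * w[1] + y * w[2]
    v = (1 + y**2) * w[0] - x * y * w[1] - x * w[2]
    return np.stack([u, v], axis=1)

def flow_field(points, Z, t, w):
    """Full rigid-motion flow: translational part scaled by inverse depth
    plus the rotational part."""
    x, y = points[:, 0], points[:, 1]
    trans = np.stack([x * t[2] - t[0], y * t[2] - t[1]], axis=1)
    return trans / Z[:, None] + rotational_flow(points, w)

def negative_depth_count(points, measured, t_hat, w_hat):
    """Derotate with the estimated rotation, recover inverse depth by least
    squares against the estimated translational direction, and count the
    points whose recovered depth comes out negative."""
    x, y = points[:, 0], points[:, 1]
    derot = measured - rotational_flow(points, w_hat)
    trans = np.stack([x * t_hat[2] - t_hat[0], y * t_hat[2] - t_hat[1]], axis=1)
    inv_depth = np.sum(derot * trans, axis=1) / np.sum(trans * trans, axis=1)
    return int(np.sum(inv_depth < 0))

rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(200, 2))
depths = rng.uniform(2.0, 5.0, size=200)
t_true, w_true = np.array([0.2, 0.0, 1.0]), np.array([0.0, 0.01, 0.0])
obs = flow_field(pts, depths, t_true, w_true)

exact = negative_depth_count(pts, obs, t_true, w_true)                      # correct motion
wrong = negative_depth_count(pts, obs, np.array([-0.8, 0.0, 1.0]), w_true)  # bad translation
```

The paper's analysis concerns which error configurations minimize this count; the sketch only shows that the count is a usable error signal.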
Directions of Motion Fields Are Hardly Ever Ambiguous
1998
Abstract

Cited by 24 (6 self)
If instead of the full motion field, we consider only the direction of the motion field due to a rigid motion, what can we say about the three-dimensional motion information contained in it? This paper provides a geometric analysis of this question based solely on the constraint that the depth of the surfaces in view is positive. It is shown that, considering as the imaging surface the whole sphere, independently of the scene in view, two different rigid motions cannot give rise to the same directional motion field. If we restrict the image to half of a sphere (or an infinitely large image plane), two different rigid motions with instantaneous translational and rotational velocities (t1, ω1) and (t2, ω2) cannot give rise to the same directional motion field unless the plane through t1 and t2 is perpendicular to the plane through ω1 and ω2 (i.e., (t1 × t2) · (ω1 × ω2) = 0). In addition, in order to give practical significance to these uniqueness results ...
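The perpendicularity condition quoted above is a one-line vector computation; a minimal sketch (ours, for illustration only) that checks it for two hand-picked motion pairs:

```python
import numpy as np

def ambiguity_possible(t1, w1, t2, w2, tol=1e-9):
    """Necessary condition quoted above for two rigid motions (t1, w1) and
    (t2, w2) to yield the same directional motion field on a half sphere:
    (t1 x t2) . (w1 x w2) = 0."""
    return bool(abs(np.dot(np.cross(t1, t2), np.cross(w1, w2))) < tol)

t1, t2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])  # t1 x t2 along z
w1 = np.array([1.0, 0.0, 0.0])

ok = ambiguity_possible(t1, w1, t2, np.array([0.0, 0.0, 1.0]))   # w1 x w2 along -y: planes perpendicular
bad = ambiguity_possible(t1, w1, t2, np.array([0.0, 1.0, 0.0]))  # w1 x w2 along z: condition violated
```

When the condition fails, the directional motion fields must differ somewhere on the half sphere; when it holds, ambiguity is merely not ruled out by this test.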
Extracting Structure from Optical Flow Using the Fast Error Search Technique
International Journal of Computer Vision, 1998
Abstract

Cited by 21 (0 self)
In this paper, we present a robust and computationally efficient technique for estimating the focus of expansion (FOE) of an optical flow field, using fast partial search. For each candidate location on a discrete sampling of the image area, we generate a linear system of equations for determining the remaining unknowns, viz. rotation and inverse depth. We compute the least squares error of the system without actually solving the equations, to generate an error surface that describes the goodness of fit across the hypotheses. Using Fourier techniques, we prove that given an N × N flow field, the FOE can be estimated in O(N² log N) operations. Since the resulting system is linear, bounded perturbations in the data lead to bounded errors. We support the theoretical development and proof of our algorithm with experiments on synthetic and real data. Through a series of experiments on synthetic data, we prove the correctness, robustness and operating envelope of our algorithm. ...
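The candidate-and-error-surface idea can be sketched in a simplified setting (ours, not the paper's fast Fourier formulation): for a purely translational flow field, the fitting error of a candidate FOE is the energy of the flow perpendicular to the rays from that candidate, and a brute-force scan of the error surface recovers the true FOE:

```python
import numpy as np

# Synthetic purely expanding flow field with a known FOE (flat scene, no rotation).
true_foe = np.array([0.1, -0.2])
xx, yy = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
pts = np.stack([xx.ravel(), yy.ravel()], axis=1)
uv = pts - true_foe                          # flow expands radially from the FOE

def residual(foe, pts, uv):
    """Goodness of fit of a candidate FOE: sum of squared flow components
    perpendicular to the ray from the candidate (zero for the true FOE)."""
    d = pts - foe
    cross = d[:, 0] * uv[:, 1] - d[:, 1] * uv[:, 0]     # 2D cross product
    return np.sum(cross**2 / (np.sum(d * d, axis=1) + 1e-12))

cx, cy = np.meshgrid(np.linspace(-0.5, 0.5, 11), np.linspace(-0.5, 0.5, 11))
candidates = np.stack([cx.ravel(), cy.ravel()], axis=1)
errors = np.array([residual(c, pts, uv) for c in candidates])   # the error surface
best = candidates[np.argmin(errors)]
```

The paper's contribution is computing such an error surface, with rotation and inverse depth eliminated, in O(N² log N) rather than by this brute-force scan.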
Families of stationary patterns producing illusory movement: Insights into the visual system
Proceedings of the Royal Society of London B: Biological Sciences, 1997
A Biologically Inspired Modular VLSI System for Visual Measurement of Self-Motion
IEEE Sensors Journal, 2002
Abstract

Cited by 14 (0 self)
We introduce a biologically inspired computational architecture for small-field detection and wide-field spatial integration of visual motion based on the general organizing principles of visual motion processing common to organisms from insects to primates. This highly parallel architecture begins with two-dimensional (2D) image transduction and signal conditioning, performs small-field motion detection with a number of parallel motion arrays, and then spatially integrates the small-field motion units to synthesize units sensitive to complex wide-field patterns of visual motion. We present a theoretical analysis demonstrating the architecture's potential in discrimination of wide-field motion patterns such as those which might be generated by self-motion. A custom VLSI hardware implementation of this architecture is also described, incorporating both analog and digital circuitry. The individual custom VLSI elements are analyzed and characterized, and system-level test results demonstrate the ability of the system to selectively respond to certain motion patterns, such as those that might be encountered in self-motion, at the exclusion of others.
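The small-field/wide-field split can be sketched in software (our simplification, not the VLSI design): small-field units report only local motion direction, and wide-field units are template matches, i.e. spatial sums of the local directions against fixed motion patterns such as expansion or rotation:

```python
import numpy as np

xs, ys = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))

def unit(vx, vy):
    n = np.sqrt(vx**2 + vy**2) + 1e-12
    return vx / n, vy / n

# Wide-field templates over the 2D sensor array.
exp_x, exp_y = unit(xs, ys)      # radial template (expansion, e.g. forward motion)
rot_x, rot_y = unit(-ys, xs)     # circular template (rotation about the view axis)

def widefield_response(ux, uy):
    """Spatially integrate the small-field motion directions against each
    wide-field template; a large sum means the stimulus matches the pattern."""
    dx, dy = unit(ux, uy)
    return np.sum(dx * exp_x + dy * exp_y), np.sum(dx * rot_x + dy * rot_y)

expansion_resp = widefield_response(xs, ys)    # stimulus: pure expansion
rotation_resp = widefield_response(-ys, xs)    # stimulus: pure rotation
```

Each stimulus drives its own template strongly and the orthogonal template not at all, which is the selectivity the system-level tests in the paper demonstrate in hardware.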
Parametric manifold of an object under different viewing directions
In ECCV, 2012
Abstract

Cited by 5 (0 self)
Abstract. The appearance of a 3D object depends on both the viewing direction and the illumination conditions. It has been proven that all n-pixel images of a convex object with a Lambertian surface under variable lighting from infinity form a convex polyhedral cone (called the illumination cone) in n-dimensional space. This paper tries to answer the other half of the question: what is the set of images of an object under all viewing directions? A novel image representation is proposed, which transforms any n-pixel image of a 3D object to a vector in a 2n-dimensional pose space. In such a pose space, we prove that the transformed images of a 3D object under all viewing directions form a parametric manifold in a 6-dimensional linear subspace. With in-depth rotations along a single axis in particular, this manifold is an ellipse. Furthermore, we show that this parametric pose manifold of a convex object can be estimated from a few images in different poses and used to predict the object's appearance under unseen viewing directions. These results immediately suggest a number of approaches to object recognition, scene detection, and 3D modelling. Experiments on both synthetic data and real images are reported, demonstrating the validity of the proposed representation.
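The single-axis ellipse claim has a simple geometric reading (our gloss, under the assumption that each entry of the proposed 2n-dimensional pose vector depends linearly on the rotated object points): rotation about, say, the y-axis makes every such entry a linear combination of cos θ and sin θ,

```latex
R_y(\theta)\,P =
\begin{pmatrix} X\cos\theta + Z\sin\theta \\ Y \\ -X\sin\theta + Z\cos\theta \end{pmatrix}
\quad\Longrightarrow\quad
v(\theta) = \mathbf{a}\cos\theta + \mathbf{b}\sin\theta + \mathbf{c},
```

and any curve of the form a cos θ + b sin θ + c traces an ellipse in the affine plane c + span{a, b}.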
3D Motion and Shape Representations in Visual Servo Control
Int. J. of Robotics Research, 1995
Abstract

Cited by 4 (2 self)
The study of visual navigation problems requires the integration of visual processes with motor control processes. Most essential in approaching this integration is the study of appropriate spatiotemporal representations which the system computes from the imagery and which serve as interfaces to all cognitive and motor activities. Since representations resulting from exact quantitative reconstruction have turned out to be very hard to obtain, we argue here for the necessity of representations which can be computed easily, reliably and in real time and which recover only the information about the 3D world which is really needed in order to solve the navigational problems at hand. In this paper we introduce a number of such representations capturing aspects of 3D motion and scene structure which are used for the solution of navigational problems implemented in visual servo systems. This research is supported by the National Science Foundation under Grant IRI9057934, the Office of N...
An Integrated Vision Sensor for the Computation of Optical Flow Singular Points
in Advances in Neural Information Processing Systems, 1999
Abstract

Cited by 3 (3 self)
A robust, integrative algorithm is presented for computing the position of the focus of expansion or axis of rotation (the singular point) in optical flow fields such as those generated by self-motion. Measurements are shown of a fully parallel CMOS analog VLSI motion sensor array which computes the direction of local motion (sign of optical flow) at each pixel and can directly implement this algorithm. The flow field singular point is computed in real time with a power consumption of less than 2 mW. Computation of the singular point for more general flow fields requires measures of field expansion and rotation, which it is shown can also be computed in real-time hardware, again using only the sign of the optical flow field. These measures, along with the location of the singular point, provide robust real-time self-motion information for the visual guidance of a moving platform such as a robot. 1 INTRODUCTION Visually guided navigation of autonomous vehicles requires robust measures ...
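Locating a singular point from only the sign of the flow can be sketched as follows (a plausible reading of the sign-only approach, not the chip's actual circuit): sum the sign of flow along each image column and row and take the zero crossings as the singular point's coordinates:

```python
import numpy as np

# Sign-only flow for a purely expanding field with a known singular point.
xs, ys = np.meshgrid(np.arange(32), np.arange(32))
foe = (20, 11)
u_sign = np.sign(xs - foe[0])    # sign of horizontal flow at each pixel
v_sign = np.sign(ys - foe[1])    # sign of vertical flow at each pixel

def singular_point(u_sign, v_sign):
    """Integrative estimate: the column/row where the summed sign of flow
    crosses zero is the singular point's coordinate along that axis."""
    col = u_sign.sum(axis=0)     # net horizontal sign per image column
    row = v_sign.sum(axis=1)     # net vertical sign per image row
    return int(np.argmin(np.abs(col))), int(np.argmin(np.abs(row)))

est = singular_point(u_sign, v_sign)
```

Because only signs are summed, the estimate is robust to the magnitude errors that plague analog motion sensors, which is the property the paper exploits in hardware.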
What is Computed by Structure from Motion Algorithms?
In Proc. European Conference on Computer Vision, 1997
Abstract

Cited by 3 (2 self)
In the literature we find two classes of algorithms which, on the basis of two views of a scene, recover the rigid transformation between the views and subsequently the structure of the scene. The first class contains techniques which require knowledge of the correspondence or the motion field between the images and are based on the epipolar constraint. The second class contains so-called direct algorithms which require knowledge about the value of the flow in one direction only and are based on the positive depth constraint. Algorithms in the first class achieve the solution by minimizing a function representing deviation from the epipolar constraint while direct algorithms find the 3D motion that, when used to estimate depth, produces a minimum number of negative depth values. This paper presents a stability analysis of both classes of algorithms. The formulation is such that it allows comparison of the robustness of algorithms in the two classes as well as within each class. Specifi...
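The epipolar-constraint residual minimized by the first class of algorithms can be sketched numerically (our illustration; sign and ordering conventions for the essential matrix vary across formulations): with essential matrix E = [t]× R, a correct motion hypothesis makes p2ᵀ E p1 vanish for every correspondence, while a wrong one does not:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x, so that skew(t) @ p == np.cross(t, p)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(p1, p2, R, t):
    """Deviation from the epipolar constraint p2^T E p1 = 0 with E = [t]x R;
    first-class algorithms minimize a sum of such terms over all matches."""
    return float(p2 @ (skew(t) @ R) @ p1)

# Two views of one 3D point: identity rotation, baseline along x.
P = np.array([0.3, -0.2, 4.0])
t = np.array([1.0, 0.0, 0.0])
p1 = P / P[2]                    # homogeneous image point in view 1
p2 = (P - t) / (P - t)[2]        # corresponding point in view 2 (R = I)

r_good = epipolar_residual(p1, p2, np.eye(3), t)                         # consistent motion
r_bad = epipolar_residual(p1, p2, np.eye(3), np.array([0.0, 1.0, 0.0]))  # wrong translation
```

Direct algorithms, by contrast, score a motion hypothesis by counting negative recovered depths rather than by such epipolar residuals, and the paper compares the stability of the two criteria.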
Direct Self-Calibration
1997
Abstract

Cited by 2 (1 self)
This study investigates the problem of estimating camera calibration parameters from image motion fields induced by a rigidly moving camera with unknown parameters, where the image formation is modeled with a linear pinhole-camera model. The equations obtained show the flow to be separated into a component due to the translation and the calibration parameters and a component due to the rotation and the calibration parameters. A set of parameters encoding the latter component is linearly related to the flow, and from these parameters the calibration can be determined. However, as for discrete motion, in general it is not possible to decouple image measurements obtained from only two frames into translational and rotational components. Geometrically, the ambiguity takes the form of a part of the rotational component being parallel to the translational component, and thus the scene can be reconstructed only up to a projective transformation. In general, for full calibration at least four ...