Results 1–10 of 13
The Fundamental matrix: theory, algorithms, and stability analysis
 International Journal of Computer Vision
, 1995
"... In this paper we analyze in some detail the geometry of a pair of cameras, i.e. a stereo rig. Contrarily to what has been done in the past and is still done currently, for example in stereo or motion analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of th ..."
Abstract

Cited by 233 (14 self)
In this paper we analyze in some detail the geometry of a pair of cameras, i.e. a stereo rig. Contrary to what has been done in the past and is still done currently, for example in stereo or motion analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of the principal points, pixel aspect ratios and focal lengths). This is important for two reasons. First, it is more realistic in applications where these parameters may vary according to the task (active vision). Second, the general case considered here captures all the relevant information that is necessary for establishing correspondences between two pairs of images. This information is fundamentally projective and is hidden in a confusing manner in the commonly used formalism of the Essential matrix introduced by Longuet-Higgins [40]. This paper clarifies the projective nature of the correspondence problem in stereo and shows that the epipolar geometry can be summarized in one 3 × 3 ma...
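The epipolar constraint this abstract refers to can be illustrated with the standard eight-point algorithm, which is not necessarily the estimation method of this particular paper. The sketch below uses a hypothetical calibrated rig (identity intrinsics, made-up rotation angle and translation), in which case the fundamental matrix reduces to the essential matrix E = [t]ₓR:

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(t):
    """3x3 skew-symmetric matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical stereo rig: second camera satisfies X2 = R @ X1 + t.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.1])

# Synthetic correspondences: project random 3D points into both views.
X1 = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))
X2 = X1 @ R.T + t
p1 = X1 / X1[:, 2:]                   # normalized image points, view 1
p2 = X2 / X2[:, 2:]                   # normalized image points, view 2

# Eight-point algorithm: each match gives one linear equation
# p2^T F p1 = 0 in the 9 entries of F (row-major vectorization).
A = np.array([np.kron(b, a) for a, b in zip(p1, p2)])
_, _, Vt = np.linalg.svd(A)
F = Vt[-1].reshape(3, 3)

# Enforce the rank-2 constraint: F maps points to epipolar lines,
# so it must be singular.
U, S, Vt = np.linalg.svd(F)
F = U @ np.diag([S[0], S[1], 0.0]) @ Vt

# Epipolar residuals p2^T F p1 vanish on noise-free data.
residuals = np.abs(np.einsum('ij,jk,ik->i', p2, F, p1))
```

On noisy real correspondences one would normalize the point coordinates first and use a robust estimator rather than this bare linear solve.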
Passive navigation
 Computer Vision, Graphics, and Image Processing
, 1983
"... A method is proposed for determining the motion of a body relative to a fixed environment using the changing image seen by a camera attached to the body. The optical flow in the image plane is the input, while the instantaneous rotation and translation of the body are the output. If optical flow cou ..."
Abstract

Cited by 168 (7 self)
A method is proposed for determining the motion of a body relative to a fixed environment using the changing image seen by a camera attached to the body. The optical flow in the image plane is the input, while the instantaneous rotation and translation of the body are the output. If optical flow could be determined precisely, it would only have to be known at a few places to compute the parameters of the motion. In practice, however, the measured optical flow will be somewhat inaccurate. It is therefore advantageous to consider methods which use as much of the available information as possible. We employ a least-squares approach which minimizes some measure of the discrepancy between the measured flow and that predicted from the computed motion parameters. Several different error norms are investigated. In general, our algorithm leads to a system of nonlinear equations from which the motion parameters may be computed numerically. However, in the special cases where the motion of the camera is purely translational or purely rotational, use of the appropriate norm leads to a system of equations from which these parameters can be determined in closed form.
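The pure-rotation special case mentioned at the end is the easiest to see in code: rotational image flow is linear in the angular velocity, so a single least-squares solve recovers it in closed form. This is a generic sketch of that idea with made-up values, not a reproduction of the paper's formulation:

```python
import numpy as np

# For purely rotational camera motion the flow at image point (x, y)
# (focal length 1) is linear in the angular velocity w = (wx, wy, wz):
#   u = x*y*wx - (1 + x**2)*wy + y*wz
#   v = (1 + y**2)*wx - x*y*wy - x*wz
def rotational_flow_matrix(x, y):
    """Per-pixel 2x3 matrix mapping angular velocity to image flow."""
    return np.array([[x * y, -(1.0 + x * x), y],
                     [1.0 + y * y, -x * y, -x]])

# Synthetic flow field for a hypothetical angular velocity.
w_true = np.array([0.01, -0.02, 0.005])
xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 9), np.linspace(-0.5, 0.5, 9))

A = np.vstack([rotational_flow_matrix(x, y)
               for x, y in zip(xs.ravel(), ys.ravel())])
flow = A @ w_true                      # stacked (u, v) measurements

# One linear least-squares solve recovers the rotation exactly
# on noise-free flow, and in the least-squares sense otherwise.
w_est, *_ = np.linalg.lstsq(A, flow, rcond=None)
```

With noisy flow the same solve minimizes the squared discrepancy between measured and predicted flow, which is the flavor of error norm the abstract describes.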
Comparison of Approaches to Egomotion Computation
 In CVPR
, 1996
"... We evaluated six algorithms for computing egomotion from image velocities. We established benchmarks for quantifying bias and sensitivity to noise, and for quantifying the convergence properties of those algorithms that require numerical search. Our simulation results reveal some interesting and sur ..."
Abstract

Cited by 59 (0 self)
We evaluated six algorithms for computing egomotion from image velocities. We established benchmarks for quantifying bias and sensitivity to noise, and for quantifying the convergence properties of those algorithms that require numerical search. Our simulations reveal some interesting and surprising results. First, it is often written in the literature that the egomotion problem is difficult because translation (e.g., along the X-axis) and rotation (e.g., about the Y-axis) produce similar image velocities. We found, to the contrary, that the bias and sensitivity of our six algorithms are totally invariant with respect to the axis of rotation. Second, it is also believed by some that fixating helps to make the egomotion problem easier. We found, to the contrary, that fixating does not help when the noise is independent of the image velocities. Fixation does help if the noise is proportional to speed, but this is only for the trivial reason that the speeds are slower under fixatio...
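A benchmark of the kind described, quantifying bias and noise sensitivity, can be sketched with a Monte-Carlo loop around any estimator. The toy estimator below (a least-squares fit of the focus of expansion of a translational flow field) and all its numbers are illustrative assumptions, not any of the six algorithms evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy estimator: recover the focus of expansion (FOE) of a purely
# translational flow field from the constraint that every flow vector
# points away from the FOE:  u*(y - y0) - v*(x - x0) = 0.
def estimate_foe(x, y, u, v):
    A = np.column_stack([v, -u])       # unknowns (x0, y0)
    b = v * x - u * y
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

foe_true = np.array([0.1, -0.05])
x = rng.uniform(-0.5, 0.5, 100)
y = rng.uniform(-0.5, 0.5, 100)
Z = rng.uniform(2.0, 6.0, 100)
u = (x - foe_true[0]) / Z              # radial flow, scaled by 1/depth
v = (y - foe_true[1]) / Z

# Monte-Carlo benchmark: re-estimate under independent flow noise,
# then report bias (mean error) and sensitivity (spread).
sigma = 0.01
trials = np.array([estimate_foe(x, y,
                                u + sigma * rng.standard_normal(100),
                                v + sigma * rng.standard_normal(100))
                   for _ in range(500)])
bias = trials.mean(axis=0) - foe_true
sensitivity = trials.std(axis=0)
```

Swapping `rng.standard_normal(100)` for noise proportional to flow speed reproduces the second experimental condition the abstract contrasts.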
Motion fields are hardly ever ambiguous
 International Journal of Computer Vision
, 1987
"... There has been much concern with ambiguity in the recovery of motion and structure from timevaryin g images. I show here that the class of surfaces leading to ambiguous motion fields is extremel y restrictedonly certain hyperholoids of one sheet (and some degenerate forms) qualify. Furthermore, th ..."
Abstract

Cited by 31 (1 self)
There has been much concern with ambiguity in the recovery of motion and structure from time-varying images. I show here that the class of surfaces leading to ambiguous motion fields is extremely restricted: only certain hyperboloids of one sheet (and some degenerate forms) qualify. Furthermore, the viewer must be on the surface for it to lead to a potentially ambiguous motion field. Thus the motion field over an appreciable image region almost always uniquely defines the instantaneous translational and rotational velocities, as well as the shape of the surface (up to a scale factor).
Research for this article was conducted while the author was on leave at the Department of Electrical Engineering, University of Hawaii at Manoa, Honolulu, Hawaii 96822.
Robust Egomotion Estimation from Affine Motion Parallax
 In European Conference on Computer Vision
, 1994
"... A condensed version of this paper will be presented at ECCV'94 ..."
Abstract

Cited by 16 (0 self)
A condensed version of this paper will be presented at ECCV'94
Active control of zoom for computer vision
, 2002
"... Using zoom lenses in a computer vision system affects many aspects of the processing in the path from image formation to structure recovery. This thesis is concerned with understanding and addressing the particular issues which arise when wishing to control zoom in an active vision system — one able ..."
Abstract

Cited by 7 (1 self)
Using zoom lenses in a computer vision system affects many aspects of the processing in the path from image formation to structure recovery. This thesis is concerned with understanding and addressing the particular issues which arise when wishing to control zoom in an active vision system — one able to fixate upon and track objects in the scene. The optical properties of zoom lenses interact with the imaging process in a number of ways. The first part of this work begins by confirming that the pinhole camera model is nonetheless valid for the cameras to be used. Then, using geometric arguments, it is shown how zoom-varying lens distortion adversely affects camera autocalibration techniques which rely on purely rotational motion. Whilst pincushion distortion is tolerable, it is shown that barrelling distortion causes algorithm failure. The breakdown point is predicted, then verified using synthetic experiments. Suggestions for automatic recovery of the distortion parameters are given. The lowest level of the visual processing involves detecting and matching image features before robust segmentation and motion estimation. Achieving robustness comes at high computational cost, and the second part of this work addresses some of the theoretical and computational issues in Torr and Zisserman’s
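The pincushion/barrel distinction drawn above comes down to the sign of the leading radial distortion coefficient. A minimal sketch of the common first-order model, with made-up coefficients (the thesis's own distortion model and parameter values are not reproduced here):

```python
import numpy as np

# First-order radial lens distortion:
#   x_d = x * (1 + k1 * r^2),  r^2 = x^2 + y^2
# k1 > 0 pushes points outward (pincushion distortion);
# k1 < 0 pulls points inward (barrel distortion).
def distort(points, k1):
    r2 = np.sum(points**2, axis=1, keepdims=True)
    return points * (1.0 + k1 * r2)

# Illustrative image points (normalized coordinates) and coefficients.
pts = np.array([[0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
barrel = distort(pts, -0.2)
pincushion = distort(pts, 0.2)
```

In a zoom lens k1 typically varies with the zoom setting, which is why a distortion estimate made at one focal length cannot simply be reused at another.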
Ambiguities of a motion field
 Laboratory, Massachusetts Institute of Technology
, 1987
"... In this paper we study the conditions under which a perspective motion field can have multiple interpretations, and present analytical expressions for the relationship among these interpretations. It is shown that, in most cases, the ambiguity in the interpretation of a motion field can be resolved ..."
Abstract

Cited by 2 (0 self)
In this paper we study the conditions under which a perspective motion field can have multiple interpretations, and present analytical expressions for the relationship among these interpretations. It is shown that, in most cases, the ambiguity in the interpretation of a motion field can be resolved by imposing the physical constraint that depth is positive over the image region onto which the surface projects.
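The positive-depth constraint used to resolve ambiguity can be demonstrated in the simplest case: a translational flow field is explained equally well by (t, Z) and (−t, −Z), but only one choice puts every scene point in front of the camera. This is a generic illustration with assumed values, not the paper's analytical treatment:

```python
import numpy as np

rng = np.random.default_rng(1)

def depths_for_candidate(x, y, u, v, t):
    """Depth at each pixel assuming pure translation t (pinhole, f = 1).

    Uses u = (x*tz - tx)/Z and v = (y*tz - ty)/Z, combining both
    flow components into one least-squares depth per pixel.
    """
    num = (x * t[2] - t[0]) * u + (y * t[2] - t[1]) * v
    den = u * u + v * v
    return num / den

# Synthetic translational flow from a hypothetical motion and scene.
t_true = np.array([0.3, -0.1, 1.0])
x = rng.uniform(-0.5, 0.5, 50)
y = rng.uniform(-0.5, 0.5, 50)
Z = rng.uniform(2.0, 6.0, 50)          # true (positive) depths
u = (x * t_true[2] - t_true[0]) / Z
v = (y * t_true[2] - t_true[1]) / Z

# The true candidate yields all-positive depths; the mirror-image
# interpretation -t yields all-negative depths and is rejected.
ok_true = np.all(depths_for_candidate(x, y, u, v, t_true) > 0)
ok_mirror = np.all(depths_for_candidate(x, y, u, v, -t_true) > 0)
```

The same cheirality test generalizes to the full motion-and-structure problem, which is the sense in which the abstract says positivity of depth resolves most ambiguities.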
Extracting Planar Kinematic Models Using Interactive Perception
"... Abstract — Interactive perception augments the process of perception with physical interactions. By adding interactions into the perceptual process, manipulating the environment becomes part of the effort to learn taskrelevant information, leading to more reliable task execution. Interactions inclu ..."
Abstract
Interactive perception augments the process of perception with physical interactions. By adding interactions into the perceptual process, manipulating the environment becomes part of the effort to learn task-relevant information, leading to more reliable task execution. Interactions include obstruction removal, object repositioning, and object manipulation. In this paper, we show how to extract kinematic properties from novel objects. Many objects in human environments, such as doors, drawers, and hand tools, contain inherent kinematic degrees of freedom. Knowledge of these degrees of freedom is required to use the objects in their intended manner. We demonstrate how a simple algorithm enables the construction of kinematic models for such objects, resulting in knowledge necessary for the correct operation of those objects. The simplicity of the framework and its effectiveness, demonstrated in our experimental results, indicate that interactive perception is a promising perceptual paradigm for autonomous mobile manipulation.
Interpretation of Image Flow: Rigid Curved Surfaces in Motion
, 1988
"... A new method is described for interpreting image flow (or optical flow) in a small field of view produced by a rigidly moving curved surface. The equations relating the shape and motion of the surface to the image flow are formulated. These equations are solved to obtain explicit analytic expression ..."
Abstract
A new method is described for interpreting image flow (or optical flow) in a small field of view produced by a rigidly moving curved surface. The equations relating the shape and motion of the surface to the image flow are formulated. These equations are solved to obtain explicit analytic expressions for the motion, orientation and curvatures of the surface in terms of the spatial derivatives (up to second order) of the image flow. We state and prove some new theoretical results concerning the existence of multiple interpretations. Numerical examples are given for some interesting cases where multiple solutions exist. The solution method described here is simpler and more direct than previous methods. The method and the representation described here are part of a unified approach for the interpretation of image motion in a variety of cases (e.g., planar/curved surfaces, constant/accelerated motion, etc.). Thus the representation and the method of analysis adopted here have some advantages in comparison with previous approaches.