Determining the Epipolar Geometry and its Uncertainty: A Review
 International Journal of Computer Vision
, 1998
Cited by 320 (7 self)
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if the images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
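To make the epipolar constraint concrete, here is a small NumPy sketch (not from the paper; the motion, intrinsics, and points are made-up values) that builds the essential and fundamental matrices from a known relative pose and checks that x₂ᵀ F x₁ = 0 for corresponding image points:

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(t):
    # cross-product matrix [t]_x, so that skew(t) @ a == np.cross(t, a)
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Known relative motion: X2 = R @ X1 + t (coordinates in the second camera frame).
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1.0]])
t = np.array([1.0, 0.2, 0.1])

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])  # hypothetical intrinsics

E = skew(t) @ R                                 # essential matrix (calibrated case)
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)   # fundamental matrix (uncalibrated case)

# Project random 3-D points into both views and verify x2^T F x1 = 0.
X1 = rng.uniform(-1, 1, (10, 3)) + np.array([0, 0, 5.0])
X2 = X1 @ R.T + t
x1 = X1 @ K.T; x1 /= x1[:, 2:]
x2 = X2 @ K.T; x2 /= x2[:, 2:]
residuals = np.abs(np.einsum('ni,ij,nj->n', x2, F, x1))
```

In practice the pose is unknown and F must be estimated from point matches (e.g. by the eight-point algorithm and its robust variants), which is the estimation problem this review surveys.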
Unified inverse depth parametrization for monocular SLAM
 In Proceedings of Robotics: Science and Systems
, 2006
Cited by 121 (17 self)
We present a new parametrization for point features within monocular simultaneous localization and mapping (SLAM) that permits efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The key concept is direct parametrization of the inverse depth of features relative to the camera locations from which they were first viewed, which produces measurement equations with a high degree of linearity.
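As a sketch of the idea: a feature is stored as the camera position at first observation, a viewing direction, and an inverse depth ρ = 1/d. The angle convention below (azimuth θ, elevation φ, ray m(θ, φ) = (cos φ sin θ, −sin φ, cos φ cos θ)) follows the authors' published parametrization, but the helper name and values are made up for illustration:

```python
import numpy as np

def inverse_depth_to_euclidean(y):
    """y = (x0, y0, z0, theta, phi, rho): camera centre at first observation,
    azimuth/elevation of the viewing ray, and inverse depth rho = 1/d."""
    x0, y0, z0, theta, phi, rho = y
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])   # unit ray direction
    return np.array([x0, y0, z0]) + m / rho

# A feature first seen from the origin, straight down the optical axis,
# at a depth of 4 units -> rho = 0.25.
p = inverse_depth_to_euclidean(np.array([0, 0, 0, 0.0, 0.0, 0.25]))
# theta = phi = 0 gives the ray (0, 0, 1), so p = (0, 0, 4).
```

The point of the encoding is that uncertainty in ρ remains close to Gaussian even for very distant features (ρ → 0), which is what allows the EKF to initialize a feature immediately rather than waiting for parallax.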
Multi-Frame Optical Flow Estimation Using Subspace Constraints
, 1999
Cited by 78 (2 self)
We show that the set of all flow fields in a sequence of frames imaging a rigid scene resides in a low-dimensional linear subspace. Based on this observation, we develop a method for simultaneous estimation of optical flow across multiple frames, which uses these subspace constraints. The multi-frame subspace constraints are strong constraints, and replace commonly used heuristic constraints, such as spatial or temporal smoothness. The subspace constraints are geometrically meaningful, and are not violated at depth discontinuities, or when the camera motion changes abruptly. Furthermore, we show that the subspace constraints on flow fields apply for a variety of imaging models, scene models, and motion models. Hence, the presented approach for constrained multi-frame flow estimation is general. However, our approach does not require prior knowledge of the underlying world or camera model. Although linear subspace constraints have been used successfully in the past for recovering 3D information (e.g., [18]), it has been assumed that 2D correspondences are given. However, correspondence estimation is a fundamental problem in motion analysis. In this paper, we use multi-frame subspace constraints to constrain the 2D correspondence estimation process itself, and not for 3D recovery.
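The rank constraint can be illustrated numerically. Under an instantaneous-motion model with a calibrated camera (one common special case; the paper covers more general models), the flow at each point is linear in that frame's translational and rotational velocities, so stacking the flow fields of many frames as columns gives a matrix of rank at most six. A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

N, F = 50, 20                       # feature points, frames
x = rng.uniform(-1, 1, (N, 2))      # normalized image coordinates (fixed scene)
Z = rng.uniform(2, 5, N)            # fixed scene depths

def flow(v, w):
    """Instantaneous flow field for translation v, rotation w (calibrated camera)."""
    u = np.empty((N, 2))
    for i, ((xi, yi), zi) in enumerate(zip(x, Z)):
        A = np.array([[-1, 0, xi], [0, -1, yi]]) / zi       # translational part
        B = np.array([[xi * yi, -(1 + xi**2), yi],          # rotational part
                      [1 + yi**2, -xi * yi, -xi]])
        u[i] = A @ v + B @ w
    return u.ravel()                # length-2N flow field

# One flow-field column per frame, each with its own camera motion.
U = np.column_stack([flow(rng.normal(size=3), rng.normal(size=3))
                     for _ in range(F)])
s = np.linalg.svd(U, compute_uv=False)
# Every column is M @ [v; w] for one fixed 2N-by-6 matrix M,
# so singular values s[6:] vanish: rank(U) <= 6.
```

This is exactly why the subspace can replace smoothness heuristics: the constraint comes from the geometry of rigid motion, not from assumptions about the flow field's spatial behavior.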
Motion estimation via dynamic vision
 IN PROC. EUROPEAN CONF. ON COMPUTER VISION
, 1996
Cited by 71 (8 self)
Estimating the three-dimensional motion of an object from a sequence of projections is of paramount importance in a variety of applications in control and robotics, such as autonomous navigation, manipulation, servo, tracking, docking, planning, and surveillance. Although “visual motion estimation” is an old problem (the first formulations date back to the beginning of the century), only recently have tools from nonlinear systems estimation theory hinted at acceptable solutions. In this paper we formulate the visual motion estimation problem in terms of identification of nonlinear implicit systems with parameters on a topological manifold and propose a dynamic solution either in the local coordinates or in the embedding space of the parameter manifold. Such a formulation has structural advantages over previous recursive schemes, since the estimation of motion is decoupled from the estimation of the structure of
Comparison of Approaches to Egomotion Computation
 In CVPR
, 1996
Cited by 59 (0 self)
We evaluated six algorithms for computing egomotion from image velocities. We established benchmarks for quantifying bias and sensitivity to noise, and for quantifying the convergence properties of those algorithms that require numerical search. Our simulations reveal some interesting and surprising results. First, it is often written in the literature that the egomotion problem is difficult because translation (e.g., along the X-axis) and rotation (e.g., about the Y-axis) produce similar image velocities. We found, to the contrary, that the bias and sensitivity of our six algorithms are totally invariant with respect to the axis of rotation. Second, it is also believed by some that fixating helps to make the egomotion problem easier. We found, to the contrary, that fixating does not help when the noise is independent of the image velocities. Fixation does help if the noise is proportional to speed, but this is only for the trivial reason that the speeds are slower under fixation ...
A Tensor Framework for Multidimensional Signal Processing
 Linköping University, Sweden
, 1994
Cited by 53 (8 self)
About the cover: The figure on the cover shows a visualization of a symmetric tensor in three dimensions, G = λ₁ê₁ê₁ᵀ + λ₂ê₂ê₂ᵀ + λ₃ê₃ê₃ᵀ. The object in the figure is the sum of a spear, a plate and a sphere. The spear describes the principal direction of the tensor, λ₁ê₁ê₁ᵀ, where the length is proportional to the largest eigenvalue, λ₁. The plate describes the plane spanned by the eigenvectors corresponding to the two largest eigenvalues, λ₂(ê₁ê₁ᵀ + ê₂ê₂ᵀ). The sphere, with a radius proportional to the smallest eigenvalue, shows how isotropic the tensor is, λ₃(ê₁ê₁ᵀ + ê₂ê₂ᵀ + ê₃ê₃ᵀ). The visualization is done using AVS [WWW94]. I am very grateful to Johan Wiklund for implementing the tensor viewer module used. This thesis deals with filtering of multidimensional signals. A large part of the thesis is devoted to a novel filtering method termed “Normalized convolution”. The method performs local expansion of a signal in a chosen filter basis which
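The spectral decomposition behind this spear/plate/sphere picture can be computed directly. The following generic NumPy sketch (not code from the thesis) extracts the eigensystem of a symmetric 3×3 tensor and verifies the reconstruction G = Σᵢ λᵢêᵢêᵢᵀ:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3))
G = M @ M.T                          # a symmetric (here positive semidefinite) 3x3 tensor

lam, e = np.linalg.eigh(G)           # eigh returns eigenvalues in ascending order
lam, e = lam[::-1], e[:, ::-1]       # reorder so lam[0] >= lam[1] >= lam[2]

# Spectral form G = sum_i lam_i * e_i e_i^T: lam[0], e[:, 0] give the spear
# direction and length; lam[1], lam[2] control the plate and sphere components.
G_rec = sum(lam[i] * np.outer(e[:, i], e[:, i]) for i in range(3))
```

Because G is symmetric, `eigh` guarantees real eigenvalues and orthonormal eigenvectors, which is what makes the spear/plate/sphere shape descriptors well defined.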
3D Structure from 2D Motion
 IEEE Signal Processing Magazine
, 1999
Cited by 51 (1 self)
this paper to delve into this formalism, further reading can be found in [41] [45]. In the following, we shall discuss its practical implementation and implications in the SfM techniques that have adopted it.
Determining the egomotion of an uncalibrated camera from instantaneous optical flow
 Journal of the Optical Society of America A
, 1997
Cited by 44 (24 self)
The main result of this paper is a procedure for self-calibration of a moving camera from instantaneous optical flow. Under certain assumptions, this procedure allows the egomotion and some intrinsic parameters of the camera to be determined solely from the instantaneous positions and velocities of a set of image features. The proposed method relies upon the use of a differential epipolar equation that relates optical flow to the egomotion and internal geometry of the camera. The paper presents a detailed derivation of this equation. This aspect of the work may be seen as a recasting into an analytical framework of the pivotal research of Viéville and Faugeras. The information about the camera's egomotion and internal geometry enters the differential epipolar equation via two matrices. It emerges that the optical flow determines the composite ratio of some of the entries of the two matrices. It is shown that a camera with unknown focal length undergoing arbitrary motion can be self-calibrated via closed-form expressions in the composite ratio. The corresponding formulae specify five egomotion parameters, as well as the focal length and its derivative. An accompanying procedure is presented for reconstructing the viewed scene, up to scale, from the derived self-calibration data and the optical flow data. Experimental results are given to demonstrate the correctness of the approach.
Linear subspace methods for recovering translation direction
 Spatial Vision in Humans and Robots
, 1993
Cited by 39 (1 self)
The image motion field for an observer moving through a static environment depends on the observer's translational and rotational velocities along with the distances to surface points. Given such a motion field as input, we have recently introduced subspace methods for the recovery of the observer's motion and the depth structure of the scene. This class of methods involves splitting the equations describing the motion field into separate equations for the observer's translational direction, the rotational velocity, and the relative depths. The resulting equations can then be solved successively, beginning with the equations for the translational direction. Here we concentrate on this first step. In earlier work, a linear method was shown to provide a biased estimate of the translational direction. We discuss the source of this bias and show how it can be effectively removed. The consequence is that the observer's velocity and the relative depths to points in the scene can all be recovered by successively solving three linear problems.
Linear differential algorithm for motion recovery: A geometric approach
 International Journal of Computer Vision
, 2000
Cited by 35 (7 self)
The aim of this paper is to explore a linear geometric algorithm for recovering the three-dimensional motion of a moving camera from image velocities. Generic similarities and differences between the discrete approach and the differential approach are clearly revealed through a parallel development of an analogous motion estimation theory previously explored in [24, 26]. We present a precise characterization of the space of differential essential matrices, which gives rise to a novel eigenvalue-decomposition-based 3D velocity estimation algorithm from the optical flow measurements. This algorithm gives a unique solution to the motion estimation problem and serves as a differential counterpart of the well-known SVD-based 3D displacement estimation algorithm for the discrete case. Since the proposed algorithm only involves linear algebra techniques, it may be used to provide a fast initial guess for more sophisticated nonlinear algorithms [13]. Extensive simulation results are presented for evaluating the performance of our algorithm in terms of bias and sensitivity of the estimates with respect to different noise levels in image velocity measurements.
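The differential epipolar constraint underlying this line of work, ẋᵀ v̂ x + xᵀ ω̂ v̂ x = 0 (with v̂ the cross-product matrix of the linear velocity v and ω̂ that of the angular velocity ω), can be checked numerically. The sketch below assumes the convention Ẋ = ω̂ X + v for a point's coordinates in the camera frame and is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(3)

def skew(a):
    # cross-product matrix: skew(a) @ b == np.cross(a, b)
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0.0]])

# Rigid motion of the scene relative to the camera: Xdot = skew(w) @ X + v.
w, v = rng.normal(size=3), rng.normal(size=3)

X = np.array([0.3, -0.2, 4.0])       # a scene point with positive depth
Xdot = skew(w) @ X + v

lam = X[2]                           # depth; x = X / lam has third entry 1
x = X / lam
xdot = Xdot / lam - X * Xdot[2] / lam**2   # image velocity (third entry 0)

# Continuous epipolar constraint: xdot^T [v]x x + x^T [w]x [v]x x = 0.
residual = xdot @ skew(v) @ x + x @ skew(w) @ skew(v) @ x
```

The symmetric part of ω̂ v̂ is what the paper characterizes as the differential essential matrix; estimating it linearly from many (x, ẋ) pairs and then factoring it into v and ω is the eigenvalue-decomposition step the abstract describes.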