Results 1–10 of 163
Determining the Epipolar Geometry and its Uncertainty: A Review
International Journal of Computer Vision, 1998
Abstract
Cited by 400 (9 self)
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if the images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
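The 3×3 singularity and epipolar constraint described above can be illustrated with a minimal eight-point estimate on synthetic, noise-free data. This is only a sketch under an assumed camera geometry (identity intrinsics, a small rotation, hypothetical point positions), not any of the estimators the review compares; real data would need coordinate normalization and a robust cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic two-view geometry with identity intrinsics, so the
# fundamental matrix coincides with the essential matrix E = [t]x R.
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.1])

X1 = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # points in front of camera 1
x1 = X1 / X1[:, 2:3]                                    # homogeneous image points, view 1
X2 = X1 @ R.T + t                                       # same points in camera-2 frame
x2 = X2 / X2[:, 2:3]                                    # homogeneous image points, view 2

# Eight-point algorithm: x2^T F x1 = 0 gives one linear row per correspondence,
# with F flattened row-major, so each row is kron(x2, x1).
A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
F = np.linalg.svd(A)[2][-1].reshape(3, 3)

# Enforce the rank-2 (singularity) constraint by zeroing the smallest singular value.
U, s, Vt = np.linalg.svd(F)
F = U @ np.diag([s[0], s[1], 0.0]) @ Vt

residuals = np.abs(np.einsum('ij,jk,ik->i', x2, F, x1))
print(residuals.max())  # ~0 on noise-free data
```

On noise-free correspondences the algebraic residuals vanish to machine precision; the review's point is precisely how the different estimators behave once noise enters.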
Unified inverse depth parametrization for monocular SLAM
In Proceedings of Robotics: Science and Systems, 2006
Abstract
Cited by 190 (19 self)
Abstract—We present a new parametrization for point features within monocular simultaneous localization and mapping (SLAM) that permits efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The key concept is direct parametrization of the inverse depth of features relative to the camera locations from which they were first viewed, which produces measurement equations with a high degree of ...
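As a concrete reading of the inverse-depth idea, here is a minimal round-trip sketch: a feature stored as (x0, θ, φ, ρ), i.e. the first-observation camera position, the azimuth/elevation of the viewing ray, and the inverse depth ρ along it. The specific angle conventions and function names below are assumptions in the spirit of the parametrization, not the paper's exact equations.

```python
import numpy as np

def ray_direction(theta, phi):
    # Unit viewing ray encoded by azimuth theta and elevation phi (assumed convention).
    return np.array([np.cos(phi) * np.sin(theta),
                     -np.sin(phi),
                     np.cos(phi) * np.cos(theta)])

def inverse_depth_to_point(y):
    # y = (x0, y0, z0, theta, phi, rho): point = anchor + (1/rho) * ray.
    x0, theta, phi, rho = y[:3], y[3], y[4], y[5]
    return x0 + (1.0 / rho) * ray_direction(theta, phi)

def point_to_inverse_depth(X, x0=np.zeros(3)):
    # Inverse mapping, for checking the round trip from a Euclidean point.
    d = X - x0
    depth = np.linalg.norm(d)
    m = d / depth
    theta = np.arctan2(m[0], m[2])
    phi = np.arctan2(-m[1], np.hypot(m[0], m[2]))
    return np.concatenate([x0, [theta, phi, 1.0 / depth]])

X = np.array([1.0, -0.5, 7.0])
y = point_to_inverse_depth(X)
X_back = inverse_depth_to_point(y)
print(X_back)  # round-trips to X
```

The payoff claimed in the abstract is that distant (even near-infinite) features are well behaved: as ρ → 0 the representation stays finite and its uncertainty stays close to Gaussian, which a plain (X, Y, Z) state does not offer.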
Motion estimation via dynamic vision
In Proc. European Conf. on Computer Vision, 1996
Abstract
Cited by 102 (9 self)
Estimating the three-dimensional motion of an object from a sequence of projections is of paramount importance in a variety of applications in control and robotics, such as autonomous navigation, manipulation, servo, tracking, docking, planning, and surveillance. Although “visual motion estimation” is an old problem (the first formulations date back to the beginning of the century), only recently have tools from nonlinear systems estimation theory hinted at acceptable solutions. In this paper we formulate the visual motion estimation problem in terms of identification of nonlinear implicit systems with parameters on a topological manifold and propose a dynamic solution either in the local coordinates or in the embedding space of the parameter manifold. Such a formulation has structural advantages over previous recursive schemes, since the estimation of motion is decoupled from the estimation of the structure of ...
Multi-Frame Optical Flow Estimation Using Subspace Constraints
1999
Abstract
Cited by 94 (2 self)
We show that the set of all flow fields in a sequence of frames imaging a rigid scene resides in a low-dimensional linear subspace. Based on this observation, we develop a method for simultaneous estimation of optical flow across multiple frames, which uses these subspace constraints. The multi-frame subspace constraints are strong constraints, and replace commonly used heuristic constraints, such as spatial or temporal smoothness. The subspace constraints are geometrically meaningful, and are not violated at depth discontinuities, or when the camera motion changes abruptly. Furthermore, we show that the subspace constraints on flow fields apply for a variety of imaging models, scene models, and motion models. Hence, the presented approach for constrained multi-frame flow estimation is general. However, our approach does not require prior knowledge of the underlying world or camera model. Although linear subspace constraints have been used successfully in the past for recovering 3D information (e.g., [18]), it has been assumed that 2D correspondences are given. However, correspondence estimation is a fundamental problem in motion analysis. In this paper, we use multi-frame subspace constraints to constrain the 2D correspondence estimation process itself, and not for 3D recovery.
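The low-dimensionality claim can be illustrated numerically: when every per-frame flow field is a linear combination of a few basis flow fields, the matrix that stacks the flows has exactly that rank, which an SVD recovers. The sketch below uses synthetic random data and is illustrative only; the paper derives the specific rank bounds from concrete camera, scene, and motion models.

```python
import numpy as np

rng = np.random.default_rng(1)

n_points, n_frames, rank = 200, 30, 4

# Hypothetical basis flow fields: each row stacks the (u, v) components
# over all points; per-frame flows are linear mixtures of these.
basis = rng.standard_normal((rank, 2 * n_points))
coeffs = rng.standard_normal((n_frames, rank))   # per-frame mixing coefficients
flows = coeffs @ basis                           # row i = flow field of frame i

# The stacked flow matrix has numerical rank equal to the number of basis fields.
s = np.linalg.svd(flows, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))
print(numerical_rank)  # 4
```

In the paper this rank deficiency is what replaces smoothness priors: candidate flows that leave the subspace are rejected, without penalizing depth discontinuities.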
Comparison of Approaches to Egomotion Computation
In CVPR, 1996
Abstract
Cited by 85 (0 self)
We evaluated six algorithms for computing egomotion from image velocities. We established benchmarks for quantifying bias and sensitivity to noise, and for quantifying the convergence properties of those algorithms that require numerical search. Our simulations reveal some interesting and surprising results. First, it is often written in the literature that the egomotion problem is difficult because translation (e.g., along the X-axis) and rotation (e.g., about the Y-axis) produce similar image velocities. We found, to the contrary, that the bias and sensitivity of our six algorithms are totally invariant with respect to the axis of rotation. Second, it is also believed by some that fixating helps to make the egomotion problem easier. We found, to the contrary, that fixating does not help when the noise is independent of the image velocities. Fixation does help if the noise is proportional to speed, but this is only for the trivial reason that the speeds are slower under fixatio...
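The translation/rotation confusion the abstract refers to is easy to reproduce with the standard calibrated motion-field equations (the sign conventions below are assumptions; this is a folklore illustration, not the paper's benchmark): for a narrow field of view over a roughly constant-depth scene, translation along X and rotation about Y produce nearly parallel image velocity fields.

```python
import numpy as np

# Standard instantaneous motion-field model (assumed convention):
#   u = (-tx + x*tz)/Z + x*y*wx - (1 + x**2)*wy + y*wz
#   v = (-ty + y*tz)/Z + (1 + y**2)*wx - x*y*wy - x*wz
x, y = np.meshgrid(np.linspace(-0.1, 0.1, 21), np.linspace(-0.1, 0.1, 21))
Z = 10.0  # constant scene depth (assumption)

tx = 1.0  # pure translation along the X-axis
flow_trans = np.stack([-tx / Z * np.ones_like(x), np.zeros_like(y)])

wy = 0.1  # pure rotation about the Y-axis
flow_rot = np.stack([-(1 + x**2) * wy, -x * y * wy])

# Cosine similarity between the two stacked flow fields.
a, b = flow_trans.ravel(), flow_rot.ravel()
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cosine)  # close to 1: the two fields are nearly indistinguishable
```

The paper's surprising finding is that, despite this near-ambiguity in the velocity fields, the measured bias and sensitivity of all six algorithms did not depend on the rotation axis.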
3D Structure from 2D Motion
IEEE Signal Processing Magazine, 1999
Abstract
Cited by 68 (1 self)
... this paper to delve into this formalism; further reading can be found in [41], [45]. In the following, we shall discuss its practical implementation and implications in the SfM techniques that have adopted it.
A Tensor Framework for Multidimensional Signal Processing
Linköping University, Sweden, 1994
Abstract
Cited by 66 (8 self)
About the cover: The figure on the cover shows a visualization of a symmetric tensor in three dimensions, G = λ1 ê1 ê1^T + λ2 ê2 ê2^T + λ3 ê3 ê3^T. The object in the figure is the sum of a spear, a plate and a sphere. The spear describes the principal direction of the tensor, λ1 ê1 ê1^T, where the length is proportional to the largest eigenvalue, λ1. The plate describes the plane spanned by the eigenvectors corresponding to the two largest eigenvalues, λ2 (ê1 ê1^T + ê2 ê2^T). The sphere, with a radius proportional to the smallest eigenvalue, shows how isotropic the tensor is, λ3 (ê1 ê1^T + ê2 ê2^T + ê3 ê3^T). The visualization is done using AVS [WWW94]. I am very grateful to Johan Wiklund for implementing the tensor viewer module used. This thesis deals with filtering of multidimensional signals. A large part of the thesis is devoted to a novel filtering method termed “Normalized convolution”. The method performs local expansion of a signal in a chosen filter basis which ...
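The spear/plate/sphere reading can be checked numerically. One common regrouping of the spectral form (the standard line/plane/isotropic split in this tensor literature, stated here as an assumption rather than a quote from the thesis) writes G = (λ1-λ2) ê1ê1^T + (λ2-λ3)(ê1ê1^T + ê2ê2^T) + λ3 I, so the three shape components sum exactly to G:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
G = A @ A.T                      # random symmetric positive semi-definite tensor

w, V = np.linalg.eigh(G)         # eigenvalues in ascending order
l3, l2, l1 = w                   # l1 >= l2 >= l3
e3, e2, e1 = V.T                 # matching eigenvectors

# Line-like, plane-like and isotropic parts of the tensor.
spear = (l1 - l2) * np.outer(e1, e1)
plate = (l2 - l3) * (np.outer(e1, e1) + np.outer(e2, e2))
sphere = l3 * np.eye(3)

print(np.allclose(spear + plate + sphere, G))  # True: the parts reconstruct G
```

The relative magnitudes (λ1-λ2, λ2-λ3, λ3) then measure how line-like, plane-like, or isotropic the local signal structure is, which is what the cover figure visualizes.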
Determining the egomotion of an uncalibrated camera from instantaneous optical flow
Journal of the Optical Society of America A, 1997
Abstract
Cited by 58 (26 self)
Abstract. The main result of this paper is a procedure for self-calibration of a moving camera from instantaneous optical flow. Under certain assumptions, this procedure allows the egomotion and some intrinsic parameters of the camera to be determined solely from the instantaneous positions and velocities of a set of image features. The proposed method relies upon the use of a differential epipolar equation that relates optical flow to the egomotion and internal geometry of the camera. The paper presents a detailed derivation of this equation. This aspect of the work may be seen as a recasting into an analytical framework of the pivotal research of Vieville and Faugeras. The information about the camera's egomotion and internal geometry enters the differential epipolar equation via two matrices. It emerges that the optical flow determines the composite ratio of some of the entries of the two matrices. It is shown that a camera with unknown focal length undergoing arbitrary motion can be self-calibrated via closed-form expressions in the composite ratio. The corresponding formulae specify five egomotion parameters, as well as the focal length and its derivative. An accompanying procedure is presented for reconstructing the viewed scene, up to scale, from the derived self-calibration data and the optical flow data. Experimental results are given to demonstrate the correctness of the approach.
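A differential epipolar equation of the kind this paper builds on can be sanity-checked numerically in the simpler *calibrated* case. The motion model and sign conventions below are my assumptions, not the paper's uncalibrated formulation: a point X in the camera frame evolves as dX/dt = ω × X + v, and the homogeneous image point x = X/Z with flow ẋ then satisfies x^T [v]× (ẋ - [ω]× x) = 0.

```python
import numpy as np

def skew(a):
    # Cross-product matrix, so that skew(a) @ p == np.cross(a, p).
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(3)
w = rng.standard_normal(3)   # angular velocity (assumed values)
v = rng.standard_normal(3)   # linear velocity (assumed values)

X = np.array([0.3, -0.2, 5.0])       # 3-D point in the camera frame
Xdot = np.cross(w, X) + v            # assumed rigid-motion model

# Homogeneous image point x = X / Z and its time derivative (optical flow).
Z, Zdot = X[2], Xdot[2]
x = X / Z
xdot = Xdot / Z - Zdot * X / Z**2

# Differential epipolar constraint: x^T [v]x (xdot - [w]x x) = 0.
residual = x @ skew(v) @ (xdot - skew(w) @ x)
print(abs(residual))  # ~0 up to floating-point error
```

The paper's contribution is the uncalibrated analogue, where intrinsic parameters (and their derivatives) enter through the two matrices mentioned in the abstract.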
Multi-Frame Correspondence Estimation Using Subspace Constraints
2002
Abstract
Cited by 45 (8 self)
When a rigid scene is imaged by a moving camera, the set of all displacements of all points across multiple frames often resides in a low-dimensional linear subspace. Linear subspace constraints have been used successfully in the past for recovering 3D structure and 3D motion information from multiple frames (e.g., by using the factorization method of Tomasi and Kanade (1992, International Journal of Computer Vision, 9:137–154)). These methods assume that the 2D correspondences have been precomputed. However, correspondence estimation is a fundamental problem in motion analysis. In this paper we show how the multi-frame subspace constraints can be used for constraining the 2D correspondence estimation process itself. We show that the multi-frame subspace constraints are valid not only for affine cameras, but also for a variety of imaging models, scene models, and motion models. The multi-frame subspace constraints are first translated from constraints on correspondences to constraints directly on image measurements (e.g., image brightness quantities). These brightness-based subspace constraints are then used for estimating the correspondences, by requiring that all corresponding points across all video frames reside in the appropriate low-dimensional linear subspace. The multi-frame subspace constraints are geometrically meaningful, and are not violated at depth discontinuities, nor when the camera motion changes abruptly. These constraints can therefore replace heuristic constraints commonly used in optical-flow estimation, such as spatial or temporal smoothness.
Optical Flow Estimation
2005
Abstract
Cited by 42 (4 self)
This chapter provides a tutorial introduction to gradient-based optical flow estimation. We discuss least-squares and robust estimators, iterative coarse-to-fine refinement, different forms of parametric motion models, different conservation assumptions, probabilistic formulations, and robust mixture models.
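The least-squares, gradient-based estimator this tutorial covers can be sketched in a few lines for the simplest case: a single global translation (u, v) on a synthetic smoothly varying image. This is an illustrative toy under stated assumptions (analytic test image, whole-image window), not the chapter's code; real use would add windowing, iteration, and coarse-to-fine refinement.

```python
import numpy as np

u_true, v_true = 0.2, 0.1  # assumed ground-truth sub-pixel translation

yy, xx = np.mgrid[0:64, 0:64].astype(float)
img = lambda x, y: np.sin(0.3 * x) + np.cos(0.2 * y)   # smooth synthetic image
I1 = img(xx, yy)
I2 = img(xx - u_true, yy - v_true)                     # scene shifted by (u, v)

Iy, Ix = np.gradient(I1)   # np.gradient returns d/drow (y), then d/dcol (x)
It = I2 - I1               # temporal derivative

# Brightness constancy: Ix*u + Iy*v + It ~ 0; solve the 2x2 normal equations
# (here over the whole image, i.e. one global flow vector).
A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
u_est, v_est = np.linalg.solve(A, b)
print(u_est, v_est)  # close to (0.2, 0.1)
```

Solving the same normal equations per local window instead of globally gives the classic Lucas-Kanade estimator; replacing the quadratic penalty with a robust function gives the robust variants the chapter discusses.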