Results 1–10 of 33
Good features to track
, 1994
Abstract

Cited by 2037 (14 self)
No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments.
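The selection criterion the abstract refers to can be illustrated concretely: a window is trackable when the smaller eigenvalue of its 2x2 gradient structure matrix is large. A minimal sketch (function name, window handling, and gradient operator are illustrative choices, not the paper's implementation):

```python
import numpy as np

def good_feature_score(image, y, x, half=2):
    """Shi-Tomasi-style score at (y, x): the smaller eigenvalue of the
    2x2 gradient structure matrix summed over a (2*half+1)^2 window.
    A feature is 'good to track' when this value is large: corners score
    high, edges and flat regions score near zero."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)            # row (y) and column (x) gradients
    ys = slice(y - half, y + half + 1)
    xs = slice(x - half, x + half + 1)
    gxx = np.sum(gx[ys, xs] ** 2)
    gyy = np.sum(gy[ys, xs] ** 2)
    gxy = np.sum(gx[ys, xs] * gy[ys, xs])
    # Closed-form smaller eigenvalue of [[gxx, gxy], [gxy, gyy]].
    return 0.5 * ((gxx + gyy) - np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
```

A corner patch scores strictly higher than an edge or a flat patch, which is exactly why the criterion rejects windows that slide along an edge.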
Bundle Adjustment  A Modern Synthesis
 VISION ALGORITHMS: THEORY AND PRACTICE, LNCS
, 2000
Abstract

Cited by 556 (12 self)
This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
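The nonlinear least-squares core that the survey builds on can be shown on the smallest piece of the joint problem: Gauss-Newton refinement of one 3D point against fixed cameras. A sketch under simplifying assumptions (numeric Jacobian, no robust cost, no sparsity exploitation, all of which the survey treats properly; the helper names are illustrative):

```python
import numpy as np

def project(P, X):
    """Pinhole projection of a 3D point X with a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def refine_point(P_list, obs, X0, iters=10, eps=1e-6):
    """Gauss-Newton on the reprojection residual of a single point (the
    'structure' half of bundle adjustment, with cameras held fixed)."""
    X = X0.astype(np.float64).copy()
    for _ in range(iters):
        r = np.concatenate([project(P, X) - z for P, z in zip(P_list, obs)])
        # Numeric Jacobian of the stacked residual w.r.t. the 3 coordinates.
        J = np.zeros((r.size, 3))
        for j in range(3):
            Xp = X.copy(); Xp[j] += eps
            rp = np.concatenate([project(P, Xp) - z
                                 for P, z in zip(P_list, obs)])
            J[:, j] = (rp - r) / eps
        # Normal equations: (J^T J) dX = -J^T r.
        X += np.linalg.solve(J.T @ J, -J.T @ r)
    return X
```

Full bundle adjustment stacks such residuals for all points and all camera parameters and exploits the sparse block structure of J^T J, which is the subject of the survey's sparse Newton sections.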
Parameter Estimation Techniques: A Tutorial with Application to Conic Fitting
, 1995
Abstract

Cited by 276 (8 self)
Almost all problems in computer vision are related in one form or another to the problem of estimating parameters from noisy data. In this tutorial, we present what are probably the most commonly used techniques for parameter estimation. These include linear least-squares (pseudo-inverse and eigen analysis); orthogonal least-squares; gradient-weighted least-squares; bias-corrected renormalization; Kalman filtering; and robust techniques (clustering, regression diagnostics, M-estimators, least median of squares). Particular attention is devoted to the choice of appropriate minimization criteria and to the robustness of the different techniques. Their application to conic fitting is described.
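The first listed technique, linear least-squares via eigen analysis, can be sketched directly on the tutorial's conic-fitting application: minimize the algebraic distance ||D theta|| subject to ||theta|| = 1, whose minimizer is the singular vector of the design matrix D for the smallest singular value (a sketch of the standard method, not necessarily the tutorial's exact normalization):

```python
import numpy as np

def fit_conic(points):
    """Fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to Nx2
    points by linear least squares: each row of D is (x^2, xy, y^2, x,
    y, 1), and the unit-norm minimizer of ||D theta|| is the last right
    singular vector of D."""
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]                      # (a, b, c, d, e, f), up to sign
```

This algebraic-distance fit is known to be biased for noisy data, which is precisely what the tutorial's gradient-weighted and renormalization methods address.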
Relative Orientation
 International Journal of Computer Vision
, 1990
Abstract

Cited by 150 (2 self)
Before corresponding points in images taken with two cameras can be used to recover distances to objects in a scene, one has to determine the position and orientation of one camera relative to the other. This is the classic photogrammetric problem of relative orientation, central to the interpretation of binocular stereo information. Iterative methods for determining relative orientation were developed long ago; without them we would not have most of the topographic maps we do today. Relative orientation is also of importance in the recovery of motion and shape from an image sequence when successive frames are widely separated in time. Workers in motion vision are rediscovering some of the methods of photogrammetry. Described here is a simple iterative scheme for recovering relative orientation that, unlike existing methods, does not require a good initial guess for the baseline and the rotation. The data required are a pair of bundles of corresponding rays from the two projection centers to points in the scene. It is well known that at least five pairs of rays are needed. Less appears to be known about the existence of multiple solutions and their interpretation. These issues are discussed here. The unambiguous determination of all of the parameters of relative orientation is not possible when the observed points lie on a critical surface. These surfaces and their degenerate forms are analysed as well.
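The constraint at the heart of relative orientation is the coplanarity condition: a left ray, the rotated right ray, and the baseline must lie in one epipolar plane, so their triple product vanishes. A sketch of the residual that such iterative schemes drive to zero over the (at least five) ray pairs (conventions for which frame the rotation maps between are an assumption here):

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def coplanarity_residual(b, R, r_left, r_right):
    """Triple product r_left . (b x R r_right): zero when the baseline b
    and rotation R are consistent with the corresponding ray pair."""
    return r_left @ skew(b) @ (R @ r_right)
```

An iterative solver stacks one such residual per ray pair and adjusts b (up to scale) and R until all residuals vanish.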
A Framework for Uncertainty and Validation of 3D Registration Methods based on Points and Frames
 Int. Journal of Computer Vision
, 1997
Abstract

Cited by 83 (27 self)
In this paper, we propose and analyze several methods to estimate a rigid transformation from a set of 3D matched points or matched frames, which are important features in geometric algorithms. We also develop tools to predict and verify the accuracy of these estimations. The theoretical contributions are: an intrinsic model of noise for transformations based on composition rather than addition; a unified formalism for the estimation of both the rigid transformation and its covariance matrix for point or frame correspondences; and a statistical validation method to verify the error estimation, which applies even when no "ground truth" is available. We analyze and demonstrate on synthetic data that our scheme is well behaved. The practical contribution of the paper is the validation of our transformation estimation method in the case of 3D medical images, which shows that a registration accuracy far below the size of a voxel can be achieved, and in the case of protein substructure matching, where frame features drastically improve both selectivity and complexity.
Generalizing epipolar-plane image analysis on the spatiotemporal surface
 In IJCV
, 1989
Abstract

Cited by 60 (0 self)
The previous implementations of our Epipolar-Plane Image Analysis mapping technique demonstrated the feasibility and benefits of the approach, but were carried out for restricted camera geometries. The question of more general geometries made the technique's utility for autonomous navigation uncertain. We have developed a generalization of our analysis that (a) enables varying view direction, including variation over time; (b) provides three-dimensional connectivity information for building coherent spatial descriptions of observed objects; and (c) operates sequentially, allowing initiation and refinement of scene feature estimates while the sensor is in motion. To implement this generalization it was necessary to develop an explicit description of the evolution of images over time. We have achieved this by building a process that creates a set of two-dimensional manifolds defined at the zeros of a three-dimensional spatiotemporal Laplacian. These manifolds represent explicitly both the spatial and temporal structure of the temporally evolving imagery, and we term them spatiotemporal surfaces. The surfaces are constructed incrementally, as the images are acquired. We describe a tracking mechanism that operates locally on these evolving surfaces in carrying out three-dimensional scene reconstruction.
Do we really have to consider covariance matrices for image features
 In: Proceedings of the 8th IEEE International Conference on Computer Vision (ICCV
, 2001
Abstract

Cited by 56 (8 self)
We first describe in a unified way how to compute the covariance matrix from the gray levels of the image. We then experimentally investigate whether or not the computed covariance matrix actually reflects the accuracy of the feature position by doing subpixel correction using variable template matching. We also test if the accuracy of the homography and the fundamental matrix can really be improved by optimization using the covariance matrix computed from the gray levels. © 2002 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 86(1): 1–10, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.10042 Key words: feature extraction; covariance matrix; template matching; homography; fundamental matrix.
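A common gray-level model of the kind this paper examines takes the feature-position covariance to be proportional to the inverse of the gradient structure matrix of the patch (sharp, high-contrast patches localize better). A sketch of that idea only, under an isotropic-noise assumption; it is not the paper's exact formulation:

```python
import numpy as np

def feature_covariance(patch, sigma_noise=1.0):
    """Approximate 2x2 covariance of a feature's position from the gray
    levels of its patch: sigma^2 times the inverse of the gradient
    structure matrix summed over the patch. Assumes i.i.d. intensity
    noise of standard deviation sigma_noise (an assumption of this
    sketch)."""
    gy, gx = np.gradient(patch.astype(np.float64))
    G = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return sigma_noise ** 2 * np.linalg.inv(G)
```

Under this model, scaling the patch contrast down inflates the predicted covariance, matching the intuition that weak features are less accurately localized.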
Pros and cons against performance characterization of vision algorithms
 Workshop on Performance Characterization of Vision Algorithms
, 1996
Abstract

Cited by 42 (4 self)
The paper discusses objections against performance characterization of vision algorithms and explains their motivation. Short- and long-term arguments are given which overcome these objections. The methodology for performance characterization is sketched to demonstrate the feasibility of empirical testing of vision algorithms.
Accurate Projective Reconstruction
, 1993
Abstract

Cited by 27 (2 self)
It is possible to recover the three-dimensional structure of a scene using images taken with uncalibrated cameras and pixel correspondences. But such a reconstruction can only be computed up to a projective transformation of the 3D space. Therefore, constraints have to be added to the reconstructed data in order to get the reconstruction in Euclidean space. Such constraints arise from knowledge of the scene: location of points, geometrical constraints on lines, etc. We first discuss the type of constraints that have to be added, then show how they can be fed into a general framework. Experiments prove that the accuracy needed for industrial applications is reachable when measurements in the image have subpixel accuracy. Therefore, we show how a real camera can be mapped into an accurate projective camera and how accurate point detection improves the reconstruction results.
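The starting point of such a reconstruction, linear (DLT) triangulation from two uncalibrated cameras, can be sketched briefly; with uncalibrated camera matrices the recovered point is only defined up to a projective transform, which is exactly why the scene constraints above are needed (the two-view setup is an illustrative assumption):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: x ~ P X in each view gives two homogeneous
    linear equations per view in the 4-vector X; the least-squares
    solution is the right singular vector of the stacked system for the
    smallest singular value."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]                    # dehomogenize
```

Replacing P1 and P2 by H-transformed matrices P1 H^-1, P2 H^-1 yields the H-transformed point, which is the projective ambiguity the paper's constraints remove.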
Active Self-calibration of Robotic Eyes and Hand-eye Relationships with Model Identification
, 1998
Abstract

Cited by 24 (1 self)
In this paper, we first review research results of camera self-calibration achieved in photogrammetry, robotics and computer vision. Then we propose a method for self-calibration of robotic hand cameras by means of active motion. By tracking a set of world points of unknown coordinates during robot motion, the internal parameters of the cameras (including distortions), the mounting parameters, and the coordinates of the world points are estimated. The approach is fully autonomous, in that no initial guesses of the unknown parameters need to be provided by a human for the solution of the set of nonlinear equations. Sufficient conditions for a unique solution are derived in terms of controlled motion sequences. Methods to improve accuracy and robustness are proposed by means of best model identification and motion planning. Experimental results in both simulated and real environments are reported. Key words: self-calibration, hand-cameras, hand-eye calibration...