Results 1–10 of 29
Iterative point matching for registration of free-form curves and surfaces
, 1994
"... A heuristic method has been developed for registering two sets of 3D curves obtained by using an edgebased stereo system, or two dense 3D maps obtained by using a correlationbased stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in ma ..."
Abstract

Cited by 486 (6 self)
 Add to MetaCart
A heuristic method has been developed for registering two sets of 3D curves obtained by using an edge-based stereo system, or two dense 3D maps obtained by using a correlation-based stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually approximately known. From this initial estimate, our algorithm computes observer motion with very good precision, which is required for environment modeling (e.g., building a Digital Elevation Map). Objects are represented by a set of 3D points, which are considered as samples of a surface. No constraint is imposed on the form of the objects. The proposed algorithm is based on iteratively matching points in one set to the closest points in the other. A statistical method based on the distance distribution is used to deal with outliers, occlusion, appearance and disappearance, which allows us to do subset-to-subset matching. A least-squares technique is used to estimate 3D motion from the point correspondences, which reduces the average distance between points in the two sets. Both synthetic and real data have been used to test the algorithm, and the results show that it is efficient and robust, and yields an accurate motion estimate.
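The iterate-match-refit loop described in this abstract can be illustrated compactly. The following is a minimal sketch only: it uses brute-force nearest neighbours and a plain SVD-based least-squares fit, and omits the paper's statistical outlier handling based on the distance distribution.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30):
    """Iteratively match each point in P to its closest point in Q and refit."""
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours; a k-d tree would be used in practice
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        matches = Q[d.argmin(axis=1)]
        R, t = best_rigid_transform(P, matches)
        P = P @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, P
```

As in the paper, a reasonable initial estimate matters: the closest-point matches are only meaningful when the residual motion is small compared with the point spacing.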
Trajectory Generation From Noisy Positions of Object Features for Teaching Robot Paths
, 1993
"... In this paper we discuss a method for generating a trajectory describing robot path using a sequence of noisy positions of features belonging to a moving object obtained from a robot's sensor system. In order to accurately estimate this trajectory, we show how uncertainties in the positions of objec ..."
Abstract

Cited by 24 (5 self)
 Add to MetaCart
In this paper we discuss a method for generating a trajectory describing a robot path using a sequence of noisy positions of features belonging to a moving object obtained from a robot's sensor system. In order to accurately estimate this trajectory, we show how uncertainties in the positions of object feature points can be converted into uncertainties in parameters describing the object pose (3D position and orientation). Noisy estimates of object poses, together with their uncertainties, are then used as an input to an algorithm that approximates the desired trajectory. The algorithm is based on natural vector splines and belongs to a family of nonparametric regression techniques which enable the estimation of the trajectory without requiring its functional form to be known. Since a dilemma between specifying the trajectory either in Cartesian or in joint coordinates always exists, we present both alternatives. Some simulation results are given which illustrate the accuracy of the ap...
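The paper's natural vector splines are not reproduced here. As a simplified stand-in, a weighted polynomial least-squares fit illustrates the key ingredient: pose samples with larger uncertainty are down-weighted when the trajectory is estimated. Function names and the polynomial model are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fit_weighted_trajectory(t, pos, sigma, deg=3):
    """Fit each Cartesian coordinate as a weighted polynomial in time.
    np.polyfit weights multiply the residuals, so 1/sigma down-weights
    samples with larger positional uncertainty."""
    w = 1.0 / np.asarray(sigma)
    return [np.polyfit(t, pos[:, k], deg, w=w) for k in range(pos.shape[1])]

def eval_trajectory(coeffs, t):
    """Evaluate the fitted per-coordinate polynomials at times t."""
    return np.stack([np.polyval(c, t) for c in coeffs], axis=1)
```

A spline-based estimator, as in the paper, would replace the global polynomial with piecewise cubics, but the uncertainty weighting enters the normal equations in the same way.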
Optimal Rigid Motion Estimation and Performance Evaluation with Bootstrap
 Proc. Conf. Computer Vision and Pattern Recognition, Fort Collins, CO
, 1999
"... A new method for 3D rigid motion estimation is derived under the most general assumption that the measurements are corrupted by inhomogeneous and anisotropic, i.e., heteroscedastic noise. This is the case, for example, when the motion of a calibrated stereohead is to be determined from image pairs. ..."
Abstract

Cited by 19 (5 self)
 Add to MetaCart
A new method for 3D rigid motion estimation is derived under the most general assumption that the measurements are corrupted by inhomogeneous and anisotropic, i.e., heteroscedastic noise. This is the case, for example, when the motion of a calibrated stereo head is to be determined from image pairs. Linearization in the quaternion space transforms the problem into a multivariate, heteroscedastic errors-in-variables (HEIV) regression, from which the rotation and translation estimates are obtained simultaneously. The significant performance improvement is illustrated, for real data, by comparison with the results of quaternion-, subspace-, and renormalization-based approaches described in the literature. Extensive use is made of bootstrap, an advanced numerical tool from statistics, both to estimate the covariances of the 3D data points and to obtain confidence regions for the rotation and translation estimates. Bootstrap enables an accurate recovery of this information using only the two image pairs serving as input.
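For context, the quaternion-based baseline that the abstract compares against has a well-known closed form (Horn's method): build a 4x4 matrix from the cross-covariance of the two centred point sets and take the eigenvector of its largest eigenvalue as the optimal unit quaternion. This sketch is the isotropic-noise baseline, not the paper's HEIV estimator.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def horn_rotation(P, Q):
    """Closed-form quaternion solution for the least-squares rotation
    mapping point set P onto point set Q (translation removed by centring)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    S = P.T @ Q                      # cross-covariance, S_xy = sum p_x * q_y
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx+Syy+Szz, Syz-Szy,      Szx-Sxz,      Sxy-Syx],
        [Syz-Szy,     Sxx-Syy-Szz,  Sxy+Syx,      Szx+Sxz],
        [Szx-Sxz,     Sxy+Syx,     -Sxx+Syy-Szz,  Syz+Szy],
        [Sxy-Syx,     Szx+Sxz,      Syz+Szy,     -Sxx-Syy+Szz]])
    vals, vecs = np.linalg.eigh(N)
    return quat_to_rot(vecs[:, -1])  # eigenvector of the largest eigenvalue
```

The HEIV approach replaces the implicit equal-weighting of correspondences above with per-point covariance weighting, which is what yields the improvement reported for heteroscedastic data.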
Performance characterisation in computer vision: The role of statistics in testing and design
 Imaging and Vision Systems: Theory, Assessment and Applications. NOVA Science Books
, 1993
"... We consider the relationship between the performance characteristics of vision algorithms and algorithm design. In the first part we discuss the issues involved in testing. A description of good practice is given covering test objectives, test data, test metrics and the test protocol. In the second ..."
Abstract

Cited by 19 (7 self)
 Add to MetaCart
We consider the relationship between the performance characteristics of vision algorithms and algorithm design. In the first part we discuss the issues involved in testing. A description of good practice is given covering test objectives, test data, test metrics and the test protocol. In the second part we discuss aspects of good algorithmic design, including an understanding of the statistical properties of data and common algorithmic operations, and suggest how some common problems may be overcome.
Stereo Depth Estimation: A Confidence Interval Approach
 In Proc. of ICCV
, 1998
"... We describe an estimation technique which, given a measurement of the depth of a target from a widefieldof view (WFOV) stereo camera pair, produces a minimax risk fixedsize confidence interval estimate for the target depth. This work constitutes the first application to the computer vision domain ..."
Abstract

Cited by 13 (4 self)
 Add to MetaCart
We describe an estimation technique which, given a measurement of the depth of a target from a wide field-of-view (WFOV) stereo camera pair, produces a minimax-risk fixed-size confidence interval estimate for the target depth. This work constitutes the first application to the computer vision domain of optimal fixed-size confidence-interval decision theory. The approach is evaluated in terms of theoretical capture probability and empirical capture frequency during actual experiments with a target on an optical bench. The method is compared to several other procedures including the Kalman filter. The minimax approach is found to dominate all the other methods in performance. In particular, for the minimax approach, a very close agreement is achieved between theoretical capture probability and empirical capture frequency. This allows performance to be accurately predicted, greatly facilitating system design, and delineating the tasks that may be performed with a given system.
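The evaluation criterion used here, agreement between theoretical capture probability and empirical capture frequency, can be illustrated with a small Monte Carlo sketch. This assumes a Gaussian noise model and a fixed-size interval centred on the measurement; it is not the paper's minimax construction.

```python
import numpy as np

def capture_frequency(true_depth, sigma, half_width, trials=20000, seed=0):
    """Empirical frequency with which a fixed-size interval, centred on a
    noisy depth measurement, captures the true depth (Gaussian noise model)."""
    rng = np.random.default_rng(seed)
    z = true_depth + sigma * rng.standard_normal(trials)
    return float(np.mean(np.abs(z - true_depth) <= half_width))
```

For Gaussian noise the theoretical capture probability of a half-width of 1.96 sigma is 0.95, so a well-calibrated estimator should produce an empirical frequency close to that value.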
A Stereo Confidence Metric Using Single View Imagery
 PROC. VISION INTERFACE
, 2002
"... Although stereo vision research has progressed remarkably, stereo systems still need a fast, accurate way to estimate confidence in their output. In the current paper, we explore using stereo performance on two different images from a single view as a confidence measure for a binocular stereo system ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
Although stereo vision research has progressed remarkably, stereo systems still need a fast, accurate way to estimate confidence in their output. In the current paper, we explore using stereo performance on two different images from a single view as a confidence measure for a binocular stereo system incorporating that single view. Although it seems counterintuitive to search for correspondence in two different images from the same view, such a search gives us precise quantitative performance data. Correspondences significantly far from the same location are erroneous because there is little to no motion between the two images. Using hand-generated ground truth, we quantitatively compare this new confidence metric with five commonly used confidence metrics. We explore the performance characteristics of each metric under a variety of conditions.
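The underlying idea can be sketched with a simple SSD block matcher (window size and search range here are illustrative, not the paper's settings): since both images come from the same view, any best match away from zero displacement is, by construction, a matching error.

```python
import numpy as np

def ssd_disparity(left, right, x, y, half=3, max_d=8):
    """Horizontal displacement minimizing SSD for the patch at (x, y)."""
    patch = left[y-half:y+half+1, x-half:x+half+1].astype(float)
    costs = []
    for d in range(-max_d, max_d + 1):
        cand = right[y-half:y+half+1, x-half+d:x+half+1+d].astype(float)
        costs.append(((patch - cand) ** 2).sum())
    return int(np.argmin(costs)) - max_d

def single_view_confidence(img_a, img_b, pts, half=3, max_d=8):
    """Two images from the SAME view: a pixel is flagged reliable only when
    its best match stays at zero displacement."""
    return {p: ssd_disparity(img_a, img_b, p[0], p[1], half, max_d) == 0
            for p in pts}
```

Pixels where even this zero-motion search fails (e.g., in textureless regions) are exactly the pixels where a binocular matcher is likely to fail too, which is what makes the single-view search a usable confidence signal.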
Sensor Errors and the Uncertainties in Stereo Reconstruction
 Empirical Evaluation Techniques in Computer Vision
, 1998
"... An important objective in the evaluation of algorithms with sensory inputs is the development of measures characterizing the intrinsic errors in the results. Intrinsic are those errors which are caused by noise in the input data. The particular application which we consider is 3D reconstruction fro ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
An important objective in the evaluation of algorithms with sensory inputs is the development of measures characterizing the intrinsic errors in the results. Intrinsic are those errors which are caused by noise in the input data. The particular application which we consider is 3D reconstruction from stereo. We demonstrate that a radiometric correction of the images can significantly improve the accuracy. We propose a confidence interval approach for quantifying the precision. We also illustrate the use of the confidence intervals for the rejection of unreliable 3D points.
Error analysis of 3D shape construction from structured lighting
 Pattern Recognition
, 1996
"... Abstract In this paper, we present a detailed model and analysis of several error sources and thier effects on measuring threedimensional (3D) surface properties using the structured lighting technique. The analysis is based on a general system configuration and identifies three types of error surc ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
In this paper, we present a detailed model and analysis of several error sources and their effects on measuring three-dimensional (3D) surface properties using the structured lighting technique. The analysis is based on a general system configuration and identifies three types of error sources: system modeling error, image processing error, and experimental error. Absolute and relative error bounds in obtaining 3D surface orientation and curvature measurements using structured lighting are derived in terms of the system parameters and likely error sources. In addition to the quantization error, other likely error sources in system modeling and experimental setup are also considered. Even though our analysis is on structured lighting, the results are readily applicable to other triangulation-based techniques such as stereopsis. Finally, our analysis focuses on error in inferring surface orientation and principal surface curvature. Such analyses, to our knowledge, have never been attempted before. Keywords: image processing, structured light, orientation, curvature, error analysis.
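The quantization error mentioned in the abstract has a standard first-order form for any triangulation-based system: with depth Z = f*B/d (focal length f in pixels, baseline B, disparity d), a disparity error dd propagates to a depth error of roughly Z^2/(f*B) * dd. This sketch shows only that textbook bound, not the paper's full derivation for orientation and curvature.

```python
def depth_quantization_error(f_px, baseline, depth, disparity_err=0.5):
    """First-order bound on triangulation depth error:
    Z = f*B/d  =>  |dZ| ~= Z**2 / (f*B) * |dd|.
    A half-pixel disparity error is a common quantization assumption."""
    return depth ** 2 / (f_px * baseline) * disparity_err
```

The quadratic growth with depth is the practical takeaway: doubling the working distance quadruples the depth uncertainty for the same disparity quantization.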
Performance characterization in computer vision: A guide to best practices
, 2007
"... It is frequently remarked that designers of computer vision algorithms and systems cannot reliably predict how algorithms will respond to new problems. A variety of reasons have been given for this situation and a variety of remedies prescribed in literature. Most of these involve, in some way, payi ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
It is frequently remarked that designers of computer vision algorithms and systems cannot reliably predict how algorithms will respond to new problems. A variety of reasons have been given for this situation and a variety of remedies prescribed in the literature. Most of these involve, in some way, paying greater attention to the domain of the problem and to performing detailed empirical analysis. The goal of this paper is to review what we see as current best practices in these areas and also suggest refinements that may benefit the field of computer vision. A distinction is made between the historical emphasis on algorithmic novelty and the increasing importance of validation on particular data sets and problems.
Point Reconstruction from Noisy Images
 Journal of Mathematical Imaging and Vision
, 1995
"... In this paper we treat the problem of determining optimally (in the leastsquares sense) the 3D coordinates of a point, given its noisy images formed by any number of cameras of known geometry. The optimality criterion is determined by the covariance matrices associated with the images of the point. ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
In this paper we treat the problem of determining optimally (in the least-squares sense) the 3D coordinates of a point, given its noisy images formed by any number of cameras of known geometry. The optimality criterion is determined by the covariance matrices associated with the images of the point. The covariance matrices are not restricted to be positive definite but are allowed to be singular. Thus, image points constrained to lie along straight lines can be handled as well. Estimation of the covariance of the reconstructed point is provided. The often-appearing two-camera stereo case is treated in detail. It is shown in this case that, under reasonable conditions, the main step of the reconstruction reduces to finding the unique zero of a sixth-degree polynomial in the interval (0, 1). Keywords: stereo, reconstruction, least-squares estimate, covariance matrix.
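The paper's covariance-weighted optimum (and its sixth-degree polynomial) is not reproduced here. As an unweighted baseline, the standard linear (DLT) least-squares triangulation for two views shows the basic reconstruction step that the paper refines:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear least-squares (DLT) triangulation of one point from two views.
    Each image point (u, v) contributes two rows: u*P[2] - P[0], v*P[2] - P[1];
    the homogeneous 3D point is the null vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # right singular vector of smallest singular value
    return X[:3] / X[3]
```

The DLT minimizes an algebraic residual that treats all image observations equally; weighting the rows by the image-point covariances is the direction the paper's optimal estimator takes.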