Results 1–10 of 100
A Tutorial on Visual Servo Control
IEEE Transactions on Robotics and Automation, 1996
Cited by 822 (25 self)
This paper provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed. Since any visual servo system must be capable of tracking image features in a sequence of images, we include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control. 1 Introduction Today there are over 800,000 robots in the world, mostly working in factory environment...
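The image-based control law this tutorial introduces can be sketched in a few lines: the camera velocity is driven by the pseudoinverse of the interaction matrix applied to the feature error. The interaction matrix for a single image point is the standard form from the visual-servoing literature; the gain and variable names below are illustrative choices, not taken from the tutorial itself.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Standard interaction (image Jacobian) matrix for one normalized
    image point (x, y) at depth Z, relating the 2D feature velocity to
    the 6D camera velocity (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Image-based visual servo law v = -lambda * L^+ (s - s_star).
    lam is an illustrative gain; L^+ is the Moore-Penrose pseudoinverse."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```

When the current features `s` coincide with the desired features `s_star`, the commanded velocity is zero, which is the fixed point the control law converges to.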
The development and comparison of robust methods for estimating the fundamental matrix
International Journal of Computer Vision, 1997
Cited by 263 (10 self)
Abstract. This paper has two goals. The first is to develop a variety of robust methods for the computation of the Fundamental Matrix, the calibration-free representation of camera motion. The methods are drawn from the principal categories of robust estimators, viz. case deletion diagnostics, M-estimators and random sampling, and the paper develops the theory required to apply them to nonlinear orthogonal regression problems. Although a considerable amount of interest has focused on the application of robust estimation in computer vision, the relative merits of the many individual methods are unknown, leaving the potential practitioner to guess at their value. The second goal is therefore to compare and judge the methods. Comparative tests are carried out using correspondences generated both synthetically in a statistically controlled fashion and from feature matching in real imagery. In contrast with previously reported methods, the goodness of fit to the synthetic observations is judged not in terms of the fit to the observations per se but in terms of fit to the ground truth. A variety of error measures are examined. The experiments allow a statistically satisfying and quasi-optimal method to be synthesized, which is shown to be stable with up to 50 percent outlier contamination, and may still be used if there are more than 50 percent outliers. Performance bounds are established for the method, and a variety of robust methods to estimate the standard deviation of the error and covariance matrix of the parameters are examined. The results of the comparison have broad applicability to vision algorithms where the input data are corrupted not only by noise but also by gross outliers.
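The random-sampling category compared in this paper can be illustrated with a minimal RANSAC-style wrapper around the linear eight-point algorithm. This is a simplified sketch: the iteration count, threshold, algebraic residual, and the omission of coordinate normalization are illustrative shortcuts, not the paper's exact procedure.

```python
import numpy as np

def eight_point(p1, p2):
    """Linear eight-point estimate of the fundamental matrix from
    corresponding points (no Hartley normalization, for brevity)."""
    A = np.array([[x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1, 1.0]
                  for (x1, y1), (x2, y2) in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of A, up to scale
    U, S, Vt = np.linalg.svd(F)       # enforce the rank-2 constraint
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

def ransac_fundamental(p1, p2, iters=200, thresh=1e-2, seed=0):
    """Random-sampling robust wrapper: fit on minimal 8-point subsets,
    keep the F with the most inliers under an algebraic residual test."""
    rng = np.random.default_rng(seed)
    n = len(p1)
    h1, h2 = np.c_[p1, np.ones(n)], np.c_[p2, np.ones(n)]
    best_F, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(n, 8, replace=False)
        F = eight_point(p1[idx], p2[idx])
        res = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1))  # |x2^T F x1|
        inliers = int((res < thresh).sum())
        if inliers > best_inliers:
            best_F, best_inliers = F, inliers
    return best_F, best_inliers
```

A practical implementation would add coordinate normalization and a geometrically meaningful residual (e.g. Sampson distance), which are among the choices this paper evaluates.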
Robust parameter estimation in computer vision
SIAM Review, 1999
Cited by 162 (10 self)
Abstract. Estimation techniques in computer vision applications must estimate accurate model parameters despite small-scale noise in the data, occasional large-scale measurement errors (outliers), and measurements from multiple populations in the same data set. Increasingly, robust estimation techniques, some borrowed from the statistics literature and others described in the computer vision literature, have been used in solving these parameter estimation problems. Ideally, when estimating the parameters of a single population, these techniques should effectively ignore the outliers and treat measurements from other populations as outliers as well. Two frequently used techniques are least-median of ...
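The M-estimator family surveyed here is usually computed by iteratively reweighted least squares (IRLS). The sketch below fits a line with Huber weights and a MAD-based scale estimate; the tuning constant, iteration count, and the line-fitting example are conventional illustrative choices, not details from the paper.

```python
import numpy as np

def irls_line(x, y, iters=20, k=1.345):
    """Fit y = a*x + b robustly: ordinary least squares start, then
    iteratively reweighted least squares with Huber weights.
    k = 1.345 is the usual Huber tuning constant for ~95% efficiency."""
    A = np.c_[x, np.ones_like(x)]
    theta = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        r = y - A @ theta
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)            # Huber weight function
        sw = np.sqrt(w)
        theta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return theta
```

Because the weights fall off for large standardized residuals, gross outliers contribute almost nothing to the final fit, while inliers keep full weight.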
Real-time markerless tracking for augmented reality: the virtual visual servoing framework
IEEE Transactions on Visualization and Computer Graphics, 2006
Cited by 114 (29 self)
Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a “video see-through” monocular vision system. Tracking objects in the scene amounts to computing the pose between the camera and the objects. Virtual objects can then be projected into the scene using this pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives, including straight lines, circles, cylinders, and spheres. A local moving-edges tracker is used to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least-squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences, including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
3D Model Construction Using Range and Image Data
In CVPR, 2000
Cited by 78 (4 self)
This paper deals with the automated creation of geometrically and photometrically correct 3D models of the world. These models can be used for virtual reality, telepresence, digital cinematography, and urban planning applications. The combination of range sensing (dense depth estimates) and image sensing (color information) provides data sets that allow us to create geometrically correct, photorealistic models of high quality. The 3D models are first built from range data using a volumetric set intersection method previously developed by us. Photometry can be mapped onto these models by registering features from both the 3D and 2D data sets. Range data segmentation algorithms have been developed to identify planar regions, determine linear features from planar intersections that can serve as features for registration with lines in the 2D imagery, and reduce the overall complexity of the models. Results are shown for building models of large buildings on our campus using real data acquired from m...
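The planar-intersection step, turning two segmented planes into a 3D line feature for registration, can be sketched as follows. The plane representation n·x = d and the least-squares choice of a point on the line are assumptions of this sketch; the paper's own parameterization may differ.

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Line of intersection of two planes n·x = d, returned as
    (point on line, unit direction). Planes must not be parallel."""
    direction = np.cross(n1, n2)               # line direction
    A = np.vstack([n1, n2])
    # Minimum-norm point satisfying both plane equations.
    p = np.linalg.lstsq(A, np.array([d1, d2]), rcond=None)[0]
    return p, direction / np.linalg.norm(direction)
```

For example, the planes z = 0 and x = 1 intersect in the line x = 1, z = 0 with direction along the y axis.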
EPnP: An Accurate O(n) Solution to the PnP Problem
International Journal of Computer Vision, 2008
Cited by 69 (4 self)
We propose a non-iterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences—whose computational complexity grows linearly with n. This is in contrast to state-of-the-art methods that are O(n^5) or even O(n^8), without being more accurate. Our method is applicable for all n ≥ 4 and properly handles both planar and non-planar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera reference frame, which can be done in O(n) time by expressing these coordinates as a weighted sum of the eigenvectors of a 12 × 12 matrix and solving a small constant number of quadratic equations to pick the right weights. Furthermore, if maximal precision is required, the output of the closed-form solution can be used to initialize a Gauss-Newton scheme, which improves accuracy at a negligible additional cost. The advantages of our method are demonstrated by thorough testing on both synthetic and real data.
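The central idea, rewriting each 3D point as a weighted sum of four virtual control points, is easy to sketch. The control-point choice below (centroid plus scaled principal directions) follows the common convention, and the details are illustrative rather than an exact transcription of the paper.

```python
import numpy as np

def control_points(X):
    """Four control points for a (non-planar) 3D point set: the
    centroid plus points along the principal directions of the data."""
    c = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - c, full_matrices=False)
    scale = S / np.sqrt(len(X))
    return np.vstack([c, c + scale[:, None] * Vt])   # shape (4, 3)

def barycentric(X, C):
    """Weights alpha with X_i = sum_j alpha_ij C_j and sum_j alpha_ij = 1,
    obtained by solving a 4x4 homogeneous linear system per point."""
    Ch = np.c_[C, np.ones(4)].T                      # 4x4 system matrix
    Xh = np.c_[X, np.ones(len(X))].T                 # 4xn right-hand sides
    return np.linalg.solve(Ch, Xh).T                 # (n, 4) weights
```

Because the weights are invariant under rigid transforms, recovering the four control points in the camera frame is enough to recover every point, which is what reduces the problem to the small eigenvector computation the abstract describes.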
Localization methods for a mobile robot in urban environments
IEEE Transactions on Robotics, 2004
Cited by 62 (1 self)
Abstract — This paper addresses the problems of building a functional mobile robot for urban site navigation and modeling, with a focus on keeping track of the robot's location. We have developed a localization system that employs two methods. The first method uses odometry, a compass and tilt sensor, and a global positioning sensor. An extended Kalman filter integrates the sensor data and keeps track of the uncertainty associated with it. The second method is based on camera pose estimation. It is used when the uncertainty of the first method becomes very large. The pose estimation is done by matching linear features in the image with a simple and compact environmental model. We have demonstrated the functionality of the robot and the localization methods with real-world experiments. Index Terms — Mobile robots, localization, machine vision.
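The sensor fusion in the first method is a standard extended Kalman filter. A generic predict/update sketch follows; the state, models, and noise terms are placeholders, not the paper's actual robot and sensor models.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Prediction: propagate the state through motion model f with
    Jacobian F, and grow the covariance by the process noise Q."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Update: fuse measurement z via model h with Jacobian H and
    measurement noise R, shrinking the state covariance."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P
```

In the linear 1D case with a direct measurement and unit noise, a single update moves the state halfway to the measurement and halves the variance, matching the classical Kalman filter.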
Accurate Non-Iterative O(n) Solution to the PnP Problem
2007
Cited by 56 (8 self)
We propose a non-iterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences—whose computational complexity grows linearly with n. This is in contrast to state-of-the-art methods that are O(n^5) or even O(n^8), without being more accurate. Our method is applicable for all n ≥ 4 and properly handles both planar and non-planar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera reference frame, which can be done in O(n) time by expressing these coordinates as a weighted sum of the eigenvectors of a 12 × 12 matrix and solving a small constant number of quadratic equations to pick the right weights. The advantages of our method are demonstrated by thorough testing on both synthetic and real data.
A RealTime Tracker For Markerless Augmented Reality
In ACM/IEEE Int. Symp. on Mixed and Augmented Reality (ISMAR’03), 2003
Cited by 55 (16 self)
Augmented Reality has now progressed to the point where real-time applications are being considered and needed. At the same time it is important that synthetic elements are rendered and aligned in the scene in an accurate and visually acceptable way. In order to address these issues, a real-time, robust and efficient 3D model-based tracking algorithm is proposed for a 'video see-through' monocular vision system. Tracking objects in the scene amounts to computing the pose between the camera and the objects. Virtual objects can then be projected into the scene using this pose. Here, nonlinear pose computation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different features, including lines, circles, cylinders and spheres. A local moving-edges tracker is used to provide real-time tracking of points normal to the object contours. A method is proposed for combining local position uncertainty and global pose uncertainty in an efficient and accurate way by propagating uncertainty. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least-squares implementation. The method presented in this paper has been validated on several complex image sequences, including outdoor environments. Results show the method to be robust to occlusion, changes in illumination and mistracking.
Calibration Requirements and Procedures for a Monitor-Based Augmented Reality System
IEEE Transactions on Visualization and Computer Graphics, 1995
Cited by 53 (10 self)
Augmented reality entails the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they blend into the real world in terms of alignment, perspective, illumination, etc. For practical reasons, the information necessary to obtain this realistic blending cannot be known a priori and cannot be hardwired into a system. Instead, a number of calibration procedures are necessary so that the location and parameters of each of the system components are known. In this paper we identify the calibration steps necessary to build a computer model of the real world and then, using the monitor-based augmented reality system developed at ECRC (Grasp) as an example, we describe each of the calibration processes. These processes determine the internal parameters of our imaging devices (scan converter, frame grabber, and video camera), as well as the geometric transformations that relate all of the physical objects of the system to a known world coordinate system.
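The quantities such a calibration recovers, namely the camera's internal parameters and the rigid transform relating it to the world frame, combine in the standard pinhole projection. The sketch below uses one common matrix convention (intrinsics K, world-to-camera rotation R and translation t); it illustrates the geometry, not the paper's specific procedures.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of world points X (n x 3) to pixels:
    transform to the camera frame, perspective-divide, apply intrinsics
    K = [[fx, s, cx], [0, fy, cy], [0, 0, 1]]."""
    Xc = X @ R.T + t                 # world -> camera frame
    xn = Xc[:, :2] / Xc[:, 2:3]      # perspective divide
    return xn @ K[:2, :2].T + K[:2, 2]
```

With identity rotation and the camera 5 units from a point on the optical axis, the point projects to the principal point (cx, cy), which is the alignment condition calibration must achieve for real and virtual imagery to blend.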