Results 1–10 of 126
A Tutorial on Visual Servo Control
IEEE Transactions on Robotics and Automation, 1996
"... This paper provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review ..."
Abstract

Cited by 778 (24 self)
This paper provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed. Since any visual servo system must be capable of tracking image features in a sequence of images, we include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
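The core idea behind the control laws surveyed in this tutorial can be illustrated with a toy sketch (not taken from the paper): a proportional control on an image-feature error, v = -lambda * e, makes the error decay exponentially toward zero. The gain, time step, and initial error below are hypothetical.

```python
# Toy sketch of the basic visual-servo control idea (illustrative only):
# drive an image-feature error e to zero with v = -lambda * e,
# which makes e decay exponentially.

lam, dt = 2.0, 0.01   # hypothetical gain and time step
e = 50.0              # hypothetical initial feature error, in pixels
for _ in range(300):
    v = -lam * e      # proportional control on the feature error
    e += v * dt       # simple Euler integration of the error dynamics
print(e)              # a small residual error after 3 s of simulated time
```

Real image-based servoing replaces the scalar gain with the pseudoinverse of an interaction (image Jacobian) matrix, but the exponential-decay behavior is the same.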
Fitting Parameterized Three-Dimensional Models to Images
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991
"... Modelbased recognition and motion tracking depends upon the ability to solve for projection and model parameters that will best fit a 3D model to matching 2D image features. This paper extends current methods of parameter solving to handle objects with arbitrary curved surfaces and with any nu ..."
Abstract

Cited by 343 (8 self)
Model-based recognition and motion tracking depend upon the ability to solve for the projection and model parameters that best fit a 3-D model to matching 2-D image features. This paper extends current methods of parameter solving to handle objects with arbitrary curved surfaces and with any number of internal parameters representing articulations, variable dimensions, or surface deformations.
Model-Based Object Pose in 25 Lines of Code
International Journal of Computer Vision, 1995
"... In this paper, we describe a method for finding the pose of an object from a single image. We assume that we can detect and match in the image four or more noncoplanar feature points of the object, and that we know their relative geometry on the object. The method combines two algorithms ..."
Abstract

Cited by 238 (4 self)
In this paper, we describe a method for finding the pose of an object from a single image. We assume that we can detect and match in the image four or more non-coplanar feature points of the object, and that we know their relative geometry on the object. The method combines two algorithms
Fast and Globally Convergent Pose Estimation From Video Images
1998
"... Determining the rigid transformation relating 2D images to known 3D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and/or which do not effecti ..."
Abstract

Cited by 136 (5 self)
Determining the rigid transformation relating 2-D images to known 3-D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and/or which do not effectively account for the orthonormal structure of rotation matrices. We show that the pose estimation problem can be formulated as that of minimizing an error metric based on collinearity in object (as opposed to image) space. Using object-space collinearity error, we derive an iterative algorithm which directly computes orthogonal rotation matrices and which is globally convergent. Experimentally, we show that the method is computationally efficient, that it is no less accurate than the best currently employed optimization methods, and that it outperforms all tested methods in robustness to outliers. Chien-Ping Lu, Silicon Graphics Inc. (cplu@engr.sgi.com); Greg Hager, Department of Computer...
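The object-space collinearity error mentioned in the abstract can be sketched concretely. Assuming normalized image coordinates, each image point v defines a viewing ray with line-of-sight projector V = v vᵀ / (vᵀ v), and the residual of a transformed model point q = R p + t is the component of q orthogonal to that ray, (I - V) q. This is only a minimal illustration of the error metric, not the authors' globally convergent algorithm, which also includes a closed-form rotation update.

```python
# Sketch of the object-space collinearity error (illustrative, not the
# paper's code). For a normalized image point v, the line-of-sight
# projector is V = v v^T / (v^T v); the residual of q = R p + t is
# (I - V) q, the part of q orthogonal to the viewing ray.

def mat_vec(M, x):
    """3x3 matrix times 3-vector."""
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def line_of_sight_projector(v):
    """V = v v^T / (v^T v) projects onto the viewing ray through v."""
    s = sum(c * c for c in v)
    return [[v[i] * v[j] / s for j in range(3)] for i in range(3)]

def collinearity_error(v, p, R, t):
    """Squared object-space error ||(I - V)(R p + t)||^2."""
    q = [a + b for a, b in zip(mat_vec(R, p), t)]   # q = R p + t
    Vq = mat_vec(line_of_sight_projector(v), q)
    return sum((qi - vqi) ** 2 for qi, vqi in zip(q, Vq))

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# A point lying exactly on its viewing ray has (numerically) zero error:
print(collinearity_error([0.2, -0.1, 1.0], [0.2, -0.1, 1.0], I3, [0.0] * 3))

# A point one unit off the optical axis, viewed along the axis:
print(collinearity_error([0.0, 0.0, 1.0], [1.0, 0.0, 1.0], I3, [0.0] * 3))  # → 1.0
```

Summing this error over all correspondences gives the objective that the paper's iteration minimizes directly over orthogonal rotations.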
Real-time markerless tracking for augmented reality: the virtual visual servoing framework
IEEE Transactions on Visualization and Computer Graphics, 2006
"... Tracking is a very important research subject in a realtime augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a realtime, robust, and efficient 3D modelbased tracking algorithm is proposed for ..."
Abstract

Cited by 108 (29 self)
Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. To address these issues, a real-time, robust, and efficient 3-D model-based tracking algorithm is proposed for a “video see-through” monocular vision system. Tracking the objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using this pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3-D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving-edges tracker is used to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3-D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences, including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
Linear N-point camera pose determination
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999
"... AbstractÐThe determination of camera position and orientation from known correspondences of 3D reference points and their images is known as pose estimation in computer vision and space resection in photogrammetry. It is wellknown that from three corresponding points there are at most four algebraic ..."
Abstract

Cited by 106 (2 self)
The determination of camera position and orientation from known correspondences of 3-D reference points and their images is known as pose estimation in computer vision and space resection in photogrammetry. It is well known that from three corresponding points there are at most four algebraic solutions. Less appears to be known about the cases of four and five corresponding points. In this paper, we propose a family of linear methods that yield a unique solution to 4- and 5-point pose determination for generic reference points. We first review the 3-point algebraic method. Then we present our two-step 4-point and one-step 5-point linear algorithms. The 5-point method can also be extended to handle more than five points. Finally, we demonstrate our methods on both simulated and real images. We show that they do not degenerate for coplanar configurations and even outperform the special linear algorithm for coplanar configurations in practice. Index Terms: pose estimation, space resection, 2D-3D image orientation, exterior orientation determination, perspective-n-point problem, four points, five points.
Robust Methods for Estimating Pose and a Sensitivity Analysis
1994
"... This paper mathematically analyzes and proposes new solutions for the problem of estimat ing the camera 3D location and orientation (Pose Deter'migrations) from a matched set of 3D model and 2D image landmark features. Leastsquares techniques for line tokens, which minimize both rotation and ..."
Abstract

Cited by 97 (9 self)
This paper mathematically analyzes and proposes new solutions for the problem of estimating the camera's 3-D location and orientation (pose determination) from a matched set of 3-D model and 2-D image landmark features. Least-squares techniques for line tokens, which minimize both rotation and translation simultaneously, are developed and shown to be far superior to the earlier techniques, which solved for rotation first and then translation. However, least-squares techniques fail catastrophically when outliers (or gross errors) are present in the match data. Outliers arise frequently due to incorrect correspondences or gross errors in the 3-D model. Robust techniques for pose determination are developed to handle data contaminated by fewer than 50% outliers.
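The down-weighting principle behind such robust estimators can be sketched in one dimension (this is only an illustration, not the paper's pose algorithm): iteratively reweighted least squares (IRLS) with Huber weights recomputes a weighted mean, giving gross outliers small influence. The data values and Huber threshold below are hypothetical.

```python
# Illustrative IRLS sketch with Huber weights for a 1-D location estimate
# (not the paper's algorithm). The same down-weighting idea is what lets
# robust pose estimators tolerate gross outliers that ruin a plain
# least-squares fit.

def huber_weight(r, k=1.345):
    """Huber weight: 1 inside the threshold k, k/|r| outside."""
    a = abs(r)
    return 1.0 if a <= k else k / a

def irls_location(data, iters=50):
    """Robust location estimate: IRLS, i.e. repeated weighted means."""
    mu = sum(data) / len(data)          # least-squares start (plain mean)
    for _ in range(iters):
        w = [huber_weight(x - mu) for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

# Five inliers near 10, plus two gross outliers (hypothetical values):
data = [9.8, 10.1, 10.0, 9.9, 10.2, 100.0, 90.0]
print(sum(data) / len(data))   # the plain mean is dragged far above 30
print(irls_location(data))     # the robust estimate stays close to 10
```

For pose, the residuals are reprojection or line-token errors rather than scalars, but each IRLS pass is the same: solve a weighted least-squares problem, then recompute the weights.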
Linear pose estimation from points or lines
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
"... ..."
(Show Context)
Affine Structure from Line Correspondences with Uncalibrated Affine Cameras
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
"... This paper presents a linear algorithm for recovering 3D affine shape and motion from line correspondences with uncalibrated affine cameras. The algorithm requires a minimum of seven line correspondences over three views. The key idea is the introduction of a onedimensional projective camera. This ..."
Abstract

Cited by 80 (9 self)
This paper presents a linear algorithm for recovering 3-D affine shape and motion from line correspondences with uncalibrated affine cameras. The algorithm requires a minimum of seven line correspondences over three views. The key idea is the introduction of a one-dimensional projective camera, which converts 3-D affine reconstruction of "line directions" into 2-D projective reconstruction of "points". In addition, a line-based factorisation method is also proposed to handle redundant views. Experimental results on both simulated and real image sequences validate the robustness and accuracy of the algorithm.
EPnP: An Accurate O(n) Solution to the PnP Problem
International Journal of Computer Vision, 2008
"... We propose a noniterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3Dto2D point correspondences—whose computational complexity grows linearly with n. This is in contrast to stateoftheart methods that are O(n 5) or even O(n 8), without being more ac ..."
Abstract

Cited by 62 (4 self)
We propose a non-iterative solution to the PnP problem, the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences, whose computational complexity grows linearly with n. This is in contrast to state-of-the-art methods that are O(n^5) or even O(n^8), without being more accurate. Our method is applicable for all n ≥ 4 and properly handles both planar and non-planar configurations. Our central idea is to express the n 3-D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera reference frame, which can be done in O(n) time by expressing these coordinates as a weighted sum of the eigenvectors of a 12 × 12 matrix and solving a small constant number of quadratic equations to pick the right weights. Furthermore, if maximal precision is required, the output of the closed-form solution can be used to initialize a Gauss-Newton scheme, which improves accuracy at a negligible additional cost in time. The advantages of our method are demonstrated by thorough testing on both synthetic and real data.
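The control-point idea from this abstract can be made concrete with a small sketch (not the authors' implementation): the weights expressing a 3-D point as a combination of four control points, summing to 1, come from a 4×4 linear system. The control points and test point below are hypothetical.

```python
# Minimal sketch of EPnP's central idea (illustrative, not the authors'
# code): write a 3-D point p as a weighted sum of four control points
# c_1..c_4 with weights summing to 1. Three rows of a 4x4 system enforce
# sum_j a_j c_j = p, the fourth enforces sum_j a_j = 1.

def solve4(A, b):
    """Solve a 4x4 linear system by Gaussian elimination with pivoting."""
    n = 4
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def barycentric_weights(p, ctrl):
    """Weights a with sum(a) = 1 and sum_j a_j * ctrl[j] = p."""
    A = [[c[i] for c in ctrl] for i in range(3)] + [[1.0] * 4]
    return solve4(A, list(p) + [1.0])

ctrl = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
p = (0.3, 0.2, 0.4)
a = barycentric_weights(p, ctrl)
print(a)                                                              # sums to 1
print([sum(a[j] * ctrl[j][i] for j in range(4)) for i in range(3)])   # ≈ p
```

Because these weights are invariant under rigid transforms, the same weights relate the unknown camera-frame control points to the camera-frame 3-D points; estimating those four control points is what the eigenvector step described in the abstract solves.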