A Tutorial on Visual Servo Control
IEEE Transactions on Robotics and Automation, 1996
"... This paper provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review ..."
Abstract

Cited by 825 (25 self)
 Add to MetaCart
This paper provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, positionbased and imagebased systems, are then discussed. Since any visual servo system must be capable of tracking image features in a sequence of images, we include an overview of featurebased and correlationbased methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
Fitting Parameterized Three-Dimensional Models to Images
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991
"... Modelbased recognition and motion tracking depends upon the ability to solve for projection and model parameters that will best fit a 3D model to matching 2D image features. This paper extends current methods of parameter solving to handle objects with arbitrary curved surfaces and with any nu ..."
Abstract

Cited by 360 (8 self)
 Add to MetaCart
Modelbased recognition and motion tracking depends upon the ability to solve for projection and model parameters that will best fit a 3D model to matching 2D image features. This paper extends current methods of parameter solving to handle objects with arbitrary curved surfaces and with any number of internal parameters representing articulations, variable dimensions, or surface deformations. Numerical
Model-Based Object Pose in 25 Lines of Code
International Journal of Computer Vision, 1995
"... In this paper, we describe a method for finding the pose of an object from a single image. We assume that we can detect and match in the image four or more noncoplanar feature points of the object, and that we know their relative geometry on the object. The method combines two algorithms ..."
Abstract

Cited by 254 (5 self)
 Add to MetaCart
(Show Context)
In this paper, we describe a method for finding the pose of an object from a single image. We assume that we can detect and match in the image four or more noncoplanar feature points of the object, and that we know their relative geometry on the object. The method combines two algorithms
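The two algorithms are not named in the snippet, but this paper describes the well-known POSIT method of DeMenthon and Davis, which alternates a pose-from-orthography-and-scaling (POS) step with a perspective correction. Below is a minimal sketch of the POS step alone, under an exact weak-perspective camera with unit focal length; the function name and test geometry are my own illustration, not the paper's code.

```python
import numpy as np

def pos_step(P, uv):
    """Weak-perspective pose (POS step), a hypothetical sketch.
    P:  object points (n x 3), with P[0] at the object origin.
    uv: normalized image coordinates (n x 2).
    Returns rotation R and translation t of the object frame."""
    M = P[1:] - P[0]                       # vectors from the reference point
    du = uv[1:, 0] - uv[0, 0]
    dv = uv[1:, 1] - uv[0, 1]
    # Under weak perspective, M @ I = du and M @ J = dv with I = r1/Z0, J = r2/Z0
    I = np.linalg.lstsq(M, du, rcond=None)[0]
    J = np.linalg.lstsq(M, dv, rcond=None)[0]
    s = 0.5 * (np.linalg.norm(I) + np.linalg.norm(J))   # scale 1/Z0 (f = 1)
    r1 = I / np.linalg.norm(I)
    r2 = J / np.linalg.norm(J)
    R = np.vstack([r1, r2, np.cross(r1, r2)])
    t = np.array([uv[0, 0] / s, uv[0, 1] / s, 1.0 / s]) # reference point in camera frame
    return R, t
```

With exact weak-perspective data and at least four non-coplanar points, the least-squares solve recovers the scaled rotation rows exactly; POSIT then iterates a perspective correction of the image coordinates around this step.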
Fast and Globally Convergent Pose Estimation From Video Images
1998
"... Determining the rigid transformation relating 2D images to known 3D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and/or which do not effecti ..."
Abstract

Cited by 152 (6 self)
 Add to MetaCart
Determining the rigid transformation relating 2D images to known 3D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and/or which do not effectively account for the orthonormal structure of rotation matrices. We show that the pose estimation problem can be formulated as that of minimizing an error metric based on collinearity in object (as opposed to image) space. Using object space collinearity error, we derive an iterative algorithm which directly computes orthogonal rotation matrices and which is globally convergent. Experimentally, we show that the method is computationally efficient, that it is no less accurate than the best currently employed optimization methods, and that it outperforms all tested methods in robustness to outliers. ChienPing Lu, Silicon Graphics Inc. cplu@engr.sgi.com y Greg Hager, Department of Computer...
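The object-space formulation described in the abstract can be sketched as follows: each image point defines a line of sight, the translation minimizing the collinearity error has a closed form for a fixed rotation, and the rotation is then re-estimated by orthogonal Procrustes against the projections of the transformed points onto the rays. This is a reconstruction from the abstract, not the authors' code; the function name, iteration count, and test geometry are assumptions.

```python
import numpy as np

def object_space_pose(P, uv, iters=100):
    """Sketch of pose iteration minimizing object-space collinearity error.
    P: object points (n x 3); uv: normalized image points (n x 2)."""
    v = np.column_stack([uv, np.ones(len(uv))])
    # rank-1 projector onto each line of sight: V_i = v v^T / (v^T v)
    V = np.einsum('ni,nj->nij', v, v) / np.einsum('ni,ni->n', v, v)[:, None, None]
    A = np.linalg.inv(np.sum(np.eye(3) - V, axis=0))   # for the closed-form translation
    R = np.eye(3)
    for _ in range(iters):
        rp = P @ R.T
        # translation minimizing sum ||(I - V_i)(R p_i + t)||^2 for fixed R
        t = A @ np.einsum('nij,nj->i', V - np.eye(3), rp)
        q = np.einsum('nij,nj->ni', V, rp + t)         # project onto lines of sight
        Pc, qc = P - P.mean(0), q - q.mean(0)
        # orthogonal Procrustes: rotation mapping centered P onto centered q
        U, _, Vt = np.linalg.svd(qc.T @ Pc)
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    t = A @ np.einsum('nij,nj->i', V - np.eye(3), P @ R.T)
    return R, t
```

The SVD step always returns a proper rotation (the determinant correction rules out reflections), which is the "directly computes orthogonal rotation matrices" property the abstract emphasizes.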
Linear pose estimation from points or lines
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
"... ..."
EPnP: An Accurate O(n) Solution to the PnP Problem
International Journal of Computer Vision, 2008
"... We propose a noniterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3Dto2D point correspondences—whose computational complexity grows linearly with n. This is in contrast to stateoftheart methods that are O(n 5) or even O(n 8), without being more ac ..."
Abstract

Cited by 70 (4 self)
 Add to MetaCart
We propose a noniterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3Dto2D point correspondences—whose computational complexity grows linearly with n. This is in contrast to stateoftheart methods that are O(n 5) or even O(n 8), without being more accurate. Our method is applicable for all n ≥ 4 and handles properly both planar and nonplanar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera referential, which can be done in O(n) time by expressing these coordinates as weighted sum of the eigenvectors of a 12 × 12 matrix and solving a small constant number of quadratic equations to pick the right weights. Furthermore, if maximal precision is required, the output of the closedform solution can be used to initialize a GaussNewton scheme, which improves accuracy with negligible amount of additional time. The advantages of our method are demonstrated by thorough testing on both synthetic and realdata.
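The control-point idea can be illustrated in a few lines. The sketch below (my own construction, not the authors' code) builds the four virtual control points from the centroid and the principal directions of a non-planar point set, computes each point's barycentric weights, and checks that those weights are invariant under a rigid transform, which is the property that lets the camera-frame problem be rewritten in terms of the four control points only.

```python
import numpy as np

def control_points(P):
    """Centroid plus the three principal directions of the data (assumed non-planar)."""
    c0 = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c0)
    return np.vstack([c0, c0 + Vt[0], c0 + Vt[1], c0 + Vt[2]])

def barycentric(P, C):
    """Weights a with P_i = sum_j a_ij C_j and sum_j a_ij = 1 for each point."""
    A = np.vstack([C.T, np.ones(4)])          # one 4 x 4 system shared by all points
    B = np.vstack([P.T, np.ones(len(P))])
    return np.linalg.solve(A, B).T            # n x 4 weight matrix
```

Because the weights sum to one, a rigid transform of the control points transforms every reconstructed point the same way; EPnP exploits this by solving only for the camera-frame coordinates of the four control points.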
Object pose: The link between weak perspective, paraperspective and full perspective
International Journal of Computer Vision, 1997
"... Abstract. Recently, DeMenthon and Davis (1992, 1995) proposed a method for determining the pose of a 3D object with respect to a camera from 3D to 2D point correspondences. The method consists of iteratively improving the pose computed with a weak perspective camera model to converge, at the limi ..."
Abstract

Cited by 67 (7 self)
 Add to MetaCart
(Show Context)
Abstract. Recently, DeMenthon and Davis (1992, 1995) proposed a method for determining the pose of a 3D object with respect to a camera from 3D to 2D point correspondences. The method consists of iteratively improving the pose computed with a weak perspective camera model to converge, at the limit, to a pose estimation computed with a perspective camera model. In this paper we give an algebraic derivation of DeMenthon and Davis ’ method and we show that it belongs to a larger class of methods where the perspective camera model is approximated either at zero order (weak perspective) or first order (paraperspective). We describe in detail an iterative paraperspective pose computation method for both non coplanar and coplanar object points. We analyse the convergence of these methods and we conclude that the iterative paraperspective method (proposed in this paper) has better convergence properties than the iterative weak perspective method. We introduce a simple way of taking into account the orthogonality constraint associated with the rotation matrix. We analyse the sensitivity to camera calibration errors and we define the optimal experimental setup with respect to imprecise camera calibration. We compare the results obtained with this method and with a nonlinear optimization method.
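The zero- and first-order approximations mentioned in the abstract can be compared numerically. In normalized camera coordinates with a reference point at depth Z0, weak perspective replaces X/Z by X/Z0, while paraperspective adds the first-order depth correction from the Taylor expansion of X/Z about the reference point. The geometry below is an arbitrary example of mine, not from the paper.

```python
import numpy as np

def perspective(p):
    return p[:2] / p[2]

def weak_perspective(p, ref):
    # zero-order approximation: divide by the reference depth
    return p[:2] / ref[2]

def paraperspective(p, ref):
    # first-order Taylor expansion of (X/Z, Y/Z) in depth about the reference point
    return p[:2] / ref[2] - ref[:2] * (p[2] - ref[2]) / ref[2] ** 2
```

For points off the optical axis, the paraperspective correction term accounts for the reference point's lateral offset, so its error relative to full perspective is smaller than the weak-perspective error.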
Dynamic registration correction in video-based augmented reality systems
IEEE Computer Graphics and Applications, 1995
"... ..."
Visual servoing of an underactuated dynamic rigid-body system: an image-based approach
IEEE Transactions on Robotics and Automation, 2002
"... Abstract—A new imagebased control strategy for visual servoing of a class of underactuated rigid body systems is presented. The proposed control design applies to “eyeinhand ” systems where the camera is fixed to a rigid body with actuated dynamics. The control design is motivated by a theoreti ..."
Abstract

Cited by 57 (18 self)
 Add to MetaCart
(Show Context)
Abstract—A new imagebased control strategy for visual servoing of a class of underactuated rigid body systems is presented. The proposed control design applies to “eyeinhand ” systems where the camera is fixed to a rigid body with actuated dynamics. The control design is motivated by a theoretical analysis of the dynamic equations of motion of a rigid body and exploits passivitylike properties of these dynamics to derive a Lyapunov control algorithm using robust backstepping techniques. The proposed control is novel in considering the full dynamic system incorporating all degrees of freedom (albeit for a restricted class of dynamics) and in not requiring measurement of the relative depths of the observed image points. A motivating application is the stabilization of a scale model autonomous helicopter over a marked landing pad. Index Terms—Imagebased visual servo (IBVS), nonlinear control, rigidbody dynamics, underactuated systems. I.
Accurate Non-Iterative O(n) Solution to the PnP Problem
2007
"... We propose a noniterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3Dto2D point correspondences—whose computational complexity grows linearly with n. This is in contrast to stateoftheart methods that are O(n 5) or even O(n 8), without being more accu ..."
Abstract

Cited by 56 (8 self)
 Add to MetaCart
(Show Context)
We propose a noniterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3Dto2D point correspondences—whose computational complexity grows linearly with n. This is in contrast to stateoftheart methods that are O(n 5) or even O(n 8), without being more accurate. Our method is applicable for all n ≥ 4 and handles properly both planar and nonplanar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera referential, which can be done in O(n) time by expressing these coordinates as weighted sum of the eigenvectors of a 12 × 12 matrix and solving a small constant number of quadratic equations to pick the right weights. The advantages of our method are demonstrated by thorough testing on both synthetic and realdata.