Results 1–10 of 97
A Tutorial on Visual Servo Control
 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION
, 1996
Abstract
Cited by 823 (25 self)
This paper provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed. Since any visual servo system must be capable of tracking image features in a sequence of images, we include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
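The image-based control law at the center of this taxonomy can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes normalized point features with known depths Z, uses the classic interaction (image Jacobian) matrix for a point, and computes the camera velocity screw as v = -λ L⁺ (s - s*).

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized point feature (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) from the image-based law
    v = -lam * pinv(L) @ (s - s*), stacking one interaction matrix per feature."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

When the interaction matrix has full row rank, this law drives the feature error toward zero exponentially (de/dt = -λ e); with more features than degrees of freedom, it does so in the least-squares sense.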
Flexible camera calibration by viewing a plane from unknown orientations
, 1999
Abstract
Cited by 510 (7 self)
We propose a flexible new technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one step from laboratory environments to real-world use. The corresponding software is available from the author’s Web page.
Linear Pushbroom Cameras
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1994
Abstract
Cited by 169 (6 self)
Modelling the pushbroom sensors commonly used in satellite imagery is quite difficult and computationally intensive due to the complicated motion of the orbiting satellite with respect to the rotating earth. In addition, the mathematical model is quite complex, involving orbital dynamics, and hence is difficult to analyze. In this paper, a simplified model of a pushbroom sensor (the linear pushbroom model) is introduced. It has the advantage of computational simplicity while at the same time giving very accurate results compared with the full orbiting pushbroom model. Methods are given for solving the major standard photogrammetric problems for the linear pushbroom sensor. Simple noniterative solutions are given for the following problems: computation of the model parameters from ground-control points; determination of relative model parameters from image correspondences between two images; scene reconstruction given image correspondences and ground-control points. In addition, the linear pushbroom model leads to theoretical insights that will be approximately valid for the full model as well. The epipolar geometry of linear pushbroom cameras is investigated and shown to be totally different from that of a perspective camera. Nevertheless, a matrix analogous to the essential matrix of perspective cameras is shown to exist for linear pushbroom sensors. From this it is shown that a scene is determined up to an affine transformation from two views with linear pushbroom cameras. Keywords: pushbroom sensor, satellite image, essential matrix, photogrammetry, camera model. The research described in this paper has been supported by DARPA Contract #MDA972-91-C-0053. Real pushbroom sensors are commonly used in satellite cameras, notably the SPOT satellite for the generatio...
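The linear pushbroom model itself is compact enough to illustrate: the scan coordinate u is linear in the world point (no perspective division, reflecting the sensor's motion), while v is a full perspective ratio. Following the (u, vw, w)ᵀ = MX convention, a hedged sketch (my notation, not the paper's code):

```python
import numpy as np

def lp_project(M, X):
    """Linear pushbroom projection. M is a 3x4 LP camera matrix: the first
    coordinate u is taken directly (no division), v comes from a perspective
    ratio, i.e. (u, v*w, w) = M @ (X, 1)."""
    u, vw, w = M @ np.append(X, 1.0)
    return np.array([u, vw / w])
```

For a hypothetical sensor sweeping along the x-axis with view planes x = const, M could be [[1,0,0,0],[0,1,0,0],[0,0,1,0]], giving u = X and v = Y/Z, which makes the hybrid orthographic/perspective character of the model explicit.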
Fast and Globally Convergent Pose Estimation From Video Images
, 1998
Abstract
Cited by 152 (6 self)
Determining the rigid transformation relating 2D images to known 3D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and/or which do not effectively account for the orthonormal structure of rotation matrices. We show that the pose estimation problem can be formulated as that of minimizing an error metric based on collinearity in object (as opposed to image) space. Using object space collinearity error, we derive an iterative algorithm which directly computes orthogonal rotation matrices and which is globally convergent. Experimentally, we show that the method is computationally efficient, that it is no less accurate than the best currently employed optimization methods, and that it outperforms all tested methods in robustness to outliers. Chien-Ping Lu, Silicon Graphics Inc., cplu@engr.sgi.com; Greg Hager, Department of Computer...
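The object-space collinearity idea can be sketched compactly: each observed ray defines a line-of-sight projection operator V_i, the optimal translation for a fixed rotation has a closed form, and the rotation update is an SVD-based Procrustes step. This follows my reading of the abstract; initialization and termination are simplified, so treat it as an illustration rather than the paper's algorithm.

```python
import numpy as np

def pose_from_rays(P, rays, iters=300):
    """Orthogonal-iteration sketch. P is (n,3) model points; rays is (n,3)
    sight directions (x, y, 1) of their images. Iteratively minimizes the
    object-space collinearity error sum ||(I - V_i)(R p_i + t)||^2."""
    n = len(P)
    I = np.eye(3)
    V = np.stack([np.outer(v, v) / (v @ v) for v in rays])  # projectors onto rays

    def best_t(R):
        # Closed-form optimal translation for a fixed rotation R
        b = np.mean([(V[i] - I) @ (R @ P[i]) for i in range(n)], axis=0)
        return np.linalg.solve(I - V.mean(axis=0), b)

    Pc = P - P.mean(axis=0)
    R = I
    for _ in range(iters):
        t = best_t(R)
        Q = np.stack([V[i] @ (R @ P[i] + t) for i in range(n)])  # ray-projected points
        Qc = Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(Qc.T @ Pc)                      # Procrustes step
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt  # keep det(R) = +1
    return R, best_t(R)
```

Because each half-step (rotation via Procrustes, translation via the closed form) solves its subproblem exactly, the object-space error decreases monotonically, which is the mechanism behind the convergence claim.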
Through-the-Lens Camera Control
, 1992
Abstract
Cited by 147 (7 self)
In this paper we introduce through-the-lens camera control, a body of techniques that permit a user to manipulate a virtual camera by controlling and constraining features in the image seen through its lens. Rather than solving for camera parameters directly, constrained optimization is used to compute their time derivatives based on desired changes in user-defined controls. This effectively permits new controls to be defined independent of the underlying parameterization. The controls can also serve as constraints, maintaining their values as others are changed. We describe the techniques in general and work through a detailed example of a specific camera model. Our implementation demonstrates a gallery of useful controls and constraints and provides some examples of how these may be used in composing images and animations.
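The core computation can be reduced to a few lines: treat the image position of a controlled feature as a function h(q) of the camera parameters, then solve J q̇ = ḣ for the parameter time derivatives, here by pseudoinverse rather than the paper's constrained optimization. The pinhole model and translation-only parameterization below are my own simplifications for illustration.

```python
import numpy as np

def project(q, X):
    # Hypothetical camera model: pinhole at position q (rotation fixed), focal length 1.
    Xc = X - q
    return Xc[:2] / Xc[2]

def jacobian(q, X, eps=1e-6):
    # Central-difference Jacobian of the image position with respect to q.
    J = np.zeros((2, 3))
    for k in range(3):
        dq = np.zeros(3)
        dq[k] = eps
        J[:, k] = (project(q + dq, X) - project(q - dq, X)) / (2 * eps)
    return J

def camera_velocity(q, X, h_dot):
    # Solve J @ q_dot = h_dot in the least-squares sense (minimum-norm solution).
    return np.linalg.pinv(jacobian(q, X)) @ h_dot
```

Integrating q̇ over time then moves the camera so the on-screen feature follows the desired image trajectory, independent of how the camera is parameterized.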
Linear n-point camera pose determination
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1999
Abstract
Cited by 119 (2 self)
Abstract—The determination of camera position and orientation from known correspondences of 3D reference points and their images is known as pose estimation in computer vision and space resection in photogrammetry. It is well-known that from three corresponding points there are at most four algebraic solutions. Less appears to be known about the cases of four and five corresponding points. In this paper, we propose a family of linear methods that yield a unique solution to 4- and 5-point pose determination for generic reference points. We first review the 3-point algebraic method. Then we present our two-step 4-point and one-step 5-point linear algorithms. The 5-point method can also be extended to handle more than five points. Finally, we demonstrate our methods on both simulated and real images. We show that they do not degenerate for coplanar configurations and even outperform the special linear algorithm for coplanar configurations in practice. Index Terms—Pose estimation, space resection, 2D-3D image orientation, exterior orientation determination, perspective-n-point problem, four points, five points.
Real-time markerless tracking for augmented reality: the virtual visual servoing framework
 IEEE TRANS. ON VISUALIZATION AND COMPUTER GRAPHICS
, 2006
Abstract
Cited by 114 (29 self)
Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a “video see-through” monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices is given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
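The robustness mechanism, an M-estimator implemented via iteratively reweighted least squares, can be sketched independently of the tracking code. The example below applies Huber weights (a common ρ-function, chosen here for illustration rather than taken from the paper) with a MAD-based scale estimate to a plain linear fit; in the paper the same weighting would modulate the visual control law instead.

```python
import numpy as np

def huber_weights(r, k=1.345):
    # IRLS weight w(r) = 1 for |r| <= k, k/|r| beyond: bounded influence of outliers.
    a = np.abs(r)
    w = np.ones_like(a)
    w[a > k] = k / a[a > k]
    return w

def irls(A, y, iters=100, k=1.345):
    """Robust linear fit min sum rho(y - A x) via iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # ordinary LS start
    for _ in range(iters):
        r = y - A @ x
        # Robust scale from the median absolute deviation (1.4826 ~ Gaussian consistency)
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = huber_weights(r / scale, k)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return x
```

On data with a minority of gross outliers, the reweighting progressively discounts the outlying rows, whereas ordinary least squares remains heavily biased by them.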
Robust Methods for Estimating Pose and a Sensitivity Analysis
, 1994
Abstract
Cited by 100 (10 self)
This paper mathematically analyzes and proposes new solutions for the problem of estimating the camera 3D location and orientation (pose determination) from a matched set of 3D model and 2D image landmark features. Least-squares techniques for line tokens, which minimize both rotation and translation simultaneously, are developed and shown to be far superior to the earlier techniques which solved for rotation first and then translation. However, least-squares techniques fail catastrophically when outliers (or gross errors) are present in the match data. Outliers arise frequently due to incorrect correspondences or gross errors in the 3D model. Robust techniques for pose determination are developed to handle data contaminated by fewer than 50% outliers.
Linear pose estimation from points or lines
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 2003
Camera Calibration with One-Dimensional Objects
, 2004
Abstract
Cited by 71 (1 self)
Camera calibration has been studied extensively in computer vision and photogrammetry, and the proposed techniques in the literature include those using 3D apparatus (two or three planes orthogonal to each other, or a plane undergoing a pure translation, etc.), 2D objects (planar patterns undergoing unknown motions), and 0D features (self-calibration using unknown scene points). Yet, this paper proposes a new calibration technique using 1D objects (points aligned on a line), thus filling the missing dimension in calibration. In particular, we show that camera calibration is not possible with free-moving 1D objects, but can be solved if one point is fixed. A closed-form solution is developed if six or more observations of such a 1D object are made. For higher accuracy, a nonlinear technique based on the maximum likelihood criterion is then used to refine the estimate. Singularities have also been studied. Besides the theoretical aspect, the proposed technique is also important in practice especially when calibrating multiple cameras mounted apart from each other, where the calibration objects are required to be visible simultaneously.