Results 1–10 of 72
A Tutorial on Visual Servo Control
IEEE Transactions on Robotics and Automation, 1996
Cited by 600 (21 self)
Abstract:
This paper provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed. Since any visual servo system must be capable of tracking image features in a sequence of images, we include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control. 1 Introduction: Today there are over 800,000 robots in the world, mostly working in factory environments...
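The image-based class of systems discussed in this abstract regulates an error defined directly in the image. As a hedged illustration (a standard textbook sketch, not code from the tutorial itself), the following shows the usual point-feature interaction matrix and the resulting proportional control law; the function names, the gain `lam`, and the assumption of known feature depths `Z` and normalized image coordinates are mine:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a point feature at
    normalized image coordinates (x, y) with depth Z."""
    return np.array([
        [-1.0/Z, 0.0,    x/Z, x*y,       -(1.0 + x*x), y],
        [0.0,    -1.0/Z, y/Z, 1.0 + y*y, -x*y,         -x]])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Proportional image-based law: camera velocity screw
    (vx, vy, vz, wx, wy, wz) = -lam * L^+ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

With the error at zero the commanded velocity is zero; with a single point the pseudo-inverse gives the minimum-norm screw that drives the image error exponentially to zero.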
Visual Control of Robot Manipulators: A Review
Visual Servoing, 1994
Cited by 49 (1 self)
Abstract:
This paper attempts to present a comprehensive summary of research results in the use of visual information to control robot manipulators and related mechanisms. An extensive bibliography is provided which also includes important papers from the elemental disciplines upon which visual servoing is based. The research results are discussed in terms of historical context, commonality of function, algorithmic approach and method of implementation. 1 Introduction: This paper presents the history of, and reviews current research into, the use of visual information for the control of robot manipulators and mechanisms. Visual control of manipulators promises substantial advantages when working with targets whose position is unknown, or with manipulators which may be flexible or inaccurate. The reported use of visual information to guide robots, or more generally mechanisms, is quite extensive and encompasses manufacturing applications, teleoperation, missile tracking cameras, fruit picking as well...
Self-calibration of rotating and zooming cameras
International Journal of Computer Vision, 2001
Cited by 49 (6 self)
Abstract:
In this paper we describe the theory and practice of self-calibration of cameras which are fixed in location and may freely rotate while changing their internal parameters by zooming. The basis of our approach is to make use of the so-called infinite homography constraint, which relates the unknown calibration matrices to the computed inter-image homographies. In order for the calibration to be possible, some constraints must be placed on the internal parameters of the camera. We present various self-calibration methods. First, an iterative nonlinear method is described which is very versatile in terms of the constraints that may be imposed on the camera calibration: each of the camera parameters may be assumed to be known, constant throughout the sequence but unknown, or free to vary. Secondly, we describe a fast linear method which works under the minimal assumption of zero camera skew, or the more restrictive conditions of square pixels (zero skew and known aspect ratio) or known principal point. We show experimental results on both synthetic and real image sequences (where ground-truth data was available) to assess the accuracy and the stability of the algorithms and to compare the results of applying different constraints on the camera parameters. We also derive an optimal Maximum Likelihood estimator for the calibration and the motion parameters. Prior knowledge about the distribution of the estimated parameters (such as the location of the principal point) may also be incorporated via Maximum a Posteriori estimation. We then identify some near-ambiguities that arise under rotational motions, showing that coupled changes of certain parameters are barely observable, making them indistinguishable. Finally, we study the negative effect of radial distortion in the self-calibration process and point out some possible solutions to it.
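The infinite homography constraint this abstract relies on can be checked numerically. A minimal sketch (all numbers illustrative, not from the paper): for a purely rotating camera with constant internals K, each inter-image homography H = K R K⁻¹ leaves the dual image of the absolute conic ω = K Kᵀ invariant, and self-calibration inverts this relation to recover K:

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues formula for a rotation about the given axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

K = np.array([[800.0, 0.0, 320.0],      # illustrative internals: focal 800,
              [0.0, 800.0, 240.0],      # principal point (320, 240), zero skew
              [0.0, 0.0, 1.0]])
R = rot([0.2, 1.0, 0.1], 0.3)           # rotation between the two views
H = K @ R @ np.linalg.inv(K)            # infinite homography (det(H) = 1)
omega = K @ K.T                         # dual image of the absolute conic
# The constraint: omega = H @ omega @ H.T for every inter-image homography.
```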
Relative pose calibration between visual and inertial sensors
International Journal of Robotics Research, Special Issue: 2nd Workshop on Integration of Vision and Inertial Sensors, 26:561–575, 2007
Cited by 47 (15 self)
Abstract:
This paper proposes an approach to calibrate off-the-shelf cameras and inertial sensors to obtain a useful integrated system for static and dynamic situations. The rotation between the camera and the inertial sensor can be estimated, when calibrating the camera, by having both sensors observe the vertical direction, using a vertical chessboard target and gravity. The translation between the two can be estimated using a simple passive turntable and static images, provided that the system can be adjusted to turn about the inertial sensor's null point in several poses. Simulation and real-data results are presented to show the validity and simple requirements of the proposed method. Index Terms: computer vision, inertial sensors, sensor fusion, calibration.
Hand-Eye Calibration
Cited by 45 (9 self)
Abstract:
Whenever a sensor is mounted on a robot hand, it is important to know the relationship between the sensor and the hand. The problem of determining this relationship is referred to as the hand-eye calibration problem. Hand-eye calibration is important in at least two types of tasks: (i) mapping sensor-centered measurements into the robot workspace frame and (ii) allowing the robot to precisely move the sensor. In the past, some solutions were proposed in the particular case of the sensor being a TV camera. With almost no exception, all existing solutions attempt to solve a homogeneous matrix equation of the form AX = XB. This paper has the following main contributions. First, we show that there are two possible formulations of the hand-eye calibration problem. One formulation is the classical one that we just mentioned. A second formulation takes the form of the following homogeneous matrix equation: MY = M′YB. The advantage of the latter formulation is that the extrinsic and intrinsic parameters of the camera need not be made explicit. Indeed, this formulation directly uses the 3×4 perspective matrices (M and M′) associated with two positions of the camera with respect to the calibration frame. Moreover, this formulation, together with the classical one, covers a wider range of camera-based sensors to be calibrated with respect to the robot hand: single scanline cameras, stereo heads, range finders, etc. Second, we develop a common mathematical framework to solve the hand-eye calibration problem using either of the two formulations. We represent rotation by a unit quaternion. We present two methods: (i) a closed-form solution for rotation using unit quaternions, followed by solving for translation, and (ii) a nonlinear technique for simultaneously solving for rotation and translation. Third, we perform a stability analysis both for our two methods and for the classical linear method developed by Tsai & Lenz [TL89]. This analysis allows the comparison of the three methods. In the light of this comparison, the nonlinear optimization method, which solves for rotation and translation simultaneously, seems to be the most robust one with respect to noise and measurement errors.
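The closed-form quaternion route described in this abstract can be sketched as follows. This is a hedged illustration, not the authors' code: it stacks the quaternion form of the rotation constraint R_A R_X = R_X R_B, takes the SVD null vector, then solves (R_A − I) t_X = R_X t_B − t_A in least squares for the translation. Function names and the motion-pair input format are my own.

```python
import numpy as np

def qmul_L(q):
    """Matrix of left quaternion multiplication: q * p = qmul_L(q) @ p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def qmul_R(q):
    """Matrix of right quaternion multiplication: p * q = qmul_R(q) @ p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def qmul(p, q):
    return qmul_L(p) @ q

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def q_to_R(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def solve_hand_eye(motions):
    """motions: list of ((q_a, t_a), (q_b, t_b)) pairs, one per robot
    motion, satisfying A X = X B. Returns (q_x, t_x) for X."""
    # Rotation: q_a * q_x = q_x * q_b, i.e. (L(q_a) - R(q_b)) q_x = 0.
    M = np.vstack([qmul_L(qa) - qmul_R(qb) for (qa, _), (qb, _) in motions])
    q_x = np.linalg.svd(M)[2][-1]      # right singular vector of the
    q_x /= np.linalg.norm(q_x)         # smallest singular value
    # Translation: (R_A - I) t_x = R_X t_B - t_A, stacked, least squares.
    R_x = q_to_R(q_x)
    C = np.vstack([q_to_R(qa) - np.eye(3) for (qa, _), _ in motions])
    d = np.concatenate([R_x @ tb - ta for (_, ta), (_, tb) in motions])
    t_x = np.linalg.lstsq(C, d, rcond=None)[0]
    return q_x, t_x
```

At least two motions with non-parallel rotation axes are needed for the null space to be one-dimensional; the recovered quaternion is defined up to sign, which does not affect the rotation matrix.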
A Kalman Filter-based Algorithm for IMU-Camera Calibration
2007
Cited by 38 (8 self)
Abstract:
Vision-aided Inertial Navigation Systems (VINS) can provide precise state estimates for the 3D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an IMU with visual observations from a camera, under the assumption that the rigid transformation between the two sensors is known. Errors in the IMU-camera calibration process cause biases that reduce the accuracy of the estimation process and can even lead to divergence. In this paper, we present a Kalman filter-based algorithm for precisely determining the unknown transformation between a camera and an IMU. Contrary to previous approaches, we explicitly account for the time correlations of the IMU measurements and provide a figure of merit (covariance) for the estimated transformation. The proposed method does not require any special hardware (such as a spin table or 3D laser scanner) except a calibration target. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy.
Motion of an Uncalibrated Stereo Rig: Self-Calibration and Metric Reconstruction
IEEE Transactions on Robotics and Automation, 1993
Cited by 37 (2 self)
Abstract:
We address in this paper the problem of self-calibration and metric reconstruction (up to a scale) from one unknown motion of an uncalibrated stereo rig, assuming the coordinates of the principal point of each camera are known (this assumption is not necessary if one more motion is available). The epipolar constraint is first formulated for two uncalibrated images. The problem then becomes one of estimating the unknowns such that the discrepancy from the epipolar constraint, in terms of distances between points and their corresponding epipolar lines, is minimized. The initialization of the unknowns is based on the work of Maybank, Luong and Faugeras on self-calibration of a single moving camera, which requires solving a set of so-called Kruppa equations. The redundancy of the information contained in a sequence of stereo images makes this method more robust than using a sequence of monocular images. Real data have been used to test the proposed method, and the results obtained are quite good.
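The point-to-epipolar-line discrepancy that this method minimizes can be sketched as a small helper (an illustrative implementation, not the authors' code), assuming homogeneous image points and a fundamental matrix F with x2ᵀ F x1 = 0:

```python
import numpy as np

def sym_epipolar_distance(F, x1, x2):
    """Symmetric point-to-epipolar-line distance for homogeneous image
    points x1, x2 and a fundamental matrix F with x2^T F x1 = 0."""
    l2 = F @ x1                               # epipolar line of x1 in image 2
    l1 = F.T @ x2                             # epipolar line of x2 in image 1
    d2 = abs(x2 @ l2) / np.hypot(l2[0], l2[1])
    d1 = abs(x1 @ l1) / np.hypot(l1[0], l1[1])
    return 0.5 * (d1 + d2)
```

Summing this residual over all correspondences, as a function of the unknown calibration and motion parameters, gives the cost the paper's nonlinear minimization drives down.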
Hand-Eye Calibration Using Dual Quaternions
The International Journal of Robotics Research, 1999
Cited by 27 (0 self)
Abstract:
To relate measurements made by a sensor mounted on a mechanical link to the robot’s coordinate frame, we must first estimate the transformation between these two frames. Many algorithms have been proposed for this so-called hand-eye calibration, but they do not treat the relative position and orientation in a unified way. In this paper, we introduce the use of dual quaternions, which are the algebraic counterpart of screws. Then we show how a line transformation can be written with the dual-quaternion product. We algebraically prove that if we consider the camera and motor transformations as screws, then only the line coefficients of the screw axes are relevant regarding the hand-eye calibration. The dual-quaternion parameterization facilitates a new simultaneous solution for the hand-eye rotation and translation using the singular value decomposition. Real-world performance is assessed directly in the application of hand-eye information for stereo reconstruction, as well as in the positioning of cameras. Both real and synthetic experiments show the superiority of the approach over two other proposed methods.
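The dual-quaternion machinery this abstract builds on can be sketched briefly. A hedged illustration (function names are mine, not the paper's): a rigid transform (R(q), t) maps to the dual quaternion q_r + ε q_d with q_d = ½ t ⊗ q_r, the dual-quaternion product composes transforms, and the translation is recovered as 2 q_d ⊗ q_r*:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions (w, x, y, z)."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw*qw - pv @ qv],
                           pw*qv + qw*pv + np.cross(pv, qv)))

def dq_from_rt(q, t):
    """Dual quaternion (q_r, q_d) of the rigid transform (R(q), t)."""
    return q, 0.5 * qmul(np.concatenate(([0.0], t)), q)

def dq_mul(a, b):
    """Dual-quaternion product; composes transforms (b applied first)."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def dq_to_t(dq):
    """Recover the translation: t = 2 q_d * conj(q_r), q_r a unit quaternion."""
    qr, qd = dq
    return 2.0 * qmul(qd, qr * np.array([1.0, -1.0, -1.0, -1.0]))[1:]
```

Encoding rotation and translation in one algebraic object is what lets the paper solve for both simultaneously with a single SVD, rather than solving rotation first and propagating its error into the translation.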
Visual Servoing from Lines
2000
Cited by 25 (4 self)
Abstract:
In this work, we present a fundamentally new approach to visual servoing using lines. It is based on a theoretical and geometrical study of the main line representations, which allowed us to define a new representation, the so-called binormalized Plücker coordinates. They are particularly well suited to visual servoing. Indeed, they allow the definition of a proper image-line alignment notion. The control law which realizes such an alignment moreover has several properties: partial decoupling between rotation and translation, analytical inversion of the motion equations, and global asymptotic stability conditions. This control law was validated both in simulation and experimentally in the specific case of an orthogonal trihedron.