Results 1–10 of 65
Hand-Eye Calibration Using Dual Quaternions
THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 1999
Abstract

Cited by 63 (0 self)
To relate measurements made by a sensor mounted on a mechanical link to the robot’s coordinate frame, we must first estimate the transformation between these two frames. Many algorithms have been proposed for this so-called hand-eye calibration, but they do not treat the relative position and orientation in a unified way. In this paper, we introduce the use of dual quaternions, which are the algebraic counterpart of screws. Then we show how a line transformation can be written with the dual-quaternion product. We algebraically prove that if we consider the camera and motor transformations as screws, then only the line coefficients of the screw axes are relevant regarding the hand-eye calibration. The dual-quaternion parameterization facilitates a new simultaneous solution for the hand-eye rotation and translation using the singular value decomposition. Real-world performance is assessed directly in the application of hand-eye information for stereo reconstruction, as well as in the positioning of cameras. Both real and synthetic experiments show the superiority of the approach over two other proposed methods.
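The dual-quaternion machinery this abstract refers to can be illustrated with a short sketch. This is a minimal, assumed implementation for intuition only, not the paper's SVD-based calibration algorithm: a unit dual quaternion (qr, qd) with qd = 0.5 · (0, t) · qr encodes a rigid transform, and applying it to a point reproduces Rp + t.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    """Quaternion conjugate (w, -x, -y, -z)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def dq_from_rt(qr, t):
    """Dual quaternion (qr, qd) for rotation qr and translation t: qd = 0.5 * (0, t) * qr."""
    return qr, 0.5 * qmul(np.array([0.0, *t]), qr)

def dq_transform_point(qr, qd, p):
    """Apply the encoded rigid transform: R p + t, with t the vector part of 2 qd qr*."""
    rotated = qmul(qmul(qr, np.array([0.0, *p])), qconj(qr))[1:]
    t = 2.0 * qmul(qd, qconj(qr))[1:]
    return rotated + t

# 90-degree rotation about z combined with translation (1, 2, 3)
qr, qd = dq_from_rt(np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)]),
                    np.array([1.0, 2.0, 3.0]))
print(dq_transform_point(qr, qd, np.array([1.0, 0.0, 0.0])))  # ≈ [1. 3. 3.]
```

Composing two such transforms is just the componentwise dual-quaternion product, which is what makes the representation convenient for chaining camera and hand motions.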
Application of Lie Algebras to Visual Servoing
International Journal of Computer Vision, 1999
Abstract

Cited by 29 (3 self)
A novel approach to visual servoing is presented, which takes advantage of the structure of the Lie algebra of affine transformations. The aim of this project is to use feedback from a visual sensor to guide a robot arm to a target position. The target position is learned using the principle of `teaching by showing' in which the supervisor places the robot in the correct target position and the system captures the necessary information to be able to return to that position. The sensor is placed in the end effector of the robot, the `camera-in-hand' approach, and thus provides direct feedback of the robot motion relative to the target scene via observed transformations of the scene. These scene transformations are obtained by measuring the affine deformations of a target planar contour (under the weak perspective assumption), captured by use of an active contour, or snake. Deformations of the snake are constrained using the Lie groups of affine and projective transformations. Properties of the ...
Observability of 3D Motion
INTERNATIONAL JOURNAL OF COMPUTER VISION, 2000
Abstract

Cited by 27 (14 self)
This paper examines the inherent difficulties in observing 3D rigid motion from image sequences. It does so without considering a particular estimator. Instead, it presents a statistical analysis of all the possible computational models which can be used for estimating 3D motion from an image sequence. These computational models are classified according to the mathematical constraints that they employ and the characteristics of the imaging sensor (restricted field of view and full field of view). Regarding the mathematical constraints, there exist two principles relating a sequence of images taken by a moving camera. One is the "epipolar constraint," applied to motion fields, and the other the "positive depth" constraint, applied to normal flow fields. 3D motion estimation amounts to optimizing these constraints over the image. A statistical modeling of these constraints leads to functions which are studied with regard to their topographic structure, specifically as regards the errors ...
Simultaneous robot-world and hand-eye calibration
IEEE Transactions on Robotics and Automation, 1998
Abstract

Cited by 22 (4 self)
Abstract—Recently, Zhuang et al. [1] proposed a method that allows simultaneous computation of the rigid transformations from world frame to robot base frame and from hand frame to camera frame. Their method attempts to solve a homogeneous matrix equation of the form AX = ZB. They use quaternions to derive explicit linear solutions for X and Z. In this short paper, we present two new solutions that attempt to solve the homogeneous matrix equation mentioned above: 1) a closed-form method which uses quaternion algebra and a positive quadratic error function associated with this representation; 2) a method based on nonlinear constrained minimization which simultaneously solves for rotations and translations. These results may be useful for other problems that can be formulated in the same mathematical form. We perform a sensitivity analysis for both of our methods and the linear method developed by Zhuang et al. [1]. This analysis allows the comparison of the three methods. In the light of this comparison, the nonlinear optimization method, which solves for rotations and translations simultaneously, seems to be the most stable with respect to noise and to measurement errors.
Fig. 1. Robot/world (Z) and hand/eye (X) calibration. The camera is mounted onto the gripper and camera motions are determined using a calibration pattern. The world frame is the frame of the calibration pattern.
Index Terms—Hand/eye calibration, quaternion algebra, robot/world calibration.
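The AX = ZB formulation can be exercised numerically. The sketch below is an assumed illustration, not the paper's solver; the frame conventions (X as hand-to-camera, Z as world-to-base) are taken from the figure caption above, and the helper names are invented. It generates synthetic data consistent with fixed X and Z, then checks the residual that a nonlinear method of the kind described would drive to zero.

```python
import numpy as np

def rand_se3(rng):
    """Random rigid transform as a 4x4 homogeneous matrix."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.linalg.det(Q))   # ensure det = +1 (proper rotation)
    T = np.eye(4)
    T[:3, :3] = Q
    T[:3, 3] = rng.normal(size=3)
    return T

rng = np.random.default_rng(1)
X = rand_se3(rng)   # hand -> camera transform (unknown in practice)
Z = rand_se3(rng)   # world -> base transform (unknown in practice)

# Each measured robot pose B_i induces a camera pose A_i with A_i X = Z B_i,
# so consistent synthetic camera poses are A_i = Z B_i X^{-1}.
B = [rand_se3(rng) for _ in range(3)]
A = [Z @ Bi @ np.linalg.inv(X) for Bi in B]

residuals = [np.linalg.norm(Ai @ X - Z @ Bi) for Ai, Bi in zip(A, B)]
print(max(residuals))  # ~0: the synthetic data satisfy AX = ZB exactly
```

With noisy measurements the residuals no longer vanish, and minimizing their sum over the rotation and translation parameters of X and Z is the kind of simultaneous nonlinear estimation the abstract describes.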
Calibration of a multi-camera rig from non-overlapping views
IN LECTURE NOTES IN COMPUTER SCIENCE 4713 (DAGM), 2007
Abstract

Cited by 15 (1 self)
Abstract. A simple, stable and generic approach for estimation of relative positions and orientations of multiple rigidly coupled cameras is presented in this paper. The algorithm does not impose constraints on the field of view of the cameras and works even in the extreme case when the sequences from the different cameras are totally disjoint (i.e. when no part of the scene is captured by more than one camera). The influence of the rig motion on the existence of a unique solution is investigated and degenerate rig motions are identified. Each camera captures an individual sequence which is afterwards processed by a structure and motion (SAM) algorithm, resulting in positions and orientations for each camera. The unknown relative transformations between the rigidly coupled cameras are estimated utilizing the rigidity constraint of the rig.
Visual Servoing of Robot Manipulators – Part I: Projective Kinematics
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 1999
Abstract

Cited by 14 (5 self)
Visual servoing of robot manipulators is a key technique where the appearance of an object in the image plane is used to control the velocity of the end-effector such that the desired position is reached in the scene. The vast majority of visual servoing methods proposed so far uses calibrated robots in conjunction with calibrated cameras. It has been shown that the behavior of visual control loops does not degrade too much in the presence of calibration errors. Nevertheless, camera and robot calibration are complex and time-consuming processes requiring special-purpose mechanical devices, such as theodolites and calibration jigs. In this paper ...
RES: computing the interactions between real and virtual objects in video sequences
In Proceedings of the IEEE Workshop on Networked Realities, 1995
Abstract

Cited by 9 (1 self)
Possibilities for dynamic interactions of people with machines, created by the combination of virtual reality and communication networking, provide interesting new problems at the intersection of two domains (among others): Computer Vision and Computer Graphics. In this paper, a technical solution to one of these problems is presented to automate the mixing of real and synthetic objects in the same animated video sequence. Current approaches usually involve mainly 2D-based effects and rely heavily on human expertise and interaction. We aim at achieving a close binding between 3D-based analysis and synthesis techniques to compute the interaction between a real scene captured in a sequence of calibrated images and a computer-generated environment.
K.: Automatic alignment of a camera with a line scan lidar system
In: Proc. IEEE Int. Conf. Robot. Autom., 2011
Abstract

Cited by 9 (2 self)
Abstract — We propose a new method for extrinsic calibration of a line-scan LIDAR with a perspective projection camera. Our method is a closed-form, minimal solution to the problem. The solution is a symbolic template found via variable elimination and the multipolynomial Macaulay resultant. It does not require initialization, and can be used in an automatic calibration setting when paired with RANSAC and least-squares refinement. We show the efficacy of our approach through a set of simulations and a real calibration.
Online Hand-Eye Calibration
1999
Abstract

Cited by 6 (0 self)
In this paper, we address the problem of hand-eye calibration of a robot-mounted video camera. First, we derive a new linear formulation of the problem. This allows an algebraic analysis of the cases that the usual approaches do not consider. Second, we extend this new formulation into an online hand-eye calibration method. This method makes it possible to dispense with the calibration object required by the standard approaches and to use unknown scenes instead. Finally, experimental results validate both methods.
Hand-Eye Calibration in terms of motion of lines using Geometric Algebra
In Proc. of the 10th Scandinavian Conference on Image Analysis SCIA'97, 1997
Abstract

Cited by 5 (5 self)
In this paper we will show that the Clifford or geometric algebra is very well suited for the representation and manipulation of geometric objects useful in computer vision and kinematics, and also that the computer implementations are straightforward. The power of this approach will be shown by the analysis of the geometry and algebra and the optimal solution of the hand-eye calibration problem. The robustness of the algorithm is experimentally compared with classical approaches.
Categories: Computer vision; robotics; Clifford algebra; geometric algebra; rotors; motors; screws; hand-eye calibration.
1 Introduction
Geometric algebra is a coordinate-free approach to geometry based on the algebras of Grassmann and Clifford. The algebra is defined on a space whose elements are called multivectors; a multivector is a linear combination of objects of different type, e.g. scalars and vectors. It has an associative and noncommutative product called the geometric or Clifford product. The existen...