### Table 3: Comparison with a robust estimation of the hand-eye transformation

2000

"... In PAGE 37: ... Nevertheless, it remains in an acceptable ratio since the relative error in translation is close to 3%. To balance the lack of ground truth, we also compared the results obtained in this experiment to the robust estimation described in Experiment 1 (Table 3). This comparison confirms the accuracy of both the linear method and the self-calibration scheme.... ..."

### Table 2: Comparison with a robust estimation of the hand-eye transformation

2000

"... In PAGE 35: ... Then, we compared the results obtained above to this robust estimation. We gather the errors in Table 2. It confirms that the linear method is numerically very efficient, especially as far as rotation is concerned.... ..."

### Table 3. Comparison with a Reference Estimation of the Hand-Eye Transformation

"... In PAGE 17: ... Nevertheless, it remains in an acceptable ratio since the relative error in translation is close to 3%. We also compared the results obtained in this experiment to the reference estimation (Table 3). This comparison confirms the accuracy of both the linear method and the self-calibration scheme.... ..."

### Table 1. Mean errors in rotation and translation of relative eye movements computed with different hand-eye calibration methods using structure-from-motion as a basis.

"... In PAGE 6: ... The errors are computed by averaging over a set of randomly selected relative movements. Table 1 shows residual errors in translation and rotation as well as the computation times for hand-eye calibration on a Linux PC (Athlon XP2600+) including data selection, but not feature tracking and 3-D reconstruction. The latter steps are the same for all methods, and take approximately 90 sec for tracking and 200 sec for 3-D reconstruction.... In PAGE 7: ... After feature tracking and 3-D reconstruction, different hand-eye calibration methods have been evaluated; in all cases the reconstructed camera movement has been used as eye data. The results shown in Table 1 were computed as follows: DQ, scale sep.: Here, the scale factor was estimated first by solving (4) and (5).... ..."

### Table 1: Mean errors in rotation and translation of relative eye movements, and errors for the hand-eye transformation w.r.t. ground truth for the synthetic data set, as well as computation time

2004

"... In PAGE 5: ... In the following, the real data sets are denoted by Real 1 (270 frames) and Real 2 (190 frames), the synthetic one by Synth (108 frames). Table 1 shows errors after hand-eye calibration and computation times for the different methods which have been applied to each data set. Since no ground truth is available when calibrating real data, we cannot give errors between the real hand-eye transformation and the computed one.... In PAGE 6: ... The codebook sizes were: 600 (Real 1), 1100 (Real 2), and 500 (Synth). The last three rows of Table 1 show the errors for the hand-eye transformation, since for the synthetically generated data set ground truth information was available.... ..."

Cited by 1

### Table 2. Comparison with a Reference Estimation of the Hand-Eye Transformation

"... In PAGE 17: ... Then, we compared the estimations obtained above to the reference estimation. We gathered the errors in Table 2. It confirms that the linear method is numerically very efficient, especially as far as rotation is concerned.... ..."

### Table 1: This table summarizes the results of camera calibration obtained with a classical off-line solution (the first seven rows) and with the method described in this paper (the last row). The extrinsic parameters of the projection matrix M obtained by self-calibration are associated with the hand/eye parameters. We compared these parameters with those obtained by calibrating the hand/eye relationship using the classical approach, i.e., solving a homogeneous matrix equation of the form AX = XB. The rotational parameters obtained with our method are exactly the same as the parameters obtained with the classical hand/eye calibration method. We noticed a discrepancy between the two sets of translational parameters. This discrepancy is of the order of a few (3 to 4) centimeters, the distance between the camera origin and the hand origin being approximately 10 centimeters. We have not yet been able to properly analyse the source of this discrepancy. It is most probably due to errors in the robot's offsets, which don't affect the rotational parameters but do affect the translational ones. In the classical approach, Euclidean information is provided by a carefully machined calibration grid. In our approach the Euclidean information is provided by robot motions, which are less precise. This loss of ...

1995

Cited by 4
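The homogeneous equation AX = XB named in the caption above is the classical hand-eye calibration formulation: A and B are the relative motions of the robot hand and the camera, and X is the unknown hand-eye transform. As an illustrative sketch only (a generic linear two-step solver, not the specific method of any paper listed here; all function names are ours), the rotation can be recovered by fitting the rotation axes of the paired motions with an orthogonal-Procrustes (Kabsch) step, after which the translation follows from linear least squares:

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector of a rotation matrix."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2.0 * np.sin(theta))

def rot_exp(w):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def hand_eye(As, Bs):
    """Solve A_i X = X B_i for the 4x4 hand-eye transform X.

    As: relative hand motions; Bs: relative camera motions (4x4 each).
    Requires at least two motions with non-parallel rotation axes.
    """
    # Rotation: R_X maps camera rotation axes onto hand rotation axes;
    # fit it with the Kabsch/SVD orthogonal-Procrustes solution.
    alphas = np.array([rot_log(A[:3, :3]) for A in As])
    betas = np.array([rot_log(B[:3, :3]) for B in Bs])
    H = betas.T @ alphas
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R_X = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    rhs = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X, *_ = np.linalg.lstsq(C, rhs, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

With noise-free synthetic motions generated as A = X B X⁻¹, this recovers X to machine precision; with real data, the residuals of the two linear systems give exactly the kind of rotation/translation errors the tables above report.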

### Table 1. Mean error per frame for the old method and for hand-eye calibration, once with the best movement pairs and once with pairs in temporal order. For the Euler angles, the error is given in degrees; for the rotation, in the norm of the difference quaternion; and for the translation, in mm.

2003

"... In PAGE 7: ... 1 (right). Table 1 shows the errors for the two sequences ALF1 (55 frames) and ALF2 (100 frames) for the formerly used method, for the best movement pairs using the hand-eye method, and for the hand-eye algorithm where the relative movements were used in temporal order. The error per frame was computed between the actual endoscope poses (from the calibration pattern) and the poses computed by applying the hand-eye transformation to the robot arm data.... ..."

Cited by 1

### Table 1: Run times (hh:mm:ss) to learn hand-eye coordination

1994

Cited by 4