## Visual servo control Part I: basic approaches (2006)

Venue: IEEE Robotics and Automation Magazine

Citations: 106 (28 self)

### BibTeX

```bibtex
@ARTICLE{Chaumette06visualservo,
  author  = {François Chaumette and Seth Hutchinson},
  title   = {Visual servo control Part I: basic approaches},
  journal = {IEEE Robotics and Automation Magazine},
  year    = {2006}
}
```

### Abstract

This article is the first of a two-part series on the topic of visual servo control—using computer vision data in the servo loop to control the motion of a robot. In the present article, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques.

### Citations

2467 | Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography
- Fischler, Bolles
- 1981
Citation Context ...exist some configurations for which Lx is singular [5]. Furthermore, there exist four distinct camera poses for which e = 0, i.e., four global minima exist, and it is impossible to differentiate them [6]. For these reasons, more than three points are usually considered. Approximating the Interaction Matrix: There are several choices available for constructing the estimate L̂e⁺ to be used in the con...

600 | A tutorial on visual servo control
- Hutchinson, Hager, et al.
- 1996
Citation Context ...constructing the estimate L̂e⁺ to be used in the control law. One popular scheme is, of course, to choose L̂e⁺ = Le⁺ if Le = Lx is known; that is, if the current depth Z of each point is available [7]. In practice, these parameters must be estimated at each iteration of the control scheme. The basic methods presented in this article use classical pose estimation methods that will be briefly presen...
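
The "known current depth" choice described in this context can be sketched in a few lines. The example below is our own illustration, not code from the paper: it restricts the camera to translations along x and z so that the point's interaction matrix reduces to an invertible 2×2 block, and applies the control law vc = −λ L̂e⁺ e with the true depth Z assumed known. Point coordinates, gain, and step size are arbitrary illustrative values.

```python
def l_xz(x, y, Z):
    # Columns of the point interaction matrix for translations (vx, vz) only.
    return [[-1.0 / Z, x / Z],
            [0.0,      y / Z]]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix.
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

# Hypothetical scene: one 3-D point; the camera may translate along x and z.
X, Y, Z = 0.2, 0.1, 1.0          # point in the camera frame
xd, yd = 0.0, 0.1                # desired image coordinates
lam, dt = 0.5, 0.05              # control gain and integration step

for _ in range(400):
    x, y = X / Z, Y / Z          # perspective projection (unit focal length)
    e = [x - xd, y - yd]         # feature error
    Linv = inv2(l_xz(x, y, Z))   # here L̂e⁺ = Le⁻¹, true depth known
    vx = -lam * (Linv[0][0] * e[0] + Linv[0][1] * e[1])
    vz = -lam * (Linv[1][0] * e[0] + Linv[1][1] * e[1])
    X -= vx * dt                 # point motion relative to the camera
    Z -= vz * dt

assert abs(X / Z - xd) < 1e-3 and abs(Y / Z - yd) < 1e-3
```

With a perfect model the closed-loop error obeys ė = −λe, so the image error decays exponentially, which the final assertion checks.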

527 | A new approach to visual servoing in robotics
- Espiau, Chaumette, et al.
- 1992
Citation Context ...estimation methods that will be briefly presented in the next section. Another popular approach is to choose L̂e⁺ = Le∗⁺, where Le∗ is the value of Le for the desired position e = e∗ = 0 [8]. In this case, L̂e⁺ is constant, and only the desired depth of each point has to be set, which means no varying 3-D parameters have to be estimated during the visual servo. Finally, the choice L̂...

391 | Three-dimensional object recognition from single two-dimensional images
- Lowe
- 1987
Citation Context ...computer vision problem is called the 3-D localization problem. While this problem is beyond the scope of the present tutorial, many solutions have been presented in the literature (see, e.g., [16] and [17]). It is then typical to define s in terms of the parameterization used to represent the camera pose. Note that the parameters a involved in the definition (1) of s are now the camera intrinsic parame...

180 | An Invitation to 3-D Vision: From Images to Geometric Models
- Ma, Soatto, et al.
- 2003
Citation Context ... In this case, we take s = x = (x, y), the image plane coordinates of the point. The details of imaging geometry and perspective projection can be found in many computer vision texts, including [3], [4]. Taking the time derivative of the projection equations (6), we obtain ẋ = Ẋ/Z − XŻ/Z² = (Ẋ − xŻ)/Z and ẏ = Ẏ/Z − YŻ/Z² = (Ẏ − yŻ)/Z (7). We can relate the velocity of the 3-D point...
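
The derivative in (7), combined with the rigid-motion kinematics of the 3-D point, leads to the classical 2×6 interaction matrix of a point feature. A minimal sketch (the function name is ours):

```python
def interaction_matrix_point(x, y, Z):
    """2x6 interaction matrix of an image point (x, y) at depth Z, relating
    (x_dot, y_dot) to the camera velocity (vx, vy, vz, wx, wy, wz)."""
    return [
        [-1.0 / Z, 0.0,       x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ]

# A pure translation along the optical axis (vz) moves the point radially
# in the image at rate x/Z, y/Z:
L = interaction_matrix_point(0.5, -0.2, 2.0)
vz_effect_on_x = L[0][2]  # = x/Z = 0.25
```

Note the depth Z appears only in the translational columns, which is why the depth of each point must be known (or approximated) to form L̂e⁺.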

142 | Potential problems of stability and convergence in image-based and position-based visual servoing
- Chaumette
- 1998
Citation Context ...initial and desired configurations is very large, this phenomenon is amplified and leads to a particular case for a rotation of π rad where no rotational motion at all will be induced by the control scheme [11]. On the other hand, when the rotation is small, this phenomenon almost disappears. To conclude, the behavior is locally satisfactory (i.e., when the error is small), but it can be unsatisfactory when...

132 | Dynamic sensor-based control of robots with visual feedback
- Weiss, Sanderson, et al.
- 1987
Citation Context ...performance characteristics of the resulting closed-loop system? These questions are addressed in the remainder of the article. Classical Image-Based Visual Servo: Traditional image-based control schemes [1], [2] use the image-plane coordinates of a set of points (other choices are possible, but we defer discussion of these for Part II of the tutorial) to define the set s. The image measurements m are usually...

124 | Relative end-effector control using Cartesian position based visual servoing
- Wilson, Hulls, et al.
- 1996
Citation Context ...features set s. Such an approach would be, strictly speaking, a position-based approach, since it would require 3-D parameters in s. Position-Based Visual Servo: Position-based control schemes (PBVS) [2], [14], [15] use the pose of the camera with respect to some reference coordinate frame to define s. Computing that pose from a set of measurements in one image necessitates the camera intrinsic parameters ...

116 | Robot manipulators: mathematics, programming, and control
- Paul
- 1981
Citation Context ...interaction matrix related to xs can be determined using the spatial motion transform matrix V to transform velocities expressed in the left or right camera frames to the sensor frame. V is given by [13]: V = [ R  [t]×R ; 0  R ] (12), where [t]× is the skew-symmetric matrix associated with the vector t and where (R, t) ∈ SE(3) is the rigid body transformatio...
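
The block matrix in (12) is straightforward to assemble. A small sketch in pure Python (function names are ours; R is a 3×3 rotation matrix and t a translation vector):

```python
def skew(t):
    # [t]x, the skew-symmetric matrix such that [t]x v = t × v.
    tx, ty, tz = t
    return [[0.0, -tz,  ty],
            [ tz, 0.0, -tx],
            [-ty,  tx, 0.0]]

def matmul3(A, B):
    # Product of two 3x3 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def spatial_motion_transform(R, t):
    """6x6 velocity transform V = [[R, [t]x R], [0, R]] between two frames
    related by the rigid motion (R, t), as in (12)."""
    TR = matmul3(skew(t), R)
    V = [[0.0] * 6 for _ in range(6)]
    for i in range(3):
        for j in range(3):
            V[i][j] = R[i][j]          # top-left block: R
            V[i][j + 3] = TR[i][j]     # top-right block: [t]x R
            V[i + 3][j + 3] = R[i][j]  # bottom-right block: R
    return V

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
V = spatial_motion_transform(I3, (0.0, 0.0, 0.1))  # pure 10 cm offset along z
```

With no rotation, the only cross-coupling is the [t]× block: an angular velocity about x or y induces a linear velocity at the offset frame.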

86 |
A new partitioned approach to image-based visual servo control
- Corke, Hutchinson
- 2001
Citation Context ...to realize this image motion can be easily deduced and is indeed composed of a rotational motion around the optical axis, but is combined with a retreating translational motion along the optical axis [10]. This unexpected motion is due to the choice of the features and to the coupling between the third and sixth columns in the interaction matrix. If the rotation between the initial and desired configu...

81 | Nonlinear control systems, 3rd ed.
- Isidori
- 1995
Citation Context ...neighborhood of e = e∗ = 0 if L̂e⁺Le > 0 (20), where L̂e⁺Le ∈ R^{6×6}. Indeed, only the linearized system ė′ = −λ L̂e⁺Le e′ has to be considered if we are interested in the local asymptotic stability [20]. Once again, if the features are chosen and the control scheme designed so that Le and L̂e⁺ are of full rank 6, then condition (20) is ensured if the approximations involved in L̂e⁺ are not too ...

73 | Improving vision-based control using efficient second-order minimization techniques
- Malis
- 2004
Citation Context ...desired depth of each point has to be set, which means no varying 3-D parameters have to be estimated during the visual servo. Finally, the choice L̂e⁺ = ½ (Le + Le∗)⁺ has recently been proposed in [9]. Since Le is involved in this method, the current depth of each point must also be available. We illustrate the behavior of these control schemes with an example. The goal is to position the camera s...

55 | 2-1/2D visual servoing
- Malis, Chaumette, et al.
- 1999
Citation Context ...s∗ = (c∗to, 0), and e = (cto − c∗to, θu). In this case, the interaction matrix related to e is given by Le = [ −I3  [cto]× ; 0  Lθu ] (13), in which I3 is the 3 × 3 identity matrix and Lθu is given by [18]: Lθu = I3 − (θ/2)[u]× + (1 − sinc θ / sinc²(θ/2)) [u]×² (14), where sinc x is the sinus cardinal defined such that x sinc x = sin x and sinc 0 = 1. Following the developments presented at the beginning...
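
Equation (14) can be evaluated directly once the angle/axis pair (θ, u) is available. A sketch, assuming u is a unit vector (function names are ours):

```python
import math

def sinc(x):
    # Sinus cardinal with sinc(0) = 1, as defined in the text.
    return 1.0 if x == 0.0 else math.sin(x) / x

def l_theta_u(theta, u):
    """3x3 matrix L_theta_u = I3 - (theta/2)[u]x
       + (1 - sinc(theta)/sinc^2(theta/2)) [u]x^2, as in (14)."""
    ux, uy, uz = u
    S = [[0.0, -uz,  uy],
         [ uz, 0.0, -ux],
         [-uy,  ux, 0.0]]
    S2 = [[sum(S[i][k] * S[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    c = 1.0 - sinc(theta) / sinc(theta / 2.0) ** 2
    return [[(1.0 if i == j else 0.0)
             - (theta / 2.0) * S[i][j]
             + c * S2[i][j]
             for j in range(3)]
            for i in range(3)]

# As theta -> 0 the matrix tends to the identity, so for small rotations
# the commanded angular velocity is simply -lambda * theta * u:
L0 = l_theta_u(0.0, (0.0, 0.0, 1.0))
```

This is why, as noted in the surrounding discussion, the rotational part of the PBVS control law decouples cleanly from the translational one.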

49 | Vision-guided servoing with feature-based trajectory generation
- Feddema, Mitchell
- 1989
Citation Context ...performance characteristics of the resulting closed-loop system? These questions are addressed in the remainder of the article. Classical Image-Based Visual Servo: Traditional image-based control schemes [1], [2] use the image-plane coordinates of a set of points (other choices are possible, but we defer discussion of these for Part II of the tutorial) to define the set s. The image measurements m are us...

46 | Computer Vision: A Modern Approach
- Forsyth, Ponce
- 2003
Citation Context ...sions. In this case, we take s = x = (x, y), the image plane coordinates of the point. The details of imaging geometry and perspective projection can be found in many computer vision texts, including [3], [4]. Taking the time derivative of the projection equations (6), we obtain ẋ = Ẋ/Z − XŻ/Z² = (Ẋ − xŻ)/Z and ẏ = Ẏ/Z − YŻ/Z² = (Ẏ − yŻ)/Z (7). We can relate the velocity of the 3-D ...

37 | Robot feedback control based on stereo vision: Towards calibration-free hand-eye coordination
- Hager, Chang, et al.
- 1995
Citation Context ...L̂e⁺ = ½ (Le + Le∗)⁺. ... s = xs = (xl, xr) = (xl, yl, xr, yr), i.e., to represent the point by just stacking in s the x and y coordinates of the observed point in the left and right images [12]. However, care must be taken when constructing the corresponding interaction matrix since the form given in (10) is expressed in either the left or right camera frame. More precisely, we have: ẋl ...

29 | Visual servoing invariant to changes in camera intrinsic parameters
- Malis
- 2004
Citation Context ...error e′ with e′ = L̂e⁺e. The time derivative of this error is given by ė′ = L̂e⁺ė + (d/dt)(L̂e⁺) e = (L̂e⁺Le + O)vc, where O ∈ R^{6×6} is equal to 0 when e = 0, whatever the choice of L̂e⁺ [19]. Using the control scheme (5), we obtain ė′ = −λ(L̂e⁺Le + O)e′, which is known to be locally asymptotically stable in a neighborhood of e = e∗ = 0 if L̂e⁺Le > 0 (20), where L̂e⁺Le ∈ ...

20 | Model-based object pose in 25 lines of code
- DeMenthon, Davis
- 1995
Citation Context ...classical computer vision problem is called the 3-D localization problem. While this problem is beyond the scope of the present tutorial, many solutions have been presented in the literature (see, e.g., [16] and [17]). It is then typical to define s in terms of the parameterization used to represent the camera pose. Note that the parameters a involved in the definition (1) of s are now the camera intrins...

19 | Position based visual servoing: Keeping the object in the field of vision
- Thuilot, Martinet, et al.
- 2002
Citation Context ...set s. Such an approach would be, strictly speaking, a position-based approach, since it would require 3-D parameters in s. Position-Based Visual Servo: Position-based control schemes (PBVS) [2], [14], [15] use the pose of the camera with respect to some reference coordinate frame to define s. Computing that pose from a set of measurements in one image necessitates the camera intrinsic parameters and th...

10 | Singularities in the determination of the situation of a robot effector from the perspective view of three points
- Michel, Rives
- 1993
Citation Context ...we use the feature vector x = (x1, x2, x3), by merely stacking interaction matrices for three points we obtain Lx = [ Lx1 ; Lx2 ; Lx3 ]. In this case, there will exist some configurations for which Lx is singular [5]. Furthermore, there exist four distinct camera poses for which e = 0, i.e., four global minima exist, and it is impossible to differentiate them [6]. For these reasons, more than three points are usu...