Results 1–10 of 570
A Tutorial on Visual Servo Control
 IEEE Transactions on Robotics and Automation
, 1996
Abstract

Cited by 669 (23 self)
This paper provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed. Since any visual servo system must be capable of tracking image features in a sequence of images, we include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control. 1 Introduction Today there are over 800,000 robots in the world, mostly working in factory environment...
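The image-based class in the taxonomy above is conventionally summarized by the proportional control law v = -lambda L+ (s - s*), where L is the interaction matrix of the chosen features. A minimal NumPy sketch for point features (illustrative only; the interaction matrix below is the standard one for normalized image points, and the function names are ours, not code from the tutorial):

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw v = -gain * L^+ (s - s*) for a set of point features."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# With the feature already at its desired location, the commanded velocity is zero.
v = ibvs_velocity([(0.1, 0.2)], [(0.1, 0.2)], [1.0])
```

In practice the depths Z are unknown and are approximated, e.g. by their values at the desired pose, which is one source of the stability questions raised in the papers below.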
Potential Problems of Stability and Convergence in Image-Based and Position-Based Visual Servoing
, 1998
Abstract

Cited by 161 (66 self)
Visual servoing, using image-based control or position-based control, generally gives satisfactory results. However, in some cases, convergence and stability problems may occur. The aim of this paper is to emphasize these problems by considering an eye-in-hand system and a positioning task with respect to a static target which constrains the six camera degrees of freedom. To appear in: The Confluence of Vision and Control, Lecture Notes in Control and Information Sciences, Springer-Verlag, 1998. 1 Introduction The two classical approaches of visual servoing (that is, image-based control and position-based control) differ in the nature of the inputs used in their respective control schemes [28,10,14]. Even if the resulting robot behaviors thus also differ, both approaches generally give satisfactory results: the convergence to the desired position is reached and, thanks to the closed loop used in the control scheme, the system is stable and robust with respect to camera calib...
Visual servo control Part I: basic approaches
 IEEE ROBOTICS AND AUTOMATION MAGAZINE
, 2006
Abstract

Cited by 120 (28 self)
This article is the first of a two-part series on the topic of visual servo control: using computer vision data in the servo loop to control the motion of a robot. In the present article, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques.
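For the position-based scheme described above, a common error parameterization stacks the translation error with the theta-u (axis-angle) rotation error and applies a proportional law to both. A hedged NumPy sketch (function names and the log-map formula are the textbook ones, not code from the article):

```python
import numpy as np

def axis_angle(R):
    """theta*u vector (rotation log map) of a 3x3 rotation matrix R."""
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-9:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def pbvs_velocity(t_err, R_err, gain=0.5):
    """v = -gain * e with e = (translation error, theta*u rotation error)."""
    return -gain * np.concatenate([t_err, axis_angle(R_err)])

# 90-degree rotation about z plus a small translation error along x:
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
v = pbvs_velocity(np.array([0.1, 0.0, 0.0]), Rz)
```

Near the singularity theta = pi the log map needs special handling; the sketch only covers the generic case.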
2 1/2 D Visual Servoing
 IEEE TRANS. ON ROBOTICS AND AUTOMATION
, 1999
Abstract

Cited by 101 (51 self)
In this paper, we propose a new approach to vision-based robot control, called 2 1/2 D visual servoing, which avoids the respective drawbacks of classical position-based and image-based visual servoing. Contrary to position-based visual servoing, our scheme does not need any geometric 3D model of the object. Furthermore, and contrary to image-based visual servoing, our approach ensures the convergence of the control law in the whole task space. 2 1/2 D visual servoing is based on the estimation of the partial camera displacement from the current to the desired camera pose at each iteration of the control law. Visual features and data extracted from the partial displacement allow us to design a decoupled control law controlling the six camera d.o.f. The robustness of our visual servoing scheme with respect to camera calibration errors is also analyzed: the necessary and sufficient conditions for local asymptotic stability are easily obtained. Then, due to the simple structure of the ...
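One way to picture the hybrid ("2 1/2 D") feature vector described above is as the concatenation of a 2D image error for a reference point, a scalar depth-ratio term, and the rotation error recovered from the estimated partial displacement. A speculative sketch with illustrative names (the paper's exact feature choice may differ):

```python
import numpy as np

def hybrid_error(p, p_des, Z, Z_des, theta_u):
    """2 1/2 D style error: 2D image error of a reference point, a log
    depth-ratio term, and the theta*u rotation error from the partial pose."""
    e_img = np.asarray(p, float) - np.asarray(p_des, float)
    return np.concatenate([e_img, [np.log(Z / Z_des)], np.asarray(theta_u, float)])

# At the desired pose every component of the 6-vector vanishes:
e = hybrid_error((0.1, 0.2), (0.1, 0.2), 1.0, 1.0, np.zeros(3))
```

Only the ratio Z/Z_des is needed, not the absolute depths, which is why no 3D model of the object is required.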
Motion Strategies for Maintaining Visibility of a Moving Target
 In Proc. of the IEEE International Conference on Robotics & Automation (ICRA
, 1997
Abstract

Cited by 90 (11 self)
We introduce the problem of computing robot motion strategies that maintain visibility of a moving target in a cluttered workspace. Both motion constraints (as considered in standard motion planning) and visibility constraints (as considered in visual tracking) must be satisfied. Additional criteria, such as the total distance traveled, can be optimized. The general problem is divided into two categories, on the basis of whether the target is predictable. For the predictable case, an algorithm that computes optimal, numerical solutions is presented. For the more challenging case of a partially predictable target, two online algorithms are presented that each attempt to maintain future visibility with limited prediction. One strategy maximizes the probability that the target will remain in view in a subsequent time step, and the other maximizes the minimum time in which the target could escape the visibility region. We additionally discuss issues resulting from our implementation and e...
Image Moments: A General and Useful Set of Features for Visual Servoing
, 2004
Abstract

Cited by 83 (21 self)
In this paper, we determine the analytical form of the interaction matrix related to any moment that can be computed from segmented images. The derivation method we present is based on Green's theorem. We apply this general result to classical geometrical primitives. We then consider using moments in image-based visual servoing. For that, we select six combinations of moments to control the six degrees of freedom of the system. These features are particularly well suited when the object is planar and the object and camera planes are parallel at the desired position. The experimental results we present show that a correct behavior of the system is obtained with either a simple symmetrical object or a planar object with a complex and unknown shape.
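As a reminder of the features in question: the raw moment m_pq sums x^p y^q over the pixels of the segmented region, and the central moments mu_pq are the same sums taken about the centroid. A small sketch computing them from a binary mask (our helper names, not the paper's code):

```python
import numpy as np

def raw_moment(mask, p, q):
    """m_pq = sum over object pixels of x^p * y^q for a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.sum((xs ** p) * (ys ** q))

def centroid(mask):
    """Centre of gravity (x_g, y_g) = (m10/m00, m01/m00)."""
    m00 = raw_moment(mask, 0, 0)
    return raw_moment(mask, 1, 0) / m00, raw_moment(mask, 0, 1) / m00

def central_moment(mask, p, q):
    """mu_pq, the translation-invariant central moment about the centroid."""
    ys, xs = np.nonzero(mask)
    xg, yg = centroid(mask)
    return np.sum(((xs - xg) ** p) * ((ys - yg) ** q))
```

The paper's contribution is the time derivative of such moments as a function of the camera velocity (the interaction matrix), which Green's theorem turns into a contour-free computation; the sketch above only covers the moments themselves.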
Real-time markerless tracking for augmented reality: the virtual visual servoing framework
 IEEE TRANS. ON VISUALIZATION AND COMPUTER GRAPHICS
, 2006
Abstract

Cited by 83 (23 self)
Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see-through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives, including straight lines, circles, cylinders, and spheres. A local moving-edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least-squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences, including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
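The iteratively reweighted least-squares (IRLS) machinery mentioned above can be illustrated on a generic robust linear fit: residuals are scaled by a robust scale estimate, converted to M-estimator weights, and the weighted problem is re-solved until convergence. A sketch with Huber weights (the paper's actual weighting inside the visual control law may differ):

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber M-estimator weights: 1 for small residuals, k/|r| beyond the threshold."""
    a = np.abs(r)
    w = np.ones_like(a)
    out = a > k
    w[out] = k / a[out]
    return w

def irls(A, b, iters=20):
    """Robust linear fit of A x ~ b by iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = b - A @ x
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12  # MAD scale estimate
        sw = np.sqrt(huber_weights(r / scale))        # sqrt-weights for lstsq
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Line b = 2 t + 1 with a few gross outliers at the end:
t = np.linspace(0.0, 1.0, 50)
b = 2.0 * t + 1.0
b[-5:] += 5.0
A = np.column_stack([t, np.ones_like(t)])
x_robust = irls(A, b)
```

The outliers receive weights near zero after a few iterations, so the fit converges to the inlier line, which is exactly the behavior that makes tracking robust to occlusion and mistracked edges.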
Active tracking of foveated feature clusters using affine structure
 International Journal of Computer Vision
, 1996
Improving Vision-Based Control Using Efficient Second-Order Minimization Techniques
, 2004
Abstract

Cited by 74 (10 self)
In this paper, several vision-based robot control methods are classified following an analogy with well-known minimization methods. Comparing the rates of convergence of minimization algorithms helps us to understand the differences in performance of the control schemes. In particular, it is shown that standard vision-based control methods generally have low rates of convergence. Thus, the performance of vision-based control could be improved using schemes which perform like the Newton minimization algorithm, which has a high convergence rate. Unfortunately, the Newton minimization method needs the computation of second derivatives, which can be ill-conditioned, causing convergence problems. In order to solve these problems, this paper proposes two new control schemes based on efficient second-order minimization techniques.
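The convergence-rate comparison can be illustrated on a scalar toy problem. The efficient second-order idea is to replace the current Jacobian by the average of the current and desired-configuration Jacobians, which captures second-order information without computing second derivatives; on a quadratic cost this average lands on the solution in a single step. A sketch (our toy example, not the paper's robotic experiments):

```python
import numpy as np

def newton_like(f, J, x0, J_star=None, iters=10):
    """Iterate x <- x - f(x)/M.  With M = J(x) this is the classical
    Newton-style update; with M = (J(x) + J_star)/2 it mimics the
    efficient second-order (ESM-style) update, where J_star is the
    Jacobian at the desired configuration."""
    x = x0
    steps = [x]
    for _ in range(iters):
        M = J(x) if J_star is None else 0.5 * (J(x) + J_star)
        x = x - f(x) / M
        steps.append(x)
    return x, steps

# Toy task: drive f(x) = x^2 - 2 to zero (solution sqrt(2)).
f = lambda x: x * x - 2.0
J = lambda x: 2.0 * x
root = float(np.sqrt(2.0))

x_newton, newton_steps = newton_like(f, J, x0=3.0)
x_esm, esm_steps = newton_like(f, J, x0=3.0, J_star=2.0 * root)
# Algebraically, x - (x^2 - 2)/(x + sqrt(2)) = sqrt(2): one ESM-style step suffices.
```

In visual servoing the desired-configuration Jacobian is computable from the reference image, which is what makes this averaging practical.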
Theoretical Improvements in the Stability Analysis of a New Class of Model-Free Visual Servoing Methods
, 2002
Abstract

Cited by 70 (20 self)
This paper concerns the stability analysis of a new class of model-free visual servoing methods. These methods are "model-free" since they are based on the estimation of the relative camera orientation between two views of an object without knowing its 3D model. The visual servoing is decoupled by controlling the rotation of the camera separately from the rest of the system. The way the remaining degrees of freedom are controlled differentiates the methods within the class. For all the methods of the class, the robustness with respect to both camera and hand-eye calibration errors can be analytically studied. In some cases, necessary and sufficient conditions can be found not only for the local asymptotic stability but also for the global asymptotic stability. In the other cases, simple conditions on the calibration errors are sufficient to ensure the global asymptotic stability of the control law. In addition to the theoretical proof of stability, the experimental results prove the validity of the control strategy proposed in the paper.