Results 1 - 10 of 816
Visual servo control, Part I: Basic approaches
IEEE ROBOTICS AND AUTOMATION MAGAZINE, 2006
Abstract

Cited by 201 (34 self)
This article is the first of a two-part series on the topic of visual servo control: using computer vision data in the servo loop to control the motion of a robot. In the present article, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques.
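The image-based scheme summarized above regulates a feature error e with the control law v = -λ L⁺ e, where L is the interaction matrix stacked from the features. A minimal sketch for point features, assuming known depths (the function names and the three-point configuration are illustrative, not from the article):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z
    (the classical point-feature form)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity v = -lam * pinv(L) @ e for a set of point features,
    with features and desired given as (x, y) normalized coordinates."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# When the features coincide with their desired values, the error is zero
# and the commanded 6-d.o.f. camera velocity vanishes.
v = ibvs_velocity([(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1)],
                  [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1)],
                  [1.0, 1.0, 1.0])
```

Three points give a 6x6 stacked interaction matrix, enough to constrain all six camera degrees of freedom in this toy setup.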
Potential Problems of Stability and Convergence in Image-Based and Position-Based Visual Servoing
1998
Abstract

Cited by 192 (71 self)
Visual servoing, using image-based control or position-based control, generally gives satisfactory results. However, in some cases, convergence and stability problems may occur. The aim of this paper is to emphasize these problems by considering an eye-in-hand system and a positioning task with respect to a static target which constrains the six camera degrees of freedom. To appear in: The Confluence of Vision and Control, Lecture Notes in Control and Information Sciences, Springer-Verlag, 1998.

1 Introduction

The two classical approaches of visual servoing (that is, image-based control and position-based control) differ in the nature of the inputs used in their respective control schemes [28,10,14]. Even if the resulting robot behaviors thus also differ, both approaches generally give satisfactory results: the convergence to the desired position is reached and, thanks to the closed loop used in the control scheme, the system is stable and robust with respect to camera calib...
GraspIt! A Versatile Simulator for Robotic Grasping
2004
Abstract

Cited by 174 (20 self)
Research in robotic grasping has flourished in the last 25 years. A recent survey by Bicchi [1] covered over 140 papers, and many more than that have been published. Stemming from our desire to implement some of the work in grasp analysis for particular hand designs, we created an interactive grasping simulator that can import a wide variety of hand and object models and can evaluate the grasps formed by these hands. This system, dubbed “GraspIt!,” has since expanded in scope to the point where we feel it could serve as a useful tool for other researchers in the field. To that end, we are making the system publicly available (GraspIt! is available for download for a variety of platforms from ...
2 1/2 D Visual Servoing
IEEE TRANS. ON ROBOTICS AND AUTOMATION, 1999
Abstract

Cited by 119 (56 self)
In this paper, we propose a new approach to vision-based robot control, called 2 1/2 D visual servoing, which avoids the respective drawbacks of classical position-based and image-based visual servoing. Contrary to position-based visual servoing, our scheme does not need any geometric 3D model of the object. Furthermore, and contrary to image-based visual servoing, our approach ensures the convergence of the control law in the whole task space. 2 1/2 D visual servoing is based on the estimation of the partial camera displacement from the current to the desired camera poses at each iteration of the control law. Visual features and data extracted from the partial displacement allow us to design a decoupled control law controlling the six camera d.o.f. The robustness of our visual servoing scheme with respect to camera calibration errors is also analyzed: the necessary and sufficient conditions for local asymptotic stability are easily obtained. Then, due to the simple structure of the ...
Real-time markerless tracking for augmented reality: the virtual visual servoing framework
IEEE TRANS. ON VISUALIZATION AND COMPUTER GRAPHICS, 2006
Abstract

Cited by 114 (29 self)
Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a “video see-through” monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using this pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives, including straight lines, circles, cylinders, and spheres. A local moving-edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least-squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences, including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
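Integrating an M-estimator via iteratively reweighted least squares, as this abstract describes, can be sketched on a generic linear system; the Huber weight function and MAD scale estimate below are common choices and not necessarily the ones used in the paper:

```python
import numpy as np

def irls(A, b, n_iter=20, c=1.345):
    """Solve A x ~ b robustly: iteratively reweighted least squares with
    Huber weights, the standard way to realize an M-estimator."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = b - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)       # Huber weight function
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x

# A line fit with one gross outlier: the outlier is progressively
# downweighted and the estimate stays close to the true slope 2 and
# intercept 1.
t = np.arange(10.0)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0
b[5] += 50.0                                   # gross outlier
params = irls(A, b)
```

In the tracking context the residuals would be reprojection errors rather than a line fit, but the reweighting loop is the same.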
Motion Strategies for Maintaining Visibility of a Moving Target
In Proc. of the IEEE International Conference on Robotics & Automation (ICRA), 1997
Abstract

Cited by 109 (11 self)
We introduce the problem of computing robot motion strategies that maintain visibility of a moving target in a cluttered workspace. Both motion constraints (as considered in standard motion planning) and visibility constraints (as considered in visual tracking) must be satisfied. Additional criteria, such as the total distance traveled, can be optimized. The general problem is divided into two categories, on the basis of whether the target is predictable. For the predictable case, an algorithm that computes optimal, numerical solutions is presented. For the more challenging case of a partially predictable target, two online algorithms are presented that each attempt to maintain future visibility with limited prediction. One strategy maximizes the probability that the target will remain in view in a subsequent time step, and the other maximizes the minimum time in which the target could escape the visibility region. We additionally discuss issues resulting from our implementation and e...
Improving Vision-Based Control Using Efficient Second-Order Minimization Techniques
2004
Abstract

Cited by 107 (15 self)
In this paper, several vision-based robot control methods are classified following an analogy with well-known minimization methods. Comparing the rates of convergence of minimization algorithms helps us to understand the differences in performance of the control schemes. In particular, it is shown that standard vision-based control methods generally have low rates of convergence. Thus, the performance of vision-based control could be improved using schemes which perform like the Newton minimization algorithm, which has a high convergence rate. Unfortunately, the Newton minimization method needs the computation of second derivatives, which can be ill-conditioned, causing convergence problems. In order to solve these problems, this paper proposes two new control schemes based on efficient second-order minimization techniques.
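The efficient second-order idea can be shown on a scalar toy problem: averaging the Jacobians at the current and desired points yields Newton-like convergence without computing any second derivative. A sketch under that assumption (the residual below is a toy, not a visual-servoing error):

```python
import math

def esm_step(x, r, jac, x_star):
    """One efficient second-order minimization (ESM) step on a scalar
    residual r: the Jacobian is averaged between the current point x and
    the desired point x_star, so no Hessian is required."""
    j_esm = 0.5 * (jac(x) + jac(x_star))
    return x - r(x) / j_esm

# Toy problem: drive r(x) = x^2 - 2 to zero, i.e. find sqrt(2).
r = lambda x: x * x - 2.0
jac = lambda x: 2.0 * x
x = esm_step(3.0, r, jac, math.sqrt(2.0))
# Because this residual is exactly quadratic, a single ESM step lands on
# the solution, matching Newton's rate without second derivatives.
```

In visual servoing the Jacobian at the desired pose is known in advance (it is the interaction matrix at the goal), which is what makes the averaging trick practical.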
Image Moments: A General and Useful Set of Features for Visual Servoing
2004
Abstract

Cited by 104 (22 self)
In this paper, we determine the analytical form of the interaction matrix related to any moment that can be computed from segmented images. The derivation method we present is based on Green's theorem. We apply this general result to classical geometrical primitives. We then consider using moments in image-based visual servoing. For that, we select six combinations of moments to control the six degrees of freedom of the system. These features are particularly adequate if we consider a planar object and configurations such that the object and camera planes are parallel at the desired position. The experimental results we present show that a correct behavior of the system is obtained if we consider either a simple symmetrical object or a planar object with a complex and unknown shape.
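The raw moments from which such features are combined are simple sums over the segmented image; a small sketch (the specific six moment combinations selected in the paper are not reproduced here):

```python
import numpy as np

def moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y)
    for a segmented (binary) image."""
    ys, xs = np.nonzero(img)
    return np.sum(xs**p * ys**q)

# Centroid of a segmented region: a pair of such moments already gives two
# features usable in image-based visual servoing.
img = np.zeros((10, 10))
img[4:7, 2:5] = 1.0            # 3x3 block of foreground pixels
area = moment(img, 0, 0)       # zeroth moment = region area
xg = moment(img, 1, 0) / area  # x coordinate of the centroid
yg = moment(img, 0, 1) / area  # y coordinate of the centroid
```

Area and centroid cover translation; the paper's remaining combinations handle rotation and depth.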
Real-time image-based tracking of planes using efficient second-order minimization
Proceedings of the International Conference on Intelligent Robots and Systems, 2004
Abstract

Cited by 101 (20 self)
Abstract — The tracking algorithm presented in this paper is based on minimizing the sum-of-squared-difference between a given template and the current image. Theoretically, amongst all standard minimization algorithms, the Newton method has the highest local convergence rate, since it is based on a second-order Taylor series of the sum-of-squared-differences. However, the Newton method is time-consuming, since it needs the computation of the Hessian. In addition, if the Hessian is not positive definite, convergence problems can occur. That is why several methods use an approximation of the Hessian. The price to pay is the loss of the high convergence rate. The aim of this paper is to propose a tracking algorithm based on a second-order minimization method which does not need to compute the Hessian.
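The sum-of-squared-difference objective can be sketched in one dimension with a plain first-order Gauss-Newton step; a pure translation warp is assumed here, which is much simpler than the planar transformations the paper tracks:

```python
import numpy as np

def ssd_translation_step(template, image, p):
    """One Gauss-Newton step on the SSD cost sum((image(x+p) - template(x))^2)
    for a 1-D translation p (first-order, i.e. not the ESM variant)."""
    x = np.arange(len(template), dtype=float)
    warped = np.interp(x + p, x, image)   # image resampled at the warp
    g = np.gradient(warped)               # derivative of warped w.r.t. p
    r = warped - template
    return p - (g @ r) / (g @ g)

# Recover a known 2-sample shift of a smooth signal.
x = np.arange(100, dtype=float)
image = np.exp(-0.5 * ((x - 50.0) / 8.0) ** 2)
true_p = 2.0
template = np.interp(x + true_p, x, image)  # template is the shifted image
p = 0.0
for _ in range(50):
    p = ssd_translation_step(template, image, p)  # p converges toward true_p
```

Replacing the Jacobian `g` with the average of the Jacobians at the current and reference positions would give the Hessian-free second-order variant the paper proposes.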
Path planning for robust image-based control
IEEE Trans. Robot. Autom., 2002
Abstract

Cited by 87 (24 self)
Abstract — Vision feedback control loop techniques are efficient for a large class of applications, but they come up against difficulties when the initial and desired robot positions are distant. Classical approaches are based on the regulation to zero of an error function computed from the current measurement and a constant desired one. With such an approach, it is not easy to introduce constraints on the realized trajectories or to ensure convergence for all initial configurations. In this paper, we propose a new approach that resolves these difficulties by coupling path planning in image space with image-based control. Constraints such as keeping the object in the camera field of view or avoiding the robot's joint limits can be taken into account at the task planning level. Furthermore, with this approach, the current measurements always remain close to their desired values, and control by image-based servoing ensures robustness with respect to modeling errors. The proposed method is based on the potential field approach and applies whether or not the object's shape and dimensions are known, and whether the camera calibration parameters are well or badly estimated. Finally, real-time experimental results using an eye-in-hand robotic system are presented and confirm the validity of our approach.

Index Terms — Path planning, visual servoing
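The potential field machinery can be sketched in a plain 2-D workspace, whereas the paper applies it in image space; the quadratic attractive potential, bounded-influence repulsive potential, and all gains below are illustrative:

```python
import numpy as np

def potential_gradient(q, goal, obstacle, k_att=1.0, k_rep=0.5, rho0=2.0):
    """Gradient at q of a quadratic attractive potential toward goal plus
    a repulsive potential active within distance rho0 of the obstacle."""
    grad = k_att * (q - goal)                        # attractive term
    d = float(np.linalg.norm(q - obstacle))
    if d < rho0:                                     # repulsion only near obstacle
        grad += k_rep * (1.0 / rho0 - 1.0 / d) / d**2 * (q - obstacle) / d
    return grad

# Descend the potential from a start point; the path bends slightly around
# the obstacle's influence region and ends at the goal.
q = np.array([5.0, 5.0])
goal = np.array([0.0, 0.0])
obstacle = np.array([3.5, 1.0])
for _ in range(500):
    q = q - 0.05 * potential_gradient(q, goal, obstacle)
```

In the paper the "obstacles" are constraint violations such as leaving the camera field of view or hitting joint limits, rather than physical objects.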