Manipulation in human environments - in Int’l Conf. on Humanoid Robots, IEEE, 2006
"... Abstract — Robots that work alongside us in our homes and workplaces could extend the time an elderly person can live at home, provide physical assistance to a worker on an assembly line, or help with household chores. In order to assist us in these ways, robots will need to successfully perform man ..."
Cited by 94 (7 self)
Abstract — Robots that work alongside us in our homes and workplaces could extend the time an elderly person can live at home, provide physical assistance to a worker on an assembly line, or help with household chores. In order to assist us in these ways, robots will need to successfully perform manipulation tasks within human environments. Human environments present special challenges for robot manipulation since they are complex, dynamic, uncontrolled, and difficult to perceive reliably. In this paper we present a behavior-based control system that enables a humanoid robot, Domo, to help a person place objects on a shelf. Domo is able to physically locate the shelf, socially cue a person to hand it an object, grasp the object that has been handed to it, transfer the object to the hand that is closest to the shelf, and place the object on the shelf. We use this behavior-based control system to illustrate three themes that characterize our approach to manipulation in human environments. The first theme, cooperative manipulation, refers to the advantages that can be gained by having the robot work with a person to cooperatively perform manipulation tasks. The second theme, task relevant features, emphasizes the benefits of carefully selecting the aspects of the world that are to be perceived and acted upon during a manipulation task. The third theme, let the body do the thinking, encompasses several ways in which a robot can use its body to simplify manipulation tasks.
Fig. 1. The humanoid robot Domo used in this paper.
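To make the behavior sequencing concrete, here is a minimal Python sketch of a fixed-priority, behavior-based arbitration loop for a shelf-stocking task like the one described. The behavior names, state flags, and actions are illustrative placeholders, not Domo's actual control code.

# Hypothetical sketch of fixed-priority behavior arbitration for a
# shelf-stocking task; behaviors and state are illustrative only.
state = {"shelf_located": False, "object_offered": False,
         "object_grasped": False, "object_in_shelf_hand": False}

class Behavior:
    def __init__(self, name, applicable, act):
        self.name = name              # label for logging
        self.applicable = applicable  # () -> bool: relevant in the current state?
        self.act = act                # () -> None: one step of action

def arbitrate(behaviors):
    """Run one control step: the first applicable behavior acts."""
    for b in behaviors:
        if b.applicable():
            b.act()
            return b.name
    return None

# Ordered roughly after the task sequence in the abstract:
# place > transfer > grasp > cue person > locate shelf.
behaviors = [
    Behavior("place_on_shelf",
             lambda: state["object_in_shelf_hand"],
             lambda: print("placing object on shelf")),
    Behavior("transfer_object",
             lambda: state["object_grasped"],
             lambda: state.update(object_in_shelf_hand=True)),
    Behavior("grasp_object",
             lambda: state["object_offered"],
             lambda: state.update(object_grasped=True)),
    Behavior("cue_person",
             lambda: state["shelf_located"],
             lambda: state.update(object_offered=True)),
    Behavior("locate_shelf",
             lambda: True,
             lambda: state.update(shelf_located=True)),
]

for _ in range(5):
    # steps through: locate_shelf, cue_person, grasp_object,
    # transfer_object, place_on_shelf
    print(arbitrate(behaviors))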
Robot manipulation of human tools: Autonomous detection and control of task relevant features - In submission to: 5th IEEE International Conference on Development and Learning (ICDL-06), 2006
"... Abstract — The efficient acquisition and generalization of skills for manual tasks requires that a robot be able to perceive and control the important aspects of an object while ignoring irrelevant factors. For many tasks involving everyday toollike objects, detection and control of the distal end o ..."
Cited by 22 (7 self)
Abstract — The efficient acquisition and generalization of skills for manual tasks requires that a robot be able to perceive and control the important aspects of an object while ignoring irrelevant factors. For many tasks involving everyday tool-like objects, detection and control of the distal end of the object is sufficient for its use. For example, a robot could pour a substance from a bottle by controlling the position and orientation of the mouth. Likewise, the canonical tasks associated with a screwdriver, hammer, or pen rely on control of the tool’s tip. In this paper, we present methods that allow a robot to autonomously detect and control the tip of a tool-like object. We also show results for modeling the appearance of this important type of task relevant feature.
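As a rough illustration of the "distal end" idea, the following Python sketch picks, from a binary mask of a handheld tool, the pixel farthest from the hand and treats it as the tip. This is a simplified stand-in under assumed inputs (a tool mask and hand location), not the detection algorithm used in the paper.

# Simplified, hypothetical tool-tip heuristic: the tool pixel most distal
# from the hand is treated as the tip.
import numpy as np

def detect_tool_tip(tool_mask, hand_rc):
    """Return (row, col) of the tool pixel farthest from the hand, or None."""
    rows, cols = np.nonzero(tool_mask)
    if rows.size == 0:
        return None
    d2 = (rows - hand_rc[0]) ** 2 + (cols - hand_rc[1]) ** 2
    i = int(np.argmax(d2))
    return int(rows[i]), int(cols[i])

# Example: a diagonal "tool" in a 10x10 image, held at pixel (0, 0).
mask = np.eye(10, dtype=bool)
print(detect_tool_tip(mask, (0, 0)))   # -> (9, 9), the distal end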
Toward robot learning of tool manipulation from human demonstration, 2007
"... Abstract — Robots that manipulate everyday tools in unstructured, human settings could more easily work with people and perform tasks that are important to people. Task demonstration could serve as an intuitive way for people to program robots to perform tasks. By focusing on task-relevant features ..."
Cited by 1 (1 self)
Abstract — Robots that manipulate everyday tools in unstructured, human settings could more easily work with people and perform tasks that are important to people. Task demonstration could serve as an intuitive way for people to program robots to perform tasks. By focusing on task-relevant features during both the demonstration and the execution of a task, a robot could more robustly emulate the important characteristics of the task and generalize what it has learned. In this paper we describe a method for robot task learning that makes use of the perception and control of the tip of a tool. For this approach, the robot monitors the tool’s tip during human use, extracts the trajectory of this task relevant feature, and then manipulates the tool by controlling this feature. We present preliminary results where a humanoid robot learns to clean a flexible hose with a brush. This task is accomplished in an unstructured environment without prior models of the objects or task.
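For illustration, the Python sketch below shows one way the demonstrated tip positions could be smoothed and resampled into a fixed-length reference trajectory for the robot to track. The particular smoothing and resampling choices (moving average, linear interpolation, waypoint count) are assumptions, not the paper's method.

# Hypothetical trajectory-extraction step: smooth and resample observed
# tool-tip positions into a fixed-length reference for later tracking.
import numpy as np

def build_reference(tip_positions, n_waypoints=50, window=5):
    """tip_positions: (T, 3) array of observed tip positions over time."""
    p = np.asarray(tip_positions, dtype=float)
    # Moving-average smoothing to suppress tracking jitter.
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(p[:, k], kernel, mode="same") for k in range(p.shape[1])])
    # Resample to a fixed number of waypoints by linear interpolation.
    t_old = np.linspace(0.0, 1.0, len(smoothed))
    t_new = np.linspace(0.0, 1.0, n_waypoints)
    return np.column_stack(
        [np.interp(t_new, t_old, smoothed[:, k]) for k in range(p.shape[1])])

# Example: a noisy circular wiping motion recorded over 200 time steps.
t = np.linspace(0, 2 * np.pi, 200)
demo = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
demo += 0.01 * np.random.randn(*demo.shape)
ref = build_reference(demo)
print(ref.shape)   # (50, 3) waypoints for the robot to track with the tool tip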
Learning Grasp Affordances with Variable Tool Point Offsets
"... Abstract — When grasping an object, a robot must identify the available forms of interaction with that object. Each of these forms of interaction, a grasp affordance, describes one canonical option for placing the hand and fingers with respect to the object as an agent prepares to grasp it. The affo ..."
Abstract — When grasping an object, a robot must identify the available forms of interaction with that object. Each of these forms of interaction, a grasp affordance, describes one canonical option for placing the hand and fingers with respect to the object as an agent prepares to grasp it. The affordance does not represent a single hand posture, but an entire manifold within a space that describes hand position/orientation and finger configuration. Our challenges are 1) how to represent this manifold in as compact a manner as possible, and 2) how to extract these affordance representations given a set of example grasps as demonstrated by a human teacher. In this paper, we approach the problem of representation by capturing all instances of a canonical grasp using a joint probability density function (PDF) in a hand posture space. The PDF captures in an object-centered coordinate frame a combination of hand orientation, tool point position and offset from hand to tool point. The set of canonical grasps is then represented using a mixture distribution model. We address the problem of learning the model parameters from a set of example grasps using a clustering approach based on expectation maximization. Our experiments show that the learned canonical grasps correspond to the functionally different ways that the object may be grasped. In addition, by including the tool point/hand relationship within the learned model, the approach is capable of separating different grasp types, even when the different types involve similar hand postures.
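As a rough sketch of the mixture-model idea, the snippet below fits a Gaussian mixture with EM (via scikit-learn's GaussianMixture) to synthetic grasp samples. The 7-D feature vector used here (tool-point position, hand-to-tool-point offset, and a single yaw value standing in for hand orientation) is an illustrative simplification of the paper's posture parameterization, and the data are fabricated for the example.

# Sketch: represent canonical grasps as components of a Gaussian mixture
# learned with EM; feature encoding and data are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def fake_grasp_samples(center, n=40, noise=0.01):
    """Synthetic demonstrated grasps clustered around one canonical grasp."""
    return center + noise * rng.standard_normal((n, len(center)))

# Two hypothetical canonical grasps of the same object ("by the handle",
# "over the top"): [tool_x, tool_y, tool_z, off_x, off_y, off_z, yaw].
handle = fake_grasp_samples(np.array([0.10, 0.0, 0.02, 0.12, 0.0, 0.0, 0.0]))
top    = fake_grasp_samples(np.array([0.0, 0.0, 0.15, 0.0, 0.0, 0.10, 1.6]))
X = np.vstack([handle, top])

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)
# Samples from the two demonstrations fall into different components.
print(gmm.predict(X[:3]), gmm.predict(X[-3:]))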
Learning Visual Features that Predict Grasp Type and Location
"... Abstract — J. J. Gibson suggested that objects in our environment can be represented by an agent in terms of the types of actions that the agent may perform on or with the object. This affordance representation allows the agent to make a connection between the perception of key properties of an obje ..."
Abstract — J. J. Gibson suggested that objects in our environment can be represented by an agent in terms of the types of actions that the agent may perform on or with the object. This affordance representation allows the agent to make a connection between the perception of key properties of an object and these actions. In this paper, we explore the automatic construction of visual representations that are associated with components of objects that afford certain types of grasping actions. A training data set of images is labeled with regions corresponding to locations at which certain grasp types could be applied to the object. A classifier is trained to predict whether particular image pixels correspond to these grasp regions. Each pixel that is classified as a positive example of a grasp region votes for its surrounding image region. If there exists a pixel with a large enough number of votes, then the image is considered to afford the grasp and the location of the pixel is identified as the best grasp point. Experimental results show that the approach is capable of identifying the occurrence of both handle-type and ball-type grasp options in images containing novel objects.
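The voting step can be sketched in Python as follows. The classifier itself is abstracted away as a precomputed boolean prediction map, and the neighborhood radius and vote threshold are arbitrary illustrative values rather than those used in the paper.

# Sketch of the voting step: each positive pixel votes for its surrounding
# region; if the best location collects enough votes, return it as the
# grasp point, otherwise report no affordance.
import numpy as np

def vote_for_grasp_point(positive_mask, radius=3, min_votes=10):
    """positive_mask: (H, W) bool map of pixels predicted as grasp-region."""
    h, w = positive_mask.shape
    votes = np.zeros((h, w), dtype=int)
    for r, c in zip(*np.nonzero(positive_mask)):
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        votes[r0:r1, c0:c1] += 1           # vote for the surrounding region
    best = np.unravel_index(np.argmax(votes), votes.shape)
    return best if votes[best] >= min_votes else None

# Example: a small blob of positive predictions.
pred = np.zeros((32, 32), dtype=bool)
pred[10:15, 18:23] = True
print(vote_for_grasp_point(pred))   # a point within the predicted grasp region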
unknown title
"... These controllers include smooth pursuit visual tracking, inverse kinematic reaching, and operation space control of the arm [77]. This layer also provides TCP/IP interprocess communication among the Linux cluster’s 1Gb LAN. We use the Yarp software package developed by Metta and Fitzpatrick [91]. W ..."
These controllers include smooth pursuit visual tracking, inverse kinematic reaching, and operational space control of the arm [77]. This layer also provides TCP/IP interprocess communication over the Linux cluster’s 1 Gb LAN. We use the Yarp software package developed by Metta and Fitzpatrick [91]. We implemented a custom Python-Yarp interface, allowing us to dynamically define and transmit data structures between processes at rates up to 100 Hz. Additionally, two FireWire framegrabbers provide synchronized image pairs to the cluster. Finally, all image and sensory data are timestamped using the hardware clock from the CANbus PCI card. This ensures synchronization of the data up to the transmit time of the 1 Gb LAN.
4.7.4 Behavior Layer
The behavior layer implements the robot’s visual processing, learning, and task behaviors. These algorithms are run within our behavior-based architecture named Slate.
4.8 Slate: A Behavior-Based Architecture
We have developed a behavior-based architecture named Slate. What is meant by a robot architecture? According to Mataric [90], "An architecture provides a principled way of organizing a control system. However, in addition to providing structure, it imposes constraints on the way the control problem can be solved." Following Mataric, Arkin [4] notes the common aspects of behavior-based architectures:
• emphasis on the importance of coupling sensing and action tightly
• avoidance of representational symbolic knowledge
• decomposition into contextually meaningful units
Roboticists have developed many flavors of behavior-based architectures; we refer to Arkin for a review [4]. Loosely stated, Slate is a lightweight architecture for organizing perception and control. It is implemented as a programming abstraction in Python that allows one to easily define many small computational threads. These threads can run at parameterized rates ...
[Figure: Slate architecture, showing arbitrator, scheduler, and module threads with proprioception and Yarp communication processes.]
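As an illustration of "many small computational threads that run at parameterized rates", here is a hypothetical Python sketch of such an abstraction. It is not the actual Slate source; the class name, rates, and example callbacks are invented for the example.

# Hypothetical rate-controlled thread abstraction: each thread calls a
# user-supplied step function at a parameterized rate (best effort, not
# real-time).
import threading
import time

class RateThread(threading.Thread):
    def __init__(self, name, hz, step):
        super().__init__(name=name, daemon=True)
        self.period = 1.0 / hz    # e.g. hz=100 gives a 10 ms loop
        self.step = step          # callable executed once per cycle
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            start = time.monotonic()
            self.step()
            # Sleep for the remainder of the period.
            time.sleep(max(0.0, self.period - (time.monotonic() - start)))

    def stop(self):
        self._stop_event.set()

# Example: a 10 Hz "perception" thread and a 2 Hz "behavior" thread.
ticks = {"perception": 0, "behavior": 0}

def tick(name):
    ticks[name] += 1

threads = [RateThread("perception", 10, lambda: tick("perception")),
           RateThread("behavior", 2, lambda: tick("behavior"))]
for t in threads:
    t.start()
time.sleep(1.0)
for t in threads:
    t.stop()
print(ticks)   # roughly {'perception': 10, 'behavior': 2}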
IMAGE CLASSIFICATION WITH BAGS OF LOCAL FEATURES, 2006
"... First of all I thank my parents for bringing me into this world and for urging me to make something of myself. My dad taught me discipline and rationality, and my mom inspired me to be creative. Together with my grandparents they instilled in me the drive to succeed that has helped me to get through ..."
First of all I thank my parents for bringing me into this world and for urging me to make something of myself. My dad taught me discipline and rationality, and my mom inspired me to be creative. Together with my grandparents they instilled in me the drive to succeed that has helped me to get through seven years of grad school. I thank my lab-mates for their help and support. Without them this dissertation,