Results 1 - 10 of 22
Active learning of inverse models with intrinsically motivated goal exploration in robots
- Robotics and Autonomous Systems, 2013
Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining
- Cited by 39 (8 self)
Abstract:
We introduce skill chaining, a skill discovery method for reinforcement learning agents in continuous domains. Skill chaining produces chains of skills leading to an end-of-task reward. We demonstrate experimentally that skill chaining is able to create appropriate skills in a challenging continuous domain and that doing so results in performance gains.
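The core idea described above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: all names (`Option`, `chain_skills`) are hypothetical, and the learned initiation-set classifiers over continuous states are replaced here by a simple lookup over observed transitions.

```python
# Hypothetical sketch of skill chaining: options are discovered backward from
# the goal, each new option targeting the initiation set of the previous one.

class Option:
    def __init__(self, name, target):
        self.name = name
        self.target = target          # states where this option terminates
        self.initiation = set()       # states from which it reliably reaches target

def chain_skills(goal_states, experience, max_depth=3):
    """Build a chain of options leading back from the end-of-task reward."""
    chain = []
    target = set(goal_states)
    for depth in range(max_depth):
        opt = Option(f"skill_{depth}", target)
        # The real method learns the initiation set as a classifier over
        # continuous states; here we just take predecessors seen in experience.
        opt.initiation = {s for (s, s_next) in experience if s_next in target}
        if not opt.initiation:
            break
        chain.append(opt)
        target = opt.initiation       # the next skill targets this one's initiation set
    return chain
```

On a toy chain of transitions 0 → 1 → 2 → 3 with goal state 3, this produces three skills whose targets walk backward from the goal.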
Constructing skill trees for reinforcement learning agents from demonstration trajectories
- In Advances in Neural Information Processing Systems (NIPS), 2010
- Cited by 30 (7 self)
Abstract:
We introduce CST, an algorithm for constructing skill trees from demonstration trajectories in continuous reinforcement learning domains. CST uses a changepoint detection method to segment each trajectory into a skill chain by detecting a change of appropriate abstraction, or that a segment is too complex to model as a single skill. The skill chains from each trajectory are then merged to form a skill tree. We demonstrate that CST constructs an appropriate skill tree that can be further refined through learning in a challenging continuous domain, and that it can be used to segment demonstration trajectories on a mobile manipulator into chains of skills where each skill is assigned an appropriate abstraction.
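To give a feel for the segmentation step, here is a drastically simplified stand-in for it. CST fits competing regression models online with Bayesian changepoint detection; the hypothetical `segment_trajectory` below merely cuts a 1-D trajectory wherever the local slope changes sharply, which captures the "cut where one model stops fitting" intuition only.

```python
# Toy changepoint segmentation (illustrative only, not CST's method):
# split a 1-D trajectory at points where the local slope jumps.

def segment_trajectory(values, threshold=0.5):
    """Return index ranges [(start, end), ...] split where the slope changes."""
    cuts = [0]
    for i in range(1, len(values) - 1):
        slope_before = values[i] - values[i - 1]
        slope_after = values[i + 1] - values[i]
        if abs(slope_after - slope_before) > threshold:
            cuts.append(i)                 # a new segment (candidate skill) begins
    cuts.append(len(values) - 1)
    return list(zip(cuts, cuts[1:]))
```

A trajectory that rises and then flattens, such as [0, 1, 2, 3, 3, 3, 3], is split into two segments at the point where the slope drops.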
Autonomous Skill Acquisition on a Mobile Manipulator
- Cited by 13 (5 self)
Abstract:
We describe a robot system that autonomously acquires skills through interaction with its environment. The robot learns to sequence the execution of a set of innate controllers to solve a task, extracts and retains components of that solution as portable skills, and then transfers those skills to reduce the time required to learn to solve a second task.
A comparison of strategies for developmental action acquisition in QLAP
- In Proc. of the Int. Conf. on Epigenetic Robotics (under review), 2009
- Cited by 4 (4 self)
Abstract:
An important part of development is acquiring actions to interact with the environment. We have developed a computational model of autonomous action acquisition, called QLAP. In this paper we investigate different strategies for developmental action acquisition within this model. In particular, we introduce a way to actively learn actions and compare this active action acquisition with passive learning of actions. We also compare curiosity-based exploration with random exploration. Finally, we examine the effects of resource restrictions on the agent's ability to learn actions.
World survey of artificial brains, Part II: Biologically inspired . . .
- Neurocomputing, 2010
Skill reuse in lifelong developmental learning
- In IROS 2009 Workshop: Autonomous Mental Development for Intelligent Robots and Systems, 2009
- Cited by 1 (1 self)
Abstract:
Development requires learning skills using previously learned skills as building blocks. For maximum flexibility, the developing agent should be able to learn these skills without being provided an explicit task or subtasks. We have developed a method that allows an agent to simultaneously learn hierarchical actions and important distinctions autonomously, without being given a task. This ability to learn new distinctions allows it to work in continuous environments and to learn its first actions from motor primitives. In this paper we demonstrate that our method can use experience from one set of variables to more quickly learn a task that requires additional new variables. It does this by learning actions in a developmental progression. In addition, we demonstrate this developmental progression by showing that actions that are mainly used for exploration when first learned are later used as subactions for other actions.
Action Acquisition in QLAP
Abstract:
An important part of development is acquiring actions to interact with the environment. We have developed a computational model of autonomous action acquisition, called QLAP. In this paper we investigate different strategies for developmental action acquisition within this model. In particular, we introduce a way to actively learn actions and compare this active action acquisition with passive learning of actions. We also compare curiosity-based exploration with random exploration. Finally, we examine the effects of resource restrictions on the agent's ability to learn actions.
Behavioral Hierarchy: Exploration and Representation
Abstract:
Behavioral modules are units of behavior providing reusable building blocks that can be composed sequentially and hierarchically to generate extensive ranges of behavior. Hierarchies of behavioral modules facilitate learning complex skills and planning at multiple levels of abstraction, and enable agents to incrementally improve their competence for facing new challenges that arise over extended periods of time. This chapter focuses on two features of behavioral hierarchy that appear to be less well recognized: its influence on exploratory behavior and the opportunity it affords to reduce the representational challenges of planning and learning in large, complex domains. Four computational examples are described that use methods of hierarchical reinforcement learning to illustrate the influence of behavioral hierarchy on exploration and representation. Beyond illustrating these features, the examples provide support for the central role of behavioral hierarchy in development and learning for both artificial and natural agents.
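The sequential and hierarchical composition the abstract describes can be illustrated with a minimal sketch, not taken from the chapter: a module is either a primitive (a callable state transformation) or a list of sub-modules, and a higher-level skill reuses lower-level ones by recursive execution. All names here are illustrative.

```python
# Illustrative sketch of composing behavioral modules hierarchically:
# primitives are callables, composites are lists executed in order.

def run(module, state):
    """Execute a module; composites recursively run their sub-modules."""
    if callable(module):
        return module(state)
    for sub in module:          # sequential composition
        state = run(sub, state)
    return state

# Primitive actions as state transformations (state is just an int here):
step_right = lambda s: s + 1
step_up = lambda s: s + 10

# Hierarchy: "corner" reuses the two primitives; "zigzag" reuses "corner".
corner = [step_right, step_up]
zigzag = [corner, corner]
```

Because `zigzag` refers to `corner` rather than copying it, improving a lower-level module automatically benefits every higher-level behavior built on it, which is the reuse property the chapter emphasizes.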