Results 1 - 4 of 4
Skill reuse in lifelong developmental learning
In IROS 2009 Workshop: Autonomous Mental Development for Intelligent Robots and Systems, 2009
"... Abstract — Development requires learning skills using previously learned skills as building blocks. For maximum flexibility, the developing agent should be able to learn these skills without being provided an explicit task or subtasks. We have developed a method that allows an agent to simultaneousl ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
Abstract — Development requires learning skills using previously learned skills as building blocks. For maximum flexibility, the developing agent should be able to learn these skills without being provided an explicit task or subtasks. We have developed a method that allows an agent to simultaneously learn hierarchical actions and important distinctions autonomously, without being given a task specification. This ability to learn new distinctions allows the method to work in continuous environments and allows an agent to learn its first actions from motor primitives. In this paper we demonstrate that our method can use experience from one set of variables to more quickly learn a task that requires additional new variables. It does this by learning actions in a developmental progression. We also demonstrate this developmental progression by showing that actions used mainly for exploration when first learned are later reused as subactions of other actions.
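To make the "skills as building blocks" idea concrete, here is a minimal Python sketch in which one learned skill is invoked as a subaction of another. The Skill class, the reach and grasp skills, and the dictionary-valued state are illustrative assumptions only; the paper's method learns such hierarchies autonomously rather than having them hand-coded.

```python
# A minimal sketch of the idea that actions learned early become
# subactions of later actions. The Skill class and the reach/grasp
# skills are hypothetical illustrations, not the paper's implementation.

class Skill:
    def __init__(self, name, policy=None, subskills=None):
        self.name = name
        self.policy = policy                # maps state -> new state
        self.subskills = subskills or []    # previously learned skills

    def execute(self, state):
        # Run each building-block skill first, then this skill's own policy.
        for sub in self.subskills:
            state = sub.execute(state)
        return self.policy(state) if self.policy else state

# A skill learned early, initially used only for exploration ...
reach = Skill("reach", policy=lambda s: {**s, "hand_at_object": True})

# ... is later reused as a subaction of a more complex skill.
grasp = Skill("grasp",
              policy=lambda s: {**s, "holding_object": s["hand_at_object"]},
              subskills=[reach])

print(grasp.execute({"hand_at_object": False}))
# {'hand_at_object': True, 'holding_object': True}
```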
An Overview of the Qualitative Learner of Action and Perception (QLAP), 2010
"... An agent, human or otherwise, receives a large sensory stream from the continuous world that must be broken up into useful features. The agent must also learn to use its low-level effectors to bring about desired changes in the world. Humans and other animals have adapted to their environment throug ..."
Abstract
- Add to MetaCart
An agent, human or otherwise, receives a large sensory stream from the continuous world that must be broken up into useful features. The agent must also learn to use its low-level effectors to bring about desired changes in the world. Humans and other animals have adapted to their environment through a combination of evolution and individual learning. We blur the distinction between individual and species learning and define the problem abstractly: how can an agent, starting from low-level sensors and effectors, learn high-level states and actions through autonomous experience with the environment? Pierce and Kuipers [1997] have shown that an agent can learn the structure of its sensory and motor apparatus. Building on this work, Modayil and Kuipers [2004] have shown how an agent can individuate and track objects in its sensory stream. Our approach builds on this work to enable an agent to learn a discrete sensory description and a hierarchical set of actions. We call our approach the Qualitative Learner of Action and Perception, QLAP [Mugan and Kuipers, 2007; 2008; 2009a]. QLAP learns a discretization of the environment and predictive models of the dynamics of the environment, as shown in Figure 1. QLAP assumes that the sensory stream (Fig. 1-a) is converted (Fig. 1-b) to a set of continuous variables. These variables give the locations of objects and distances between them. To build models of the environment, QLAP must learn the necessary discretization. QLAP begins with a very ...
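As a rough illustration of the discretization step described above, the sketch below maps a continuous variable to a qualitative value relative to a set of landmark values. The landmark list, the tolerance eps, and the qualitative labels are assumptions for illustration; QLAP learns its landmarks from experience rather than taking them as input.

```python
# A rough sketch of landmark-based discretization of a continuous
# variable into qualitative values. The landmarks here are hand-picked
# for illustration; QLAP learns landmarks from experience.
import bisect

def qualitative_value(x, landmarks, eps=1e-3):
    """Map a continuous value to a qualitative region.

    Returns ('at', landmark) when x is within eps of a landmark,
    or ('between', lo, hi) for the open interval containing x,
    where lo/hi may be -inf/+inf at the ends of the range.
    """
    for lm in landmarks:
        if abs(x - lm) <= eps:
            return ("at", lm)
    i = bisect.bisect(landmarks, x)
    lo = landmarks[i - 1] if i > 0 else float("-inf")
    hi = landmarks[i] if i < len(landmarks) else float("inf")
    return ("between", lo, hi)

# Example: distance between hand and object, with landmarks at 0.0 and 0.5
landmarks = [0.0, 0.5]
print(qualitative_value(-0.2, landmarks))  # ('between', -inf, 0.0)
print(qualitative_value(0.0, landmarks))   # ('at', 0.0)
print(qualitative_value(0.3, landmarks))   # ('between', 0.0, 0.5)
```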
Overview
"... Computation is everywhere in our modern lives. Currently, the vast majority of non-biological computation is done using discrete symbols that we have created to represent entities in our environment. Dates, sales figures, airport codes–all of these entities were created by humans and are hand coded ..."
Abstract
- Add to MetaCart
(Show Context)
Computation is everywhere in our modern lives. Currently, the vast majority of non-biological computation is done using discrete symbols that we have created to represent entities in our environment. Dates, sales figures, airport codes: all of these entities were created by humans and are hand-coded to be easily represented by computers. But in the future, computing will increasingly be applied to the physical world [1], often through cyber-physical systems that tightly couple computation and physical resources [2]. This shift will lead to two challenges: (1) handling the explosion of unformatted data, and (2) handling the explosion of system size. The explosion of data will come from increasingly ubiquitous cameras and sensor networks. The explosion of system size will come from the proliferation of robots [3]. As the capability of robots expands, we will need autonomous learning, because it will be increasingly difficult to program them in any other way. Essential for handling both kinds of problems will be the autonomous learning of patterns. Patterns allow learning agents to summarize a large amount of low-level information into one chunk. Patterns can be perceptual patterns or action patterns. A perceptual pattern, once identified, can be pulled out of the large sensory input stream and used to make predictions. An action pattern can be used to bring about a desired effect in the environment.
Published in: Skill Reuse in Lifelong Developmental Learning
"... Abstract — Development requires learning skills using previ-ously learned skills as building blocks. For maximum flexibility, the developing agent should be able to learn these skills without being provided an explicit task or subtasks. We have developed a method that allows an agent to simultaneous ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract — Development requires learning skills using previously learned skills as building blocks. For maximum flexibility, the developing agent should be able to learn these skills without being provided an explicit task or subtasks. We have developed a method that allows an agent to simultaneously learn hierarchical actions and important distinctions autonomously, without being given a task specification. This ability to learn new distinctions allows the method to work in continuous environments and allows an agent to learn its first actions from motor primitives. In this paper we demonstrate that our method can use experience from one set of variables to more quickly learn a task that requires additional new variables. It does this by learning actions in a developmental progression. We also demonstrate this developmental progression by showing that actions used mainly for exploration when first learned are later reused as subactions of other actions.