Results 1 - 10 of 255

Is imitation learning the route to humanoid robots?

by Stefan Schaal - Trends in Cognitive Sciences, 1999
"... This review investigates two recent developments in artificial intelligence and neural computation: learning from imitation and the development of humanoid robots. It is postulated that the study of imitation learning offers a promising route to gain new insights into mechanisms of perceptual motor ..."
Cited by 308 (18 self)
"... of imitation. Computational approaches to imitation learning are also described, initially from the perspective of traditional AI and robotics, but also from the perspective of neural network models and statistical-learning research. Parallels and differences between biological and computational approaches ..."

A Context-Dependent Attention System for a Social Robot

by Cynthia Breazeal, Brian Scassellati, 1999
"... This paper presents part of an on-going project to integrate perception, attention, drives, emotions, behavior arbitration, and expressive acts for a robot designed to interact socially with humans. We present the design of a visual attention system based on a model of human visual search beha ..."
Cited by 181 (24 self)

Grounded Situation Models for Robots: Where words and percepts meet

by Nikolaos Mavridis, Deb Roy - In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006
"... Our long-term objective is to develop robots that engage in natural language-mediated cooperative tasks with humans. To support this goal, we are developing an amodal representation and associated processes which is called a grounded situation model (GSM). We are also developing a modular architectu ..."
Cited by 54 (3 self)

Grounding Language in Perception for Scene Conceptualization in Autonomous Robots

by Krishna S. R. Dubba, Miguel R. De Oliveira, Gi Hyun Lim, Hamidreza Kasaei, Luís Seabra Lopes, Anthony G. Cohn
"... In order to behave autonomously, it is desirable for robots to have the ability to use human supervision and learn from different input sources (perception, gestures, verbal and textual descriptions etc). In many machine learning tasks, the supervision is directed specifically towards machines and h ..."
"... and simultaneously visual object models it was not trained on. In this paper, we present a cognitive architecture and learning framework for robot learning through natural human supervision and using multiple input sources by grounding language in perception."

State-of-the-Art in Visual Attention Modeling

by Ali Borji, Laurent Itti - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010
"... Modeling visual attention — particularly stimulus-driven, saliency-based attention — has been a very active research area over the past 25 years. Many different models of attention are now available, which aside from lending theoretical contributions to other fields, have demonstrated successful ap ..."
Cited by 99 (8 self)

From unknown sensors and actuators to actions grounded in sensorimotor perceptions

by Lars Olsson, Chrystopher L. Nehaniv, Daniel Polani - Connection Science, 2006
"... This article describes a developmental system based on information theory implemented on a real robot that learns a model of its own sensory and actuator apparatus. There is no innate knowledge regarding the modalities or representation of the sensory input and the actuators, and the system relies o ..."
Cited by 45 (5 self)

On Seeing Robots

by Alan K. Mackworth - Computer Vision: Systems, Theory, and Applications, 1992
"... Good Old Fashioned Artificial Intelligence and Robotics (GOFAIR) relies on a set of restrictive Omniscient Fortune Teller Assumptions about the agent, the world and their relationship. The emerging Situated Agent paradigm is challenging GOFAIR by grounding the agent in space and time, relaxing so ..."
Cited by 49 (19 self)

Feature Binding Through Temporally Correlated Neural Activity in a Robot Model of Visual Perception

by Steffen Egner, Christian Scheier
"... . An agent performing a task in an environment must be able to selectively attend to visual stimuli. This ability is of critical importance for adaptive behavior in (vision-based) biological and artificial agents. In this paper we present a connectionist model of how visual attention can serve an ag ..."
Cited by 1 (0 self)

A corpus-guided framework for robotic visual perception

by Ching L. Teo, Yezhou Yang, Hal Daumé III, Yiannis Aloimonos - Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011
"... We present a framework that produces sentence-level summa-rizations of videos containing complex human activities that can be implemented as part of the Robot Perception Control Unit (RPCU). This is done via: 1) detection of pertinent ob-jects in the scene: tools and direct-objects, 2) predicting ac ..."
Cited by 2 (1 self)

Coupling Perception and Simulation: Steps Towards Conversational Robotics

by Kai-Yuh Hsiao, Nikolaos Mavridis, Deb Roy - In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2003
"... Human cognition makes extensive use of visualization and imagination. As a first step towards giving a robot similar abilities, we have built a robotic system that uses a perceptually-coupled physical simulator to produce an internal world model of the robot's environment. Real-time perceptual ..."
Cited by 29 (18 self)