Results 1–7 of 7
Locally weighted learning
 ARTIFICIAL INTELLIGENCE REVIEW
, 1997
Abstract

Cited by 594 (53 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
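The core technique this survey covers can be sketched in a few lines: weight the training points by a kernel of their distance to the query, then solve a weighted least-squares problem for a local linear model. The sketch below is a minimal illustration, assuming a Gaussian weighting function, a Euclidean distance function, and a small ridge term for regularization (all among the design choices the survey enumerates); the function name `lwlr_predict` and the parameter names are hypothetical.

```python
import numpy as np

def lwlr_predict(query, X, y, bandwidth=1.0):
    """Locally weighted linear regression: fit a weighted linear
    model around the query point and evaluate it at the query.

    X: (n, d) training inputs, y: (n,) targets, query: (d,) point.
    """
    # Distance function: Euclidean distance from each point to the query.
    dists = np.linalg.norm(X - query, axis=1)
    # Weighting function: Gaussian kernel of distance; `bandwidth` is
    # the smoothing parameter.
    w = np.exp(-(dists / bandwidth) ** 2)
    # Augment inputs with a bias column for the local linear model.
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)
    # Small ridge term: regularization of the estimate.
    ridge = 1e-8 * np.eye(Xa.shape[1])
    beta = np.linalg.solve(Xa.T @ W @ Xa + ridge, Xa.T @ W @ y)
    return np.append(query, 1.0) @ beta
```

A new local model is fit for every query ("lazy" learning): nothing is learned until a prediction is requested, and nearby training points dominate the fit.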
Memory-Based Neural Networks For Robot Learning
 Neurocomputing
, 1995
Abstract

Cited by 31 (8 self)
This paper explores a memory-based approach to robot learning, using memory-based neural networks to learn models of the task to be performed. Steinbuch and Taylor presented neural network designs to explicitly store training data and do nearest neighbor lookup in the early 1960s. In this paper their nearest neighbor network is augmented with a local model network, which fits a local model to a set of nearest neighbors. This network design is equivalent to a statistical approach known as locally weighted regression, in which a local model is formed to answer each query, using a weighted regression in which nearby points (similar experiences) are weighted more than distant points (less relevant experiences). We illustrate this approach by describing how it has been used to enable a robot to learn a difficult juggling task. Keywords: memory-based, robot learning, locally weighted regression, nearest neighbor, local models.
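The two-stage design this abstract describes — a nearest-neighbor lookup into stored experiences, followed by a local model fit over those neighbors — can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: the function name `knn_local_model`, the inverse-distance weights, and the choice of a linear local model are assumptions.

```python
import numpy as np

def knn_local_model(query, X, y, k=5):
    """Answer a query from stored experiences (X, y): look up the k
    nearest neighbors, then fit a weighted linear model to them."""
    # Nearest-neighbor lookup into the stored training data.
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    Xn, yn = X[idx], y[idx]
    # Nearby points (similar experiences) get larger weights than
    # distant points (less relevant experiences).
    d = np.linalg.norm(Xn - query, axis=1)
    w = 1.0 / (d + 1e-8)
    # Local model network: a linear fit by weighted least squares.
    Xa = np.hstack([Xn, np.ones((k, 1))])
    W = np.diag(w)
    beta = np.linalg.lstsq(Xa.T @ W @ Xa, Xa.T @ W @ yn, rcond=None)[0]
    return np.append(query, 1.0) @ beta
```

Because training data is stored explicitly, adding a new experience is just an append to (X, y) — the property that makes the approach attractive for incremental robot learning.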
Nonparametric Regression for Learning Nonlinear Transformations
 PRERATIONAL INTELLIGENCE IN STRATEGIES, HIGH-LEVEL PROCESSES AND COLLECTIVE BEHAVIOR
Abstract

Cited by 8 (1 self)
Information processing in animals and artificial movement systems consists of a series of transformations that map sensory signals to intermediate representations, and finally to motor commands. Given the physical and neuroanatomical differences between individuals and the need for plasticity during development, it is highly likely that such transformations are learned rather than preprogrammed by evolution. Such self-organizing processes, capable of discovering nonlinear dependencies between different groups of signals, are one essential part of prerational intelligence. While neural network algorithms seem to be the natural choice when searching for solutions for learning transformations, this paper will take a more careful look at which types of neural networks are actually suited to the requirements of an autonomous learning system. The approach that we will pursue is guided by recent developments in learning theory that have linked neural network learning to well established statistical theories. In particular, this new statistical understanding has given rise to the development of neural network systems that are directly based on statistical methods. One family of such methods stems from nonparametric regression. This paper will compare nonparametric learning with the more widely used parametric counterparts in a non-technical fashion, and investigate how these two families differ in their properties and their applicability. We will argue that nonparametric neural networks offer a set of characteristics that make them a very promising candidate for online learning in autonomous systems.
Nonparametric regression for learning
 CENTER FOR INTERDISCIPLINARY RESEARCH, UNIVERSITY OF BIELEFELD
, 1994
Abstract

Cited by 5 (0 self)
In recent years, learning theory has been increasingly influenced by the fact that many learning algorithms have at least in part a comprehensive interpretation in terms of well established statistical theories. Furthermore, with little modification, several statistical methods can be directly cast into learning algorithms. One family of such methods stems from nonparametric regression. This paper compares nonparametric learning with the more widely used parametric counterparts and investigates how these two families differ in their properties and their applicability.
Large-signal . . . nonlinear devices using scattering parameters
, 2002
Abstract
Characterization and modeling of devices at high drive levels often requires specialized equipment and measurement techniques. Many large-signal devices will never have traditional nonlinear models because model development is expensive and time-consuming. Due to the complexity of the device or the size of the application market, nonlinear modeling efforts may not be cost effective. Scattering parameters, widely used for small-signal passive and active device characterization, have received only cursory consideration for large-signal nonlinear device characterization due to technical and theoretical issues. We review the theory of scattering parameters, active device characterization, and previous efforts to use scattering parameters with large-signal nonlinear devices. A robust, calibrated vector measurement system is used to obtain device scattering parameters as a function of drive level. The unique measurement system architecture allows meaningful scattering parameter measurements of large-signal nonlinear devices, overcoming limitations reported by previous researchers. A three-port scattering-parameter device model, with a nonlinear reflection coefficient terminating the third port, can be extracted from scattering parameters measured as a function of drive level. This three-port model provides excellent agreement with device measurements across a wide range of drive conditions. The model is used to simulate load-pull data for various drive levels, which are compared to measured data.
Acoustic to Articulatory Mapping using Memory-Based Regression and Trajectory Smoothing
Abstract
This paper investigates memory-based regression for the task of acoustic-to-articulatory inversion. In memory-based regression, a local model is built for each test sample using the k nearest neighbors, and an articulatory vector is estimated. In this paper we investigate different regression models, from the basic average estimate to a polynomial estimate. The paper also investigates the importance of optimizing the number of nearest neighbors (k) with respect to the error function. For this purpose, we use a neural network to estimate the number of neighbors used for regression. This method is used to estimate the articulatory values and their first two derivatives. The values and their derivatives are then used at the utterance level to temporally smooth the estimated articulatory parameters using the Trajectory Likelihood Maximization method. We apply this method to the MOCHA database, which consists of 167,000 acoustic-articulatory frames. These steps combined, using 15-fold cross validation on MOCHA, show that memory-based regression with an optimized number of neighbors, followed by trajectory smoothing, can decrease the mean square error beyond state-of-the-art methods.
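Two ingredients of this abstract lend themselves to a short sketch: the basic average estimate over the k nearest neighbors, and the idea that k itself should be optimized against the error. The sketch below is an assumption-laden illustration — the paper trains a neural network to predict k per sample, whereas here a hypothetical `best_k_for` simply picks the k with the lowest squared error against a known target; both function names and the candidate list are invented for illustration.

```python
import numpy as np

def knn_average(query, X, y, k):
    """Basic average estimate: mean target vector of the k nearest
    neighbors of the query in the acoustic feature space."""
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    return y[idx].mean(axis=0)

def best_k_for(query, X, y, target, candidates=(1, 3, 5, 10)):
    """Choose the neighbor count that minimizes squared error against
    a known target (an oracle stand-in for the paper's learned k)."""
    errs = [np.sum((knn_average(query, X, y, k) - target) ** 2)
            for k in candidates]
    return candidates[int(np.argmin(errs))]
```

In the paper's pipeline these per-frame estimates (values plus first two derivatives) are then smoothed over the whole utterance; that trajectory step is not sketched here.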