Results 1–10 of 196
Locally weighted learning
 Artificial Intelligence Review
, 1997
Cited by 448 (52 self)
Abstract
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
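The core estimator this survey covers can be sketched in a few lines. The following is a minimal 1-D illustration of locally weighted linear regression, not code from the paper: each prediction fits a weighted least-squares line around the query point, with a Gaussian weighting function of distance to the query; the `bandwidth` parameter stands in for the smoothing parameters discussed above.

```python
import math

def lwr_predict(xs, ys, x_query, bandwidth=1.0):
    """Locally weighted linear regression in 1-D: fit a weighted
    line around x_query and evaluate it at the query point."""
    # Gaussian weighting function of distance to the query
    w = [math.exp(-((x - x_query) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    # Weighted least squares for y = a + b*x, solved in closed form
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, xs))
    sy = sum(wi * yi for wi, yi in zip(w, ys))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    denom = sw * sxx - sx * sx
    b = (sw * sxy - sx * sy) / denom
    a = (sy - b * sx) / sw
    return a + b * x_query

# On exactly linear data (y = 1 + 2x) the local fit recovers the line.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
print(lwr_predict(xs, ys, 2.5))
```

With noisy or nonlinear data, shrinking the bandwidth makes the fit more local, which is exactly the bias/variance trade-off the survey's smoothing-parameter discussion addresses.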
A New Location Technique for the Active Office
, 1997
Cited by 390 (4 self)
Abstract
Configuration of the computing and communications systems found at home and in the workplace is a complex task that currently requires the attention of the user. Recently, researchers have begun to examine computers that would autonomously change their functionality based on observations of who or what was around them. By determining their context, using input from sensor systems distributed throughout the environment, computing devices could personalize themselves to their current user, adapt their behavior according to their location, or react to their surroundings. We present a novel sensor system, suitable for large-scale deployment in indoor environments, which allows the locations of people and equipment to be accurately determined. We also describe some of the context-aware applications that might make use of this fine-grained location information.
Constructive Incremental Learning from Only Local Information
, 1998
Cited by 160 (37 self)
Abstract
... This article illustrates the potential learning capabilities of purely local learning and offers an interesting and powerful approach to learning with receptive fields.
Incremental Online Learning in High Dimensions
 Neural Computation
, 2005
Cited by 104 (15 self)
Abstract
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it i) learns rapidly with second-order learning methods based on incremental training, ii) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, iii) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference of incremental learning, iv) has a computational complexity that is linear in the number of inputs, and v) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental, spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
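The "univariate regressions in selected directions" idea can be illustrated with a single partial-least-squares projection. This is a hypothetical minimal sketch, not the LWPR algorithm itself (which maintains many such projections inside locally weighted receptive fields): inputs are assumed already mean-centered, and a single direction proportional to X^T y is used.

```python
def pls_one_direction(X, y):
    """One partial-least-squares step: project inputs onto the single
    direction most correlated with the output, then do a univariate
    regression along that direction (cf. LWPR's local projections)."""
    d = len(X[0])
    # direction u proportional to X^T y (inputs assumed mean-centered)
    u = [sum(row[j] * yi for row, yi in zip(X, y)) for j in range(d)]
    norm = sum(c * c for c in u) ** 0.5
    u = [c / norm for c in u]
    # scores: each input projected onto the direction
    s = [sum(xj * uj for xj, uj in zip(row, u)) for row in X]
    # univariate regression of y on the scores
    beta = sum(si * yi for si, yi in zip(s, y)) / sum(si * si for si in s)
    def predict(x):
        return beta * sum(xj * uj for xj, uj in zip(x, u))
    return predict

# Two exactly redundant inputs (x1 is a copy of x0), y = 2 * x0.
X = [[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -2.0]]
y = [2.0, 4.0, -2.0, -4.0]
model = pls_one_direction(X, y)
```

Because the two inputs are exact copies, the fitted direction splits its weight between them and the prediction still recovers y = 2*x0, which is the sense in which redundant inputs are harmless to projection-based regression.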
Neural Networks and Statistical Models
, 1994
Cited by 99 (1 self)
Abstract
There has been much publicity about the ability of artificial neural networks to learn and generalize. In fact, the most commonly used artificial neural networks, called multilayer perceptrons, are nothing more than nonlinear regression and discriminant models that can be implemented with standard statistical software. This paper explains what neural networks are, translates neural network jargon into statistical jargon, and shows the relationships between neural networks and statistical models such as generalized linear models, maximum redundancy analysis, projection pursuit, and cluster analysis.
Ranking a Random Feature For Variable And Feature Selection
 Journal of Machine Learning Research 3 (2003) 1399–1414
, 2003
Cited by 41 (7 self)
Abstract
We describe a feature selection method that can be applied directly to models that are linear with respect to their parameters, and indirectly to others. It is independent of the target machine. It is closely related to classical statistical hypothesis tests, but it is more intuitive, hence more suitable for use by engineers who are not statistics experts. Furthermore, some assumptions of classical tests are relaxed. The method has been used successfully in a number of applications that are briefly described.
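The idea of ranking a deliberately random "probe" feature alongside the real ones can be sketched as follows. This is an illustrative simplification, not the paper's procedure: absolute correlation with the target stands in for the model-based ranking criterion, and a feature is kept only if it outscores 95% of Gaussian probe features. All names and thresholds here are invented for the example.

```python
import random

def rank_with_random_probe(features, target, n_probes=100, seed=0):
    """Rank features by absolute correlation with the target, using
    random 'probe' features as a baseline: a real feature is kept only
    if it scores above most random probes.
    `features` maps a feature name to its list of values."""
    rng = random.Random(seed)
    n = len(target)

    def abs_corr(xs):
        mx = sum(xs) / n
        my = sum(target) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, target))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in target)
        if vx == 0 or vy == 0:
            return 0.0
        return abs(cov / (vx * vy) ** 0.5)

    scores = {name: abs_corr(vals) for name, vals in features.items()}
    probe_scores = [abs_corr([rng.gauss(0, 1) for _ in range(n)])
                    for _ in range(n_probes)]
    # Keep features whose score beats 95% of the random probes.
    threshold = sorted(probe_scores)[int(0.95 * n_probes)]
    kept = [name for name, s in scores.items() if s > threshold]
    return scores, threshold, kept

target = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
features = {
    "informative": [2.1, 3.9, 6.2, 8.0, 9.8, 12.1],  # roughly 2 * target
    "noise": [0.3, -1.2, 0.8, 0.1, -0.5, 0.9],
}
scores, threshold, kept = rank_with_random_probe(features, target)
```

The probe plays the role of the classical null hypothesis: instead of consulting a statistical table, the engineer compares each feature directly against what pure chance achieves on the same data.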
Forward models in visuomotor control
 Journal of Neurophysiology
, 2002
Cited by 31 (2 self)
Abstract
In recent years, an increasing number of research projects have investigated whether the central nervous system employs internal models in motor control. While inverse models in the control loop can be identified more readily in both motor behavior and the firing of single neurons, providing direct evidence for the existence of forward models is more complicated. In this paper, we discuss such an identification of forward models in the context of the visuomotor control of an unstable dynamic system, the balancing of a pole on a finger. Pole balancing imposes stringent constraints on the biological controller, as it needs to cope with the large delays of visual information processing while keeping the pole at an unstable equilibrium. We hypothesize various model-based and non-model-based control schemes of how visuomotor control can be accomplished in this task, including Smith predictors, predictors with Kalman filters, tapped-delay-line control, and delay-uncompensated control. Behavioral experiments with human participants allow exclusion of most of the hypothesized control schemes. In the end, our data support the existence of a forward model in the sensory preprocessing loop of control. As an important part of our research, we provide a discussion of when and how forward models can be identified, as well as possible pitfalls in the search for forward models in control.
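One of the hypothesized control schemes, the Smith predictor, is easy to sketch in discrete time. The toy plant and gains below are invented for illustration and have nothing to do with the paper's pole-balancing experiments; the point is only that feeding back a forward-model prediction of the undelayed output removes the delay from the feedback loop (assuming a perfect model).

```python
def simulate(delay, use_smith_predictor, steps=80):
    """First-order plant y[k+1] = a*y[k] + b*u[k-delay] under
    proportional feedback. With the Smith predictor, the controller
    feeds back a forward-model prediction of the *undelayed* output,
    corrected by the measured prediction error."""
    a, b, gain, ref = 0.9, 0.1, 4.0, 1.0
    y = 0.0          # true (measured) plant output
    ym_fast = 0.0    # forward-model prediction without the delay
    ym_slow = 0.0    # forward-model prediction including the delay
    u_hist = [0.0] * (delay + 1)   # control inputs still in transit
    out = []
    for _ in range(steps):
        if use_smith_predictor:
            # undelayed prediction + mismatch between measurement
            # and the delayed prediction (zero for a perfect model)
            feedback = ym_fast + (y - ym_slow)
        else:
            feedback = y
        u = gain * (ref - feedback)
        y = a * y + b * u_hist[0]              # plant sees delayed input
        ym_fast = a * ym_fast + b * u          # model without delay
        ym_slow = a * ym_slow + b * u_hist[0]  # model with delay
        u_hist = u_hist[1:] + [u]
        out.append(y)
    return out
```

With the predictor switched on, a loop with a long delay settles to the same value as the delay-free loop; the residual offset from the reference is just the usual steady-state error of pure proportional control.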
Sparse modelling using orthogonal forward regression with PRESS statistic and regularization
 IEEE Transactions on Systems, Man, and Cybernetics, Part B
, 2004
Cited by 28 (8 self)
Abstract
The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of the squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some of the existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
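The leave-one-out identity that makes PRESS cheap is worth making concrete. Below is a sketch for the simplest possible model, an ordinary (not orthogonal) simple linear regression: the deleted residual equals the ordinary residual divided by 1 - h_ii, where h_ii is the leverage, so PRESS needs no refitting. The brute-force version is included only to check the identity.

```python
def press_statistic(xs, ys):
    """PRESS for y = a + b*x via the closed form e_i / (1 - h_ii):
    no model is ever actually refit."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = ybar - b * xbar
    press = 0.0
    for x, y in zip(xs, ys):
        resid = y - (a + b * x)
        leverage = 1.0 / n + (x - xbar) ** 2 / sxx   # h_ii for this model
        press += (resid / (1.0 - leverage)) ** 2
    return press

def press_brute_force(xs, ys):
    """Same quantity by actually refitting with each point deleted."""
    total = 0.0
    for i in range(len(xs)):
        xs_i = xs[:i] + xs[i + 1:]
        ys_i = ys[:i] + ys[i + 1:]
        n = len(xs_i)
        xbar = sum(xs_i) / n
        ybar = sum(ys_i) / n
        sxx = sum((x - xbar) ** 2 for x in xs_i)
        sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs_i, ys_i))
        b = sxy / sxx
        a = ybar - b * xbar
        total += (ys[i] - (a + b * xs[i])) ** 2
    return total

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1, 10.9]
# the two computations agree up to rounding
print(press_statistic(xs, ys), press_brute_force(xs, ys))
```

The paper's contribution is to update this quantity incrementally inside an orthogonal forward regression, so candidate model terms can be ranked by generalization error rather than training error.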
Memory-Based Neural Networks for Robot Learning
 Neurocomputing
, 1995
Cited by 26 (8 self)
Abstract
This paper explores a memory-based approach to robot learning, using memory-based neural networks to learn models of the task to be performed. Steinbuch and Taylor presented neural network designs to explicitly store training data and do nearest-neighbor lookup in the early 1960s. In this paper their nearest-neighbor network is augmented with a local model network, which fits a local model to a set of nearest neighbors. This network design is equivalent to a statistical approach known as locally weighted regression, in which a local model is formed to answer each query, using a weighted regression in which nearby points (similar experiences) are weighted more than distant points (less relevant experiences). We illustrate this approach by describing how it has been used to enable a robot to learn a difficult juggling task.
Keywords: memory-based, robot learning, locally weighted regression, nearest neighbor, local models.
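A hypothetical minimal version of the "nearest-neighbor network plus local model network" idea, in 1-D and far removed from the paper's robot implementation: store experiences verbatim, and at query time fit an unweighted line to the k nearest neighbors.

```python
def local_model_predict(memory, x_query, k=3):
    """Memory-based lookup: find the k nearest stored experiences and
    fit a small local linear model to them to answer the query."""
    # memory is a list of (x, y) experiences, stored verbatim
    neighbors = sorted(memory, key=lambda p: abs(p[0] - x_query))[:k]
    n = len(neighbors)
    xbar = sum(x for x, _ in neighbors) / n
    ybar = sum(y for _, y in neighbors) / n
    sxx = sum((x - xbar) ** 2 for x, _ in neighbors)
    if sxx == 0:
        return ybar  # degenerate neighborhood: fall back to the mean
    b = sum((x - xbar) * (y - ybar) for x, y in neighbors) / sxx
    return ybar + b * (x_query - xbar)

# Experiences sampled from y = x**2; the local line tracks the curve.
memory = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0), (4.0, 16.0)]
print(local_model_predict(memory, 2.5, k=3))
```

Learning here is just appending to `memory`; all generalization is deferred to query time, which is what makes the approach "lazy".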
Detection of consistently task-related activation in fMRI data with hybrid independent component analysis. NeuroImage 2000;11:24–35
Cited by 26 (3 self)
Abstract
fMRI data are commonly analyzed by testing the time course from each voxel against specific hypothesized waveforms, despite the fact that many components of fMRI signals are difficult to specify explicitly. In contrast, purely data-driven techniques, by focusing on the intrinsic structure of the data, lack a direct means to test hypotheses of interest to the examiner. Between these two extremes, there is a role for hybrid methods that use powerful data-driven techniques to fully characterize the data, but also use some a priori hypotheses to guide the analysis. Here we describe such a hybrid technique, HYBICA, which uses the initial characterization of the fMRI data from Independent Component Analysis and allows the experimenter to sequentially combine assumed task-related components so that one can gracefully navigate from a fully data-derived approach to a fully hypothesis-driven approach. We describe the results of testing the method with two artificial and two real data sets. A metric based on the diagnostic Predicted Sum of Squares statistic was used to select the best number of spatially independent components to combine and utilize in a standard regression framework. The proposed metric provided an objective method to determine whether a more data-driven or a more hypothesis-driven approach was appropriate, depending on the degree of mismatch between the hypothesized reference function and the features in the data. HYBICA provides a robust way to combine the data-derived independent components into a data-derived activation waveform and suitable confounds so that standard statistical analysis can be performed. © 2000 Academic Press
Key Words: linear regression; independent component analysis; data decomposition; functional magnetic resonance imaging
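The central mechanics, regressing a voxel time course on a growing set of data-derived components, can be sketched as below. This is a loose illustration, not HYBICA: the components are assumed orthonormal (ICA components generally are not), so the regression reduces to projections, and the PRESS-based selection metric is replaced by a plain residual sum of squares.

```python
def fit_components(timecourse, components, k):
    """Regress a voxel time course on the first k components
    (assumed orthonormal), returning the fitted waveform and the
    residual sum of squares."""
    fitted = [0.0] * len(timecourse)
    for comp in components[:k]:
        coef = sum(c * t for c, t in zip(comp, timecourse))  # projection
        fitted = [f + coef * c for f, c in zip(fitted, comp)]
    rss = sum((t - f) ** 2 for t, f in zip(timecourse, fitted))
    return fitted, rss

# Two orthonormal "components" and a time course built from them.
c1 = [0.5, 0.5, 0.5, 0.5]
c2 = [0.5, -0.5, 0.5, -0.5]
tc = [2.5, -0.5, 2.5, -0.5]   # equals 2*c1 + 3*c2
_, rss1 = fit_components(tc, [c1, c2], k=1)
_, rss2 = fit_components(tc, [c1, c2], k=2)
```

Adding the second component drives the residual to zero; choosing how many components to combine before the fit stops improving is the model-selection step that HYBICA performs with its PRESS-based metric.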