Results 1–10 of 543
Locally weighted learning
 ARTIFICIAL INTELLIGENCE REVIEW
, 1997
Abstract

Cited by 596 (53 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
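The core of the surveyed method fits in a few lines. Below is a minimal, illustrative locally weighted linear regression in one dimension: each training point is weighted by a Gaussian kernel of its distance to the query, and a weighted least-squares line is fit and evaluated at the query. The function name, the fixed bandwidth, and the toy data are assumptions for illustration, not the paper's implementation.

```python
import math

def lwlr_predict(xq, xs, ys, bandwidth=1.0):
    """Locally weighted linear regression at query point xq (1-D sketch)."""
    # Gaussian kernel weight for each training point
    ws = [math.exp(-((x - xq) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    sw = sum(ws)
    # Weighted means of x and y
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    # Weighted least-squares slope around the weighted means
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    sxy = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    slope = sxy / sxx if sxx > 0 else 0.0
    # Evaluate the local linear model at the query
    return my + slope * (xq - mx)

# On exactly linear data the local fit recovers the line everywhere.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
print(round(lwlr_predict(2.5, xs, ys), 6))  # 6.0
```

A smaller bandwidth makes the fit more local (lower bias, higher variance); choosing it well is exactly the smoothing-parameter question the survey discusses.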
A New Location Technique for the Active Office
, 1997
Abstract

Cited by 512 (4 self)
Configuration of the computing and communications systems found at home and in the workplace is a complex task that currently requires the attention of the user. Recently, researchers have begun to examine computers that would autonomously change their functionality based on observations of who or what was around them. By determining their context, using input from sensor systems distributed throughout the environment, computing devices could personalize themselves to their current user, adapt their behavior according to their location, or react to their surroundings. We present a novel sensor system, suitable for large-scale deployment in indoor environments, which allows the locations of people and equipment to be accurately determined. We also describe some of the context-aware applications that might make use of this fine-grained location information.
Constructive Incremental Learning from Only Local Information
, 1998
Abstract

Cited by 206 (39 self)
... This article illustrates the potential learning capabilities of purely local learning and offers an interesting and powerful approach to learning with receptive fields.
Incremental Online Learning in High Dimensions
 Neural Computation
, 2005
Abstract

Cited by 162 (18 self)
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (i) learns rapidly with second-order learning methods based on incremental training, (ii) uses statistically sound stochastic leave-one-out cross-validation for learning without the need to memorize training data, (iii) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference of incremental learning, (iv) has a computational complexity that is linear in the number of inputs, and (v) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental, spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
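The "univariate regressions in selected directions" idea at LWPR's core is in the spirit of partial least squares. As a rough one-component sketch (not the incremental LWPR algorithm itself; all names and data here are illustrative assumptions), inputs can be projected onto the direction of maximal covariance with the output, followed by a single univariate regression on that score:

```python
def pls1_fit(X, y):
    """One-component partial-least-squares-style fit (pure Python sketch)."""
    n, d = len(X), len(X[0])
    # Center inputs and output
    mx = [sum(row[j] for row in X) / n for j in range(d)]
    my = sum(y) / n
    Xc = [[row[j] - mx[j] for j in range(d)] for row in X]
    yc = [v - my for v in y]
    # Direction of maximal covariance with the output
    u = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(d)]
    norm = sum(v * v for v in u) ** 0.5
    u = [v / norm for v in u]
    # Univariate regression of y on the projected score z = Xc . u
    z = [sum(Xc[i][j] * u[j] for j in range(d)) for i in range(n)]
    beta = sum(zi * yi for zi, yi in zip(z, yc)) / sum(zi * zi for zi in z)
    return mx, my, u, beta

def pls1_predict(model, x):
    mx, my, u, beta = model
    z = sum((xj - mj) * uj for xj, mj, uj in zip(x, mx, u))
    return my + beta * z

# A redundant second input (a copy of the first) is handled gracefully:
# both inputs simply share the projection direction.
X = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
y = [1.0, 3.0, 5.0, 7.0]
model = pls1_fit(X, y)
print(round(pls1_predict(model, [2.5, 2.5]), 6))  # 6.0
```

LWPR runs such univariate fits incrementally inside each receptive field; this batch sketch only shows why projection directions make redundant inputs cheap to handle.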
Neural Networks and Statistical Models
, 1994
Abstract

Cited by 137 (1 self)
There has been much publicity about the ability of artificial neural networks to learn and generalize. In fact, the most commonly used artificial neural networks, called multilayer perceptrons, are nothing more than nonlinear regression and discriminant models that can be implemented with standard statistical software. This paper explains what neural networks are, translates neural network jargon into statistical jargon, and shows the relationships between neural networks and statistical models such as generalized linear models, maximum redundancy analysis, projection pursuit, and cluster analysis.
A dimensional analysis of the relationship between psychological empowerment and effectiveness, satisfaction and strain
 Journal of Management
, 1997
Abstract

Cited by 78 (2 self)
(Hong Kong University of Science and Technology) This paper examines the contribution of each of the four dimensions in Thomas and Velthouse’s (1990) multidimensional conceptualization of psychological empowerment in predicting three expected outcomes of empowerment: effectiveness, work satisfaction, and job-related strain. The literature on the four dimensions of empowerment (i.e., meaning, competence, self-determination, and impact) is reviewed and theoretical logic is developed linking the dimensions to specific outcomes. The expected relationships are tested on a sample of managers from diverse units of a manufacturing organization and then replicated on an independent sample of lower-level employees in a service organization using alternative measures of the outcome variables. The results, largely consistent across the two samples, suggest that different dimensions are related to different outcomes and that no single dimension predicts all three outcomes. These results indicate that employees need to experience each of the empowerment dimensions in order to achieve all of the hoped-for outcomes of empowerment.
Ranking a Random Feature For Variable And Feature Selection
 JOURNAL OF MACHINE LEARNING RESEARCH 3 (2003) 1399–1414
, 2003
Abstract

Cited by 63 (9 self)
We describe a feature selection method that can be applied directly to models that are linear with respect to their parameters, and indirectly to others. It is independent of the target machine. It is closely related to classical statistical hypothesis tests, but it is more intuitive, hence more suitable for use by engineers who are not statistics experts. Furthermore, some assumptions of classical tests are relaxed. The method has been used successfully in a number of applications that are briefly described.
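One way to read the "random feature" idea: append a randomly generated probe column and keep only the features whose relevance score beats the probe's. The sketch below uses absolute correlation with the target as a stand-in ranking criterion; the function name, scoring rule, and toy data are illustrative assumptions, not the paper's exact procedure.

```python
import random

def rank_with_random_probe(X, y, seed=0):
    """Probe-based feature selection sketch: keep features that out-rank
    a random 'probe' column under an absolute-correlation criterion."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    probe = [rng.gauss(0, 1) for _ in range(n)]
    # Feature columns plus the appended probe column
    cols = [[row[j] for row in X] for j in range(d)] + [probe]

    def abs_corr(col):
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        vx = sum((a - mx) ** 2 for a in col) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (vx * vy)) if vx > 0 and vy > 0 else 0.0

    scores = [abs_corr(c) for c in cols]
    probe_score = scores[-1]
    # Keep only features that score strictly above the random probe
    return [j for j in range(d) if scores[j] > probe_score]

# Feature 0 drives the target; feature 1 is pure noise.
rng = random.Random(1)
X = [[i * 1.0, rng.gauss(0, 1)] for i in range(30)]
y = [row[0] * 2.0 + rng.gauss(0, 0.1) for row in X]
print(rank_with_random_probe(X, y))
```

In practice several probes (or repeated draws) give a distribution of probe scores, which is what connects this idea to a hypothesis test on feature relevance.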
Forward models in visuomotor control
 Journal of Neurophysiology
, 2002
Abstract

Cited by 63 (3 self)
In recent years, an increasing number of research projects have investigated whether the central nervous system employs internal models in motor control. While inverse models in the control loop can be identified more readily in both motor behavior and the firing of single neurons, providing direct evidence for the existence of forward models is more complicated. In this paper, we discuss such an identification of forward models in the context of the visuomotor control of an unstable dynamic system, the balancing of a pole on a finger. Pole balancing imposes stringent constraints on the biological controller, as it needs to cope with the large delays of visual information processing while keeping the pole at an unstable equilibrium. We hypothesize various model-based and non-model-based control schemes of how visuomotor control can be accomplished in this task, including Smith predictors, predictors with Kalman filters, tapped-delay-line control, and delay-uncompensated control. Behavioral experiments with human participants allow exclusion of most of the hypothesized control schemes. In the end, our data support the existence of a forward model in the sensory preprocessing loop of control. As an important part of our research, we provide a discussion of when and how forward models can be identified, as well as the possible pitfalls in the search for forward models in control.
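To make the Smith-predictor hypothesis concrete, here is a toy discrete-time simulation (the plant dynamics, gains, delay, and all names are illustrative assumptions, not the paper's model): a forward model predicts the undelayed plant state so the controller can act despite a sensory delay, while the delayed model output is compared with the delayed observation to correct model drift.

```python
from collections import deque

def simulate_smith_predictor(steps=60, delay=5, a=0.9, b=0.1, kp=2.0, target=1.0):
    """Smith-predictor sketch for a first-order plant x' = a*x + b*u whose
    output is observed only after `delay` steps."""
    x = 0.0                                         # true (hidden) plant state
    obs_buf = deque([0.0] * delay, maxlen=delay)    # sensory delay line
    model_x = 0.0                                   # forward-model estimate (undelayed)
    model_buf = deque([0.0] * delay, maxlen=delay)  # delayed copy of model output
    for _ in range(steps):
        delayed_obs = obs_buf[0]
        delayed_model = model_buf[0]
        # Feedback = predicted undelayed state, corrected by the mismatch
        # between delayed observation and delayed model output.
        feedback = model_x + (delayed_obs - delayed_model)
        u = kp * (target - feedback)
        # Advance true plant and forward model with the same input.
        x = a * x + b * u
        model_x = a * model_x + b * u
        obs_buf.append(x)
        model_buf.append(model_x)
    return x

# With a perfect forward model the delay is fully compensated; the state
# settles at the proportional-control fixed point b*kp/(1 - a + b*kp) = 2/3
# (plain P-control leaves a steady-state offset from the target).
print(round(simulate_smith_predictor(), 6))  # 0.666667
```

The interesting behavioral predictions come from breaking the assumptions: a mismatched forward model or a removed correction term destabilizes the loop, which is the kind of signature such experiments look for.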
Development of quantitative structure–activity relationships and its application in rational drug design
Abstract

Cited by 54 (1 self)
Quantitative structure–activity relationships are mathematical models constructed on the hypothesis that the structure of chemical compounds is related to their biological activity. A linear regression model is often used to estimate and/or to predict the nature of the relationship between a measured activity and some measured or calculated descriptors. Linear regression helps to answer three main questions: does the biological activity depend on the structural information; if so, is the relationship linear; and if so, how good is the model at predicting the biological activity of new compound(s)? This manuscript presents the steps of a linear regression analysis, moving from theoretical knowledge to an example conducted on sets of endocrine-disrupting chemicals.
Keywords: robust regression; validation; diagnostic; pre
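The three questions above map onto a few lines of ordinary least squares: fit activity against a descriptor, then inspect the slope and R² for the existence, direction, and quality of the linear relationship. The descriptor name and toy data below are hypothetical, not the paper's data set.

```python
def fit_line_r2(xs, ys):
    """Ordinary least squares for activity ~ descriptor, with R^2 as a
    first check on how well the linear model explains the activity."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2 = 1 - residual sum of squares / total sum of squares
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2

# Hypothetical descriptor values (e.g., a lipophilicity measure) vs. activities.
descriptor = [1.0, 2.0, 3.0, 4.0]
activity = [0.5, 1.5, 2.5, 3.5]
slope, intercept, r2 = fit_line_r2(descriptor, activity)
print(round(slope, 6), round(intercept, 6), round(r2, 6))  # 1.0 -0.5 1.0
```

Judging prediction on *new* compounds needs held-out or cross-validated data rather than the training R², which is where the manuscript's validation and diagnostic steps come in.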
Sparse modelling using orthogonal forward regression with PRESS statistic and regularization
 IEEE TRANS. SYSTEMS, MAN AND CYBERNETICS, PART B
, 2004
Abstract

Cited by 48 (23 self)
The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of the squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some of the existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
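The computational trick behind the PRESS statistic is that leave-one-out residuals need no refitting: each is the ordinary residual divided by one minus the point's leverage. A minimal straight-line-regression sketch of this identity (illustrative names and toy data, not the paper's orthogonal forward regression):

```python
def press_statistic(xs, ys):
    """PRESS (leave-one-out residual sum of squares) for a straight-line
    fit, computed in one pass via the leverage identity e_i / (1 - h_ii)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    press = 0.0
    for x, y in zip(xs, ys):
        e = y - (intercept + slope * x)      # ordinary residual
        h = 1.0 / n + (x - mx) ** 2 / sxx    # leverage h_ii of this point
        press += (e / (1.0 - h)) ** 2        # squared leave-one-out residual
    return press

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.1, 1.9, 3.0]
print(round(press_statistic(xs, ys), 6))  # 0.053061
```

Because PRESS costs no more than the training error here, it can serve directly as the term-selection criterion, which is the efficiency the paper's forward regression exploits in the general linear-in-the-weights case.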