Results 1–10 of 16
Locally weighted learning
ARTIFICIAL INTELLIGENCE REVIEW, 1997
Abstract
Cited by 594 (53 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
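The locally weighted linear regression this survey focuses on can be sketched briefly. The Gaussian weighting function, Euclidean distance, and small ridge regularizer below are common choices standing in for the survey's many alternatives, not the survey's single prescription.

```python
import numpy as np

def lwlr_predict(X, y, x_query, bandwidth=1.0):
    """Locally weighted linear regression prediction at one query point."""
    # Weighting function: Gaussian kernel of Euclidean distance to the query
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))

    # Local model structure: an affine model via an appended bias column
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)

    # Regularized weighted least squares (small ridge term on the estimate)
    ridge = 1e-8 * np.eye(A.shape[1])
    beta = np.linalg.solve(A.T @ W @ A + ridge, A.T @ W @ y)
    return float(np.append(x_query, 1.0) @ beta)

# A local linear fit tracks a smooth nonlinear curve closely
X = np.linspace(-2.0, 2.0, 41).reshape(-1, 1)
y = X.ravel() ** 2
print(round(lwlr_predict(X, y, np.array([0.5]), bandwidth=0.3), 2))
```

The bandwidth plays the role of the survey's smoothing parameter: shrinking it makes the fit more local and lowers bias at the cost of variance.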
Locally Weighted Learning for Control
1996
Abstract
Cited by 197 (19 self)
Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control.
Learning Approximation of Feedforward Control Dependence on the . . .
IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, 1997
Abstract
Cited by 9 (2 self)
This paper presents a new paradigm for model-free design of a trajectory tracking controller and its experimental implementation in control of a direct-drive manipulator. In accordance with the paradigm, a nonlinear approximation for the feedforward control is used. The inputs to the approximation scheme are task parameters that define the trajectory to be tracked. The initial data for the approximation is obtained by performing learning control iterations for a number of selected tasks. The paper develops and implements practical approaches to both the approximation and learning control. As the initial feedforward data needs to be obtained for many different tasks, it is important to have fast and robust convergence of the learning control iterations. To satisfy this requirement, we propose a new learning control algorithm based on the online Levenberg-Marquardt minimization of a regularized tracking error index. The paper demonstrates an experimental application of the paradigm to trajectory tracking control of fast (1.25 s) motions of the direct-drive industrial robot AdeptOne. In our experiments, the learning control converges in five to six iterations for a given set of the task parameters. Radial Basis Function approximation based on the learning results for 45 task parameter vectors brings an average fourfold improvement in tracking accuracy for all motions in the robot workspace. The high performance of the designed approximation-based controller is achieved despite the nonlinearity of the system dynamics and large Coulomb friction. The results obtained open an avenue for industrial applications of the proposed approach in robotics and elsewhere.
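The Levenberg-Marquardt step at the core of the proposed learning control can be illustrated in miniature. The cost below (squared tracking error plus a quadratic regularizer) and the toy two-dimensional plant are stand-ins; the paper's actual error index and robot dynamics are not reproduced here.

```python
import numpy as np

def lm_step(u, residual, jacobian, mu=1.0, reg=1e-3):
    """One Levenberg-Marquardt step on the regularized index
    J(u) = ||e(u)||^2 + reg * ||u||^2  (a stand-in for the paper's index)."""
    e = residual(u)
    Jm = jacobian(u)
    H = Jm.T @ Jm + (mu + reg) * np.eye(u.size)  # damped Gauss-Newton Hessian
    g = Jm.T @ e + reg * u                        # gradient of the index
    return u - np.linalg.solve(H, g)

# Toy "plant": the tracking error is mildly nonlinear in the feedforward input
def residual(u):
    return np.array([u[0] + 0.1 * u[1] ** 2 - 1.0, u[1] - 0.5])

def jacobian(u):
    return np.array([[1.0, 0.2 * u[1]],
                     [0.0, 1.0]])

u = np.zeros(2)
for _ in range(6):  # the paper reports convergence in five to six iterations
    u = lm_step(u, residual, jacobian, mu=0.1)
print(np.linalg.norm(residual(u)) < 1e-2)  # → True
```

The damping term `mu` trades step size against robustness, which is what makes the iteration converge quickly without a prior model of the plant.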
An Approach to Parametric Nonlinear Least Square Optimization and . . .
1997
Abstract
Cited by 3 (0 self)
This paper considers a parametric nonlinear least square (NLS) optimization problem. Unlike a classical NLS problem statement, we assume that a nonlinear optimized system depends on two arguments: an input vector and a parameter vector. The input vector can be modified to optimize the system, while the parameter vector changes from one optimization iteration to another and is not controlled. The goal of the optimization process is to find a dependence of the optimal input vector on the parameter vector, where the optimal input vector minimizes a quadratic performance index. The paper proposes an extension of the Levenberg-Marquardt algorithm for a numerical solution of the formulated problem. The proposed algorithm approximates the nonlinear system in a vicinity of the optimum by expanding it into a series of parameter vector functions, affine in the input vector. In particular, a radial basis function network expansion is considered. The convergence proof for the algorithm is presented. The proposed approach is applied to task-level learning control of a two-link flexible arm. Each evaluation of the system in the optimization process means completing a controlled motion of the arm. In the simulation example, the controlled motions take only about 1.5 periods of the lowest eigenfrequency oscillations. The algorithm controls this strongly nonlinear oscillatory system very efficiently. Without any prior knowledge of the system dynamics, it achieves a satisfactory control of arbitrary arm motions after only 500 learning (optimization) iterations.
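The affine-in-input expansion with radial basis function coefficients can be sketched as follows. The dimensions, Gaussian basis, and random weights are illustrative assumptions; the closed-form optimal input at the end only shows why affineness in the input is convenient inside an NLS iteration.

```python
import numpy as np

def rbf_features(p, centers, width=1.0):
    """Gaussian radial basis functions evaluated at the parameter vector p."""
    d2 = np.sum((centers - p) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def affine_model(p, u, Wa, Wb, centers):
    """y(u, p) = A(p) u + b(p): affine in the input u, with coefficients
    A(p) and b(p) given by RBF network expansions over the parameter p."""
    phi = rbf_features(p, centers)
    A = np.tensordot(Wa, phi, axes=([2], [0]))  # (n_out, n_in)
    b = Wb @ phi                                # (n_out,)
    return A @ u + b

# Illustrative sizes: 2-D parameter, 2-D input, 3 RBF centers, random weights
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 2))
Wa = rng.normal(size=(2, 2, 3))
Wb = rng.normal(size=(2, 3))

# Because the model is affine in u, the least-squares-optimal input for a
# given parameter vector has a closed form
p = np.array([0.2, -0.1])
phi = rbf_features(p, centers)
A = np.tensordot(Wa, phi, axes=([2], [0]))
b = Wb @ phi
u_opt = np.linalg.lstsq(A, -b, rcond=None)[0]
print(np.round(affine_model(p, u_opt, Wa, Wb, centers), 6))
```

In the paper's setting the weights of the expansion would be fitted from the learning-control data rather than drawn at random as above.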
Magnetic Field Optimization From Limited Data
Abstract
When a sensor coil is placed in the field of an electromagnetic transmitter, a voltage is induced and may be measured. The amplitude of this voltage depends on the distance from the transmitter and the angle between the axes of the transmitter and the sensor. This relationship between the state of the coil and the voltages, known as the dipole model, can be exploited to track sensor coils in space. ElectroMagnetic Articulography (EMA) uses this principle. It consists of measuring and graphically representing the mechanics of speech using sensor coils moved through the magnetic field induced by electromagnetic transmitters. The Carstens AG500 EMA machine aims to provide 3-dimensional tracking of coils with 5 degrees of freedom and is used in speech mechanics research. The tracking process relies on optimization algorithms run to minimize the error between the measured voltages and those predicted by the dipole model. However, there is evidence to suggest that the dipole model may not match the actual magnetic field and thus induces inaccurate tracking. In this project, the feasibility of building a trainable model of the magnetic field is investigated. Using data sets sampled from the dipole model, different neural networks were
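The dipole model the tracking relies on can be written down compactly. The sketch below uses the standard magnetic-dipole field with all physical constants lumped into a single gain; that lumping is a simplification for illustration, not the AG500's actual calibration.

```python
import numpy as np

def dipole_voltage(r, m_axis, sensor_axis, gain=1.0):
    """Induced-voltage amplitude predicted by the dipole model.

    r: vector from transmitter to sensor; m_axis, sensor_axis: unit axes
    of the transmitter and sensor coils. `gain` lumps the physical
    constants (an assumption made for this sketch).
    """
    dist = np.linalg.norm(r)
    r_hat = r / dist
    # Field of a magnetic dipole with moment along m_axis
    B = (3.0 * np.dot(m_axis, r_hat) * r_hat - m_axis) / dist ** 3
    # The voltage is proportional to the field component along the sensor axis
    return gain * np.dot(B, sensor_axis)

# The amplitude falls off as 1/r^3: doubling the distance divides it by 8
z = np.array([0.0, 0.0, 1.0])
v1 = dipole_voltage(np.array([0.1, 0.0, 0.0]), z, z)
v2 = dipole_voltage(np.array([0.2, 0.0, 0.0]), z, z)
print(round(v1 / v2, 3))  # → 8.0
```

The angle dependence enters through the two dot products, which is exactly the distance-and-angle relationship the abstract describes.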
Refining Genetically Designed Models for Improved Traffic Prediction on Rural Roads
2005
Abstract
(ATIS) for rural roads is limited. However, highway agencies expect to implement intelligent transportation systems (ITS) in both urban and rural areas. In this paper, genetic algorithms (GAs) are used to design both time delay neural network (TDNN) models and locally weighted regression (LWR) models to predict short-term traffic for two rural roads in Alberta, Canada. A top-down refinement was used to study the interactions between modeling techniques and underlying data sets for obtaining highly accurate models. It is found that LWR models achieve faster accuracy improvement than TDNN models over the refinement process. Compared with previous research, the models proposed here show higher accuracy. The average errors for the best LWR models obtained through the model-refining process are less than 2% in most cases. For refined TDNN models, the average errors are usually less than 6–7%. The resulting models indicate a high level of robustness across different types of roads, and thus may be considered desirable for real-world statewide ITS implementations.
QUO VADIS, BAYESIAN IDENTIFICATION?
Abstract
The Bayesian identification of nonlinear, non-Gaussian, nonstationary or nonparametric models is notoriously computer-intensive and not solvable in closed form. The paper outlines three major approaches to approximate Bayesian estimation, based on locally weighted smoothing of data, iterative and non-iterative Monte Carlo simulation, and direct approximation of an information “distance” between the empirical and model distributions of data. The information-based view of estimation is used throughout to give more insight into the methods and show their mutual relationship.
Thesis submitted for the degree of Doctor, doctoral program: Computer Engineering (tel-00825854, version 1)
Abstract
Modeling and learning of preferences with neural networks for multicriteria decision aid. Defended 15 March 1996 before the examination committee.
Precision Requirements for Closed-Loop Kinematic Robotic Control Using Linear Local Mappings
1996
Abstract
Neural networks are approximation techniques characterized by adaptability rather than by precision. For feedback systems, high precision can still be achieved in the presence of errors. Within a general iterative framework of closed-loop kinematic robotic control using linear local modeling, the inverse Jacobian matrix error and the maximum length of the displacement for which the linear model is valid are computed. Together they guarantee convergence of the feedback loop. The error bounds are computed for our manipulator. The theoretical results are validated by
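The iterative closed-loop scheme the abstract describes, a displacement-limited correction through an approximate inverse Jacobian, can be sketched on a planar two-link arm. The link lengths, step limit, and iteration count below are illustrative assumptions, not the article's computed bounds.

```python
import numpy as np

def closed_loop_ik(f, J_inv, x0, target, step_limit=0.2, iters=50):
    """Closed-loop kinematic control with an (approximate) inverse Jacobian.

    The feedback loop re-measures the task-space error each step, so model
    error is corrected; `step_limit` caps the displacement so the linear
    local model stays valid. Names are illustrative, not the authors'.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        err = target - f(x)           # task-space error
        dx = J_inv(x) @ err           # linear local correction
        n = np.linalg.norm(dx)
        if n > step_limit:            # clamp to the model's validity region
            dx *= step_limit / n
        x = x + dx
    return x

# Planar 2-link arm forward kinematics (link lengths are assumptions)
L1, L2 = 1.0, 0.8
def fk(q):
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jac_inv(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    J = np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                  [ L1 * c1 + L2 * c12,  L2 * c12]])
    return np.linalg.inv(J)

q = closed_loop_ik(fk, jac_inv, [0.3, 0.5], np.array([1.0, 0.9]))
print(np.round(fk(q), 4))
```

Because the loop is closed over the measured error, even a perturbed inverse Jacobian converges here, provided its error and the step length stay within bounds like those the article derives.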