Results 1–6 of 6
Locally weighted learning
 Artificial Intelligence Review, 1997
Abstract

Cited by 594 (53 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
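The core computation this survey covers, locally weighted linear regression, fits in a few lines. Below is a minimal sketch; the Gaussian kernel, the bandwidth `tau`, and the toy sine data are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def loess_predict(x_query, X, y, tau=0.5):
    """Locally weighted linear regression at a single query point.

    Each training point gets a Gaussian weight based on its distance
    to the query; a weighted least-squares line is then fit and
    evaluated at the query.
    """
    w = np.exp(-(X - x_query) ** 2 / (2 * tau ** 2))  # Gaussian kernel weights
    A = np.column_stack([np.ones_like(X), X])         # design matrix [1, x]
    W = np.diag(w)
    # Solve the weighted normal equations (A^T W A) beta = A^T W y
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0] + beta[1] * x_query

# Noisy samples from a smooth nonlinear function
rng = np.random.default_rng(0)
X = np.linspace(0, 3, 60)
y = np.sin(X) + 0.05 * rng.normal(size=X.size)
pred = loess_predict(1.5, X, y, tau=0.3)
```

Every prediction re-solves a small weighted least-squares problem at query time; that deferral is exactly the "lazy learning" aspect the survey emphasizes.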
On Automatic Boundary Corrections
 Annals of Statistics, 1996
Abstract

Cited by 48 (2 self)
Many popular curve estimators based on smoothing have difficulties caused by boundary effects. These effects are visually disturbing in practice and can play a dominant role in theoretical analysis. Local polynomial regression smoothers are known to correct boundary effects automatically. Some analogs are implemented for density estimation and the resulting estimators also achieve automatic boundary corrections. In both settings of density and regression estimation, we investigate best weight functions for local polynomial fitting at the endpoints and find a simple solution. The solution is universal for general degree of local polynomial fitting and general order of estimated derivative. Furthermore, such local polynomial estimators are best among all linear estimators in a weak minimax sense, and they are highly efficient even in the usual linear minimax sense. This research is part of Ming-Yen Cheng's dissertation under the supervision of Professors J. Fan and J. S. Marron at th...
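The automatic boundary correction can be seen in a toy comparison between a local constant fit (a plain kernel average, which is biased at an endpoint) and a local linear fit (which is not). The `local_fit` helper and the Epanechnikov kernel below are hypothetical illustration choices, not the paper's optimal weight functions.

```python
import numpy as np

def local_fit(x0, X, y, h, degree):
    """Kernel-weighted polynomial fit of given degree, evaluated at x0."""
    w = np.maximum(1 - ((X - x0) / h) ** 2, 0)          # Epanechnikov kernel
    A = np.vander(X - x0, degree + 1, increasing=True)  # columns 1, (x-x0), ...
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0]                                      # fitted value at x0

# Linear truth, no noise: the boundary bias shows up even without noise
X = np.linspace(0, 1, 101)
y = 2 * X
at_boundary_const = local_fit(0.0, X, y, h=0.3, degree=0)  # kernel average
at_boundary_lin = local_fit(0.0, X, y, h=0.3, degree=1)    # local linear
```

At the endpoint x = 0 the kernel only sees data on one side, so the local constant estimate is pulled upward, while the local linear fit recovers the true value 0 exactly.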
DOES MEDIAN FILTERING TRULY PRESERVE EDGES BETTER THAN LINEAR FILTERING?
2006
Abstract

Cited by 17 (0 self)
Image processing researchers commonly assert that “median filtering is better than linear filtering for removing noise in the presence of edges.” Using a straightforward large-n decision-theory framework, this folk-theorem is seen to be false in general. We show that median filtering and linear filtering have similar asymptotic worst-case mean-squared error (MSE) when the signal-to-noise ratio (SNR) is of order 1, which corresponds to the case of constant per-pixel noise level in a digital signal. To see dramatic benefits of median smoothing in an asymptotic setting, the per-pixel noise level should tend to zero (i.e., SNR should grow very large). We show that a two-stage median filtering using two very different window widths can dramatically outperform traditional linear and median filtering in settings where the underlying object has edges. In this two-stage procedure, the first pass, at a fine scale, aims at increasing the SNR. The second pass, at a coarser scale, correctly ...
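The two-pass scheme the abstract describes — a fine-scale median to raise the SNR, then a coarse-scale median — can be mimicked on a toy step edge. The `median_filter` helper and the window widths 3 and 21 are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def median_filter(x, width):
    """Running median with reflection padding at the ends."""
    half = width // 2
    padded = np.pad(x, half, mode="reflect")
    return np.array([np.median(padded[i:i + width]) for i in range(len(x))])

# Step edge plus low noise (the high-SNR regime discussed above)
rng = np.random.default_rng(1)
signal = np.where(np.arange(200) < 100, 0.0, 1.0)
noisy = signal + 0.05 * rng.normal(size=200)

# Stage 1: fine window suppresses noise; stage 2: coarse window smooths further
stage1 = median_filter(noisy, 3)
two_stage = median_filter(stage1, 21)
```

The coarse second pass flattens the plateaus while the median's order-statistic nature keeps the jump at index 100 sharp, which a wide linear (averaging) filter would blur.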
Minimax efficiency of local polynomial fit estimators at boundaries
 University of North Carolina at Chapel Hill, 1993
Weighted repeated median smoothing and filtering
2005
Abstract

Cited by 7 (5 self)
We propose weighted repeated median filters and smoothers for robust nonparametric regression in general and for robust signal extraction from time series in particular. The proposed methods allow one to remove outlying sequences and to preserve discontinuities (shifts) in the underlying regression function (the signal) in the presence of local linear trends. Suitable weighting of the observations according to their distances in the design space reduces the bias arising from nonlinearities. It also improves the efficiency of (unweighted) repeated median filters using larger bandwidths, while keeping their ability to distinguish between outlier sequences and long-term shifts. Robust smoothers based on weighted L1-regression are included for comparison.
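The building block of these filters is Siegel's repeated median slope, shown below in its unweighted form; the design-space weighting the paper adds is omitted, and the outlier-laden toy line is illustrative.

```python
import numpy as np

def repeated_median_slope(x, y):
    """Siegel's repeated median: med_i med_{j!=i} (y_j - y_i) / (x_j - x_i)."""
    n = len(x)
    inner = []
    for i in range(n):
        # Median of pairwise slopes through point i
        slopes = [(y[j] - y[i]) / (x[j] - x[i]) for j in range(n) if j != i]
        inner.append(np.median(slopes))
    # Outer median over all anchor points
    return np.median(inner)

# Line of slope 2 with one gross outlier; the repeated median shrugs it off
x = np.arange(11, dtype=float)
y = 2.0 * x
y[5] = 100.0                      # outlier
slope = repeated_median_slope(x, y)
```

The nested medians give a 50% breakdown point: even when the anchor point i is itself the outlier, the outer median discards that inner estimate.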
Nonparametric Regression Under Dependent Errors With Infinite Variance
Abstract

Cited by 4 (0 self)
We consider local least absolute deviation (LLAD) estimation for trend functions of time series with heavy tails which are characterised via a symmetric stable law distribution. The setting includes both the causal stable ARMA model and the fractional stable ARIMA model as special cases. The asymptotic limit of the estimator is established under the assumption that the process has either short or long memory autocorrelation. For a short memory process, the estimator admits the same convergence rate as if the process had finite variance. The optimal rate of convergence n^(-2/5) is obtainable by using appropriate bandwidths. This is distinctly different from local least squares estimation, whose convergence is slowed down by the heavy tails. On the other hand, the rate of convergence of the LLAD estimator for a long memory process is always slower than n^(-2/5) and the limit is no longer normal.
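A local LAD fit of the kind the abstract analyses can be sketched by minimising a kernel-weighted L1 loss. The IRLS solver, triangular kernel, and Cauchy toy noise below are illustrative assumptions; the paper's contribution is the asymptotic theory, not this algorithm.

```python
import numpy as np

def local_lad(x0, X, y, h, iters=50, eps=1e-8):
    """Local least absolute deviation fit at x0 via iteratively
    reweighted least squares: |r| is approximated by r^2 / (|r| + eps)."""
    k = np.maximum(1 - np.abs((X - x0) / h), 0)    # triangular kernel weights
    A = np.column_stack([np.ones_like(X), X - x0])  # local linear design
    beta = np.zeros(2)
    for _ in range(iters):
        r = y - A @ beta
        w = k / (np.abs(r) + eps)                   # IRLS weights for L1 loss
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0]                                  # trend estimate at x0

# Linear trend with heavy-tailed (Cauchy) noise, where least squares struggles
rng = np.random.default_rng(2)
X = np.linspace(-1, 1, 201)
y = 1.0 + 3.0 * X + 0.1 * rng.standard_cauchy(size=X.size)
est = local_lad(0.0, X, y, h=0.5)
```

Because the L1 loss grows linearly rather than quadratically in the residual, the occasional enormous Cauchy deviate barely moves the fit, which is the intuition behind the finite-variance-like n^(-2/5) rate for LLAD.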