Results 1 - 4 of 4
Ideal spatial adaptation by wavelet shrinkage
Biometrika, 1994
"... With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle o ers dramatic ad ..."
Abstract

Cited by 838 (4 self)
With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable-knot spline, or variable-bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic advantages over traditional linear estimation by nonadaptive kernels; however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially adaptive estimation: selective wavelet reconstruction. We show that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients. RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as well as it is possible to do so. A new inequality in multivariate normal decision theory, which we call the oracle inequality, shows that attained performance differs from ideal performance by at most a factor 2 log n, where n is the sample size. Moreover, no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor log^2 n of the performance of piecewise-polynomial and variable-knot spline methods equipped with an oracle. In contrast, it is unknown how or if piecewise-polynomial methods could be made to function this well when denied access to an oracle and forced to rely on data alone.
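The shrinkage idea in this abstract can be sketched in a few lines: transform the data, soft-threshold the detail coefficients at the universal level sigma * sqrt(2 log n), and invert. The single-level Haar transform, the known noise level sigma, and the step-function example below are illustrative assumptions for the sketch, not the paper's setup (which uses full wavelet decompositions and the RiskShrink thresholds):

```python
import numpy as np

def haar_analysis(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

def haar_synthesis(approx, detail):
    """Invert one level of the Haar transform."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

def soft_threshold(w, t):
    """Shrink coefficients toward zero by t, zeroing those below t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def denoise(y, sigma):
    """Soft-threshold the detail coefficients at the universal level
    sigma * sqrt(2 log n); keep the coarse coefficients untouched."""
    t = sigma * np.sqrt(2.0 * np.log(y.size))
    approx, detail = haar_analysis(y)
    return haar_synthesis(approx, soft_threshold(detail, t))

# Noisy step function: shrinkage suppresses noise in the flat regions
# while the jump survives in the coarse coefficients.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 5.0], 64)
noisy = truth + 0.5 * rng.standard_normal(truth.size)
estimate = denoise(noisy, sigma=0.5)
```

With the threshold at roughly three noise standard deviations of a detail coefficient, nearly all pure-noise details are zeroed, so the reconstruction's error comes almost entirely from the retained coarse coefficients.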
Bivariate Tensor-product B-Splines in a Partly Linear Model
1996
"... : In some applications, the mean or median response is linearly related to some variables but the relation to additional variables are not easily parameterized. Partly linear models arise naturally in such circumstances. Suppose that a random sample f(T i ; X i ; Y i ); i = 1; 2; \Delta \Delta \Delt ..."
Abstract

Cited by 9 (3 self)
In some applications, the mean or median response is linearly related to some variables, but the relation to additional variables is not easily parameterized. Partly linear models arise naturally in such circumstances. Suppose that a random sample {(T_i, X_i, Y_i), i = 1, 2, ..., n} is modeled by Y_i = X_i^T β_0 + g_0(T_i) + error_i, where Y_i is a real-valued response, X_i ∈ R^p, T_i ranges over a unit square, and g_0 is an unknown function with a certain degree of smoothness. We make use of bivariate tensor-product B-splines as an approximation of the function g_0 and consider M-type regression splines by minimization of Σ_{i=1}^n ρ(Y_i − X_i^T β − g_n(T_i)) for some convex function ρ. Mean, median, and quantile regressions are included in this class. We show under appropriate conditions that the parameter estimate of β achieves its information bound asymptotically and the function estimate of g_0 attains the optimal rate of convergence ...
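For the mean-regression special case ρ(u) = u^2, the minimization above reduces to joint least squares over the linear coefficients β and the tensor-product spline coefficients. The sketch below uses a truncated power basis as a stand-in for B-splines, plus simulated data; the basis, knot positions, and data-generating model are assumptions for illustration only:

```python
import numpy as np

def truncated_power_basis(t, knots, degree=2):
    """Simple 1-D spline basis (truncated power functions); a stand-in
    for the B-splines used in the paper."""
    cols = [t ** k for k in range(degree + 1)]
    cols += [np.maximum(t - k0, 0.0) ** degree for k0 in knots]
    return np.column_stack(cols)

def tensor_basis(t1, t2, knots):
    """Tensor product of two 1-D spline bases over the unit square."""
    b1 = truncated_power_basis(t1, knots)
    b2 = truncated_power_basis(t2, knots)
    return np.einsum("ij,ik->ijk", b1, b2).reshape(t1.size, -1)

rng = np.random.default_rng(1)
n, p = 400, 2
X = rng.standard_normal((n, p))             # covariates of the linear part
T = rng.uniform(size=(n, 2))                # nonparametric part over [0,1]^2
beta0 = np.array([1.5, -2.0])
g0 = np.sin(2 * np.pi * T[:, 0]) * T[:, 1]  # smooth unknown surface
y = X @ beta0 + g0 + 0.1 * rng.standard_normal(n)

# Joint least squares over [beta, spline coefficients].
B = tensor_basis(T[:, 0], T[:, 1], knots=[0.25, 0.5, 0.75])
design = np.hstack([X, B])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
beta_hat = coef[:p]
```

For median or quantile regression, the squared-error objective would be replaced by the corresponding convex ρ and solved iteratively rather than in closed form.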
Automatic Selection of Parameters in Spline Regression via Kullback-Leibler Information
1993
"... Based on KullbackLeibler information we propose a datadriven selector, called GAIC (c) , for choosing parameters of regression splines in nonparametric regression via a stepwise forward/backward knot placement and deletion strategy [1] . This criterion unifies the commonly used information cr ..."
Abstract

Cited by 2 (2 self)
Based on Kullback-Leibler information, we propose a data-driven selector, called GAIC(c), for choosing parameters of regression splines in nonparametric regression via a stepwise forward/backward knot placement and deletion strategy [1]. This criterion unifies the commonly used information criteria and includes the Akaike information criterion (AIC) [2] and the corrected Akaike information criterion (AICC) [3] as special cases. To show the performance of GAIC(c) for c = 1/2, 3/4, 7/8, and 15/16, we compare it with cross-validation (CV), generalized cross-validation (GCV), AIC, and AICC by an extensive simulation. Applications to the selection of the penalty parameters of smoothing splines are also discussed. Our simulation results indicate that the information criteria work well and are superior to cross-validation-based criteria in most of the cases considered, particularly in small-sample cases. Under certain mild conditions, GAIC(c) is shown to be asymptotically...
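The exact form of GAIC(c) is given in the paper and is not reproduced here; as a hedged sketch of the same workflow, the two special cases it contains, AIC and AICC, can drive the choice of knot count for a regression spline. The truncated power basis, the equally spaced knots, and the simulated data are illustrative assumptions:

```python
import numpy as np

def spline_design(x, knots, degree=3):
    """Truncated power basis; a stand-in for regression splines."""
    cols = [x ** k for k in range(degree + 1)]
    cols += [np.maximum(x - k0, 0.0) ** degree for k0 in knots]
    return np.column_stack(cols)

def aic(y, yhat, k):
    """AIC for Gaussian errors: n*log(RSS/n) + 2k, with k parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def aicc(y, yhat, k):
    """Corrected AIC: AIC plus a stronger small-sample penalty."""
    n = y.size
    return aic(y, yhat, k) + 2 * k * (k + 1) / (n - k - 1)

rng = np.random.default_rng(2)
n = 60
x = np.sort(rng.uniform(size=n))
y = np.sin(4 * np.pi * x) + 0.2 * rng.standard_normal(n)

# Pick the number of interior knots by minimizing the criterion.
scores = {}
for m in range(0, 12):
    knots = np.linspace(0, 1, m + 2)[1:-1]   # m equally spaced interior knots
    B = spline_design(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    scores[m] = aicc(y, B @ coef, k=B.shape[1])
best_m = min(scores, key=scores.get)
```

The stepwise forward/backward strategy in the paper refines this by adding and deleting individual knots rather than scanning a fixed grid of knot counts.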
M-Type Regression Splines Involving Time Series
"... Consider a strictly stationary time series Zb=f(X i ; Y i ) : i = 1; 2; \Delta \Delta \Deltag with X i being R d  valued and Y i realvalued. The nonparametric Mtype regression function g 0 (\Delta) is defined by E(\Psi(Y 1 \Gamma g 0 (X 1 )) j X 1 = x) = 0. Tensor products of Bsplines are adop ..."
Abstract
Consider a strictly stationary time series Z = {(X_i, Y_i) : i = 1, 2, ...} with X_i being R^d-valued and Y_i real-valued. The nonparametric M-type regression function g_0(·) is defined by E(Ψ(Y_1 − g_0(X_1)) | X_1 = x) = 0. Tensor products of B-splines are adopted to approximate g_0, and a class of M-type regression spline estimators of this function is obtained based on a segment, (X_1, Y_1), ..., (X_n, Y_n), of Z. Suppose that g_0(·) is smooth up to order r (> d/2). Under certain regularity conditions, the M-type regression spline estimators can achieve the optimal rate of convergence n^{−r/(2r+d)} in L_2 norms restricted to a compact domain when the spline knots are deterministically given. The M-estimators considered here include Huber's estimator, the L_1-norm estimator, the regression quantile estimator, and the L_p-norm estimator as special cases. Key words: Nonparametric regression, regression spline, optimal rate...
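A one-dimensional (d = 1) sketch of an M-type regression spline with Huber's estimator, fit by iteratively reweighted least squares. The truncated power basis (standing in for B-splines), the fixed tuning constant c = 1.345 applied to raw residuals (the paper's estimators would treat the residual scale more carefully), and the simulated contaminated data are all assumptions for illustration:

```python
import numpy as np

def spline_design(x, knots, degree=3):
    """Truncated power basis in one dimension; the paper uses
    tensor-product B-splines for general d."""
    cols = [x ** k for k in range(degree + 1)]
    cols += [np.maximum(x - k0, 0.0) ** degree for k0 in knots]
    return np.column_stack(cols)

def huber_weights(r, c=1.345):
    """IRLS weights for Huber's psi: 1 inside [-c, c], c/|r| outside."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def huber_spline_fit(x, y, knots, n_iter=50):
    """M-type regression spline via iteratively reweighted least squares,
    starting from the plain least-squares fit."""
    B = spline_design(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    for _ in range(n_iter):
        w = huber_weights(y - B @ coef)
        rw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(rw[:, None] * B, rw * y, rcond=None)
    return B @ coef

rng = np.random.default_rng(3)
n = 200
x = np.sort(rng.uniform(size=n))
g0 = np.cos(2 * np.pi * x)
y = g0 + 0.2 * rng.standard_normal(n)
y[::25] += 5.0                       # heavy-tailed contamination
fit = huber_spline_fit(x, y, knots=np.linspace(0.1, 0.9, 7))
```

Because the Huber weights cap the influence of the contaminated observations, the fitted curve stays close to g_0 where a plain least-squares spline would be pulled toward the outliers.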