Results 1–10 of 71
Ideal spatial adaptation by wavelet shrinkage
 Biometrika
, 1994
Abstract

Cited by 838 (4 self)
With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable-knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic advantages over traditional linear estimation by nonadaptive kernels; however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially adaptive estimation: selective wavelet reconstruction. We show that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients. RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as well as it is possible to do so. A new inequality in multivariate normal decision theory, which we call the oracle inequality, shows that attained performance differs from ideal performance by at most a factor of 2 log n, where n is the sample size. Moreover, no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor log^2 n of the performance of piecewise polynomial and variable-knot spline methods equipped with an oracle. In contrast, it is unknown how or if piecewise polynomial methods could be made to function this well when denied access to an oracle and forced to rely on data alone.
Wavelet shrinkage: asymptopia
 Journal of the Royal Statistical Society, Ser. B
, 1995
Abstract

Cited by 239 (35 self)
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes, similarity to known methods, sometimes, computational intractability, and sometimes, lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g., pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
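The translate-towards-the-origin rule described above is soft thresholding of the wavelet coefficients. A minimal numpy sketch (the coefficient values below are illustrative; in practice the rule is applied to an empirical wavelet transform of the data):

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Shrink each coefficient towards the origin by lam, clipping at zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

n = 1024
lam = np.sqrt(2 * np.log(n)) / np.sqrt(n)   # universal threshold at noise level 1/sqrt(n)
coeffs = np.array([0.5, -0.01, 0.2, 0.001]) # illustrative empirical wavelet coefficients
shrunk = soft_threshold(coeffs, lam)        # small coefficients are set exactly to zero
```

Coefficients below the threshold are zeroed, which is what makes the reconstruction spatially adaptive: only coefficients carrying real signal survive.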
Flexible smoothing with B-splines and penalties
 Statistical Science
, 1996
Abstract

Cited by 178 (3 self)
B-splines are attractive for nonparametric modelling, but choosing the optimal number and positions of knots is a complex task. Equidistant knots can be used, but their small and discrete number allows only limited control over smoothness and fit. We propose to use a relatively large number of knots and a difference penalty on the coefficients of adjacent B-splines. We show connections to the familiar spline penalty on the integral of the squared second derivative. A short overview of B-splines, their construction, and penalized likelihood is presented. We discuss properties of penalized B-splines and propose various criteria for the choice of an optimal penalty parameter. Nonparametric logistic regression, density estimation and scatterplot smoothing are used as examples. Some details of the computations are presented.
Keywords: Generalized linear models, smoothing, nonparametric models, splines, density estimation.
Address for correspondence: DCMR Milieudienst Rijnmond, 'sGravelandse...
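The construction described, many equally spaced knots plus a difference penalty on adjacent coefficients, can be sketched in numpy. This is a minimal illustration, not the authors' implementation: the basis is built with the Cox-de Boor recursion, and the segment count, degree, and penalty weight are illustrative choices:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions of the given degree at points x,
    via the Cox-de Boor recursion over an extended (clamped) knot vector."""
    # degree-0 indicator functions; the last non-empty interval is closed on the right
    B = np.zeros((len(x), len(knots) - 1))
    for j in range(len(knots) - 1):
        if knots[j] < knots[j + 1]:
            B[:, j] = (x >= knots[j]) & (x < knots[j + 1])
    j_last = max(j for j in range(len(knots) - 1) if knots[j] < knots[j + 1])
    B[x == knots[-1], j_last] = 1.0
    for d in range(1, degree + 1):
        B_next = np.zeros((len(x), len(knots) - d - 1))
        for j in range(len(knots) - d - 1):
            left = knots[j + d] - knots[j]
            right = knots[j + d + 1] - knots[j + 1]
            if left > 0:
                B_next[:, j] += (x - knots[j]) / left * B[:, j]
            if right > 0:
                B_next[:, j] += (knots[j + d + 1] - x) / right * B[:, j + 1]
        B = B_next
    return B

def pspline_fit(x, y, n_segments=10, degree=3, lam=1e-6, order=2):
    """Penalized least squares: solve (B'B + lam D'D) beta = B'y,
    where D is the order-th difference matrix on the coefficients."""
    a, b = x.min(), x.max()
    inner = np.linspace(a, b, n_segments + 1)
    knots = np.concatenate([[a] * degree, inner, [b] * degree])
    B = bspline_basis(x, knots, degree)
    K = B.shape[1]
    D = np.diff(np.eye(K), n=order, axis=0)  # difference penalty matrix
    beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ beta
```

Increasing `lam` pulls adjacent coefficients together and so smooths the fit; as the paper notes, for second-order differences this mimics the familiar integrated-squared-second-derivative penalty.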
Polynomial Splines and Their Tensor Products in Extended Linear Modeling
 Ann. Statist
, 1997
Abstract

Cited by 142 (14 self)
ANOVA-type models are considered for a regression function or for the logarithm of a probability function, conditional probability function, density function, conditional density function, hazard function, conditional hazard function, or spectral density function. Polynomial splines are used to model the main effects, and their tensor products are used to model any interaction components that are included. In the special context of survival analysis, the baseline hazard function is modeled and nonproportionality is allowed. In general, the theory involves the L^2 rate of convergence for the fitted model and its components. The methodology involves least squares and maximum likelihood estimation, stepwise addition of basis functions using Rao statistics, stepwise deletion using Wald statistics, and model selection using BIC, cross-validation or an independent test set. Publicly available software, written in C and interfaced to S/S-PLUS, is used to apply this methodology to...
Nonparametric regression using Bayesian variable selection
 Journal of Econometrics
, 1996
Abstract

Cited by 136 (10 self)
This paper estimates an additive model semiparametrically, while automatically selecting the significant independent variables and the appropriate power transformation of the dependent variable. The nonlinear variables are modeled as regression splines, with significant knots selected from a large number of candidate knots. The estimation is made robust by modeling the errors as a mixture of normals. A Bayesian approach is used to select the significant knots and the power transformation, and to identify outliers, using the Gibbs sampler to carry out the computation. Empirical evidence is given that the sampler works well on both simulated and real examples and that in the univariate case it compares favorably with a kernel-weighted local linear smoother. The variable selection algorithm in the paper is substantially faster than previous Bayesian variable selection algorithms.
Key words: Additive model, Power transformation, Robust estimation
Basis Pursuit
, 1994
Abstract

Cited by 119 (15 self)
The Time-Frequency and Time-Scale communities have recently developed an enormous number of overcomplete signal dictionaries: wavelets, wavelet packets, cosine packets, Wilson bases, chirplets, warped bases, and hyperbolic cross bases being a few examples. Basis Pursuit is a technique for decomposing a signal into an "optimal" superposition of dictionary elements. The optimization criterion is the l^1 norm of the coefficients. The method has several advantages over Matching Pursuit and Best Ortho Basis, including superresolution and stability.
1 Introduction
Over the last five years or so, there has been an explosion of awareness of alternatives to traditional signal representations. Instead of just representing objects as superpositions of sinusoids (the traditional Fourier representation) we now have available alternate dictionaries (signal representation schemes) of which the Wavelets dictionary is only the most well-known. Wavelet dictionaries, Gabor dictionaries, Multiscale...
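The decomposition Basis Pursuit computes is the minimum-l^1-norm solution of an underdetermined linear system, which becomes a linear program after splitting the coefficients into positive and negative parts. A minimal sketch using scipy's LP solver, with a small random matrix standing in for a real overcomplete dictionary:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, s):
    """min ||c||_1  s.t.  Phi @ c = s, as the LP obtained by splitting
    c = u - v with u, v >= 0:  min 1'(u + v)  s.t.  [Phi, -Phi] @ [u; v] = s."""
    n, p = Phi.shape
    res = linprog(c=np.ones(2 * p),
                  A_eq=np.hstack([Phi, -Phi]),
                  b_eq=s,
                  bounds=[(0, None)] * (2 * p),
                  method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

rng = np.random.default_rng(0)
Phi = rng.standard_normal((5, 10))   # illustrative overcomplete "dictionary"
c0 = np.zeros(10)
c0[[2, 7]] = [1.0, -2.0]             # a sparse set of synthesis coefficients
s = Phi @ c0                         # signal synthesized from the dictionary
c_hat = basis_pursuit(Phi, s)        # l1-minimal decomposition of s
```

The l^1 objective is what distinguishes BP from greedy schemes such as Matching Pursuit: the whole coefficient vector is optimized jointly rather than selected one atom at a time.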
Linear smoothers and additive models
 The Annals of Statistics
, 1989
Abstract

Cited by 70 (2 self)
We study linear smoothers and their use in building nonparametric regression models. In the first part of this paper we examine certain aspects of linear smoothers for scatterplots; examples of these are the running mean and running line, kernel, and cubic spline smoothers. The eigenvalue and singular value decompositions of the corresponding smoother matrix are used to qualitatively describe a smoother, and several other topics such as the number of degrees of freedom of a smoother are discussed. In the second part of the paper we describe how linear smoothers can be used to estimate the additive model, a powerful nonparametric regression model, using the "backfitting algorithm". We study the convergence of the backfitting algorithm and prove its convergence for a class of smoothers that includes cubic spline smoothers fit by penalized least squares.
Key words: Nonparametric, semiparametric, regression, Gauss-Seidel algorithm
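The backfitting iteration described, cycling over the additive terms and smoothing the partial residuals, can be sketched with a simple running-mean smoother. The smoother, window size, and test functions here are illustrative, not the paper's choices:

```python
import numpy as np

def running_mean_smoother(x, r, window=11):
    """Smooth residuals r against x with a centered moving average
    over the nearest neighbours in x-order."""
    order = np.argsort(x)
    r_sorted = r[order]
    half = window // 2
    smoothed = np.array([
        r_sorted[max(0, i - half): i + half + 1].mean()
        for i in range(len(r_sorted))
    ])
    out = np.empty_like(smoothed)
    out[order] = smoothed
    return out

def backfit(y, x1, x2, n_iter=20):
    """Estimate y ~ alpha + f1(x1) + f2(x2) by cycling over the terms and
    smoothing the partial residuals (a Gauss-Seidel-type iteration)."""
    alpha = y.mean()
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    for _ in range(n_iter):
        f1 = running_mean_smoother(x1, y - alpha - f2)
        f1 -= f1.mean()                 # keep each term centered
        f2 = running_mean_smoother(x2, y - alpha - f1)
        f2 -= f2.mean()
    return alpha, f1, f2

rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 300)
x2 = rng.uniform(-1, 1, 300)
y = np.sin(np.pi * x1) + x2 ** 2        # noise-free additive test function
alpha, f1, f2 = backfit(y, x1, x2)
fit = alpha + f1 + f2
```

Each pass holds the other component fixed and re-smooths, which is why the paper analyzes backfitting as a Gauss-Seidel iteration.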
Bayesian P-Splines
 Journal of Computational and Graphical Statistics
, 2004
Abstract

Cited by 67 (21 self)
P-splines are an attractive approach for modelling nonlinear smooth effects of covariates within the generalized additive and varying coefficient models framework. In this paper we propose a Bayesian version of P-splines and generalize the approach for one-dimensional curves to two-dimensional surface fitting for modelling interactions between metrical covariates. A Bayesian approach to P-splines has the advantage of allowing for simultaneous estimation of smooth functions and smoothing parameters. Moreover, it can easily be extended to more complex formulations, for example to mixed models with random effects for serially or spatially correlated responses. Additionally, the assumption of constant smoothing parameters can be replaced by allowing the smoothing parameters to be locally adaptive. This is particularly useful in situations with changing curvature of the underlying smooth function or where the function is highly oscillating. Inference is fully Bayesian and uses recent MCMC techniques for drawing random samples from the posterior. In a couple of simulation studies the performance of Bayesian P-splines is studied and compared to other approaches in the literature. We illustrate the approach by a complex application on rents for flats in Munich.
Hybrid Adaptive Splines
 Journal of the American Statistical Association
, 1995
Abstract

Cited by 61 (6 self)
An adaptive spline method for smoothing is proposed which combines features from both regression spline and smoothing spline approaches. One of its advantages is the ability to vary the amount of smoothing in response to the inhomogeneous "curvature" of true functions at different locations. This method can be applied to many multivariate function estimation problems, which is illustrated in this paper by an application to smoothing temperature data on the globe. The performance of this method in a simulation study is found to be comparable to the wavelet shrinkage methods proposed by Donoho and Johnstone. The problem of how to count the degrees of freedom for an adaptively chosen set of basis functions is addressed. This issue arises also in the MARS procedure proposed by Friedman and other adaptive regression spline procedures.
Key words and phrases: Smoothing, spatial adaptability, splines, stepwise regression, the inflated degrees of freedom for an adaptively chosen basis functi...
Logspline Density Estimation for Censored Data
 Journal of Computational and Graphical Statistics
, 1992
Abstract

Cited by 47 (16 self)
Logspline density estimation is developed for data that may be right censored, left censored or interval censored. A fully automatic method, which involves the maximum likelihood method and may involve stepwise knot deletion and either AIC or BIC, is used to determine the estimate; in solving the maximum likelihood equations, the Newton-Raphson method is augmented by occasional searches in the direction of steepest ascent. Also, a user interface based on S is described for obtaining estimates of the density function, distribution function and quantile function and for generating a random sample from the fitted distribution.
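The safeguard described, Newton-Raphson augmented by occasional steepest-ascent searches, can be sketched generically for maximizing a smooth objective. This is an illustrative sketch of the idea, not the paper's implementation; the step-halving rule and the 1-D objective are assumptions:

```python
import numpy as np

def maximize(f, grad, hess, x0, n_iter=50, tol=1e-10):
    """Newton-Raphson ascent that falls back to a damped steepest-ascent
    step whenever the Newton step fails to increase the objective."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        try:
            step = np.linalg.solve(hess(x), -g)   # Newton direction
        except np.linalg.LinAlgError:
            step = g                              # singular Hessian: use the gradient
        if f(x + step) <= f(x):
            step = g                              # fallback: steepest ascent
            while f(x + step) <= f(x) and np.linalg.norm(step) > tol:
                step = 0.5 * step                 # halve until the step is an ascent
        x = x + step
    return x

# illustrative smooth concave objective standing in for a log-likelihood
f = lambda x: -(x[0] - 2.0) ** 4
grad = lambda x: np.array([-4.0 * (x[0] - 2.0) ** 3])
hess = lambda x: np.array([[-12.0 * (x[0] - 2.0) ** 2]])
x_star = maximize(f, grad, hess, [0.0])
```

The gradient fallback keeps each iteration an ascent step even where the Hessian is nearly singular, which is the role the occasional steepest-ascent searches play in the abstract.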