Results 1–10 of 12
De-Noising by Soft-Thresholding
, 1992
Abstract

Cited by 798 (13 self)
Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data d_i = f(t_i) + z_i, i = 0, …, n − 1, t_i = i/n, z_i iid N(0, 1). The reconstruction f̂_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d towards 0 by an amount √(2 log n)/√n. We prove two results about that estimator. [Smooth]: With high probability f̂_n is at least as smooth as f, in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.
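The shrinkage rule in that abstract is essentially one line of code. A minimal Python sketch (function names and the unit-noise normalization are mine; the estimator applies this rule to empirical wavelet coefficients, not raw samples):

```python
import math

def soft_threshold(coeffs, t):
    """Translate each coefficient towards 0 by t; values within [-t, t] become 0."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

# The threshold from the abstract: sqrt(2 log n) / sqrt(n).
n = 1024
t = math.sqrt(2 * math.log(n)) / math.sqrt(n)
```

Coefficients smaller than t in magnitude, which are dominated by noise, vanish; large coefficients survive essentially intact, which is what drives the [Smooth] and [Adapt] properties.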
Adapting to unknown smoothness via wavelet shrinkage
 JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
, 1995
Abstract

Cited by 675 (19 self)
We attempt to recover a function of unknown smoothness from noisy, sampled data. We introduce a procedure, SureShrink, which suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: a threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein Unbiased Estimate of Risk (Sure) for threshold estimates. The computational effort of the overall procedure is of order N log(N) as a function of the sample size N. SureShrink is smoothness-adaptive: if the unknown function contains jumps, the reconstruction (essentially) does also; if the unknown function has a smooth piece, the reconstruction is (essentially) as smooth as the mother wavelet will allow. The procedure is in a sense optimally smoothness-adaptive: it is near-minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know from a previous paper by the authors that traditional smoothing methods (kernels, splines, and orthogonal series estimates), even with optimal choices of the smoothing parameter, would be unable to perform ...
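For soft thresholding with unit noise variance, Stein's unbiased risk estimate at threshold t for coefficients x_1, …, x_d is SURE(t) = d − 2·#{i : |x_i| ≤ t} + Σ_i min(x_i², t²), and SureShrink picks the minimizing t separately at each resolution level. A hedged sketch of that minimization (it omits the sparse-case fallback to the universal threshold that the actual procedure includes):

```python
def sure(x, t):
    """Stein unbiased risk estimate of soft thresholding at t, noise variance 1."""
    d = len(x)
    return d - 2 * sum(1 for v in x if abs(v) <= t) + sum(min(v * v, t * t) for v in x)

def sure_threshold(x):
    """Minimize SURE over the candidate thresholds 0 and |x_i|."""
    return min([0.0] + [abs(v) for v in x], key=lambda t: sure(x, t))
```

Between consecutive data magnitudes SURE is increasing in t, so scanning the finite candidate set {0, |x_1|, …, |x_d|} finds an exact minimizer.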
Wavelet shrinkage: asymptopia
 Journal of the Royal Statistical Society, Ser. B
, 1995
Abstract

Cited by 239 (35 self)
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes similarity to known methods, sometimes computational intractability, and sometimes lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
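The whole pipeline (transform, shrink towards the origin, invert) fits in a few lines. A self-contained sketch using the orthonormal Haar wavelet (the paper works with smoother wavelet families; Haar merely keeps the transform short):

```python
import math

R2 = math.sqrt(2.0)

def haar_fwd(x):
    """Orthonormal Haar transform of a signal of length 2^k.
    Returns [scaling coefficient, details coarsest -> finest]."""
    x, details = list(x), []
    while len(x) > 1:
        pairs = [(x[2 * i], x[2 * i + 1]) for i in range(len(x) // 2)]
        details = [(a - b) / R2 for a, b in pairs] + details
        x = [(a + b) / R2 for a, b in pairs]
    return x + details

def haar_inv(c):
    """Inverse of haar_fwd."""
    a, rest = c[:1], c[1:]
    while rest:
        d, rest = rest[:len(a)], rest[len(a):]
        a = [v for ai, di in zip(a, d) for v in ((ai + di) / R2, (ai - di) / R2)]
    return a

def denoise(x, t):
    """Soft-threshold every detail coefficient by t; keep the scaling coefficient."""
    c = haar_fwd(x)
    shrunk = [math.copysign(max(abs(v) - t, 0.0), v) for v in c[1:]]
    return haar_inv(c[:1] + shrunk)
```

With a huge threshold every detail is killed and the reconstruction collapses to the sample mean; with the √(2 log n)/√n threshold only noise-sized details are removed, which is the source of the spatial adaptivity described above.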
Oracle Inequalities for Inverse Problems
, 2000
Abstract

Cited by 43 (6 self)
We consider a sequence space model of statistical linear inverse problems where we need to estimate a function f from indirect noisy observations. Let a finite set of linear estimators be given. Our aim is to mimic the estimator in this set that has the smallest risk on the true f. Under general conditions, we show that this can be achieved by simple minimization of an unbiased risk estimator, provided the singular values of the operator of the inverse problem decrease as a power law. The main result is a non-asymptotic oracle inequality that is shown to be asymptotically exact. This inequality can also be used to obtain sharp minimax adaptive results. In particular, we apply it to show that minimax adaptation on ellipsoids in the multivariate anisotropic case is realized by minimization of an unbiased risk estimator without any loss of efficiency with respect to optimal nonadaptive procedures.
Mathematics Subject Classifications: 62G05, 62G20. Key Words: Statistical inverse problems, Oracle inequaliti...
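The idea can be made concrete in the direct Gaussian sequence model y_k = θ_k + ε ξ_k (the paper's model also carries the operator's singular values, dropped here for brevity). The risk of the linear estimate θ̂_k = w_k y_k is Σ_k [(1 − w_k)² θ_k² + ε² w_k²], and substituting y_k² − ε² for the unknown θ_k² gives the unbiased risk estimator to minimize over the finite family. A sketch with projection (spectral cutoff) weights:

```python
def unbiased_risk(y, w, eps):
    """Unbiased estimate of the risk of the linear estimator (w_k * y_k):
    substitute y_k^2 - eps^2 for the unknown theta_k^2."""
    return sum((1 - wk) ** 2 * (yk * yk - eps * eps) + eps * eps * wk * wk
               for yk, wk in zip(y, w))

def best_cutoff(y, eps):
    """Pick the projection estimator (keep the first m coordinates) whose
    unbiased risk estimate is smallest."""
    d = len(y)
    risks = []
    for m in range(d + 1):
        w = [1.0] * m + [0.0] * (d - m)
        risks.append((unbiased_risk(y, w, eps), m))
    return min(risks)[1]
```

The oracle inequality of the paper quantifies how close the risk of the selected estimator comes to the best risk in the family.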
Sharp Adaptation for Inverse Problems With Random Noise
, 2000
Abstract

Cited by 41 (6 self)
We consider a heteroscedastic sequence space setup with polynomially increasing variances of observations that makes it possible to treat a number of inverse problems, in particular multivariate ones. We propose an adaptive estimator that attains simultaneously exact asymptotic minimax constants on every ellipsoid of functions within a wide scale (one that includes ellipsoids with polynomially and exponentially decreasing axes) and, at the same time, satisfies asymptotically exact oracle inequalities within any class of linear estimates having monotone nondecreasing weights. As an application, we construct sharp adaptive estimators in the problems of deconvolution and tomography.
REACT Scatterplot Smoothers: Superefficiency through Basis Economy
 J. AMER. STATIST. ASSOC
, 1999
SHARP ADAPTIVE ESTIMATION OF THE DRIFT FUNCTION FOR ERGODIC DIFFUSIONS
, 2006
Abstract

Cited by 7 (0 self)
The global estimation problem of the drift function is considered for a large class of ergodic diffusion processes. The unknown drift S(·) is supposed to belong to a nonparametric class of smooth functions of order k ≥ 1, but the value of k is not known to the statistician. A fully data-driven procedure for estimating the drift function is proposed, using the estimated risk minimization method. The sharp adaptivity of this procedure is proven up to an optimal constant, when the quality of the estimation is measured by the integrated squared error weighted by the square of the invariant density.
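The setting can be made concrete with a toy Ornstein-Uhlenbeck example. The sketch below simulates dX_t = −X_t dt + dW_t by Euler-Maruyama and estimates the drift at a point with a plain Nadaraya-Watson smoother applied to the normalized increments; this is a generic kernel sketch, not the paper's estimated-risk-minimization procedure, and all tuning constants are arbitrary:

```python
import math
import random

random.seed(42)

def simulate_ou(theta=1.0, sigma=1.0, x0=0.0, dt=0.01, n=20000):
    """Euler-Maruyama path of the ergodic diffusion dX = -theta*X dt + sigma dW."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
    return xs

def nw_drift(xs, dt, x, h=0.3):
    """Nadaraya-Watson estimate of the drift S(x): kernel-weighted average of
    the normalized increments (X_{i+1} - X_i) / dt near x."""
    num = den = 0.0
    for i in range(len(xs) - 1):
        w = math.exp(-0.5 * ((xs[i] - x) / h) ** 2)
        num += w * (xs[i + 1] - xs[i]) / dt
        den += w
    return num / den

path = simulate_ou()
est = nw_drift(path, 0.01, 1.0)  # true drift S(1.0) = -1.0 for this process
```

The adaptive procedure of the paper would select the smoothing data-drivenly rather than fix h, and would assess quality under the invariant-density-weighted squared error described above.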
Adaptive Bayesian estimation using a Gaussian random field with inverse Gamma bandwidth
 The Annals of Statistics 37
, 2009
Abstract

Cited by 7 (1 self)
We consider nonparametric Bayesian inference using a rescaled smooth Gaussian field as a prior for a multidimensional function. The rescaling is achieved using a Gamma variable and the procedure can be viewed as choosing an inverse Gamma bandwidth. The procedure is studied from a frequentist perspective in three statistical settings involving replicated observations (density estimation, regression and classification). We prove that the resulting posterior distribution shrinks to the distribution that generates the data at a speed which is minimax-optimal up to a logarithmic factor, whatever the regularity level of the data-generating distribution. Thus the hierarchical Bayesian procedure, with a fixed prior, is shown to be fully adaptive.
Asymptotically Minimax Nonparametric Regression in L2
 Statistics
, 1996
Abstract

Cited by 6 (0 self)
In the nonparametric regression context, the notion of asymptotic optimality is usually associated with the "optimal rate of convergence". Minimax rates of convergence have been extensively studied (Ibragimov and Hasminskii (1980), (1982); Stone (1980), (1982) and many others). Different estimators turn out to be optimal in the sense of the best rate of convergence. We mention only some of them: kernel estimators (Ibragimov and Hasminskii (1980), Korostelev (1993)), projection estimators (Ibragimov and Hasminskii (1981)), spline estimators (Speckman (1985), Nussbaum (1985)), and wavelets (Donoho and Johnstone (1992)). From the practical point of view, stochastic approximation estimators considered in Belitser and Korostelev (1992) are also of interest. However, comparing estimators on the basis of their rates of convergence does not make it possible to distinguish among estimators that are optimal in that sense. Also, from a more practical point of view, such an approach does not give ...
ADAPTIVE NONPARAMETRIC CONFIDENCE SETS
, 2006
Abstract

Cited by 4 (0 self)
We construct honest confidence regions for a Hilbert space-valued parameter in various statistical models. The confidence sets can be centered at arbitrary adaptive estimators, and have a diameter that adapts optimally to a given selection of models. The latter adaptation is necessarily limited in scope. We review the notion of adaptive confidence regions, and relate the optimal rates of the diameter of adaptive confidence regions to the minimax rates for testing and estimation. Applications include the finite normal mean model, the white noise model, density estimation, and regression with random design.