Results 1–10 of 39
Wavelet shrinkage: asymptopia?
 Journal of the Royal Statistical Society, Ser. B
, 1995
Abstract

Cited by 238 (35 self)
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes similarity to known methods, sometimes computational intractability, and sometimes lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount sqrt(2 log(n))/sqrt(n). The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
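The shrinkage rule described in this abstract can be sketched in a few lines. This is a minimal illustration only: it assumes a Haar wavelet transform, a signal length that is a power of two, and noise of scale sigma per sample (the abstract's formulation has noise scale 1/sqrt(n), giving the threshold sqrt(2 log n)/sqrt(n)); the function names are ours, and the actual method is not tied to the Haar basis.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar wavelet transform (len(x) must be a power of two)."""
    coeffs, approx = [], np.asarray(x, dtype=float)
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail coefficients
        approx = (even + odd) / np.sqrt(2)        # running approximation
    return approx, coeffs

def haar_idwt(approx, coeffs):
    """Invert haar_dwt, coarsest detail level first."""
    for d in reversed(coeffs):
        even = (approx + d) / np.sqrt(2)
        odd = (approx - d) / np.sqrt(2)
        approx = np.empty(2 * len(d))
        approx[0::2], approx[1::2] = even, odd
    return approx

def visu_shrink(y, sigma):
    """Soft-threshold the empirical wavelet coefficients of y, pulling
    each toward the origin by t = sigma * sqrt(2 log n), then invert."""
    t = sigma * np.sqrt(2 * np.log(len(y)))
    approx, coeffs = haar_dwt(y)
    shrunk = [np.sign(d) * np.maximum(np.abs(d) - t, 0.0) for d in coeffs]
    return haar_idwt(approx, shrunk)
```

With sigma = 0 the threshold vanishes and the round trip reproduces the input exactly, which is a convenient sanity check on the transform pair.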
Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition
, 1992
Abstract

Cited by 182 (12 self)
We describe the Wavelet-Vaguelette Decomposition (WVD) of a linear inverse problem. It is a substitute for the singular value decomposition (SVD) of an inverse problem, and it exists for a class of special inverse problems of homogeneous type, such as numerical differentiation, inversion of Abel-type transforms, certain convolution transforms, and the Radon Transform. We propose to solve ill-posed linear inverse problems by nonlinearly "shrinking" the WVD coefficients of the noisy, indirect data. Our approach offers significant advantages over traditional SVD inversion in the case of recovering spatially inhomogeneous objects. We suppose that observations are contaminated by white noise and that the object is an unknown element of a Besov space. We prove that nonlinear WVD shrinkage can be tuned to attain the minimax rate of convergence, for L^2 loss, over the entire Besov scale. The important case of Besov spaces B_{p,q}, p < 2, which model spatial inhomogeneity, is included. In comparison, linear procedures (SVD included) cannot attain optimal rates of convergence over such classes in the case p < 2. For example, our methods achieve faster rates of convergence, for objects known to lie in the Bump Algebra or in Bounded Variation, than any linear procedure.
Unconditional bases are optimal bases for data compression and for statistical estimation
 Applied and Computational Harmonic Analysis
, 1993
Abstract

Cited by 140 (23 self)
An orthogonal basis of L^2 which is also an unconditional basis of a functional space F is a kind of optimal basis for compressing, estimating, and recovering functions in F. Simple thresholding operations, applied in the unconditional basis, work essentially better for compressing, estimating, and recovering than they do in any other orthogonal basis. In fact, simple thresholding in an unconditional basis works essentially better for recovery and estimation than other methods, period. (Performance is measured in an asymptotic minimax sense.) As an application, we formalize and prove Mallat's Heuristic, which says that wavelet bases are optimal for representing functions containing singularities, when there may be an arbitrary number of singularities, arbitrarily distributed.
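The "simple thresholding operations" invoked above amount to keeping only the large coefficients of the expansion in the basis. A minimal sketch of that compression step (the function name and the tie-handling at the cutoff are ours, not from the paper):

```python
import numpy as np

def keep_largest(coeffs, k):
    """Compress a coefficient sequence by zeroing everything except the
    k largest-magnitude entries (ties at the cutoff may keep extras).
    In an unconditional basis, per the abstract, this kind of
    thresholding is essentially optimal for compression/estimation."""
    coeffs = np.asarray(coeffs, dtype=float)
    if k >= coeffs.size:
        return coeffs.copy()
    cutoff = np.sort(np.abs(coeffs))[-k]  # k-th largest magnitude
    return np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)
```

The point of unconditionality is exactly that zeroing coefficients this way cannot blow up the represented function, so the rule is safe to apply coefficient by coefficient.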
Recovering Edges in Ill-Posed Inverse Problems: Optimality of Curvelet Frames
, 2000
Abstract

Cited by 50 (14 self)
We consider a model problem of recovering a function f(x1, x2) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C^2 curve, i.e. an edge. We use the continuum white noise model, with noise level ε. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model Mean Squared Errors (MSEs) that tend to zero with noise level ε only as O(ε^{1/2}) as ε → 0. A recent innovation, nonlinear shrinkage in the wavelet domain, visually improves edge sharpness and improves MSE convergence to O(ε^{2/3}). However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by deploying the recently introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition ...
Wavelet shrinkage for nonequispaced samples
 The Annals of Statistics 26
, 1998
Abstract

Cited by 39 (3 self)
Standard wavelet shrinkage procedures for nonparametric regression are restricted to equispaced samples. There, data are transformed into empirical wavelet coefficients and threshold rules are applied to the coefficients. The estimators are obtained via the inverse transform of the denoised wavelet coefficients. In many applications, however, the samples are nonequispaced. It can be shown that these procedures would produce suboptimal estimators if they were applied directly to nonequispaced samples. We propose a wavelet shrinkage procedure for nonequispaced samples. We show that the estimate is adaptive and near optimal. For global estimation, the estimate is within a logarithmic factor of the minimax risk over a wide range of piecewise Hölder classes, indeed with a number of discontinuities that grows polynomially fast with the sample size. For estimating a target function at a point, the estimate is optimally adaptive to unknown degree of smoothness within a constant. In addition, the estimate enjoys a smoothness property: if the target function is the zero function, then with probability tending to 1 the estimate is also the zero function. ...
Adaptive estimation of linear functionals in Hilbert scales from indirect white noise observations
 Fields
, 1999
Abstract

Cited by 18 (3 self)
We consider adaptive estimation of the value of a linear functional from indirect white noise observations. For a flexible approach, the problem is embedded in an abstract Hilbert scale. We develop an adaptive estimator that is rate optimal within a logarithmic factor simultaneously over a wide collection of balls in the Hilbert scale. It is shown that the proposed estimator has the best possible adaptive properties for a wide range of linear functionals. The case of discretized indirect white noise observations is studied, and the adaptive estimator in this setting is developed. Keywords: adaptive estimation, discretization, Hilbert scales, inverse problems, linear functionals, regularization, minimax risk. Running title: Adaptive inverse estimation of linear functionals. Department of Statistics, University of Haifa, Mount Carmel, Haifa 31905, Israel; email: goldensh@rstat.haifa.ac.il. Ukrainian Academy of Sciences, Institute of Mathematics, Tereshenkivska str. 3, 252601 Kiev-4, Uk...
Neoclassical minimax problems, thresholding and adaptive function estimation
 Bernoulli
, 1996
Abstract

Cited by 17 (1 self)
We study the problem of estimating θ from data Y ~ N(θ, σ²) under squared-error loss. We define three new scalar minimax problems in which the risk is weighted by the size of θ. Simple thresholding gives asymptotically minimax estimates in all three problems. We indicate the relationships of the new problems to each other and to two other neoclassical problems: the problems of the bounded normal mean and of the risk-constrained normal mean. Via the wavelet transform, these results have implications for adaptive function estimation: (1) estimating functions of unknown type and degree of smoothness in a global ℓ² norm; (2) estimating a function of unknown degree of local Hölder smoothness at a fixed point. In setting (2), the scalar minimax results imply: (a) that it is not possible to fully adapt to unknown degree of smoothness (adaptation imposes a performance cost); and (b) that simple thresholding of the empirical wavelet transform gives an estimate of a function at a fixed point which is, to within constants, optimally adaptive to unknown degree of smoothness.
Minimax Risk Bounds in Extreme Value Theory
 Statist
, 2001
Abstract

Cited by 7 (0 self)
Introduction. Consider i.i.d. random variables X_i, i ∈ N, whose distribution function (d.f.) F belongs to the weak domain of attraction of an extreme value d.f. G_γ, i.e., L(a_n^{-1}(max_{1≤i≤n} X_i − b_n)) → G_γ weakly for some constants a_n > 0 and b_n ∈ R. Here G_γ(x) = exp(−(1 + γx)^{−1/γ}) for 1 + γx > 0, which is interpreted as G_0(x) = exp(−e^{−x}) if γ = 0. The shape of the upper tail of F ...
Minimax Expected Measure Confidence Sets for Restricted Location Parameters
, 2003
Abstract

Cited by 7 (6 self)
This paper studies how to construct confidence sets that are as small as they can be, in the sense of minimizing worst-case expected measure, while attaining at least their nominal confidence level. The structure required to study expected measure is both more and less restrictive than that used traditionally to study accuracy: The set of possible parameter values must be a measurable space, and the confidence sets must be measurable subsets of the set of parameters, but confidence sets with minimax expected measure can exist even when there is no uniformly most accurate confidence set. ...
Adaptive Non-Parametric Estimation of Smooth Multivariate Functions
, 1999
Abstract

Cited by 7 (2 self)
Adaptive pointwise estimation of smooth functions f(x) in R^d is studied in the white Gaussian noise model of a given intensity ε → 0. It is assumed that the Fourier transform of f belongs to a large class of rapidly vanishing functions but is otherwise unknown. Optimal adaptation in higher dimensions presents several challenges. First, the number of essentially different estimates having a given variance increases polynomially. Second, the set of possible estimators, totally ordered when d = 1, becomes only partially ordered when d > 1. We demonstrate how these challenges can be met. The first one is to be matched by a meticulous choice of the estimators' net. The key to solving the second problem lies in a new method of spectral majorants introduced in this paper. Extending our earlier approach used in [12], we restrict ourselves to a family of estimators, rate-efficient in an offbeat case of partially parametric functional classes. A proposed adaptive procedure is shown to be asymptotically minimax, simultaneously for any ample regular nonparametric family of underlying functions f.