Results 1–10 of 24
Unconditional bases are optimal bases for data compression and for statistical estimation
 Applied and Computational Harmonic Analysis
, 1993
"... An orthogonal basis of L 2 which is also an unconditional basis of a functional space F is a kind of optimal basis for compressing, estimating, and recovering functions in F. Simple thresholding operations, applied in the unconditional basis, work essentially better for compressing, estimating, and ..."
Abstract

Cited by 140 (23 self)
An orthogonal basis of L² which is also an unconditional basis of a functional space F is a kind of optimal basis for compressing, estimating, and recovering functions in F. Simple thresholding operations, applied in the unconditional basis, work essentially better for compressing, estimating, and recovering than they do in any other orthogonal basis. In fact, simple thresholding in an unconditional basis works essentially better for recovery and estimation than other methods, period. (Performance is measured in an asymptotic minimax sense.) As an application, we formalize and prove Mallat's Heuristic, which says that wavelet bases are optimal for representing functions containing singularities, when there may be an arbitrary number of singularities, arbitrarily distributed.
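The "simple thresholding" operation the abstract refers to can be sketched in a few lines. This is our illustrative reconstruction, not code from the paper; the function name and example coefficients are made up:

```python
def hard_threshold(coeffs, t):
    """Keep coefficients whose magnitude exceeds t; zero out the rest.

    Illustrative sketch of 'simple thresholding' applied to the
    coefficients of a function expanded in an orthogonal basis.
    """
    return [c if abs(c) > t else 0.0 for c in coeffs]

# Compression: only the few large coefficients survive.
coeffs = [2.5, -0.1, 0.05, -1.8, 0.02, 0.7]
kept = hard_threshold(coeffs, 0.5)  # [2.5, 0.0, 0.0, -1.8, 0.0, 0.7]
```

The point of the paper is that when the basis is unconditional for F, this keep-or-kill rule is already near-optimal; no cleverer coefficient-wise scheme helps by more than constants.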
Recovering Edges in Ill-Posed Inverse Problems: Optimality of Curvelet Frames
, 2000
"... We consider a model problem of recovering a function f(x1,x2) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C2 curve – i.e. an edge. We use the continuum white noise model, with noise level ɛ. Traditional linear methods for solving such in ..."
Abstract

Cited by 50 (14 self)
We consider a model problem of recovering a function f(x1, x2) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C² curve – i.e. an edge. We use the continuum white noise model, with noise level ɛ. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model Mean Squared Errors (MSEs) that tend to zero with noise level ɛ only as O(ɛ^{1/2}) as ɛ → 0. A recent innovation – nonlinear shrinkage in the wavelet domain – visually improves edge sharpness and improves MSE convergence to O(ɛ^{2/3}). However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by deploying the recently introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition
Adaptive estimation of linear functionals in Hilbert scales from indirect white noise observations
 Fields
, 1999
"... We consider adaptive estimating the value of a linear functional from indirect white noise observations. For a flexible approach, the problem is embedded in an abstract Hilbert scale. We develop an adaptive estimator that is rate optimal within a logarithmic factor simultaneously over a wide collect ..."
Abstract

Cited by 18 (3 self)
We consider adaptively estimating the value of a linear functional from indirect white noise observations. For a flexible approach, the problem is embedded in an abstract Hilbert scale. We develop an adaptive estimator that is rate optimal within a logarithmic factor simultaneously over a wide collection of balls in the Hilbert scale. It is shown that the proposed estimator has the best possible adaptive properties for a wide range of linear functionals. The case of discretized indirect white noise observations is studied, and the adaptive estimator in this setting is developed.
Keywords: adaptive estimation, discretization, Hilbert scales, inverse problems, linear functionals, regularization, minimax risk.
Running title: Adaptive inverse estimation of linear functionals.
Department of Statistics, University of Haifa, Mount Carmel, Haifa 31905, Israel. Email: goldensh@rstat.haifa.ac.il
Ukrainian Academy of Sciences, Institute of Mathematics, Tereshenkivska str. 3, 252601 Kiev-4, Uk...
Neoclassical minimax problems, thresholding and adaptive function estimation
 Bernoulli
, 1996
"... 2 We study the problem of estimating from data Y N ( ; ) under squarederror loss. We de ne three new scalar minimax problems in which the risk is weighted by the size of. Simple thresholding gives asymptotically minimax estimates of all three problems. We indicate the relationships of the new probl ..."
Abstract

Cited by 17 (1 self)
We study the problem of estimating a normal mean from a single noisy observation under squared-error loss. We define three new scalar minimax problems in which the risk is weighted by the size of the mean. Simple thresholding gives asymptotically minimax estimates in all three problems. We indicate the relationships of the new problems to each other and to two other neoclassical problems: the problems of the bounded normal mean and of the risk-constrained normal mean. Via the wavelet transform, these results have implications for adaptive function estimation: (1) estimating functions of unknown type and degree of smoothness in a global ℓ² norm; (2) estimating a function of unknown degree of local Hölder smoothness at a fixed point. In setting (2), the scalar minimax results imply: (a) that it is not possible to fully adapt to unknown degree of smoothness: adaptation imposes a performance cost; and (b) that simple thresholding of the empirical wavelet transform gives an estimate of a function at a fixed point which is, to within constants, optimally adaptive to unknown degree of smoothness.
Local Polynomial Fitting: A Standard for Nonparametric Regression
, 1993
"... Among the various nonparametric regression methods, weighted local polynomial fitting is the one which is gaining increasing popularity. This is due to the attractive minimax efficiency of the method and to some further desirable properties such as the automatic incorporation of boundary treatment. ..."
Abstract

Cited by 17 (4 self)
Among the various nonparametric regression methods, weighted local polynomial fitting is the one which is gaining increasing popularity. This is due to the attractive minimax efficiency of the method and to some further desirable properties, such as the automatic incorporation of boundary treatment. In this paper previous results are extended in two directions: in the one-dimensional case, not only local linear fitting is considered but also fitting polynomials of other orders and estimating derivatives. In addition to deriving minimax properties, optimal weighting schemes are derived and the solution obtained at the boundary is discussed in some detail. An equivalent kernel formulation serves as a tool to derive many of these properties. In the higher-dimensional case local linear fitting is considered. Properties in terms of minimax efficiency are derived and optimal weighting
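In the simplest case the abstract describes (local linear fitting in one dimension), the estimate at a point x0 is the intercept of a weighted least-squares line fit in the local variable u = x − x0. The sketch below is our own illustration under assumed choices (a Gaussian kernel, made-up names), not the paper's optimal weighting scheme:

```python
import math

def local_linear_fit(x, y, x0, h):
    """Weighted local linear fit at x0 with a Gaussian kernel of bandwidth h.

    Solves the 2x2 weighted least-squares normal equations; the intercept
    of the local line is the estimate of the regression function at x0.
    Kernel choice and names are illustrative assumptions.
    """
    s0 = s1 = s2 = t0 = t1 = 0.0
    for xi, yi in zip(x, y):
        u = xi - x0
        w = math.exp(-0.5 * (u / h) ** 2)  # kernel weight, localizes the fit
        s0 += w; s1 += w * u; s2 += w * u * u
        t0 += w * yi; t1 += w * u * yi
    det = s0 * s2 - s1 * s1
    return (s2 * t0 - s1 * t1) / det  # intercept a in min sum w*(y - a - b*u)^2

# On exactly linear data, the local linear fit reproduces the line.
x = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [1.0 + 2.0 * xi for xi in x]
est = local_linear_fit(x, y, 1.0, 0.5)  # close to 1 + 2*1 = 3
```

Reproducing straight lines exactly, regardless of the weights, is one reason local linear fitting handles boundaries automatically, as the abstract notes.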
Universal Near Minimaxity of Wavelet Shrinkage
, 1995
"... We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount p 2 log(n) \Delta oe= p n. The method is nearly minimax for a wide variety of loss functions  e.g. pointwise error, global error measured in L p ..."
Abstract

Cited by 13 (3 self)
We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n) · σ/√n. The method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a broader near-optimality than anything previously proposed in the minimax literature. The theory underlying the method exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity. This paper contains a detailed proof of the result announced in Donoho, Johnstone, Kerkyacharian & Picard (1995).
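The translate-towards-the-origin step is soft thresholding, and can be sketched directly. This is our illustration, assuming (as is common in this setting) that coefficients are scaled so the per-coefficient noise level is σ/√n; it is not the authors' code:

```python
import math

def shrink_toward_origin(coeffs, n, sigma):
    """Translate each empirical wavelet coefficient toward the origin by
    sqrt(2 log n) * sigma / sqrt(n); coefficients smaller than that
    amount are set to zero (soft thresholding)."""
    lam = math.sqrt(2.0 * math.log(n)) * sigma / math.sqrt(n)
    return [math.copysign(max(abs(c) - lam, 0.0), c) for c in coeffs]

# With n = 100 and sigma = 1 the shift is about 0.303: small
# coefficients are zeroed, large ones move toward the origin.
shrunk = shrink_toward_origin([1.0, -0.1, 0.6], 100, 1.0)
```

The √(2 log n) factor is what makes the rule nearly minimax simultaneously over the many loss functions and smoothness classes listed above, at the price of a logarithmic factor.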
Inverse Problems as Statistics
 Inverse Problems
, 2002
"... What mathematicians, scientists, engineers, and statisticians mean by "inverse problem" differs. For a statistician, an inverse problem is an inference or estimation problem... ..."
Abstract

Cited by 12 (3 self)
What mathematicians, scientists, engineers, and statisticians mean by "inverse problem" differs. For a statistician, an inverse problem is an inference or estimation problem...
Minimax Risk Bounds in Extreme Value Theory
 Statist
, 2001
"... Introduction. Consider i.i.d. random variables X i , i 2 N, whose distribution function (d.f.) F belongs to the weak domain of attraction of an extreme value d.f. G , i.e., L a 1 n max 1in X i b n ! G weakly for some constants an > 0 and b n 2 R. Here G (x) = exp( (1 + x) 1= ) for ..."
Abstract

Cited by 7 (0 self)
Introduction. Consider i.i.d. random variables X_i, i ∈ ℕ, whose distribution function (d.f.) F belongs to the weak domain of attraction of an extreme value d.f. G_γ, i.e., L(a_n^{-1}(max_{1≤i≤n} X_i − b_n)) → G_γ weakly for some constants a_n > 0 and b_n ∈ ℝ. Here G_γ(x) = exp(−(1 + γx)^{−1/γ}) for 1 + γx > 0, which is interpreted as G_0(x) = exp(−e^{−x}) if γ = 0. The shape of the upper tail of F...
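For reference, the extreme value d.f. above can be evaluated directly, including the γ = 0 (Gumbel) limit. A minimal sketch (our code, not the paper's):

```python
import math

def gev_cdf(x, gamma):
    """Extreme value d.f. from the abstract:
    G_gamma(x) = exp(-(1 + gamma*x)^(-1/gamma)) for 1 + gamma*x > 0,
    interpreted as the Gumbel d.f. G_0(x) = exp(-exp(-x)) when gamma = 0.
    """
    if gamma == 0.0:
        return math.exp(-math.exp(-x))
    t = 1.0 + gamma * x
    if t <= 0.0:
        # Outside the support: the d.f. is 0 (gamma > 0) or 1 (gamma < 0).
        return 0.0 if gamma > 0 else 1.0
    return math.exp(-t ** (-1.0 / gamma))
```

As γ → 0, (1 + γx)^{−1/γ} → e^{−x}, so the general formula continuously approaches the Gumbel case; the sign of γ determines whether the support is bounded below (γ > 0) or above (γ < 0).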
Minimax Expected Measure Confidence Sets for Restricted Location Parameters
, 2003
"... This paper studies how to construct confidence sets that are as small as they can be, in the sense of minimizing worstcase expected measure, while attaining at least their nominal confidence level. The structure required to study expected measure is both more and less restrictive than that used tra ..."
Abstract

Cited by 7 (6 self)
This paper studies how to construct confidence sets that are as small as they can be, in the sense of minimizing worst-case expected measure, while attaining at least their nominal confidence level. The structure required to study expected measure is both more and less restrictive than that used traditionally to study accuracy: the set of possible parameter values must be a measurable space, and the confidence sets must be measurable subsets of the set of parameters, but confidence sets with minimax expected measure can exist even when there is no uniformly most accurate confidence set.
MINIMAX ESTIMATION OF LINEAR FUNCTIONALS OVER NONCONVEX PARAMETER SPACES
, 2004
"... The minimax theory for estimating linear functionals is extended to the case of a finite union of convex parameter spaces. Upper and lower bounds for the minimax risk can still be described in terms of a modulus of continuity. However in contrast to the theory for convex parameter spaces rate optima ..."
Abstract

Cited by 4 (1 self)
The minimax theory for estimating linear functionals is extended to the case of a finite union of convex parameter spaces. Upper and lower bounds for the minimax risk can still be described in terms of a modulus of continuity. However, in contrast to the theory for convex parameter spaces, rate-optimal procedures are often required to be nonlinear. A construction of such nonlinear procedures is given. The results developed in this paper have important applications to the theory of adaptation.