Oracle Inequalities for Inverse Problems
2000
Cited by 74 (9 self)

Abstract
We consider a sequence space model of statistical linear inverse problems where we need to estimate a function f from indirect noisy observations. Let a finite set of linear estimators be given. Our aim is to mimic the estimator in this set that has the smallest risk for the true f. Under general conditions, we show that this can be achieved by simple minimization of an unbiased risk estimator, provided the singular values of the operator of the inverse problem decrease as a power law. The main result is a nonasymptotic oracle inequality that is shown to be asymptotically exact. This inequality can also be used to obtain sharp minimax adaptive results. In particular, we apply it to show that minimax adaptation on ellipsoids in the multivariate anisotropic case is realized by minimization of the unbiased risk estimator without any loss of efficiency with respect to optimal nonadaptive procedures.
Mathematics Subject Classifications: 62G05, 62G20
Key Words: Statistical inverse problems, Oracle inequaliti...
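The selection principle this abstract describes can be sketched in code. The sketch below works in the direct Gaussian sequence model y_k = theta_k + eps*xi_k (an assumption for simplicity; the paper treats mildly ill-posed operators), with a candidate family of projection estimators. All names, parameters, and the candidate family are illustrative, not the paper's.

```python
# Minimal sketch: pick among a finite set of diagonal linear estimators
# lam * y by minimizing an unbiased estimate of the risk (URE).
import numpy as np

def ure(lam, y, eps):
    """Unbiased estimate of the risk of the estimator lam * y.

    E[(1-lam)^2 y^2 + eps^2 (2 lam - 1)] = (1-lam)^2 theta^2 + eps^2 lam^2,
    which is the true risk, coordinate by coordinate.
    """
    return float(np.sum((1.0 - lam) ** 2 * y ** 2 + eps ** 2 * (2.0 * lam - 1.0)))

def select_estimator(candidates, y, eps):
    """Mimic the oracle: return the candidate with the smallest estimated risk."""
    scores = [ure(lam, y, eps) for lam in candidates]
    return candidates[int(np.argmin(scores))]

rng = np.random.default_rng(0)
n, eps = 200, 0.1
theta = 1.0 / (1.0 + np.arange(n)) ** 1.5          # a smooth "true" sequence
y = theta + eps * rng.standard_normal(n)

# Candidate family: projection estimators keeping the first m coefficients.
candidates = [np.where(np.arange(n) < m, 1.0, 0.0) for m in (5, 10, 20, 50, 100)]
best = select_estimator(candidates, y, eps)
```

The point of the oracle inequality is that the risk of `best` is provably close to the minimum risk over `candidates`, even though that minimum depends on the unknown theta.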
Wavelet shrinkage for nonequispaced samples
The Annals of Statistics, 1998
Cited by 50 (4 self)

Abstract
Standard wavelet shrinkage procedures for nonparametric regression are restricted to equispaced samples. There, data are transformed into empirical wavelet coefficients and threshold rules are applied to the coefficients. The estimators are obtained via the inverse transform of the denoised wavelet coefficients. In many applications, however, the samples are nonequispaced. It can be shown that these procedures would produce suboptimal estimators if they were applied directly to nonequispaced samples. We propose a wavelet shrinkage procedure for nonequispaced samples. We show that the estimate is adaptive and near optimal. For global estimation, the estimate is within a logarithmic factor of the minimax risk over a wide range of piecewise Hölder classes, indeed with a number of discontinuities that grows polynomially fast with the sample size. For estimating a target function at a point, the estimate is optimally adaptive to the unknown degree of smoothness within a constant. In addition, the estimate enjoys a smoothness property: if the target function is the zero function, then with probability tending to 1 the estimate is also the zero function.
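The standard equispaced pipeline this abstract refers to (transform, threshold, inverse transform) can be sketched with a single-level Haar transform for self-containment; the universal threshold eps*sqrt(2 log n) is the usual textbook choice, not this paper's calibration.

```python
# Sketch of equispaced wavelet shrinkage: forward transform, soft-threshold
# the detail coefficients, inverse transform. Single-level Haar for brevity.
import numpy as np

def haar_forward(x):
    """One-level orthonormal Haar transform (len(x) must be even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def haar_inverse(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def soft(c, t):
    """Soft-threshold: shrink toward zero by t, kill anything below t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(y, eps):
    s, d = haar_forward(y)
    t = eps * np.sqrt(2.0 * np.log(len(y)))   # universal threshold
    return haar_inverse(s, soft(d, t))
```

The paper's point is precisely that this pipeline presumes the samples sit on an equispaced grid; its contribution is a modification that remains near-minimax when they do not.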
Maximal Spaces with given rate of convergence for thresholding algorithms
1999
Cited by 46 (8 self)

Abstract
The aim of this paper is to discuss the existence and the nature of maximal spaces in the context of nonlinear methods based on thresholding (or shrinkage) procedures. Before going further, some remarks should be made:
Regression in Random Design and Warped Wavelets
Bernoulli, 10, 2004
Cited by 30 (0 self)

Abstract
We consider the problem of estimating an unknown function f in a regression setting with random design. Instead of expanding the function on a regular wavelet basis, we expand it on the basis {ψ_jk(G), j, k} warped with the design. This allows us to perform a very stable and computable thresholding algorithm. We investigate the properties of this new basis. In particular, we prove that if the design has a property of Muckenhoupt type, this new basis behaves quite similarly to a regular wavelet basis. This enables us to prove that the associated thresholding procedure achieves rates of convergence that have been proved to be minimax in the uniform design case.
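The warping idea can be illustrated very compactly: replace each design point X_i by its empirical-cdf value G_n(X_i), i.e., its normalized rank, so that standard equispaced wavelet machinery applies to the warped data. Using raw ranks for G_n is an illustrative simplification, not the paper's exact construction.

```python
# Sketch of design warping: map random design points to (approximately)
# uniform positions on (0, 1] via their normalized ranks.
import numpy as np

def warp_design(x):
    """Empirical-cdf warp: x_i -> rank(x_i) / n, ranks counted from 1."""
    ranks = np.argsort(np.argsort(x))        # 0-based rank of each point
    return (ranks + 1) / len(x)
```

After warping, estimating f amounts to estimating f composed with the inverse design distribution on [0, 1], which is why the procedure stays stable under irregular designs.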
Wavelet Methods For The Inversion Of Certain Homogeneous Linear Operators In The Presence Of Noisy Data
1994
Cited by 29 (1 self)

Abstract
In this dissertation we explore the use of wavelets in certain linear inverse problems with discrete, noisy data. We observe discrete samples of a process y(u) = (Kf)(u) + z(u), where K is a linear operator, z is a noise process, and f is a function we wish to recover from the data. In the problems that we consider, the inverse of K, K^{-1}, either does not exist or is poorly behaved. Such problems are termed ill-posed, i.e., ones in which small changes in the data may lead to large changes in the recovered version of f. Our methods are most effective for problems where the operator K is homogeneous with respect to dilations, such as integration, fractional integration, convolution, and the Radon transform. The theoretical framework in which we work is that of Donoho's (1992) Wavelet-Vaguelette Decomposition (WVD). The WVD uses wavelets and vaguelettes (almost wavelets) to decompose the operator K. Although this formally resembles the Singular Value Decomposition (SVD), the use o...
An alternative point of view on Lepski's method
1999
Cited by 22 (2 self)

Abstract
Lepski's method is a procedure for choosing a "best" estimator (in an appropriate sense) among a family of estimators, under suitable restrictions on this family. The subject of this paper is to give a nonasymptotic presentation of Lepski's method in the context of Gaussian regression models for a collection of projection estimators on some nested family of finite-dimensional linear subspaces. It is also shown that a suitable tuning of the method allows one to asymptotically recover the best possible risk in the family.
Wavelet thresholding for non-necessarily Gaussian noise: idealism
Annals of Statistics, 2003
Cited by 21 (3 self)

Abstract
For various types of noise (exponential, normal mixture, compactly supported, ...) wavelet thresholding methods are studied. Problems linked to the existence of optimal thresholds are tackled, and minimaxity properties of the methods are also analyzed. A coefficient-dependent method for choosing thresholds is also briefly presented.
On Adaptive Wavelet Estimation of a Derivative And Other Related Linear Inverse Problems
Journal of Statistical Planning and Inference, 2000
Cited by 15 (1 self)

Abstract
We consider a block thresholding and vaguelet-wavelet approach to certain statistical linear inverse problems. Based on an oracle inequality, an adaptive block thresholding estimator for linear inverse problems is proposed and the asymptotic properties of the estimator are investigated. It is shown that the estimator enjoys a higher degree of adaptivity than the standard term-by-term thresholding methods; it attains the exact optimal rates of convergence over a range of Besov classes. The problem of estimating a derivative is considered in more detail as a test case for the general estimation procedure. We show that the derivative estimator is spatially adaptive; it automatically adapts to the local smoothness of the function and attains the local adaptive minimax rate for estimating a derivative at a point.
Keywords: Adaptivity; Block thresholding; Derivative; James-Stein estimator; Linear inverse problems; Minimax; Nonparametric regression; Vaguelets; Wavelets.
AMS 1991 Subject Classifi...
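The block James-Stein rule mentioned in the keywords can be sketched as follows: coefficients are grouped into blocks of length L, and each block is shrunk by a factor depending on its total energy. The constant lam = 4.505 is a commonly cited calibration for block thresholding; treat it, and all names here, as illustrative rather than this paper's exact procedure.

```python
# Sketch of blockwise James-Stein shrinkage on noisy coefficients y with
# noise level sigma: keep a block roughly intact when its energy clearly
# exceeds the noise budget lam * L * sigma^2, kill it otherwise.
import numpy as np

def block_js(y, sigma, L, lam=4.505):
    """Shrink each length-L block of y by (1 - lam*L*sigma^2 / ||block||^2)_+."""
    out = np.zeros_like(y)
    for start in range(0, len(y), L):
        block = y[start:start + L]
        s2 = float(np.sum(block ** 2))
        shrink = max(0.0, 1.0 - lam * L * sigma ** 2 / s2) if s2 > 0 else 0.0
        out[start:start + L] = shrink * block
    return out
```

Deciding block by block, rather than coefficient by coefficient, is what buys the extra adaptivity the abstract claims over term-by-term thresholding: a block pools information across neighboring coefficients before the keep-or-kill decision.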
Bayesian modelization of sparse sequences and maxisets for Bayes rules
2003
Cited by 7 (5 self)

Abstract
In this paper, our aim is to estimate sparse sequences in the framework of the heteroscedastic white noise model. To model sparsity, we consider a Bayesian model composed of a mixture of a heavy-tailed density and a point mass at zero. To evaluate the performance of the Bayes rules (the median or the mean of the posterior distribution), we exploit an alternative to the minimax setting developed in particular by Kerkyacharian and Picard: we determine the maxisets for each of these estimators. Using this approach, we compare the performance of Bayesian procedures with thresholding ones. Furthermore, the maxisets obtained can be viewed as weighted versions of weak lq spaces that naturally model sparsity. This remark leads us to investigate the following problem: how can we choose the prior parameters to build typical realizations of weighted weak lq spaces?
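The Bayes rule for such a spike-and-slab prior can be sketched in closed form. The paper uses a heavy-tailed slab and also studies the posterior median; the sketch below swaps in a Gaussian N(0, tau^2) slab and the posterior mean, because both then have explicit formulas. All parameter values are illustrative.

```python
# Sketch of a spike-and-slab Bayes rule: observe x ~ N(theta, sigma^2) with
# prior theta ~ (1-w) * delta_0 + w * N(0, tau^2). The posterior mean is the
# posterior slab probability times the usual Gaussian shrinkage of x.
import numpy as np

def posterior_mean(x, sigma=1.0, tau=3.0, w=0.1):
    """Posterior mean of theta under the spike-and-slab prior above."""
    v0, v1 = sigma ** 2, sigma ** 2 + tau ** 2
    m0 = np.exp(-x ** 2 / (2 * v0)) / np.sqrt(2 * np.pi * v0)   # marginal under spike
    m1 = np.exp(-x ** 2 / (2 * v1)) / np.sqrt(2 * np.pi * v1)   # marginal under slab
    p_slab = w * m1 / (w * m1 + (1 - w) * m0)                    # posterior slab weight
    return p_slab * (tau ** 2 / v1) * x                          # shrunk estimate
```

Like a thresholding rule, this estimator essentially kills small observations (p_slab is tiny near zero) and keeps large ones almost intact, which is why its maxiset can be compared directly with those of thresholding procedures.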