Results 1–10 of 22
General empirical Bayes wavelet methods and exactly adaptive minimax estimation

, 2005
"... In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of true signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risk ..."
Abstract

Cited by 26 (3 self)
In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of the true signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risks and exact minimax risks in broad collections of classes of signals. In particular, our estimators are uniformly adaptive to the minimum risk of separable estimators and to the exact minimax risks simultaneously in Besov balls of all smoothness and shape indices, and they are uniformly superefficient in convergence rates on all compact sets in Besov spaces with a finite secondary shape parameter. Furthermore, in classes nested between Besov balls of the same smoothness index, our estimators dominate threshold and James–Stein estimators within an infinitesimal fraction of the minimax risks. More general block empirical Bayes estimators are developed. Both the white noise with drift and nonparametric regression models are considered.
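The "oracle separable rules" mentioned above act coordinatewise: every noisy coefficient is passed through one scalar shrinkage function. As a minimal illustration of such a rule (soft thresholding at the universal level, a classical example, not the paper's empirical Bayes estimator):

```python
import numpy as np

def separable_soft_threshold(y, sigma, n):
    """Apply a single scalar shrinkage function to every coefficient.

    Soft thresholding at the universal level sqrt(2 log n) * sigma is a
    standard separable rule; the empirical Bayes estimators in the paper
    aim to approximate the best separable rule for the data at hand.
    """
    lam = sigma * np.sqrt(2.0 * np.log(n))
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(0)
theta = np.concatenate([np.full(10, 5.0), np.zeros(990)])  # sparse signal
y = theta + rng.normal(size=1000)                          # noisy coefficients
est = separable_soft_threshold(y, sigma=1.0, n=1000)
```

On a sparse mean vector like this one, the separable rule zeroes out almost all pure-noise coefficients while retaining the few large ones.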
General Maximum Likelihood Empirical Bayes Estimation of Normal Means
, 2009
"... We propose a general maximum likelihood empirical Bayes (GMLEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that under mild moment conditions on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal f ..."
Abstract

Cited by 26 (1 self)
We propose a general maximum likelihood empirical Bayes (GMLEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that under mild moment conditions on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal fraction of the minimum average MSE among all separable estimators which use a single deterministic estimating function on individual observations, provided that the risk is of greater order than (log n)^5/n. We also prove that the GMLEB is uniformly approximately minimax in regular and weak ℓp balls when the order of the length-normalized norm of the unknown means is between (log n)^{κ1}/n …
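A minimal sketch of the GMLEB recipe, with two simplifications that are assumptions of the sketch rather than the paper's construction: the prior is restricted to a fixed uniform grid, and the nonparametric MLE is fitted by a plain EM fixed-point iteration. The fitted prior is then plugged into the posterior-mean rule.

```python
import numpy as np

def gmleb_sketch(y, grid_size=200, n_iter=200):
    """Grid-based nonparametric MLE of the prior, then posterior means.

    Sketch only: the prior G lives on a uniform grid spanning the data
    and is fitted by EM; the return value is the posterior mean
    E[theta_i | y_i] under the fitted prior and N(0, 1) noise.
    """
    u = np.linspace(y.min(), y.max(), grid_size)        # support grid
    w = np.full(grid_size, 1.0 / grid_size)             # prior weights
    L = np.exp(-0.5 * (y[:, None] - u[None, :]) ** 2)   # N(0,1) likelihoods
    for _ in range(n_iter):
        post = L * w
        post /= post.sum(axis=1, keepdims=True)         # P(theta_i = u_j | y_i)
        w = post.mean(axis=0)                           # EM update of weights
    post = L * w
    post /= post.sum(axis=1, keepdims=True)
    return post @ u                                     # posterior means

rng = np.random.default_rng(1)
theta = np.concatenate([np.zeros(450), np.full(50, 4.0)])
y = theta + rng.normal(size=500)
est = gmleb_sketch(y)
```

Because the prior is estimated from the data themselves, the rule adapts automatically to sparsity: here most coefficients are shrunk toward zero and the large ones are largely left alone.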
Multivariate Nonparametric Regression Using Lifting
, 2004
"... This article develops three methods for the multiscale analysis of irregularly spaced data based on the recently developed lifting paradigm by "lifting one coefficient at a time". The concept of scale still exists within these transforms but as a continuous quantity rather than dyadic leve ..."
Abstract

Cited by 10 (5 self)
This article develops three methods for the multiscale analysis of irregularly spaced data based on the recently developed lifting paradigm of "lifting one coefficient at a time". The concept of scale still exists within these transforms, but as a continuous quantity rather than as dyadic levels. We develop empirical Bayes methods that take account of the continuous nature of the scale. We apply our new methods to the problems of estimating krill density and rail arrival delays. We demonstrate good performance in a simulation study on new two-dimensional analogues of the well-known Blocks, Bumps, Doppler and Heavisine functions and on a new piecewise linear function called maartenfunc.
Bivariate hard thresholding in wavelet function estimation
, 2006
"... We propose a generic bivariate hard thresholding estimator of the discrete wavelet coefficients of a function contaminated with i.i.d. Gaussian noise. We demonstrate its good risk properties in a motivating example, and derive upper bounds for its meansquare error. Motivated by the clustering of la ..."
Abstract

Cited by 7 (3 self)
We propose a generic bivariate hard thresholding estimator of the discrete wavelet coefficients of a function contaminated with i.i.d. Gaussian noise. We demonstrate its good risk properties in a motivating example, and derive upper bounds for its mean-square error. Motivated by the clustering of large wavelet coefficients in real-life signals, we propose two wavelet denoising algorithms, both of which use specific instances of our bivariate estimator. The BABTE algorithm uses basis averaging, and the BITUP algorithm uses the coupling of “parents” and “children” in the wavelet coefficient tree. We prove the L2 near-optimality of both algorithms over the usual range of Besov spaces, and demonstrate their excellent finite-sample performance. Finally, we propose a robust and effective technique for choosing the parameters of BITUP in a data-driven way.
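As an illustration of the parent–child coupling idea, here is one plausible instance of a bivariate hard threshold; the specific keep region used below (an ℓ2 ball on the pair) is an assumption of this sketch, not necessarily the estimator analysed in the paper.

```python
import numpy as np

def bivariate_hard(child, parent, lam):
    """Keep a child wavelet coefficient only when the (child, parent)
    pair is jointly large.  Large coefficients cluster across scales,
    so a big parent is evidence that the child carries signal even
    when the child alone would fall below a univariate threshold."""
    keep = np.hypot(child, parent) > lam
    return np.where(keep, child, 0.0)

# A moderate child survives next to a strong parent, but not next to
# a zero parent.
kept = bivariate_hard(np.array([0.9]), np.array([3.0]), lam=2.0)
killed = bivariate_hard(np.array([0.9]), np.array([0.0]), lam=2.0)
```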
Adaptive lifting for nonparametric regression
In preparation
, 2004
"... Many wavelet shrinkage methods assume that the data are observed on an equally spaced grid of length of the form 2 J for some J. These methods require serious modification or preprocessed data to cope with irregularly spaced data. The lifting scheme is a recent mathematical innovation that obtains a ..."
Abstract

Cited by 6 (5 self)
Many wavelet shrinkage methods assume that the data are observed on an equally spaced grid of length 2^J for some J. These methods require serious modification, or preprocessed data, to cope with irregularly spaced data. The lifting scheme is a recent mathematical innovation that obtains a multiscale analysis for irregularly spaced data. A key lifting component is the “predict” step, where a prediction of a data point is made. The residual from the prediction is stored and can be thought of as a wavelet coefficient. This article exploits the flexibility of lifting by adaptively choosing the kind of prediction according to a criterion. In this way the smoothness of the underlying ‘wavelet’ can be adapted to the local properties of the function. Multiple observations at a point can readily be handled by lifting through a suitable choice of prediction. We adapt existing shrinkage rules to work with our adaptive lifting methods. We use simulation to demonstrate the improved sparsity of our techniques and improved regression performance when compared to non-wavelet methods suitable for irregular data. We also exhibit the benefits of our adaptive lifting on real inductance plethysmography and motorcycle data.
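A toy version of the adaptive "predict" step may make the idea concrete; the candidate predictors (a local mean and a least-squares line) and the selection criterion (smallest absolute residual) are illustrative assumptions, not the article's actual choices.

```python
import numpy as np

def adaptive_predict(x_nbr, y_nbr, x0, y0):
    """Predict y0 at irregular location x0 from its neighbours,
    adaptively choosing between a constant (mean) predictor and a
    linear least-squares predictor; the residual is what gets stored
    as the 'wavelet' coefficient."""
    candidates = [np.mean(y_nbr)]                   # order-0 predictor
    slope, intercept = np.polyfit(x_nbr, y_nbr, 1)  # order-1 predictor
    candidates.append(slope * x0 + intercept)
    residuals = [y0 - c for c in candidates]
    k = int(np.argmin(np.abs(residuals)))           # best local fit wins
    return residuals[k], k

# On locally linear data the linear predictor is chosen and the stored
# coefficient (the residual) essentially vanishes.
res, which = adaptive_predict(np.array([0.0, 1.0, 3.0]),
                              np.array([0.0, 2.0, 6.0]), x0=2.0, y0=4.0)
```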
Large variance Gaussian priors in Bayesian nonparametric estimation: a maxiset approach
 Mathematical Methods of Statistics
, 2006
"... In this paper we compare wavelet Bayesian rules taking into account the sparsity of the signal with priors which are combinations of a Dirac mass with a standard distribution properly normalized. To perform these comparisons, we take the maxiset point of view: i. e. we consider the set of functions ..."
Abstract

Cited by 4 (3 self)
In this paper we compare wavelet Bayesian rules that take into account the sparsity of the signal, with priors which are combinations of a Dirac mass and a standard distribution, properly normalized. To perform these comparisons, we take the maxiset point of view: i.e. we consider the set of functions which are well estimated (at a prescribed rate) by each procedure. We especially consider the standard cases of Gaussian and heavy-tailed priors. We show that while heavy-tailed priors have extremely good maxiset behavior compared to traditional Gaussian priors, considering large variance Gaussian priors (LVGP) leads to equally successful maxiset behavior. Moreover, these LVGP can be constructed in an adaptive way. We also show, using comparative simulation results, that large variance Gaussian priors have very good numerical performance, confirming the maxiset prediction, while offering the advantage of great computational simplicity.
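A minimal sketch of the kind of rule being compared, under assumptions made for illustration only: N(0, 1) noise and the prior (1 − w)δ0 + w N(0, v). Taking v large plays the role of the large variance Gaussian prior (LVGP), and the posterior mean is available in closed form.

```python
import numpy as np

def spike_slab_posterior_mean(y, w, v):
    """Posterior mean of theta given y ~ N(theta, 1) under the prior
    (1 - w) * delta_0 + w * N(0, v): small observations are shrunk
    almost to zero, large observations are kept nearly intact."""
    def phi(x, s2):                       # N(0, s2) density
        return np.exp(-x * x / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    slab = w * phi(y, 1.0 + v)            # marginal density, slab part
    spike = (1.0 - w) * phi(y, 1.0)       # marginal density, spike part
    p = slab / (slab + spike)             # posterior slab probability
    return p * (v / (1.0 + v)) * y        # E[theta | y]

small = spike_slab_posterior_mean(0.1, w=0.1, v=100.0)   # near-total shrinkage
large = spike_slab_posterior_mean(10.0, w=0.1, v=100.0)  # almost unshrunk
```

The thresholding-like behavior of this rule, drastic shrinkage near zero and near-identity far from zero, is exactly what makes its maxiset comparable to that of heavy-tailed priors.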
On the sum of t and Gaussian random variables
 Statistics & Probability Letters
, 2006
"... ..."