Results 1–10 of 10
Empirical Bayes Selection of Wavelet Thresholds
Ann. Statist., 2005
Cited by 87 (3 self)
This paper explores a class of empirical Bayes methods for level-dependent threshold selection in wavelet shrinkage. The prior considered for each wavelet coefficient is a mixture of an atom of probability at zero and a heavy-tailed density. The mixing weight, or sparsity parameter, for each level of the transform is chosen by marginal maximum likelihood. If estimation …
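The mechanics of the procedure can be sketched in a few lines. The sketch below is not the paper's implementation (the authors use a heavy-tailed slab, and their accompanying software handles more general cases); it assumes unit noise variance and, to keep the marginal density in closed form, a Gaussian slab N(0, tau2) — both simplifying assumptions. It estimates the level's sparsity weight by marginal maximum likelihood over a grid, then applies the posterior-median thresholding rule by bisection:

```python
import numpy as np

def log_marglik(w, x, tau2=4.0):
    """Log marginal likelihood of the mixing weight w under the
    spike-and-slab prior theta ~ (1-w)*delta_0 + w*N(0, tau2),
    with X | theta ~ N(theta, 1); marginally
    X ~ (1-w)*N(0, 1) + w*N(0, 1 + tau2)."""
    phi0 = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    v = 1.0 + tau2
    phi1 = np.exp(-0.5 * x**2 / v) / np.sqrt(2 * np.pi * v)
    return np.sum(np.log((1 - w) * phi0 + w * phi1))

def fit_weight(x, tau2=4.0):
    """Marginal maximum likelihood over a grid of sparsity weights."""
    grid = np.linspace(1e-3, 1 - 1e-3, 999)
    lls = [log_marglik(w, x, tau2) for w in grid]
    return grid[int(np.argmax(lls))]

def posterior_median(x, w, tau2=4.0):
    """Posterior median of theta given one coefficient x.  The posterior
    is p0*delta_0 + (1-p0)*N(s*x, s) with s = tau2/(1+tau2); the atom
    at zero makes the median a genuine thresholding rule."""
    from math import erf, sqrt
    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    v = 1.0 + tau2
    phi0 = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    phi1 = np.exp(-0.5 * x**2 / v) / np.sqrt(2 * np.pi * v)
    p0 = (1 - w) * phi0 / ((1 - w) * phi0 + w * phi1)  # P(theta = 0 | x)
    s = tau2 / v
    F = lambda t: p0 * (t >= 0) + (1 - p0) * Phi((t - s * x) / np.sqrt(s))
    if F(-1e-12) < 0.5 <= F(0.0):   # the atom contains the median
        return 0.0
    lo, hi = -abs(x) - 10.0, abs(x) + 10.0   # bisect the continuous part
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
theta = np.concatenate([rng.normal(0, 3, 10), np.zeros(90)])  # sparse signal
x = theta + rng.normal(size=theta.size)                       # noisy coefficients
w_hat = fit_weight(x)
est = np.array([posterior_median(xi, w_hat) for xi in x])
```

Small observations are mapped exactly to zero while large ones are shrunk only mildly, which is the "random thresholding" behavior the abstract describes: the effective threshold depends on the data through the estimated weight.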
Bayesian modelization of sparse sequences and maxisets for Bayes rules
2003
Cited by 7 (5 self)
In this paper, our aim is to estimate sparse sequences in the framework of the heteroscedastic white noise model. To model sparsity, we consider a Bayesian model composed of a mixture of a heavy-tailed density and a point mass at zero. To evaluate the performance of the Bayes rules (the median or the mean of the posterior distribution), we exploit an alternative to the minimax setting developed in particular by Kerkyacharian and Picard: we determine the maxisets for each of these estimators. Using this approach, we compare the performance of Bayesian procedures with thresholding ones. Furthermore, the maxisets obtained can be viewed as weighted versions of weak lq spaces that naturally model sparsity. This remark leads us to investigate the following problem: how can we choose the prior parameters to build typical realizations of weighted weak lq spaces?
Frequentist optimality of Bayesian wavelet shrinkage rules for Gaussian and non-Gaussian
2005
Cited by 5 (1 self)
The present paper investigates the theoretical performance of various Bayesian wavelet shrinkage rules in a nonparametric regression model with i.i.d. errors that are not necessarily normally distributed. The main purpose is a comparison of various Bayesian models in terms of their frequentist asymptotic optimality in Sobolev and Besov spaces. We establish a relationship between hyperparameters, verify that the majority of Bayesian models studied so far achieve theoretical optimality, state which Bayesian models cannot achieve the optimal convergence rate, and explain why this happens.
Large variance Gaussian priors in Bayesian nonparametric estimation: a maxiset approach
Mathematical Methods of Statistics, 2006
Cited by 4 (3 self)
In this paper we compare wavelet Bayesian rules that take into account the sparsity of the signal, with priors which are combinations of a Dirac mass with a properly normalized standard distribution. To perform these comparisons, we take the maxiset point of view: i.e., we consider the set of functions that are well estimated (at a prescribed rate) by each procedure. We consider in particular the standard cases of Gaussian and heavy-tailed priors. We show that while heavy-tailed priors have extremely good maxiset behavior compared to traditional Gaussian priors, large variance Gaussian priors (LVGP) lead to equally successful maxiset behavior. Moreover, these LVGP can be constructed in an adaptive way. We also show, using comparative simulation results, that large variance Gaussian priors perform very well numerically, confirming the maxiset prediction and offering the advantage of computational simplicity.
Frequentist optimality of Bayes factor estimators in wavelet regression models
Statist. Sinica, 2007
Cited by 3 (0 self)
We investigate the theoretical performance of Bayes factor estimators in wavelet regression models with independent and identically distributed errors that are not necessarily normally distributed. We compare these estimators in terms of their frequentist optimality in Besov spaces for a wide variety of error and prior distributions. Furthermore, we provide sufficient conditions that determine whether the underlying regression function belongs to a Besov space a priori with probability one. We also study an adaptive estimator by considering an empirical Bayes estimation procedure of the Bayes factor estimator for a certain combination of error and prior distributions. Simulated examples are used to illustrate the performance of the empirical Bayes estimation procedure based on the proposed Bayes factor estimator, compared with two recently proposed empirical Bayes estimators. An application to a dataset collected in an anaesthesiological study is also presented. Key words and phrases: Bayesian inference, Besov spaces, empirical Bayes inference, nonparametric regression, optimality, wavelets.
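The core of a Bayes factor rule is easy to illustrate. The following sketch is not taken from the paper: it assumes unit-variance Gaussian errors, a point-mass null H0: theta = 0, and a Gaussian alternative theta ~ N(0, tau2) — hypothetical choices made so the Bayes factor has a closed form — and keeps a coefficient only when the posterior probability of "signal" exceeds one half:

```python
import numpy as np

def bayes_factor(x, tau2=4.0):
    """BF_10 for H0: theta = 0 vs H1: theta ~ N(0, tau2), given one
    wavelet coefficient X | theta ~ N(theta, 1).  Marginally X is
    N(0, 1) under H0 and N(0, 1 + tau2) under H1."""
    v = 1.0 + tau2
    return np.sqrt(1.0 / v) * np.exp(0.5 * x**2 * tau2 / v)

def bf_threshold(x, w=0.1, tau2=4.0):
    """Keep-or-kill rule: retain the coefficient when the posterior
    probability of H1 (prior weight w on 'signal') exceeds 1/2."""
    bf = bayes_factor(x, tau2)
    post_h1 = w * bf / ((1 - w) + w * bf)
    return np.where(post_h1 > 0.5, x, 0.0)

out = bf_threshold(np.array([0.3, 4.0]))  # small coefficient killed, large kept
```

Because BF_10 is increasing in |x|, the rule is equivalent to hard thresholding at the |x| where the posterior odds cross one; the prior weight w and slab variance tau2 jointly set that threshold, which is what the empirical Bayes step of the paper estimates from the data.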
Bayesian Modeling in the Wavelet Domain
2004
Cited by 2 (0 self)
Wavelets are the building blocks of wavelet transforms the same way that the functions e^inx are the building blocks of the ordinary Fourier transform. But in contrast to sines and cosines, wavelets can be (or almost can be) supported on an arbitrarily small closed interval. This feature makes wavelets a very powerful tool in dealing with phenomena that change rapidly in time. In many statistical applications, there is a need for procedures to (i) adapt to data and (ii) use prior information. The interface of wavelets and the Bayesian paradigm provides a natural terrain for both of these goals. In this chapter, the authors provide an overview of the current status of research involving Bayesian inference in wavelet nonparametric problems. Two applications, one in functional data analysis (FDA) and the second in geoscience are discussed in more detail.
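The locality property described above — a rapid change exciting only the few wavelets whose support overlaps it — can be seen with the simplest wavelet, the Haar basis. This is a generic illustration, not code from the chapter:

```python
import numpy as np

def haar_step(v):
    """One level of the orthonormal Haar transform: pairwise sums give
    the coarse (scaling) part, pairwise differences the wavelet part."""
    v = np.asarray(v, dtype=float)
    s = (v[0::2] + v[1::2]) / np.sqrt(2)   # scaling coefficients
    d = (v[0::2] - v[1::2]) / np.sqrt(2)   # wavelet (detail) coefficients
    return s, d

def haar_dwt(v):
    """Full decomposition of a length-2^J signal into the coarsest
    scaling coefficient plus detail vectors, coarsest level first."""
    details = []
    s = np.asarray(v, dtype=float)
    while s.size > 1:
        s, d = haar_step(s)
        details.append(d)
    return s, details[::-1]

# A step function: the jump sits inside one finest-level pair, so only
# the wavelets straddling it produce nonzero detail coefficients.
x = np.zeros(16)
x[5:] = 1.0
s0, details = haar_dwt(x)
```

With the jump placed inside a single finest-level pair, only one detail coefficient at that level is nonzero, while the orthonormal normalization preserves the signal's energy across levels — exactly the sparse representation of locally rough functions that makes Bayesian shrinkage in the wavelet domain attractive.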
… priors in a Bayesian nonparametric setting ∗
2004
"... to choosing priors in a Bayesian nonparametric setting ..."
Large variance Gaussian priors in Bayesian nonparametric estimation: a maxiset approach ∗
2005
In this paper we compare wavelet Bayesian rules that take into account the sparsity of the signal, with priors which are combinations of a Dirac mass with a properly normalized standard distribution. To perform these comparisons, we take the maxiset point of view: i.e., we consider the set of functions that are well estimated (at a prescribed rate) by each procedure. We consider in particular the standard cases of Gaussian and heavy-tailed priors. We show that while heavy-tailed priors have extremely good maxiset behavior compared to traditional Gaussian priors, large variance Gaussian priors (LVGP) lead to equally successful maxiset behavior. Moreover, these LVGP can be constructed in an adaptive way. We also show, using comparative simulation results, that large variance Gaussian priors perform very well numerically, confirming the maxiset prediction and offering the advantage of computational simplicity.
Empirical Bayes Selection of Wavelet Thresholds
© Institute of Mathematical Statistics, 2005
This paper explores a class of empirical Bayes methods for level-dependent threshold selection in wavelet shrinkage. The prior considered for each wavelet coefficient is a mixture of an atom of probability at zero and a heavy-tailed density. The mixing weight, or sparsity parameter, for each level of the transform is chosen by marginal maximum likelihood. If estimation is carried out using the posterior median, this is a random thresholding procedure; the estimation can also be carried out using other thresholding rules with the same threshold. Details of the calculations needed for implementing the procedure are included. In practice, the estimates are quick to compute and software is available. Simulations on the standard model functions show excellent performance, and applications to data drawn from various fields are used to explore the practical performance of the approach. By using a general result on the risk of the corresponding marginal maximum likelihood approach for a single sequence, overall bounds on …