Empirical Bayes Selection of Wavelet Thresholds
 Ann. Statist.
, 2005
"... This paper explores a class of empirical Bayes methods for leveldependent threshold selection in wavelet shrinkage. The prior considered for each wavelet coefficient is a mixture of an atom of probability at zero and a heavytailed density. The mixing weight, or sparsity parameter, for each lev ..."
Abstract

Cited by 86 (3 self)
This paper explores a class of empirical Bayes methods for level-dependent threshold selection in wavelet shrinkage. The prior considered for each wavelet coefficient is a mixture of an atom of probability at zero and a heavy-tailed density. The mixing weight, or sparsity parameter, for each level of the transform is chosen by marginal maximum likelihood. If estimation …
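A minimal sketch of the marginal-maximum-likelihood step described above, assuming unit noise variance and a Laplace(a) heavy-tailed component (the closed-form normal–Laplace convolution below is standard); the function names and the default scale a = 0.5 are illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def laplace_marginal(x, a=0.5):
    """Marginal density of x = theta + N(0, 1) noise when theta ~ Laplace(a).
    Closed-form convolution; numerically adequate for moderate |x|."""
    x = np.abs(np.asarray(x, dtype=float))  # the density is symmetric in x
    return 0.5 * a * np.exp(0.5 * a**2) * (
        np.exp(-a * x) * norm.cdf(x - a) + np.exp(a * x) * norm.cdf(-x - a)
    )

def mmle_weight(coeffs, a=0.5):
    """Sparsity parameter w for one resolution level, chosen by maximizing
    the marginal likelihood of the noisy wavelet coefficients."""
    def neg_loglik(w):
        dens = (1 - w) * norm.pdf(coeffs) + w * laplace_marginal(coeffs, a)
        return -np.sum(np.log(dens))
    return minimize_scalar(neg_loglik, bounds=(1e-4, 1 - 1e-4),
                           method="bounded").x
```

The fitted weight then determines the level's threshold, for instance via the posterior median rule discussed in the paper.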
Rates of convergence and adaptation over Besov spaces under pointwise risk, Statistica Sinica 13
, 2003
"... Abstract: Function estimation over the Besov spaces under pointwise ℓ r (1 ≤ r< ∞) risks is considered. Minimax rates of convergence are derived using a constrained risk inequality and wavelets. Adaptation under pointwise risks is also considered. Sharp lower bounds on the cost of adaptation are obt ..."
Abstract

Cited by 5 (1 self)
Function estimation over the Besov spaces under pointwise ℓr (1 ≤ r < ∞) risks is considered. Minimax rates of convergence are derived using a constrained risk inequality and wavelets. Adaptation under pointwise risks is also considered. Sharp lower bounds on the cost of adaptation are obtained and are shown to be attainable by a wavelet estimator. The results demonstrate important differences between the minimax properties under pointwise and global risk measures. The minimax rates and adaptation for estimating derivatives under pointwise risks are also presented. A general ℓr-risk oracle inequality is developed for the proofs of the main results. Key words and phrases: adaptability, adaptive estimation, Besov spaces, constrained risk inequality, minimax estimation, nonparametric functional estimation, …
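In the standard notation this abstract is using, the pointwise ℓr risk of an estimator f̂ at a fixed point x0 is

```latex
R_r(\hat f, f; x_0) \;=\; \mathbb{E}\,\bigl|\hat f(x_0) - f(x_0)\bigr|^{r},
\qquad 1 \le r < \infty,
```

as opposed to a global risk such as the mean integrated squared error E‖f̂ − f‖₂²; the paper's point is that minimax and adaptation behavior differ between the two.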
ESTIMATION OF THE DENSITY OF REGRESSION ERRORS
, 2004
"... Estimation of the density of regression errors is a fundamental issue in regression analysis and it is typically explored via a parametric approach. This article uses a nonparametric approach with the mean integrated squared error (MISE) criterion. It solves a longstanding problem, formulated two d ..."
Abstract

Cited by 5 (2 self)
Estimation of the density of regression errors is a fundamental issue in regression analysis, and it is typically explored via a parametric approach. This article uses a nonparametric approach with the mean integrated squared error (MISE) criterion. It solves a longstanding problem, formulated two decades ago by Mark Pinsker, about estimation of a nonparametric error density in a nonparametric regression setting with the accuracy of an oracle that knows the underlying regression errors. The solution implies that, under a mild assumption on the differentiability of the design density and regression function, the MISE of a data-driven error density estimator attains the minimax rates and sharp constants known for the case of directly observed regression errors. The result holds for error densities with finite and infinite supports. Some extensions of this result to more general heteroscedastic models with possibly dependent errors and predictors are also obtained; in the latter case the marginal error density is estimated. In all considered cases a blockwise-shrinking Efromovich–Pinsker density estimate, based on plugged-in residuals, is used. The obtained results give a theoretical justification for the customary practice in applied regression analysis of treating residuals as proxies for the underlying regression errors. Numerical and real examples are presented and discussed, and S-PLUS software is available.
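A simplified sketch of a blockwise-shrinking cosine-series density estimate applied to plugged-in residuals, in the spirit of the Efromovich–Pinsker estimator named above; the block rule and constants here are generic stand-ins, not the paper's exact procedure.

```python
import numpy as np

def ep_density(residuals, n_terms=30, block_size=5):
    """Cosine-series density estimate with blockwise shrinkage, fit to
    residuals rescaled to [0, 1]; returns a callable density estimate."""
    resid = np.asarray(residuals, dtype=float)
    n = resid.size
    a, b = resid.min(), resid.max()
    u = (resid - a) / (b - a)                    # rescale support to [0, 1]
    ks = np.arange(1, n_terms + 1)
    phi = np.sqrt(2) * np.cos(np.pi * np.outer(ks, u))
    theta = phi.mean(axis=1)                     # empirical Fourier coefficients
    var = phi.var(axis=1) / n                    # estimated coefficient variances
    for j in range(0, n_terms, block_size):      # blockwise shrinkage weights
        sl = slice(j, j + block_size)
        energy = np.sum(theta[sl] ** 2)
        w = max(0.0, 1.0 - np.sum(var[sl]) / energy) if energy > 0 else 0.0
        theta[sl] *= w
    def f_hat(x):                                # density on the original scale
        ux = (np.asarray(x, dtype=float) - a) / (b - a)
        basis = np.sqrt(2) * np.cos(np.pi * np.outer(ks, ux))
        return (1.0 + theta @ basis) / (b - a)   # may dip below 0 in the tails
    return f_hat
```

In the paper's setting the residuals come from a nonparametric fit of the regression function; the result is that this plug-in step costs nothing asymptotically.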
Large variance Gaussian priors in Bayesian nonparametric estimation: a maxiset approach
 Mathematical Methods of Statistics
, 2006
"... In this paper we compare wavelet Bayesian rules taking into account the sparsity of the signal with priors which are combinations of a Dirac mass with a standard distribution properly normalized. To perform these comparisons, we take the maxiset point of view: i. e. we consider the set of functions ..."
Abstract

Cited by 4 (3 self)
In this paper we compare wavelet Bayesian rules that take into account the sparsity of the signal, with priors that combine a Dirac mass at zero with a properly normalized standard distribution. To perform these comparisons, we take the maxiset point of view, i.e., we consider the set of functions that are well estimated (at a prescribed rate) by each procedure. We especially consider the standard cases of Gaussian and heavy-tailed priors. We show that although heavy-tailed priors have extremely good maxiset behavior compared to traditional Gaussian priors, large variance Gaussian priors (LVGP) lead to equally successful maxiset behavior. Moreover, these LVGP can be constructed in an adaptive way. We also show, using comparative simulation results, that large variance Gaussian priors have very good numerical performance, confirming the maxiset prediction and offering the advantage of computational simplicity.
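A minimal sketch of a Bayesian wavelet rule under such a prior: with prior (1 − w)·δ0 + w·N(0, τ²) on a coefficient θ and observation x ~ N(θ, σ²), the posterior mean has the closed form computed below ("large variance" corresponds to τ ≫ σ). The function name is illustrative.

```python
import numpy as np
from scipy.stats import norm

def lvgp_posterior_mean(x, w, tau, sigma=1.0):
    """Posterior mean of theta given x ~ N(theta, sigma^2) under the prior
    (1 - w) * delta_0 + w * N(0, tau^2)."""
    m_slab = norm.pdf(x, scale=np.sqrt(sigma**2 + tau**2))  # marginal, Gaussian part
    m_spike = norm.pdf(x, scale=sigma)                      # marginal, atom at zero
    post_nonzero = w * m_slab / (w * m_slab + (1 - w) * m_spike)
    return post_nonzero * (tau**2 / (tau**2 + sigma**2)) * x  # shrunken estimate
```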
Adapting to unknown smoothness by aggregation of thresholded wavelet estimators
, 2006
"... We study the performances of an adaptive procedure based on a convex combination, with datadriven weights, of termbyterm thresholded wavelet estimators. For the bounded regression model, with random uniform design, and the nonparametric density model, we show that the resulting estimator is optim ..."
Abstract

Cited by 4 (2 self)
We study the performance of an adaptive procedure based on a convex combination, with data-driven weights, of term-by-term thresholded wavelet estimators. For the bounded regression model with random uniform design, and for the nonparametric density model, we show that the resulting estimator is optimal in the minimax sense over all Besov balls under the L2 risk, without any logarithmic factor.
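A generic sketch of convex aggregation with data-driven exponential weights, the type of combination described above; the risk estimates, temperature, and interface are placeholders, not the paper's exact weighting scheme.

```python
import numpy as np

def aggregate(estimates, risks, temperature=1.0):
    """Convex combination of candidate estimators with exponential weights.

    estimates: array of shape (m, n), one candidate estimator per row,
               evaluated on a common grid of n points.
    risks:     array of shape (m,), an empirical risk estimate per candidate
               (e.g., an unbiased or cross-validated risk).
    """
    risks = np.asarray(risks, dtype=float)
    w = np.exp(-(risks - risks.min()) / temperature)  # shift for stability
    w /= w.sum()                                      # data-driven convex weights
    return w @ np.asarray(estimates, dtype=float)
```

The candidates here would be term-by-term thresholded wavelet estimators computed with different thresholds.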
GENERAL MAXIMUM LIKELIHOOD EMPIRICAL BAYES ESTIMATION OF NORMAL MEANS
, 2009
"... We propose a general maximum likelihood empirical Bayes (GMLEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that under mild moment conditions on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal f ..."
Abstract

Cited by 4 (0 self)
We propose a general maximum likelihood empirical Bayes (GMLEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that, under mild moment conditions on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal fraction of the minimum average MSE among all separable estimators that use a single deterministic estimating function on individual observations, provided that the risk is of greater order than (log n)^5/n. We also prove that the GMLEB is uniformly approximately minimax in regular and weak ℓp balls when the order of the length-normalized norm of the unknown means is between (log n)^{κ1}/n …
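A compact sketch of the GMLEB recipe as described: estimate the prior by nonparametric maximum likelihood (here via EM on a fixed grid of support points, the Kiefer–Wolfowitz approach), then apply the resulting Bayes rule. The grid size and iteration count are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import norm

def gmleb(x, grid_size=200, n_iter=200):
    """GMLEB estimate of a normal mean vector from x_i ~ N(mu_i, 1).

    Step 1: nonparametric MLE of the prior on a grid of support points (EM).
    Step 2: posterior mean under the fitted prior, applied to each x_i.
    """
    x = np.asarray(x, dtype=float)
    grid = np.linspace(x.min(), x.max(), grid_size)
    lik = norm.pdf(x[:, None] - grid[None, :])   # likelihood matrix f(x_i - u_j)
    p = np.full(grid_size, 1.0 / grid_size)      # prior weights on the grid
    for _ in range(n_iter):                      # EM updates of the mixing weights
        post = lik * p
        post /= post.sum(axis=1, keepdims=True)  # posterior over grid points
        p = post.mean(axis=0)
    marginal = lik @ p
    return (lik * p) @ grid / marginal           # posterior mean for each x_i
```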
ASYMPTOTIC EQUIVALENCE AND ADAPTIVE ESTIMATION FOR ROBUST NONPARAMETRIC REGRESSION
, 2009
"... Asymptotic equivalence theory developed in the literature so far are only for bounded loss functions. This limits the potential applications of the theory because many commonly used loss functions in statistical inference are unbounded. In this paper we develop asymptotic equivalence results for rob ..."
Abstract

Cited by 3 (3 self)
Asymptotic equivalence theory has so far been developed in the literature only for bounded loss functions. This limits the potential applications of the theory, because many loss functions commonly used in statistical inference are unbounded. In this paper we develop asymptotic equivalence results for robust nonparametric regression with unbounded loss functions. The results imply that all Gaussian nonparametric regression procedures can be robustified in a unified way. A key step in our equivalence argument is to bin the data and then take the median of each bin. The asymptotic equivalence results have significant practical implications. To illustrate the general principles of the equivalence argument we consider two important nonparametric inference problems: robust estimation of the regression function and estimation of a quadratic functional. In both cases easily implementable procedures are constructed and shown to simultaneously enjoy a high degree of robustness and adaptivity. Other problems, such as construction of confidence sets and nonparametric hypothesis testing, can be handled in a similar fashion.
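The binning-and-median step described above is simple enough to state directly; a minimal sketch (the bin count and any downstream Gaussian procedure are the user's choice):

```python
import numpy as np

def bin_medians(y, n_bins):
    """Bin the observations in order and take the median of each bin.

    The bin medians are approximately Gaussian for moderately large bins,
    so a Gaussian nonparametric regression procedure can be applied to them."""
    y = np.asarray(y, dtype=float)
    per_bin = len(y) // n_bins                 # observations per bin
    trimmed = y[: per_bin * n_bins]            # drop the remainder, if any
    return np.median(trimmed.reshape(n_bins, per_bin), axis=1)
```

For example, wavelet thresholding applied to these bin medians yields a regression estimator that is robust to heavy-tailed noise.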
On Information Pooling, Adaptability And Superefficiency in Nonparametric Function Estimation
"... The connections between information pooling and adaptability as well as superefficiency are considered. Separable rules, which figure prominently in wavelet and other orthogonal series methods, are shown to lack adaptability; they are necessarily not rateadaptive. A sharp lower bound on the cost of ..."
Abstract

Cited by 1 (0 self)
The connections between information pooling and adaptability, as well as superefficiency, are considered. Separable rules, which figure prominently in wavelet and other orthogonal series methods, are shown to lack adaptability; they are necessarily not rate-adaptive. A sharp lower bound on the cost of adaptation for separable rules is obtained. We show that adaptability is achieved through information pooling. A tight lower bound on the amount of information pooling required for achieving rate-optimal adaptation is given. Furthermore, in sharp contrast to the separable rules, it is shown that adaptive nonseparable estimators can be superefficient at every point in the parameter spaces. The results demonstrate that information pooling is the key to increasing estimation precision as well as to achieving adaptability and even superefficiency.
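In the sequence-model notation standard for this literature, a separable rule estimates each coefficient from its own observation alone,

```latex
\hat\theta_i \;=\; \delta_i(y_i), \qquad i = 1, \dots, n,
```

with each δi a fixed scalar function (soft thresholding, δ(y) = sgn(y)(|y| − λ)₊, is a typical example); the abstract's point is that such coordinatewise rules cannot pool information across coefficients, which is exactly what rate-adaptive estimation requires.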
EXACT MINIMAX ESTIMATION OF THE PREDICTIVE DENSITY IN SPARSE GAUSSIAN MODELS
, 2012
"... We consider estimating the predictive density under KullbackLeibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared ..."
Abstract

Cited by 1 (1 self)
We consider estimating the predictive density under Kullback–Leibler loss in an ℓ0-sparse Gaussian sequence model. Explicit expressions for the first-order minimax risk along with its exact constant, asymptotically least favorable priors, and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision-theoretic phenomena are seen here. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature of the problem, and optimal strategies need diversification of the future risk. We find that minimax optimal strategies lie outside the Gaussian family but can be constructed with threshold predictive density estimates. Novel minimax techniques involving simultaneous calibration of the sparsity adjustment and the risk diversification mechanisms are used to design optimal predictive density estimates.
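The loss referred to above is the standard Kullback–Leibler predictive loss: if the future observation has density p(y | θ) and p̂(y | x) is the estimate formed from the data x, then

```latex
L(\theta, \hat p) \;=\; \int p(y \mid \theta)\,
\log \frac{p(y \mid \theta)}{\hat p(y \mid x)}\, dy .
```

A plug-in estimate takes p̂(y | x) = p(y | θ̂(x)) for some point estimate θ̂; this is the class the abstract identifies as suboptimal in the sparse setting.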
A multiscale method for disease mapping in spatial epidemiology
 Statist. Med.
, 2004
"... The effects of spatial scale in disease mapping are wellrecognized, in that the information conveyed by such maps varies with scale. Here we provide an inferential framework, in the context of tract count data, for describing the distribution of relative risk simultaneously across a hierarchy of mu ..."
Abstract
The effects of spatial scale in disease mapping are well recognized, in that the information conveyed by such maps varies with scale. Here we provide an inferential framework, in the context of tract count data, for describing the distribution of relative risk simultaneously across a hierarchy of multiple scales. In particular, we offer a multiscale extension of the canonical standardized mortality ratio (SMR), consisting of Bayesian posterior-based strategies for both estimation and characterization of uncertainty. As a result, a hierarchy of informative disease and confidence maps can be produced, without the need to first identify a single appropriate scale of analysis. We explore the behavior of the proposed methodology in a small simulation study, and we illustrate its usage through an application to data on gastric cancer in Tuscany.
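A minimal sketch of the canonical SMR evaluated across a nested hierarchy of scales, the quantity the paper extends with Bayesian posterior-based estimation; the hierarchy interface here is hypothetical, and none of the paper's posterior machinery is reproduced.

```python
import numpy as np

def multiscale_smr(observed, expected, hierarchy):
    """SMR = observed count / expected count, computed at every scale.

    observed, expected: per-tract counts, arrays of equal length.
    hierarchy: dict mapping a scale label to a list of index arrays,
               each list partitioning the tracts into regions at that scale.
    """
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return {scale: np.array([observed[idx].sum() / expected[idx].sum()
                             for idx in regions])
            for scale, regions in hierarchy.items()}
```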