Results 1 – 5 of 5
General empirical Bayes wavelet methods and exactly adaptive minimax estimation

2005
Cited by 17 (1 self)

Abstract:
In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of the true signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risks and exact minimax risks in broad collections of classes of signals. In particular, our estimators are uniformly adaptive to the minimum risk of separable estimators and to the exact minimax risks simultaneously in Besov balls of all smoothness and shape indices, and they are uniformly superefficient in convergence rates on all compact sets in Besov spaces with a finite secondary shape parameter. Furthermore, in classes nested between Besov balls of the same smoothness index, our estimators dominate threshold and James–Stein estimators within an infinitesimal fraction of the minimax risks. More general block empirical Bayes estimators are developed. Both the white noise with drift and nonparametric regression models are considered.
Nonparametric empirical Bayes and compound decision approaches to estimation of a high-dimensional vector of normal means
2007
Cited by 11 (5 self)

Abstract:
We consider the classical problem of estimating a vector µ = (µ1, ..., µn) based on independent observations Yi ∼ N(µi, 1), i = 1, ..., n. Suppose µi, i = 1, ..., n, are independent realizations from a completely unknown distribution G. We suggest an easily computed estimator µ̂ such that the ratio of its risk E‖µ̂ − µ‖² to that of the Bayes procedure approaches 1. A related compound decision result is also obtained. Our asymptotics is that of a triangular array; that is, we allow the distribution G to depend on n. Thus, our theoretical asymptotic results are also meaningful in situations where the vector µ is sparse and the proportion of zero coordinates approaches 1. We demonstrate the performance of our estimator in simulations, emphasizing sparse setups. In “moderately sparse” situations, our procedure performs very well compared to known procedures tailored for sparse setups. It also adapts well to nonsparse situations.
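The "easily computed estimator" described above can be illustrated with a small sketch: estimate the marginal density f of the observations with a Gaussian kernel and apply the Tweedie-type plug-in rule µ̂i = Yi + f̂'(Yi)/f̂(Yi). The sparse mixture, bandwidth, and sample size below are our own illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Illustrative sparse means: 5% of coordinates at 4, the rest at 0.
mu = np.where(rng.random(n) < 0.05, 4.0, 0.0)
y = mu + rng.standard_normal(n)

# Kernel estimates of the marginal density f and its derivative f',
# then the plug-in rule  mu_hat_i = y_i + f'(y_i) / f(y_i).
h = 0.3                                   # bandwidth (ad hoc choice)
d = (y[:, None] - y[None, :]) / h         # pairwise scaled differences
K = np.exp(-0.5 * d ** 2)                 # Gaussian kernel (constants cancel)
f = K.sum(axis=1)                         # proportional to f_hat(y_i)
score = (-d / h * K).sum(axis=1) / f      # f_hat'(y_i) / f_hat(y_i)
mu_hat = y + score

mse_naive = np.mean((y - mu) ** 2)        # risk of using y itself (about 1)
mse_eb = np.mean((mu_hat - mu) ** 2)
print(mse_naive, mse_eb)
```

In sparse runs like this one, the compound risk of µ̂ is typically well below that of the naive estimator y itself, in line with the abstract's claim.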
GENERAL MAXIMUM LIKELIHOOD EMPIRICAL BAYES ESTIMATION OF NORMAL MEANS
908
Cited by 4 (0 self)

Abstract:
We propose a general maximum likelihood empirical Bayes (GMLEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that under mild moment conditions on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal fraction of the minimum average MSE among all separable estimators which use a single deterministic estimating function on individual observations, provided that the risk is of greater order than (log n)^5/n. We also prove that the GMLEB is uniformly approximately minimax in regular and weak ℓp balls when the order of the length-normalized norm of the unknown means is between (log n)^κ1/n ...
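The GMLEB recipe sketched below: approximate the nonparametric maximum likelihood estimate of the unknown prior G by supporting it on a grid and running EM fixed-point updates on the grid weights, then apply the posterior-mean rule under the fitted prior. The grid size, iteration count, and sparse test signal are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
mu = np.where(rng.random(n) < 0.1, 3.0, 0.0)     # illustrative sparse means
y = mu + rng.standard_normal(n)

# Approximate NPMLE of G: support the prior on a fixed grid and run
# EM fixed-point updates on the mixing weights w.
grid = np.linspace(y.min(), y.max(), 200)
w = np.full(grid.size, 1.0 / grid.size)
lik = np.exp(-0.5 * (y[:, None] - grid[None, :]) ** 2)   # N(grid_j, 1) likelihoods

for _ in range(200):
    post = lik * w                                # unnormalized posteriors
    post /= post.sum(axis=1, keepdims=True)
    w = post.mean(axis=0)                         # EM update of the weights

# Plug-in (posterior mean) rule under the fitted prior G_hat.
post = lik * w
post /= post.sum(axis=1, keepdims=True)
mu_hat = post @ grid

print(np.mean((y - mu) ** 2), np.mean((mu_hat - mu) ** 2))
```

The fitted rule shrinks most coordinates toward 0 and the rest toward 3 without being told the mixture, which is the point of estimating G rather than assuming it.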
Tweedie’s Formula and Selection Bias
Cited by 2 (0 self)

Abstract:
We suppose that the statistician observes some large number of estimates zi, each with its own unobserved expectation parameter µi. The largest few of the zi’s are likely to substantially overestimate their corresponding µi’s, this being an example of selection bias, or regression to the mean. Tweedie’s formula, first reported by Robbins in 1956, offers a simple empirical Bayes approach for correcting selection bias. This paper investigates its merits and limitations. In addition to the methodology, Tweedie’s formula raises more general questions concerning empirical Bayes theory, discussed here as “relevance” and “empirical Bayes information.” There is a close connection between applications of the formula and James–Stein estimation.
Keywords: Bayesian relevance, empirical Bayes information, James–Stein, false discovery rates, regret, winner’s curse
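Tweedie’s formula states E[µ | z] = z + σ² (d/dz) log f(z), where f is the marginal density of z. The sketch below checks it numerically in the one case with a closed-form answer — a N(0, A) prior with N(µ, 1) noise, where the exact Bayes rule is Az/(A + 1); the value of A and the grid of test points are illustrative choices:

```python
import numpy as np

A = 4.0                          # prior variance (illustrative)
z = np.linspace(-3.0, 3.0, 13)   # test points

# Under mu ~ N(0, A) and z | mu ~ N(mu, 1), the marginal of z is N(0, A + 1).
def log_marginal(t):
    return -0.5 * t ** 2 / (A + 1.0) - 0.5 * np.log(2 * np.pi * (A + 1.0))

# Tweedie's formula with sigma^2 = 1: E[mu | z] = z + d/dz log f(z),
# using a central difference for the derivative of the log-density.
h = 1e-5
score = (log_marginal(z + h) - log_marginal(z - h)) / (2 * h)
tweedie = z + score

exact_bayes = A * z / (A + 1.0)  # closed-form posterior mean
print(np.max(np.abs(tweedie - exact_bayes)))
```

The correction term −z/(A + 1) grows with z, so the largest observed zi’s are pulled back the most; this is the selection-bias adjustment the abstract refers to.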