Results 1 - 3 of 3
Empirical Bayes Estimates for Large-Scale Prediction Problems. http://wwwstat.stanford.edu/~ckirby/brad/papers/2008EBestimates.pdf, 2008
Abstract

Cited by 20 (4 self)
Classical prediction methods such as Fisher’s linear discriminant function were designed for small-scale problems, where the number of predictors N is much smaller than the number of observations n. Modern scientific devices often reverse this situation. A microarray analysis, for example, might include n = 100 subjects measured on N = 10,000 genes, each of which is a potential predictor. This paper proposes an empirical Bayes approach to large-scale prediction, where the optimum Bayes prediction rule is estimated employing the data from all the predictors. Microarray examples are used to illustrate the method. The results show a close connection with the shrunken centroids algorithm of Tibshirani et al. (2002), a frequentist regularization approach to large-scale prediction, and also with false discovery rate theory.
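The shrunken centroids idea mentioned in this abstract can be sketched in a few lines: each class centroid is soft-thresholded toward the overall centroid, so that predictors with small between-class differences drop out of the rule. This is a simplified illustration only (it omits the per-gene within-class standardization and class priors of the actual algorithm of Tibshirani et al.); the function names and toy data here are our own.

```python
import numpy as np

def shrunken_centroids(X, y, delta):
    """Soft-threshold each class centroid toward the overall centroid by delta."""
    overall = X.mean(axis=0)
    cents = {}
    for c in np.unique(y):
        d = X[y == c].mean(axis=0) - overall       # class deviation per feature
        d = np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)  # soft threshold
        cents[c] = overall + d
    return cents

def nearest_centroid(x, cents):
    """Classify x by the closest shrunken centroid (Euclidean distance)."""
    return min(cents, key=lambda c: np.sum((x - cents[c]) ** 2))

# Toy illustration: two classes that differ only in the first feature.
X = np.array([[2., 0.], [2., 0.], [-2., 0.], [-2., 0.]])
y = np.array([0, 0, 1, 1])
cents = shrunken_centroids(X, y, delta=1.0)
label = nearest_centroid(np.array([1.5, 0.2]), cents)
```

With delta = 1.0 the second (pure-noise) feature is thresholded away entirely: both shrunken centroids agree with the overall centroid in that coordinate, so only the informative feature drives the classification.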
Tweedie’s Formula and Selection Bias
Abstract

Cited by 2 (0 self)
We suppose that the statistician observes some large number of estimates zi, each with its own unobserved expectation parameter µi. The largest few of the zi’s are likely to substantially overestimate their corresponding µi’s, this being an example of selection bias, or regression to the mean. Tweedie’s formula, first reported by Robbins in 1956, offers a simple empirical Bayes approach for correcting selection bias. This paper investigates its merits and limitations. In addition to the methodology, Tweedie’s formula raises more general questions concerning empirical Bayes theory, discussed here as “relevance” and “empirical Bayes information.” There is a close connection between applications of the formula and James–Stein estimation. Keywords: Bayesian relevance, empirical Bayes information, James–Stein, false discovery rates, regret, winner’s curse
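Tweedie’s formula itself is concrete: for z ~ N(µ, σ²), E[µ | z] = z + σ² d/dz log f(z), where f is the marginal density of the observed z’s. A rough simulation sketch (our own, not taken from the paper) estimates log f by a polynomial fit to histogram log-counts, in the spirit of Lindsey’s method, and checks that the corrected estimates beat the raw z’s in mean squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma2, n = 1.0, 1.0, 50_000
mu = rng.normal(0.0, np.sqrt(A), n)            # unobserved parameters
z = mu + rng.normal(0.0, np.sqrt(sigma2), n)   # observed estimates

# Estimate log f(z) with a degree-4 polynomial fit to histogram log-counts,
# then apply Tweedie's correction: mu_hat = z + sigma^2 * (log f)'(z).
counts, edges = np.histogram(z, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
coef = np.polyfit(centers[mask], np.log(counts[mask]), 4)
dlogf = np.polyval(np.polyder(coef), z)
mu_hat = z + sigma2 * dlogf

mse_raw = np.mean((z - mu) ** 2)        # ~ sigma2 = 1.0
mse_tweedie = np.mean((mu_hat - mu) ** 2)
```

In this Gaussian–Gaussian setting the true correction is linear shrinkage by A/(A + σ²), so the estimated rule should roughly halve the raw MSE; the point of the formula is that it needs only the marginal density of the z’s, not the prior.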
Prediction of Ordered Random Effects in a Simple Small Area Model
Statistica Sinica (in press), 2009
Abstract
Prediction of a vector of ordered parameters or part of it arises naturally in the context of Small Area Estimation (SAE). For example, one may want to estimate the parameters associated with the top ten areas, the best or worst area, or a certain percentile. We use a simple SAE model to show that estimation of ordered parameters by the corresponding ordered estimates of each area separately does not yield good results with respect to MSE. Shrinkage-type predictors, with an appropriate amount of shrinkage for the particular problem of ordered parameters, are considerably better, and their performance is close to that of the optimal predictors, which cannot in general be computed explicitly.
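A small simulation (our own, under an assumed normal–normal model with unit variances, not the model of the paper) illustrates the abstract’s point: sorting the raw area estimates predicts the ordered parameters worse than sorting shrunken estimates, because the extreme raw estimates overshoot their targets.

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma2, m, reps = 1.0, 1.0, 20, 2_000
shrink = A / (A + sigma2)   # standard Bayes shrinkage factor

sse_raw, sse_shrunk = 0.0, 0.0
for _ in range(reps):
    theta = rng.normal(0.0, np.sqrt(A), m)            # true area parameters
    y = theta + rng.normal(0.0, np.sqrt(sigma2), m)   # direct area estimates
    target = np.sort(theta)                           # ordered parameters
    sse_raw += np.mean((np.sort(y) - target) ** 2)
    sse_shrunk += np.mean((np.sort(shrink * y) - target) ** 2)

mse_raw, mse_shrunk = sse_raw / reps, sse_shrunk / reps
```

The standard shrinkage factor used here is not tuned for the ordered-parameter loss (the paper’s point is that the appropriate amount of shrinkage differs), but even this off-the-shelf choice already dominates the sorted raw estimates in this setup.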