Results 1 – 4 of 4
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
Bayesian Statistics, 2007
Abstract

Cited by 24 (2 self)
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance and so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first one, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite-variance estimator. The resulting
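As a quick illustration of the harmonic mean identity described in this abstract, the sketch below estimates a known marginal likelihood from posterior draws in a toy conjugate normal model. The model and all numbers are assumptions chosen for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (illustrative, not from the paper):
# one observation y ~ N(theta, 1) with prior theta ~ N(0, 1).
y = 1.0
# Exact marginal likelihood: y ~ N(0, 2).
exact = np.exp(-y**2 / 4) / np.sqrt(4 * np.pi)

# Posterior is available in closed form: theta | y ~ N(y/2, 1/2).
theta = rng.normal(y / 2, np.sqrt(0.5), size=200_000)
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2

# Harmonic mean identity: 1/p(y) = E_post[1/L(theta)].
# Average 1/L on the log scale (log-sum-exp) to avoid overflow.
neg = -loglik
log_p_hat = -(neg.max() + np.log(np.mean(np.exp(neg - neg.max()))))
p_hat = np.exp(log_p_hat)
print(p_hat, exact)
```

Because 1/L can be heavy-tailed under the posterior, the raw harmonic mean estimator converges slowly in general; that instability is exactly what the paper's stabilized variants address.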
Extreme Value Distribution Based Gene Selection Criteria for Discriminant Microarray Data Analysis Using Logistic Regression
Abstract

Cited by 1 (0 self)
One important issue commonly encountered in the analysis of microarray data is to decide which and how many genes should be selected for further studies. For discriminant microarray data analyses based on statistical models, such as the logistic regression models, gene selection can be accomplished by a comparison of the maximum likelihood of the model given the real data, L̂(D|M), and the expected maximum likelihood of the model given an ensemble of surrogate data with randomly permuted labels, L̂(D₀|M). Typically, the computational burden for obtaining L̂(D₀|M) is immense, often exceeding the limits of available computing resources by orders of magnitude. Here, we propose an approach that circumvents such heavy computations by mapping the simulation problem to an extreme-value problem. We present the derivation of an asymptotic distribution of the extreme value as well as its mean, median, and variance. Using this distribution, we propose two gene selection criteria, and we apply them to two microarray datasets and three classification tasks for illustration. Key words: microarray, gene selection, extreme value distribution, logistic regression.
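The core trick in this abstract, replacing a costly ensemble of permutation maxima with an asymptotic extreme-value distribution, can be illustrated generically. The toy below uses maxima of i.i.d. standard normal statistics, not the paper's logistic-regression likelihoods, and compares a brute-force Monte Carlo mean against the analytic Gumbel-based prediction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Brute force: simulate the maximum of n i.i.d. N(0,1) draws many times.
n, reps = 1000, 5000
maxima = rng.standard_normal((reps, n)).max(axis=1)
empirical_mean = maxima.mean()

# Extreme-value shortcut: the maximum converges to a Gumbel law with
# location b_n and scale a_n; its mean is b_n + gamma * a_n, where
# gamma is the Euler-Mascheroni constant.
c = np.sqrt(2 * np.log(n))
bn = c - (np.log(np.log(n)) + np.log(4 * np.pi)) / (2 * c)
an = 1 / c
gamma = 0.5772156649
predicted_mean = bn + gamma * an
print(empirical_mean, predicted_mean)
```

The analytic prediction costs essentially nothing, while the simulation scales with both n and the number of replicates, which is the kind of saving the paper obtains for its permutation ensembles.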
, 2003
Abstract
Foundation support from BCS-0136193. The authors would like to thank Dale Poirier and University of California, Irvine economics seminar participants, as well as Bill McCausland and Université de Montréal, Dagenais seminar participants. We introduce the matrix exponential as a way of modelling spatially dependent data. The matrix exponential spatial specification simplifies the log-likelihood, allowing a closed-form solution to the problem of maximum likelihood estimation, and greatly simplifies Bayesian estimation of the model. The matrix exponential spatial specification can produce estimates and inferences similar to those from conventional spatial autoregressive models, but has analytical, computational, and interpretive advantages. We present maximum likelihood and Bayesian approaches to estimation for this spatial model specification, along with model diagnostic and comparison methods.
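One concrete reason the matrix exponential specification simplifies the log-likelihood is the identity det(e^A) = e^{tr(A)}: a spatial weight matrix W has a zero diagonal, so tr(αW) = 0 and the log-determinant term that makes SAR estimation expensive is exactly zero. A minimal sketch, where the 4-node row-standardized W and the parameter α are hypothetical:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical row-standardized contiguity matrix (zero diagonal).
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
alpha = -0.7

S = expm(alpha * W)
# det(e^{A}) = e^{tr(A)}; tr(alpha * W) = 0, so det(S) = 1 exactly,
# which removes the log-determinant from the MESS log-likelihood.
det_S = np.linalg.det(S)
print(det_S)  # ≈ 1.0
```

By contrast, a SAR model must evaluate log|det(I - ρW)| at every candidate ρ, which is the nonlinear-optimization burden the abstract refers to.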
Lecture 6: Matrix Exponential Spatial Models
Abstract
Estimation of traditional spatial autoregressive (SAR) models requires nonlinear optimization for estimation and inference. The conventional spatial autoregressive approach introduces additional theoretical complexity relative to non-spatial autoregressive models and is difficult to implement in large samples. We advocate use of a matrix exponential spatial