Results 1–10 of 26
A fully Bayesian approach to the parcel-based detection-estimation of brain activity in fMRI
, 2008
Estimating and projecting trends in HIV/AIDS generalized epidemics using incremental mixture importance sampling. Biometrics 66(4)
, 2010
Abstract

Cited by 3 (2 self)
The Joint United Nations Programme on HIV/AIDS (UNAIDS) has decided to use Bayesian melding as the basis for its probabilistic projections of HIV prevalence in countries with generalized epidemics. This combines a mechanistic epidemiological model, prevalence data and expert opinion. Initially, the posterior distribution was approximated by sampling-importance-resampling, which is simple to implement, easy to interpret, transparent to users and gave acceptable results for most countries. For some countries, however, this is not computationally efficient because the posterior distribution tends to be concentrated around nonlinear ridges and can also be multimodal. We propose instead Incremental Mixture Importance Sampling (IMIS), which iteratively builds up a better importance sampling function. This retains the simplicity and transparency of sampling-importance-resampling, but is much more efficient computationally. It also leads to a simple estimator of the integrated likelihood that is the basis for Bayesian model comparison and model averaging. In simulation experiments and on real data it outperformed both sampling-importance-resampling and three publicly available generic Markov chain Monte Carlo algorithms for this kind of problem.
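The iterative scheme the abstract describes can be sketched in a few lines. The toy implementation below is an illustration only, not the authors' algorithm: the nearest-neighbour covariance rule and the count-weighted mixture are simplifications, and all function names are our own.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def imis_weights(X, comps, counts, prior_logpdf, loglik):
    """Importance weights under the current prior-plus-Gaussians mixture."""
    ll = np.apply_along_axis(loglik, 1, X)
    lp = np.apply_along_axis(prior_logpdf, 1, X)
    dens = np.exp(lp) * counts[0]                   # prior part of the mixture
    for comp, m in zip(comps, counts[1:]):
        dens += comp.pdf(X) * m
    logw = ll + lp - np.log(dens / sum(counts))
    w = np.exp(logw - logw.max())
    return w / w.sum()

def imis(prior_rvs, prior_logpdf, loglik, n0=1000, b=100, iters=10, seed=0):
    """Toy IMIS: at each step, centre a Gaussian on the current
    highest-weight point (covariance from its b nearest neighbours),
    add it to the importance-sampling mixture, and resample at the end."""
    rng = np.random.default_rng(seed)
    X = prior_rvs(rng, n0)                          # (n0, d) prior draws
    comps, counts = [], [n0]
    for _ in range(iters):
        w = imis_weights(X, comps, counts, prior_logpdf, loglik)
        xk = X[np.argmax(w)]
        nbrs = X[np.argsort(((X - xk) ** 2).sum(axis=1))[:b]]
        cov = np.atleast_2d(np.cov(nbrs.T)) + 1e-6 * np.eye(X.shape[1])
        comp = mvn(mean=xk, cov=cov)
        comps.append(comp)
        counts.append(b)
        new = np.asarray(comp.rvs(size=b, random_state=rng)).reshape(b, -1)
        X = np.vstack([X, new])
    w = imis_weights(X, comps, counts, prior_logpdf, loglik)
    idx = rng.choice(len(X), size=n0, replace=True, p=w)
    return X[idx]                                   # approximate posterior draws
```

On a narrow Gaussian likelihood under a wide prior, the fitted components quickly cover the high-likelihood region that plain sampling-importance-resampling would hit only rarely.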
Computational methods for Bayesian model choice
 In MaxEnt 2009 proceedings (A. I. of Physics)
, 2009
Abstract

Cited by 2 (1 self)
In this note, we shortly survey some recent approaches on the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.
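Two of the surveyed estimators can be compared concretely on a conjugate Gaussian model, where the marginal likelihood is known in closed form. This is a minimal sketch under our own toy model, not code from the survey: importance sampling with the prior as proposal reduces to a prior average of the likelihood, while the harmonic mean averages reciprocal likelihoods over posterior draws.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = 1.3                          # one observation from y ~ N(theta, 1)
# prior theta ~ N(0, 1); the marginal likelihood is exactly N(y; 0, sqrt(2))
true_log_m = norm.logpdf(y, loc=0.0, scale=np.sqrt(2))

n = 200_000
# (1) importance sampling with the prior as proposal:
#     m ≈ (1/N) sum_i p(y | theta_i), theta_i ~ prior
theta = rng.normal(0.0, 1.0, n)
is_log_m = np.log(np.mean(norm.pdf(y, loc=theta, scale=1.0)))

# (2) harmonic mean: theta_i from the posterior (exactly known here),
#     m ≈ 1 / ((1/N) sum_i 1 / p(y | theta_i))
post = rng.normal(y / 2, np.sqrt(0.5), n)
hm_log_m = -np.log(np.mean(1.0 / norm.pdf(y, loc=post, scale=1.0)))
```

In this well-behaved example both estimators land close to the truth; the harmonic mean's notorious instability only shows up when the reciprocal likelihood has heavy tails under the posterior.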
Monte Carlo Estimation of Minimax Regret with an Application to MDL Model Selection
, 2008
Abstract

Cited by 2 (1 self)
Minimum description length (MDL) model selection, in its modern NML formulation, involves a model complexity term which is equivalent to minimax/maximin regret. When the data are discrete-valued, the complexity term is a logarithm of a sum of maximized likelihoods over all possible datasets. Because the sum has an exponential number of terms, its evaluation is in many cases intractable. In the continuous case, the sum is replaced by an integral for which a closed form is available in only a few cases. We present an approach based on Monte Carlo sampling, which works for all model classes, and gives strongly consistent estimators of the minimax regret. The estimates converge almost surely to the correct value with an increasing number of iterations. For the important class of Markov models, one of the presented estimators is particularly efficient: in empirical experiments, accuracy that is sufficient for model selection is usually achieved already on the first iteration, even for long sequences.
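The idea of estimating the sum of maximized likelihoods by sampling can be illustrated on the Bernoulli model, where the maximized likelihood of a binary sequence depends only on its count of ones, so an exact answer is available for checking. This is an illustrative uniform-sampling sketch, not the paper's estimators.

```python
import numpy as np
from math import comb, log

n = 10                                    # length of the binary sequence

def max_lik(k, n):
    """Maximised Bernoulli likelihood of a sequence with k ones
    (0 ** 0 == 1 covers the boundary cases k = 0 and k = n)."""
    p = k / n
    return p ** k * (1 - p) ** (n - k)

# exact NML normaliser: group the 2**n sequences by their count of ones
exact = sum(comb(n, k) * max_lik(k, n) for k in range(n + 1))

# Monte Carlo: sample sequences uniformly at random, so the sum over
# all sequences equals 2**n times the expected maximised likelihood
rng = np.random.default_rng(0)
ks = rng.binomial(n, 0.5, size=200_000)   # count of ones of a uniform sequence
mc = 2 ** n * np.mean([max_lik(k, n) for k in ks])

log_regret_exact = log(exact)             # the complexity (minimax regret) term
log_regret_mc = log(mc)
```

For larger alphabets or Markov models the exact sum is no longer tractable, which is where sampling estimators of this kind earn their keep.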
Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory
 Journal of Machine Learning Research
, 2010
Abstract

Cited by 2 (0 self)
In regular statistical models, the leave-one-out cross-validation is asymptotically equivalent to the Akaike information criterion. However, since many learning machines are singular statistical models, the asymptotic behavior of the cross-validation remains unknown. In previous studies, we established the singular learning theory and proposed a widely applicable information criterion, the expectation value of which is asymptotically equal to the average Bayes generalization loss. In the present paper, we theoretically compare the Bayes cross-validation loss and the widely applicable information criterion and prove two theorems. First, the Bayes cross-validation loss is asymptotically equivalent to the widely applicable information criterion as a random variable. Therefore, model selection and hyperparameter optimization using these two values are asymptotically equivalent. Second, the sum of the Bayes generalization error and the Bayes cross-validation error is asymptotically equal to 2λ/n, where λ is the real log canonical threshold and n is the number of training samples. Therefore, the relation between the cross-validation error and the generalization error is determined by the algebraic geometrical structure of a learning machine. We also clarify that the deviance information criteria are different from the Bayes cross-validation and the widely applicable information criterion.
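Both quantities being compared can be computed from posterior draws. The sketch below evaluates the widely applicable information criterion (training loss plus functional-variance penalty) and the Bayes cross-validation loss, using the exact identity p(y_i | y_{-i}) = 1 / E_post[1 / p(y_i | θ)], on a conjugate Gaussian toy model of our own choosing; it is an illustration of the two definitions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)         # data, model y_i ~ N(theta, 1)
n = len(y)

# conjugate posterior for theta under the prior theta ~ N(0, 100)
post_var = 1.0 / (n + 1.0 / 100)
post_mean = post_var * y.sum()
theta = rng.normal(post_mean, np.sqrt(post_var), size=20_000)

# pointwise log-likelihood matrix, shape (draws, n)
logp = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - theta[:, None]) ** 2

# WAIC = Bayes training loss + functional variance penalty
train_loss = -np.mean(np.log(np.mean(np.exp(logp), axis=0)))
penalty = np.mean(np.var(logp, axis=0))
waic = train_loss + penalty

# Bayes cross-validation loss via the exact leave-one-out identity
# p(y_i | y_{-i}) = 1 / E_post[1 / p(y_i | theta)]
cv_loss = np.mean(np.log(np.mean(np.exp(-logp), axis=0)))
```

In this regular model the two values agree closely even at n = 50; the paper's contribution is showing that the equivalence persists, as random variables, in singular models where AIC-style reasoning breaks down.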
Bayesian modelling for biological pathway annotation of gene expression pathway signatures
, 2010
Abstract

Cited by 2 (2 self)
We present Bayesian models and computational methods for the problem of matching predictions from molecular studies with known biological pathway databases: the problem of pathway annotation of summary results of an experiment or observational study. In areas such as cancer genomics, linking quantified, experimentally defined gene expression signatures with known biological pathway gene sets is essential to improving the understanding of the complexity of molecular pathways related to outcome. Our probabilistic pathway annotation (PROPA) analysis involves new models for formal assessment and rankings of pathways putatively linked to an experimental or observational phenotype, integrates qualitative biological information into the analysis, and generates coherent inferences on uncertainties about gene pathway membership that can inform the revision of pathway databases. Our analysis relies on simulation-based computation in high-dimensional models, and introduces a novel extension of variational methods for computation of model evidence, or marginal likelihood functions, that are central to the comparison of multiple biological pathways. Examples highlight the methodology using both simulated and real data, and we develop detailed case studies in breast cancer genomics involving hormonal pathways and pathway activities underlying cellular responses to lactic acidosis in breast cancer. The second study demonstrates the application of the method in decomposing the complexity of gene expression-based predictions about interacting biological pathway activation from both experimental (in vitro) and observational (in vivo) human cancer data.
Approximating the marginal likelihood using copula
, 810
Abstract
Model selection is an important activity in modern data analysis and the conventional Bayesian approach to this problem involves calculation of marginal likelihoods for different models, together with diagnostics which examine specific aspects of model fit. Calculating the marginal likelihood is a difficult computational problem. Our article proposes some extensions of the Laplace approximation for this task that are related to copula models and which are easy to apply. Variations which can be used both with and without simulation from the posterior distribution are considered, as well as use of the approximations with bridge sampling and in random effects models with a large number of latent variables. The use of a t-copula to obtain higher accuracy when multivariate dependence is not well captured by a Gaussian copula is also discussed.
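For intuition, the basic Laplace approximation that these copula methods extend can be checked on a conjugate Gaussian model of our own construction (not the article's examples). Because the log posterior is exactly quadratic there, the approximation matches the exact log marginal likelihood to machine precision; the copula extensions target the non-Gaussian posteriors where it does not.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(0.8, 1.0, size=30)         # data, model y_i ~ N(theta, 1)
n = len(y)

def log_h(t):
    """Unnormalised log posterior: log p(y | t) + log p(t), prior N(0, 1)."""
    return norm.logpdf(y, loc=t, scale=1.0).sum() + norm.logpdf(t, 0.0, 1.0)

# posterior mode and negative Hessian are closed-form in this model
t_hat = y.sum() / (n + 1)
neg_hess = n + 1.0

# Laplace: log m ≈ log h(t_hat) + (d/2) log(2 pi) - (1/2) log|H|, with d = 1
laplace = log_h(t_hat) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(neg_hess)

# exact log marginal likelihood of the conjugate model, for comparison
exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
         - 0.5 * (y @ y - y.sum() ** 2 / (n + 1)))
```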
Biometrics DOI: 10.1111/j.1541-0420.2010.01399.x Estimating and Projecting Trends in HIV/AIDS Generalized Epidemics Using Incremental Mixture Importance Sampling