Results 1–10 of 36
Bayes Factors, 1995
Cited by 1012 (70 self)
Abstract
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology, and psychology.
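The definition above can be made concrete in the simplest setting: testing H0: p = 1/2 for binomial data against a uniform prior on p under the alternative. A minimal sketch (the model, prior, and function name are illustrative choices, not taken from the paper):

```python
from math import comb

def bayes_factor_binomial(y: int, n: int) -> float:
    """Bayes factor B01 for H0: p = 1/2 versus H1: p ~ Uniform(0, 1),
    given y successes in n binomial trials.
    Under H0 the marginal likelihood is C(n, y) * 0.5**n; under the
    uniform prior the marginal likelihood integrates to 1 / (n + 1)."""
    m0 = comb(n, y) * 0.5 ** n
    m1 = 1.0 / (n + 1)
    return m0 / m1

# When the prior probability on the null is one-half (prior odds 1),
# the posterior odds of H0 equal the Bayes factor itself.
b01 = bayes_factor_binomial(60, 100)
```

Values above 1 favor the null, values below 1 the alternative.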
Markov chains for exploring posterior distributions
Annals of Statistics, 1994
Cited by 773 (6 self)
Bayes factors and model uncertainty
Department of Statistics, University of Washington, 1993
Cited by 90 (6 self)
Abstract
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations that are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
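The asymptotic approximation mentioned here, computable from the output of standard likelihood-maximizing software, is essentially the Schwarz (BIC) criterion. A hedged sketch (the function name and the illustrative numbers are mine, not the paper's):

```python
from math import exp, log

def bic_approx_bayes_factor(loglik1: float, loglik0: float,
                            d1: int, d0: int, n: int) -> float:
    """Schwarz (BIC) approximation to the Bayes factor B10:
        log B10 ≈ (l1 - l0) - ((d1 - d0) / 2) * log(n),
    where l_i is the maximized log-likelihood and d_i the number of
    free parameters of model i. Only the maximized likelihoods,
    parameter counts, and sample size are needed."""
    return exp((loglik1 - loglik0) - 0.5 * (d1 - d0) * log(n))

# Hypothetical fits: model 1 gains 5 log-likelihood units for one
# extra parameter on n = 100 observations.
b10 = bic_approx_bayes_factor(-100.0, -105.0, 3, 2, 100)
```

The error of this approximation in log B10 does not vanish with n, but it is often adequate for rough model screening.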
Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems
 Statistical Science
Cited by 35 (5 self)
Abstract
This paper is a survey of the major techniques and approaches available for the numerical approximation of integrals in statistics. We classify these into five broad categories, namely: asymptotic methods, importance sampling, adaptive importance sampling, multiple quadrature, and Markov chain methods. Each method is discussed, giving an outline of the basic supporting theory and the particular features of the technique. Conclusions are drawn concerning the relative merits of the methods, based on the discussion and their application to three examples. The following broad recommendations are made. Asymptotic methods should only be considered in contexts where the integrand has a dominant peak with approximate ellipsoidal symmetry. Importance sampling, and preferably adaptive importance sampling, based on a multivariate Student-t distribution should be used instead of asymptotic methods in such a context. Multiple quadrature, and in particular subregion adaptive integration, are the algorithms of choice for ...
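The recommendation above (importance sampling from a heavy-tailed Student proposal) can be sketched in one dimension; the helper names and the toy target are illustrative, not from the survey:

```python
import math
import random

def importance_sample_mean(log_target, prop_sample, prop_logpdf,
                           n=20000, seed=0):
    """Self-normalized importance-sampling estimate of the mean of an
    unnormalized target density: draw from the proposal, weight each
    draw by target/proposal, and normalize by the total weight."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = prop_sample(rng)
        w = math.exp(log_target(x) - prop_logpdf(x))
        num += w * x
        den += w
    return num / den

# Heavy-tailed Student-t(3) proposal: Z / sqrt(V) with V ~ chi^2_3 / 3.
def t3_sample(rng):
    return rng.gauss(0.0, 1.0) / math.sqrt(rng.gammavariate(1.5, 2.0) / 3.0)

def t3_logpdf(x):
    nu = 3.0
    c = math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2) \
        - 0.5 * math.log(nu * math.pi)
    return c - (nu + 1) / 2 * math.log(1.0 + x * x / nu)

# Toy target: an unnormalized standard normal, whose mean is 0.
mean_est = importance_sample_mean(lambda x: -0.5 * x * x, t3_sample, t3_logpdf)
```

The heavy tails of the t-proposal keep the importance weights bounded for a light-tailed target, which is exactly why the survey prefers a Student over a normal proposal.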
Issues in Bayesian Analysis of Neural Network Models, 1998
Cited by 31 (0 self)
Abstract
This paper discusses these issues, exploring the potential of Bayesian ideas in the analysis of NN models. Buntine and Weigend (1991) and MacKay (1992) have provided frameworks for their Bayesian analysis based on Gaussian approximations, and Neal (1993) has applied hybrid Monte Carlo ideas. Ripley (1993) and Cheng and Titterington (1994) have dwelt on the power of these ideas, especially as far as interpretation and architecture selection are concerned. See MacKay (1995) for a recent review. From a statistical modeling point of view, NNs are a special instance of mixture models. Many issues about posterior multimodality and computational strategies in NN modeling are of relevance in the wider class of mixture models. Related recent references in the Bayesian literature on mixture models include Diebolt and Robert (1994), Escobar and West (1994), Robert and Mengersen (1995), Roeder and Wasserman (1995), West (1994), West and Cao (1993), West, Muller and Escobar (1994), and West and Turner (1994). We concentrate on approximation problems, though many of our suggestions can be translated to other areas. For those problems, NNs are viewed as highly nonlinear (semiparametric) approximators, where parameters are typically estimated by least squares. Applications of interest for practitioners include nonlinear regression, stochastic optimisation, and regression metamodels for simulation output. The main issue we address here is how to undertake a Bayesian analysis of a NN model, and the uses we may make of it. Our contributions include: an evaluation of computational approaches to Bayesian analysis of NN models, including a novel Markov chain Monte Carlo scheme; a suggestion of a scheme for handling a variable-architecture model; and a scheme for combining NN models with more ...
Geometric Ergodicity of Gibbs and Block Gibbs Samplers for a Hierarchical Random Effects Model, 1998
Cited by 28 (8 self)
Abstract
We consider fixed-scan Gibbs and block Gibbs samplers for a Bayesian hierarchical random effects model with proper conjugate priors. A drift condition given in Meyn and Tweedie (1993, Chapter 15) is used to show that these Markov chains are geometrically ergodic. Showing that a Gibbs sampler is geometrically ergodic is the first step towards establishing central limit theorems, which can be used to approximate the error associated with Monte Carlo estimates of posterior quantities of interest. Thus, our results will be of practical interest to researchers using these Gibbs samplers for Bayesian data analysis.
Key words and phrases: Bayesian model, central limit theorem, drift condition, Markov chain, Monte Carlo, rate of convergence, variance components.
AMS 1991 subject classifications: primary 60J27; secondary 62F15.
1 Introduction. Gelfand and Smith (1990, Section 3.4) introduced the Gibbs sampler for the hierarchical one-way random effects model with proper conjugate priors. Rosen...
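A fixed-scan Gibbs sampler of the kind analysed here can be sketched for a stripped-down one-way random effects model, taking the variance components as known and a flat prior on the grand mean. This is a simplification of the paper's model, which places proper conjugate priors on the variances:

```python
import random
import statistics

def gibbs_one_way(y, sigma2=1.0, tau2=1.0, iters=5000, seed=1):
    """Fixed-scan Gibbs sampler for a simplified one-way random
    effects model with known variances and a flat prior on mu:
        y[i][j] ~ N(theta[i], sigma2),   theta[i] ~ N(mu, tau2).
    Each sweep draws the block (theta_1, ..., theta_K) from its
    conjugate normal full conditional, then mu given the thetas."""
    rng = random.Random(seed)
    K = len(y)
    mu = 0.0
    draws = []
    for _ in range(iters):
        theta = []
        for group in y:
            prec = len(group) / sigma2 + 1.0 / tau2
            mean = (sum(group) / sigma2 + mu / tau2) / prec
            theta.append(rng.gauss(mean, prec ** -0.5))
        mu = rng.gauss(sum(theta) / K, (tau2 / K) ** 0.5)
        draws.append(mu)
    return draws

data = [[2.9, 3.1, 3.0], [4.8, 5.2], [3.9, 4.1, 4.0, 4.2]]
mus = gibbs_one_way(data)
post_mu = statistics.fmean(mus)  # Monte Carlo estimate of E[mu | y]
```

Geometric ergodicity of such a chain is what licenses a central limit theorem for `post_mu` and hence a standard error for the Monte Carlo estimate.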
Approximating Hidden Gaussian Markov Random Fields
Journal of the Royal Statistical Society, Series B, 2003
Cited by 19 (4 self)
Abstract
This paper discusses how to construct approximations to a unimodal hidden Gaussian Markov random field on a graph of dimension n when the likelihood consists of mutually independent data. We demonstrate that a class of non-Gaussian approximations can be constructed for a wide range of likelihood models. They have the appealing properties that exact samples can be drawn from them, the normalisation constant is computable, and the computational complexity is only O(n^2) in the spatial case. The non-Gaussian approximations are refined versions of a Gaussian approximation. The latter serves well if the likelihood is near-Gaussian, but it is not sufficiently accurate when the likelihood is not near-Gaussian or if n is large. The accuracy of our approximations can be tuned by intuitive parameters to near any precision. We apply ...
Bayesian analysis of duration models: an application to Chapter 11 bankruptcy
Economics Letters, 1999
Cited by 16 (0 self)
Abstract
We develop a Bayesian approach to estimating duration models and apply it to the default data of high-yield bonds. The instantaneous probability of a firm completing Chapter 11 increases up to the twenty-first month in Chapter 11 and then declines towards zero. © 1999 Elsevier Science S.A. All rights reserved.
Hierarchical Models: A Current Computational Perspective
Journal of the American Statistical Association, 2000
Cited by 9 (1 self)
Abstract
Hierarchical models (HMs) provide a flexible framework for modeling data. The ongoing development of techniques like the EM algorithm and Markov chain Monte Carlo has enabled statisticians to make use of increasingly complicated HMs over the last few decades. In this article, we consider Bayesian and frequentist versions of a general, two-stage HM, and describe several examples from the literature that illustrate its versatility. Some key aspects of the computational techniques that are currently used in conjunction with this HM are then examined in the context of McCullagh and Nelder's (1989) salamander data. Several areas that are ripe for new research are identified.
Laplace's method approximations for probabilistic inference in belief networks with continuous variables
In de Mantaras, 1994
Cited by 8 (0 self)
Abstract
Laplace's method, a family of asymptotic methods used to approximate integrals, is presented as a potential candidate for the toolbox of techniques used for knowledge acquisition and probabilistic inference in belief networks with continuous variables. This technique approximates posterior moments and marginal posterior distributions with reasonable accuracy [errors are O(n^-2) for posterior means] in many interesting cases. The method also seems promising for computing approximations to Bayes factors for use in the context of model selection, model uncertainty, and mixtures of pdfs. The limitations, regularity conditions, and computational difficulties in the implementation of Laplace's method are comparable to those associated with the methods of maximum likelihood and posterior mode analysis.
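In one dimension, Laplace's method replaces the integrand by a Gaussian matched at its mode; a small sketch of that setup (the numerical second derivative and the function names are illustrative):

```python
import math

def laplace_integral(log_f, mode, h=1e-4):
    """One-dimensional Laplace approximation to the integral of
    exp(log_f(t)) dt: a second-order expansion of log_f around its
    mode leaves a Gaussian integral with variance -1/log_f''(mode)."""
    # Central-difference estimate of the second derivative at the mode.
    d2 = (log_f(mode + h) - 2.0 * log_f(mode) + log_f(mode - h)) / h ** 2
    return math.exp(log_f(mode)) * math.sqrt(2.0 * math.pi / -d2)

# Sanity check: Gamma(n+1) = integral of t^n e^{-t} dt; the integrand's
# log, n*log(t) - t, is maximized at t = n, and Laplace's method
# recovers Stirling's approximation to n!.
n = 20
approx = laplace_integral(lambda t: n * math.log(t) - t, float(n))
exact = float(math.factorial(n))
```

The relative error here shrinks as n grows, which is the asymptotic behaviour the abstract alludes to.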