MCMC Methods for Computing Bayes Factors: A Comparative Review
Journal of the American Statistical Association, 2000
Cited by 33 (1 self)
In this paper we review several of these methods, and subsequently compare them in the context of two examples: the first a simple regression example, the second a much more challenging hierarchical longitudinal model of the kind often encountered in biostatistical practice. We find that the joint model/parameter space search methods perform adequately but can be difficult to program and tune, while the marginal likelihood methods are often less troublesome and require less additional coding. Our results suggest that the latter methods may be most appropriate for practitioners working in many standard model choice settings, while the former remain important for comparing large numbers of models, or models whose parameters cannot be easily updated in relatively few blocks. We caution, however, that all of the methods we compare require significant human and computer effort, suggesting that less formal Bayesian model choice methods may offer a more realistic alternative in many cases.
Deviance Information Criterion for Comparing Stochastic Volatility Models
Journal of Business and Economic Statistics, 2002
Cited by 33 (9 self)
Bayesian methods have been efficient in estimating parameters of stochastic volatility models for analyzing financial time series. Recent advances have made it possible to fit stochastic volatility models of increasing complexity, including covariates, leverage effects, jump components and heavy-tailed distributions. However, a formal model comparison via Bayes factors remains difficult. The main objective of this paper is to demonstrate that model selection is more easily performed using the deviance information criterion (DIC), which combines a Bayesian measure of fit with a measure of model complexity. We illustrate the performance of DIC in discriminating between various stochastic volatility models using simulated data and daily returns data on the S&P 100 index.
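The DIC the paper advocates is DIC = D̄ + p_D, where D̄ is the posterior mean deviance and p_D = D̄ − D(θ̄) estimates the effective number of parameters. A minimal sketch on a toy normal-mean model (my own illustration, not one of the paper's volatility models), where p_D should come out near 1:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y_i ~ N(theta, 1), with one free parameter theta.
y = rng.normal(1.0, 1.0, size=50)
n = len(y)

# Conjugate posterior under a flat prior: theta | y ~ N(ybar, 1/n).
draws = rng.normal(y.mean(), 1 / np.sqrt(n), size=50_000)

def deviance(theta):
    # D(theta) = -2 log p(y | theta) for each theta in a vector.
    return (np.sum((y[None, :] - np.atleast_1d(theta)[:, None]) ** 2, axis=1)
            + n * np.log(2 * np.pi))

D_bar = deviance(draws).mean()                  # posterior mean deviance
D_hat = deviance(np.array([draws.mean()]))[0]   # deviance at the posterior mean
p_D = D_bar - D_hat                             # effective number of parameters
DIC = D_bar + p_D                               # equivalently D_hat + 2 * p_D

print(round(p_D, 2))  # close to 1: the model has one free parameter
```

The same recipe applies to MCMC output from a volatility model; only the deviance function changes.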
The Strength of Statistical Evidence for Composite Hypotheses: Inference to the Best Explanation
2010
Cited by 6 (4 self)
A general function to quantify the weight of evidence in a sample of data for one hypothesis over another is derived from the law of likelihood and from a statistical formalization of inference to the best explanation. For a fixed parameter of interest, the resulting weight of evidence that favors one composite hypothesis over another is the likelihood ratio using, for each hypothesis, the parameter value consistent with that hypothesis that maximizes the likelihood function over the parameter of interest. Since the weight of evidence is generally only known up to a nuisance parameter, it is approximated by replacing the likelihood function with a reduced likelihood function on the interest parameter space. Unlike the Bayes factor, and unlike the p-value under interpretations that extend its scope, the weight of evidence is coherent in the sense that it cannot support a hypothesis over any hypothesis that it entails. Further, when comparing the hypothesis that the parameter lies outside a nontrivial interval to the hypothesis that it lies within the interval, the proposed method of weighing evidence almost always asymptotically favors the correct hypothesis.
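The core quantity is a ratio of likelihoods, each maximized over the parameter values consistent with its own composite hypothesis. A sketch on a hypothetical one-parameter normal model with a threshold hypothesis (illustrative data and hypotheses of my own choosing):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Fixed illustrative data: y_i ~ N(theta, 1), with sample mean exactly 0.8.
y = np.linspace(0.3, 1.3, 30)

def neg_log_lik(theta):
    # Negative log-likelihood up to a constant.
    return 0.5 * np.sum((y - theta) ** 2)

# Composite hypotheses: H1: theta > 0.5 versus H0: theta <= 0.5.
# The weight of evidence maximizes the likelihood separately over each set.
opt1 = minimize_scalar(neg_log_lik, bounds=(0.5, 10.0), method="bounded")
opt0 = minimize_scalar(neg_log_lik, bounds=(-10.0, 0.5), method="bounded")

log_weight = opt0.fun - opt1.fun  # log likelihood ratio favoring H1
print(round(log_weight, 2))       # 1.35 = 0.5 * n * (ybar - 0.5)^2
```

With the MLE inside H1, the H0 maximum sits on the boundary at 0.5, so the log weight reduces to the closed form in the comment.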
Confidence Limits: What Is The Problem? Is There The Solution?
2000
Cited by 4 (1 self)
This contribution to the debate on confidence limits focuses mostly on the case of measurements with 'open likelihood', in the sense defined in the text. I will show that, though a prior-free assessment of confidence is in general not possible, a search result can still be reported in a mostly unbiased and efficient way that satisfies desiderata which I believe are shared by those interested in the subject. The simpler case of 'closed likelihood' will also be treated, and I will discuss why a uniform prior on a sensible quantity is a very reasonable choice for most applications. In both cases, I think that much clarity will be achieved if we remove the misleading expressions 'confidence intervals' and 'confidence levels' from scientific parlance.
Quantifying probative value
Boston University Law Review, 1986
Cited by 3 (1 self)
D. Davis and W. C. Follette (2002) purport to show that when "the base rate" for a crime is low, the probative value of "characteristics known to be strongly associated with the crime ... will be virtually nil." Their analysis rests on the choice of an arbitrary and inapposite measure of the probative value of evidence. When a more suitable metric is used (e.g., a likelihood ratio), it becomes clear that evidence they would dismiss as devoid of probative value is relevant and diagnostic. A man and a woman were found in a ditch by their snowmobile. The woman was lying face down in the water. The man was sitting face up, but apparently not breathing. CPR was applied to both. Only the man survived. The state charged him with murder. The prosecution's theory was that he took the opportunity to kill his wife to recover as the beneficiary of a large insurance policy he had purchased that year. To support this charge, the state introduced evidence that the defendant had repeatedly been unfaithful to his wife (Davis & Follette, 2002; henceforth "DF, 2002"). The defendant hired two psychologists, Deborah Davis and William Follette, to ...
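The likelihood-ratio metric the author recommends is simple arithmetic: the probative value of evidence is P(evidence | one hypothesis) / P(evidence | the other), and it is independent of the base rate, which enters only through the prior odds. The numbers below are purely illustrative assumptions, not figures from the case:

```python
# Probative value as a likelihood ratio:
#   LR = P(evidence | H_guilty) / P(evidence | H_innocent).
# Hypothetical illustrative probabilities, not data from DF (2002).
p_evidence_given_guilty = 0.60    # assumed rate of the trait among the guilty
p_evidence_given_innocent = 0.10  # assumed rate among the innocent

likelihood_ratio = p_evidence_given_guilty / p_evidence_given_innocent
print(round(likelihood_ratio, 3))  # 6.0: the evidence multiplies the odds by 6
                                   # no matter how low the base rate is

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = 0.001 / 0.999         # a deliberately low base rate
posterior_odds = prior_odds * likelihood_ratio
```

A low base rate makes the posterior probability small, but the evidence is still diagnostic: it shifts the odds by the same factor regardless of the prior.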
A Survey of Logic Formalisms to Support Mishap Analysis
Cited by 3 (1 self)
Mishap investigations provide important information about adverse events and near-miss incidents. They are intended to help avoid any recurrence of previous failures. Over time, they can also yield statistical information about incident frequencies that helps to detect patterns of failure and can validate risk assessments. However, the increasing complexity of many safety-critical systems is posing new challenges for mishap analysis. Similarly, the recognition that many failures have complex, systemic causes has helped to widen the scope of many mishap investigations. These two factors have combined to pose new challenges for the analysis of adverse events. A new generation of formal and semi-formal techniques has been proposed to help investigators address these problems. We introduce the term 'mishap logics' to collectively describe these notations that might be applied to support the analysis of mishaps. The proponents of these notations have argued that they can be used to formally prove that certain events created the necessary and sufficient causes for a mishap to occur. Such proofs can be used to reduce the bias that is often perceived to affect the interpretation of adverse events. Others have argued that one cannot use logic formalisms to prove causes in the same way that one might prove propositions or theorems: such mechanisms cannot accurately capture the wealth of inductive, deductive and statistical forms of inference that investigators must use in their analysis of adverse events. This paper provides an overview of these mishap logics. It also identifies several additional classes of logic that might be used to support mishap analysis.
Probit and logit models: Differences in a multivariate realm
Available at: http://home.gwu.edu/~soyer/mv1h.pdf, 2007
Cited by 2 (0 self)
Current opinion regarding the selection of link function in binary response models is that the probit and logit links give essentially similar results. This seems to be true for univariate binary response models; however, for multivariate binary response models such advice is misleading. We address a gap in the literature by empirically examining the relationship between link function selection and model fit in two classes of multivariate binary response models. We find clear evidence that model fit can be improved by the selection of the appropriate link even in small data sets. In multivariate link function models, the logit link provides better fit in the presence of extreme independent variable levels. Conversely, model fit in random effects models with moderate-size data sets is generally improved by selecting the probit link.
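The divergence between the links can be seen directly from the link functions themselves: rescaled to match near the center (the factor 1.6 below is a common rule-of-thumb assumption), the probit CDF has much thinner tails than the logistic, which is exactly where the paper finds the choice matters. A quick numerical sketch:

```python
import numpy as np
from scipy.stats import norm

# Values of the linear predictor, from the center out to the extreme tail.
x = np.array([0.0, 1.0, 2.0, 4.0, 6.0])

logit_p = 1 / (1 + np.exp(-x))   # logistic response probabilities
probit_p = norm.cdf(x / 1.6)     # probit, rescaled to match near x = 0

# Near the center the links agree to about 0.01; at extreme predictor
# values the probit approaches 1 much faster (thinner tails), so the
# logit assigns far more probability to "surprising" outcomes there.
print(np.round(logit_p - probit_p, 4))
```

In a univariate fit this gap is absorbed by rescaling the coefficients, which is why the links seem interchangeable; in multivariate models the tail behavior interacts with the correlation structure and can no longer be rescaled away.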
Bayesian Model Search and Multilevel Inference for SNP Association Studies
Annals of Applied Statistics, 2010
Cited by 1 (1 self)
Technological advances in genotyping have given rise to hypothesis-based association studies of increasing scope. As a result, the scientific hypotheses addressed by these studies have become more complex and more difficult to address using existing analytic methodologies. Obstacles to analysis include inference in the face of multiple comparisons, complications arising from correlations among the SNPs (single nucleotide polymorphisms), choice of their genetic parametrization, and missing data. In this paper we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method for Multilevel Inference of SNP Associations, MISA, allows computation of multilevel posterior probabilities and Bayes factors at the global, gene and SNP level, with the prior distribution on SNP inclusion in the model providing an intrinsic multiplicity correction. We use simulated data sets to characterize MISA's statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and have been externally "validated" in independent studies. We examine sensitivity of the NCOCS results to prior choice and to the method for imputing missing data. MISA is available in an R package on CRAN.
Calibrating Bayes factor under prior predictive distributions
Statistica Sinica, 2005
Cited by 1 (0 self)
The Bayes factor is a popular criterion in Bayesian model selection. Because the prior predictive distribution of the Bayes factor is not symmetric across models, a scale of evidence in favor of one model against another constructed solely from the observed value of the Bayes factor is inappropriate. To overcome this problem, we propose a calibrating value of the Bayes factor based on the prior predictive distributions, together with a decision rule based on this calibrating value for selecting the model. We further show that the proposed decision rule based on the calibration distribution is equivalent to a surprise-based decision: we choose the model for which the observed Bayes factor is less surprising. Moreover, we demonstrate that the decision rule based on the calibrating value is closely related to the classical rejection region for a standard hypothesis testing problem. An efficient Monte Carlo method is proposed for computing the calibrating value. In addition, we carefully examine the robustness of the decision rule based on the calibration distribution to the choice of imprecise priors under both nested and non-nested models. A data set is used to further illustrate the proposed methodology, and several important extensions are also discussed. Key words and phrases: calibrating value, critical value, hypothesis testing, imprecise prior, L measure, model selection, Monte Carlo, posterior model probability, pseudo-Bayes factor, p-value.
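The Monte Carlo calibration idea can be sketched on a toy nested normal pair (my own illustration; the paper's models and decision rule are more general): simulate datasets from the prior predictive under one model, build the calibration distribution of the Bayes factor, and flag an observed value that lands in its tail as surprising.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20

def log_bf01(y):
    # M0: y_i ~ N(0, 1).  M1: y_i ~ N(theta, 1) with prior theta ~ N(0, 1),
    # so marginally under M1: y ~ N(0, Sigma) with Sigma = I + 11'.
    log_m0 = -0.5 * np.sum(y ** 2) - 0.5 * n * np.log(2 * np.pi)
    Sigma = np.eye(n) + np.ones((n, n))
    _, logdet = np.linalg.slogdet(Sigma)
    log_m1 = (-0.5 * y @ np.linalg.solve(Sigma, y)
              - 0.5 * logdet - 0.5 * n * np.log(2 * np.pi))
    return log_m0 - log_m1

# Calibration distribution: the Bayes factor recomputed on datasets drawn
# from the prior predictive distribution under M0.
cal = np.array([log_bf01(rng.normal(0.0, 1.0, n)) for _ in range(2000)])
threshold = np.quantile(cal, 0.05)   # cutoff for a "surprisingly small" BF01

y_obs = np.full(n, 1.0)              # illustrative observed data far from M0
print(log_bf01(y_obs) < threshold)   # True: observed BF01 is surprising under M0
```

The observed value is judged against where Bayes factors typically fall when M0 is true, rather than against a fixed scale such as Jeffreys'.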
Inequalities for Bayes Factors and Relative Belief Ratios
2011
Cited by 1 (0 self)
We discuss the definition of a Bayes factor and the Savage-Dickey result, and develop some inequalities relevant to Bayesian inferences. We consider the implications of these inequalities for the Bayes factor approach to hypothesis assessment. An approach to hypothesis assessment is recommended based on the computation of a Bayes factor, a measure of the reliability of the Bayes factor, and the point where the Bayes factor is maximized. This can be seen to deal with many of the issues and controversies associated with hypothesis assessment. It is noted that an inconsistency in prior assignments can arise when priors are placed on hypotheses that do not arise from a parameter of interest. It is recommended that this inconsistency be avoided by choosing a distance measure from the hypothesis as the parameter of interest. An application is made to assessing the goodness of fit of a logistic regression model, and it is shown that this resolves some difficulties associated with assigning priors for this model.
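The Savage-Dickey result mentioned above states that, for a point null H0: θ = θ0 nested in a model with continuous prior π(θ), the Bayes factor BF01 equals the ratio of the posterior to the prior density at θ0. A sketch on a conjugate normal model of my own choosing, checked against the direct marginal-likelihood computation:

```python
import numpy as np
from scipy.stats import norm

# Savage-Dickey density ratio for H0: theta = 0, nested in
# y_i ~ N(theta, 1) with prior theta ~ N(0, 1).
y = np.linspace(-0.2, 0.6, 15)  # fixed illustrative data, sample mean 0.2
n, ybar = len(y), y.mean()

# Conjugate posterior: theta | y ~ N(n*ybar/(n+1), 1/(n+1)).
post_mean, post_sd = n * ybar / (n + 1), 1 / np.sqrt(n + 1)

# Savage-Dickey: BF01 = posterior density at 0 / prior density at 0.
bf01_sd = norm.pdf(0, post_mean, post_sd) / norm.pdf(0, 0, 1)

# Direct check via marginal likelihoods: BF01 = m0(y) / m1(y),
# where marginally under M1: y ~ N(0, I + 11').
log_m0 = np.sum(norm.logpdf(y, 0, 1))
Sigma = np.eye(n) + np.ones((n, n))
_, logdet = np.linalg.slogdet(Sigma)
log_m1 = (-0.5 * y @ np.linalg.solve(Sigma, y)
          - 0.5 * logdet - 0.5 * n * np.log(2 * np.pi))
bf01_direct = np.exp(log_m0 - log_m1)

print(np.isclose(bf01_sd, bf01_direct))  # True: the two routes agree
```

In conjugate models the identity is exact; with MCMC output the posterior density at θ0 would instead be estimated from the draws.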