Results 1–10 of 23
MCMC Methods for Computing Bayes Factors: A Comparative Review
 Journal of the American Statistical Association
, 2000
Abstract

Cited by 31 (1 self)
In this paper we review several of these methods, and subsequently compare them in the context of two examples: the first a simple regression example, and the second a much more challenging hierarchical longitudinal model of the kind often encountered in biostatistical practice. We find that the joint model-parameter space search methods perform adequately but can be difficult to program and tune, while the marginal likelihood methods are often less troublesome and require less in the way of additional coding. Our results suggest that the latter methods may be most appropriate for practitioners working in many standard model choice settings, while the former remain important for comparing large numbers of models, or models whose parameters cannot be easily updated in relatively few blocks. We caution, however, that all of the methods we compare require significant human and computer effort, suggesting that less formal Bayesian model choice methods may offer a more realistic alternative in many cases.
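As a concrete (if naive) illustration of the marginal likelihood route this abstract refers to, the sketch below estimates a Bayes factor for a toy Gaussian-mean problem by averaging the likelihood over draws from the prior. The data, prior, and models are all hypothetical stand-ins; real applications would use the more stable estimators the paper reviews.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 observations with unknown mean (illustrative only)
y = rng.normal(0.5, 1.0, size=20)

def log_lik(y, mu):
    # Gaussian log-likelihood with known unit variance
    return -0.5 * len(y) * np.log(2 * np.pi) - 0.5 * np.sum((y - mu) ** 2)

# M0: mu fixed at 0, so the marginal likelihood is just the likelihood
log_m0 = log_lik(y, 0.0)

# M1: mu ~ N(0, 1) prior; estimate p(y | M1) by averaging the likelihood
# over prior draws (naive Monte Carlo, stabilized with log-sum-exp)
draws = rng.normal(0.0, 1.0, size=50_000)
log_liks = (-0.5 * len(y) * np.log(2 * np.pi)
            - 0.5 * ((y[None, :] - draws[:, None]) ** 2).sum(axis=1))
log_m1 = np.log(np.mean(np.exp(log_liks - log_liks.max()))) + log_liks.max()

bayes_factor = np.exp(log_m1 - log_m0)  # BF_10, M1 vs M0
print(f"log BF_10 = {log_m1 - log_m0:.2f}")
```

For this conjugate toy problem the answer is available in closed form, which is what makes the sketch checkable; the paper's point is precisely that realistic hierarchical models offer no such shortcut.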
Deviance Information Criterion for Comparing Stochastic Volatility Models
 Journal of Business and Economic Statistics
, 2002
Abstract

Cited by 26 (7 self)
Bayesian methods have been efficient in estimating the parameters of stochastic volatility models used to analyze financial time series. Recent advances have made it possible to fit stochastic volatility models of increasing complexity, including covariates, leverage effects, jump components, and heavy-tailed distributions. However, a formal model comparison via Bayes factors remains difficult. The main objective of this paper is to demonstrate that model selection is more easily performed using the deviance information criterion (DIC), which combines a Bayesian measure of fit with a measure of model complexity. We illustrate the performance of DIC in discriminating between various stochastic volatility models using simulated data and daily returns data on the S&P 100 index.
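The DIC combines the posterior mean deviance D-bar with an effective parameter count pD = D-bar − D(theta-hat). A minimal sketch of the computation, using synthetic draws as stand-ins for real MCMC output from a one-parameter Gaussian model (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data and stand-in posterior draws for the mean of a
# Gaussian model (variance assumed known at 1); a real analysis would
# take mu_draws from an MCMC sampler.
y = rng.normal(0.0, 1.0, size=100)
mu_draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), size=5000)

def deviance(y, mu):
    # D(theta) = -2 * log-likelihood
    return -2 * (-0.5 * len(y) * np.log(2 * np.pi) - 0.5 * np.sum((y - mu) ** 2))

dbar = np.mean([deviance(y, m) for m in mu_draws])  # posterior mean deviance
dhat = deviance(y, mu_draws.mean())                 # deviance at posterior mean
p_d = dbar - dhat                                   # effective number of parameters
dic = dbar + p_d
print(f"pD = {p_d:.2f}, DIC = {dic:.2f}")
```

With a single free parameter, pD should come out close to 1, which is a useful sanity check before trusting DIC on the richer volatility models the paper considers.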
The Strength of Statistical Evidence for Composite Hypotheses: Inference to the Best Explanation
, 2010
Abstract

Cited by 6 (4 self)
A general function to quantify the weight of evidence in a sample of data for one hypothesis over another is derived from the law of likelihood and from a statistical formalization of inference to the best explanation. For a fixed parameter of interest, the resulting weight of evidence that favors one composite hypothesis over another is the likelihood ratio, using for each hypothesis the parameter value consistent with that hypothesis that maximizes the likelihood function over the parameter of interest. Since the weight of evidence is generally only known up to a nuisance parameter, it is approximated by replacing the likelihood function with a reduced likelihood function on the interest parameter space. Unlike the Bayes factor, and unlike the p-value under interpretations that extend its scope, the weight of evidence is coherent in the sense that it cannot support a hypothesis over any hypothesis that it entails. Further, when comparing the hypothesis that the parameter lies outside a nontrivial interval to the hypothesis that it lies within the interval, the proposed method of weighing evidence almost always asymptotically favors the correct hypothesis.
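The weight of evidence described here is a ratio of likelihoods, each maximized over one composite hypothesis. A minimal sketch for a Gaussian mean with unit variance, where the constrained maximizer has a closed form (the data and hypotheses are hypothetical, chosen only to make the profiling step visible):

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(0.8, 1.0, size=30)  # illustrative data, true mean 0.8

def max_log_lik(y, lo, hi):
    # Profile the unit-variance Gaussian likelihood over the interval
    # [lo, hi]: the constrained MLE of the mean is ybar clipped to it.
    mu = np.clip(y.mean(), lo, hi)
    return -0.5 * len(y) * np.log(2 * np.pi) - 0.5 * np.sum((y - mu) ** 2)

# Composite hypotheses H1: mu > 0 versus H0: mu <= 0
w = np.exp(max_log_lik(y, 0.0, np.inf) - max_log_lik(y, -np.inf, 0.0))
print(f"weight of evidence for H1 over H0: {w:.1f}")
```

Because the true mean here is well inside H1, the ratio comes out large; the clipping step is the whole content of "using the parameter value consistent with each hypothesis that maximizes the likelihood."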
Confidence Limits: What Is The Problem? Is There The Solution?
, 2000
Abstract

Cited by 4 (1 self)
This contribution to the debate on confidence limits focuses mostly on the case of measurements with 'open likelihood', in the sense defined in the text. I will show that, although a prior-free assessment of confidence is in general not possible, a search result can still be reported in a mostly unbiased and efficient way that satisfies desiderata I believe are shared by those interested in the subject. The simpler case of 'closed likelihood' will also be treated, and I will discuss why a uniform prior on a sensible quantity is a very reasonable choice for most applications. In both cases, I think much clarity will be achieved if we remove the misleading expressions 'confidence intervals' and 'confidence levels' from scientific parlance.
A Survey of Logic Formalisms to Support Mishap Analysis
Abstract

Cited by 3 (1 self)
Mishap investigations provide important information about adverse events and near-miss incidents. They are intended to help avoid any recurrence of previous failures. Over time, they can also yield statistical information about incident frequencies that helps to detect patterns of failure and can validate risk assessments. However, the increasing complexity of many safety-critical systems is posing new challenges for mishap analysis. Similarly, the recognition that many failures have complex, systemic causes has helped to widen the scope of many mishap investigations. These two factors have combined to pose new challenges for the analysis of adverse events. A new generation of formal and semi-formal techniques has been proposed to help investigators address these problems. We introduce the term 'mishap logics' to collectively describe these notations that might be applied to support the analysis of mishaps. The proponents of these notations have argued that they can be used to formally prove that certain events created the necessary and sufficient causes for a mishap to occur. Such proofs can be used to reduce the bias that is often perceived to affect the interpretation of adverse events. Others have argued that one cannot use logic formalisms to prove causes in the same way that one might prove propositions or theorems: such mechanisms cannot accurately capture the wealth of inductive, deductive, and statistical forms of inference that investigators must use in their analysis of adverse events. This paper provides an overview of these mishap logics. It also identifies several additional classes of logic that might be used to support mishap analysis.
Quantifying probative value
 Boston University Law Review
, 1986
Abstract

Cited by 3 (1 self)
D. Davis and W. C. Follette (2002) purport to show that when "the base rate" for a crime is low, the probative value of "characteristics known to be strongly associated with the crime... will be virtually nil." Their analysis rests on the choice of an arbitrary and inapposite measure of the probative value of evidence. When a more suitable metric is used (e.g., a likelihood ratio), it becomes clear that evidence they would dismiss as devoid of probative value is relevant and diagnostic.

A man and a woman were found in a ditch by their snowmobile. The woman was lying face down in the water. The man was sitting face up, but apparently not breathing. CPR was applied to both. Only the man survived. The state charged him with murder. The prosecution's theory was that he took the opportunity to kill his wife to recover as the beneficiary of a large insurance policy he had purchased that year. To support this charge, the state introduced evidence that the defendant had repeatedly been unfaithful to his wife (Davis & Follette, 2002; henceforth identified as "DF, 2002"). The defendant hired two psychologists, Deborah Davis and William Follette, to ...
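One way to see the abstract's point is that a likelihood ratio does not change with the base rate, while the posterior probability it implies does. A toy calculation with made-up conditional probabilities (the 0.60 and 0.10 below are illustrative, not taken from the case):

```python
# Hypothetical evidence E with P(E | H) = 0.60 and P(E | not H) = 0.10,
# so the likelihood ratio is 6 regardless of how rare H is a priori.
p_e_h, p_e_not = 0.60, 0.10
lr = p_e_h / p_e_not

for base_rate in (0.5, 0.01, 0.0001):
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * lr       # Bayes' rule in odds form
    posterior = post_odds / (1 + post_odds)
    print(f"base rate {base_rate:>7}: LR = {lr:.0f}, posterior = {posterior:.4f}")
```

The evidence is equally diagnostic in every row; only the conclusion one may draw from it (the posterior) depends on the base rate, which is the distinction the abstract accuses DF (2002) of collapsing.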
The Effect of Priors on Approximate Bayes Factors from MCMC Output (unpublished manuscript)
Abstract

Cited by 1 (1 self)
The MCMC approach to calculating approximate Bayes factors is considered. The calculation, consisting of a log-likelihood, a prior, and a posterior, presents an excellent opportunity to observe directly the effects of priors on Bayes factors. Three empirical examples demonstrate that Bayes factors are sensitive to a combination of the prior variance and the difference in the number of parameters between the rival models.
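The prior-variance sensitivity the abstract reports can be seen in closed form in the simplest case: testing a Gaussian mean against a point null, where the Bayes factor for the alternative shrinks without bound as the prior variance grows (Lindley's paradox). All numbers below are hypothetical, and the closed form replaces the MCMC approximation the manuscript studies:

```python
import numpy as np

def log_bf10(n, ybar, tau2):
    # Closed-form log Bayes factor for H1: mu ~ N(0, tau2) versus
    # H0: mu = 0, with data y_1..y_n ~ N(mu, 1) (standard conjugate result)
    return -0.5 * np.log(1 + n * tau2) + (n * ybar) ** 2 * tau2 / (2 * (1 + n * tau2))

n, ybar = 50, 0.3  # hypothetical sample summary
for tau2 in (0.1, 1.0, 100.0, 1e6):
    print(f"prior variance {tau2:>9}: log BF_10 = {log_bf10(n, ybar, tau2):+.2f}")
```

The same data can favor H1 or H0 depending only on tau2, which is the sensitivity to prior variance that the three empirical examples in the manuscript demonstrate with MCMC output.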
Law and Human Behavior
 Boston University Law Review
, 2003
Abstract
In this paper, we demonstrate that the measure of probative value in DF (2002) is indeed flawed. It defines probative value (evidentiary support) in terms of posterior probabilities, which measure something quite different, namely the sufficiency of that support. Despite an attempt to examine "sufficiency" separately, DF (2002) conflate the two concepts, falling into the trap that caused much confusion in the law of evidence in previous centuries. In what follows, we compare and contrast DF's measure of probative value with a more conventional measure (DF, 2002). In the process, we apply the competing measures to specific scenarios to show that the conventional measure captures the meaning of probative value, while the Davis-Follette measure leads to undesirable, if not absurd, results.
1.1 Disagreements and Disagreements
Abstract
Abstract. Ronald Fisher advocated testing using p-values, Harold Jeffreys proposed the use of objective posterior probabilities of hypotheses, and Jerzy Neyman recommended testing with fixed error probabilities. Each was quite critical of the other approaches. Most troubling for statistics and science is that the three approaches can lead to quite different practical conclusions. This article focuses on the conditional frequentist approach to testing, which is argued to provide the basis for a methodological unification of the approaches of Fisher, Jeffreys, and Neyman. The idea is to follow Fisher in using p-values to define the "strength of evidence" in data and to follow his approach of conditioning on strength of evidence; then follow Neyman by computing Type I and Type II error probabilities, but do so conditional on the strength of evidence in the data. The resulting conditional frequentist error probabilities equal the objective posterior probabilities of the hypotheses advocated by Jeffreys. Key words and phrases: p-values, posterior probabilities of hypotheses, Type I and Type II error probabilities, conditional testing.
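The claimed equality, conditional frequentist error probabilities matching Jeffreys' posterior probabilities, can be checked by simulation in the simple-versus-simple case: among samples carrying comparable evidence, the fraction actually generated under H0 matches the posterior probability of H0. A sketch, with hypotheses, mixing weight, and conditioning bin all chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

# Simple-vs-simple test: H0: x ~ N(0,1), H1: x ~ N(1,1), equal prior weight
h1 = rng.random(N) < 0.5
x = rng.normal(h1.astype(float), 1.0)

# Jeffreys: posterior P(H0 | x) from the likelihood ratio f0(x)/f1(x)
lr01 = np.exp(-0.5 * x**2 + 0.5 * (x - 1) ** 2)
post_h0 = lr01 / (1 + lr01)

# Conditional frequentist: among samples with comparable evidence
# (x falling in a narrow bin), the fraction actually drawn under H0
lo, hi = 0.9, 1.1
in_bin = (x > lo) & (x < hi)
freq_h0 = np.mean(~h1[in_bin])
print(f"empirical P(H0 | evidence) = {freq_h0:.3f}, "
      f"posterior at bin center = {post_h0[np.argmin(abs(x - 1.0))]:.3f}")
```

Both numbers should agree to within Monte Carlo error, which is the unification the article argues for: Neyman-style error rates, computed conditionally on Fisher's strength of evidence, reproduce Jeffreys' posterior probabilities.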