Results 1-10 of 47
Deviance Information Criterion for Comparing Stochastic Volatility Models
 Journal of Business and Economic Statistics
, 2002
Abstract

Cited by 51 (11 self)
Bayesian methods have been efficient in estimating parameters of stochastic volatility models for analyzing financial time series. Recent advances made it possible to fit stochastic volatility models of increasing complexity, including covariates, leverage effects, jump components and heavy-tailed distributions. However, a formal model comparison via Bayes factors remains difficult. The main objective of this paper is to demonstrate that model selection is more easily performed using the deviance information criterion (DIC). It combines a Bayesian measure of fit with a measure of model complexity. We illustrate the performance of DIC in discriminating between various stochastic volatility models using simulated data and daily returns data on the S&P 100 index.
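As a rough illustration of how DIC combines fit and complexity, the sketch below computes DIC = D-bar + pD for a toy Normal(mu, 1) model. The data, posterior draws, and variable names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): Normal(mu, 1) data with a flat prior, so exact
# posterior draws for mu are available in closed form.
y = rng.normal(0.5, 1.0, size=200)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=5000)

def deviance(mu, y):
    """D(mu) = -2 * log-likelihood under the Normal(mu, 1) model."""
    return -2.0 * np.sum(-0.5 * np.log(2.0 * np.pi) - 0.5 * (y - mu) ** 2)

d_bar = np.mean([deviance(m, y) for m in mu_draws])  # posterior mean deviance (fit)
d_hat = deviance(mu_draws.mean(), y)                 # deviance at the posterior mean
p_d = d_bar - d_hat                                  # effective number of parameters
dic = d_bar + p_d                                    # DIC = fit + complexity penalty
```

With a single free parameter, p_d comes out close to 1; among candidate models, the one with the smallest DIC would be preferred.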
MCMC Methods for Computing Bayes Factors: A Comparative Review
 Journal of the American Statistical Association
, 2000
Abstract

Cited by 38 (1 self)
this paper we review several of these methods, and subsequently compare them in the context of two examples, the first a simple regression example, and the second a much more challenging hierarchical longitudinal model of the kind often encountered in biostatistical practice. We find that the joint model-parameter space search methods perform adequately but can be difficult to program and tune, while the marginal likelihood methods are often less troublesome and require less in the way of additional coding. Our results suggest that the latter methods may be most appropriate for practitioners working in many standard model choice settings, while the former remain important for comparing large numbers of models, or models whose parameters cannot be easily updated in relatively few blocks. We caution, however, that all of the methods we compare require significant human and computer effort, suggesting that less formal Bayesian model choice methods may offer a more realistic alternative in many cases.
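For intuition about what the marginal likelihood methods estimate, here is a minimal sketch (the models, priors, and names are hypothetical, not the paper's examples): the Bayes factor is a ratio of marginal likelihoods, each obtained by averaging the likelihood over the prior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy comparison of M0: y ~ N(0, 1) versus M1: y ~ N(mu, 1), mu ~ N(0, 1).
# Marginal likelihood of M1 by crude prior-sampling Monte Carlo.
y = rng.normal(1.0, 1.0, size=30)

def log_lik(mu, y):
    return np.sum(-0.5 * np.log(2.0 * np.pi) - 0.5 * (y - mu) ** 2)

# M0 has no free parameter: its marginal likelihood is just the likelihood.
log_m0 = log_lik(0.0, y)

# M1: average the likelihood over prior draws of mu (log-mean-exp for stability).
mu_prior = rng.normal(0.0, 1.0, size=20000)
logs = np.array([log_lik(m, y) for m in mu_prior])
log_m1 = np.log(np.mean(np.exp(logs - logs.max()))) + logs.max()

log_bf_10 = log_m1 - log_m0  # log Bayes factor in favor of M1
```

Since the data were simulated with a nonzero mean, the log Bayes factor comes out strongly positive. Prior sampling is only viable for tiny problems like this one, which is why the paper's more sophisticated estimators exist.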
The Strength of Statistical Evidence for Composite Hypotheses: Inference to the Best Explanation
, 2010
Abstract

Cited by 19 (12 self)
A general function to quantify the weight of evidence in a sample of data for one hypothesis over another is derived from the law of likelihood and from a statistical formalization of inference to the best explanation. For a fixed parameter of interest, the resulting weight of evidence that favors one composite hypothesis over another is the likelihood ratio using the parameter value consistent with each hypothesis that maximizes the likelihood function over the parameter of interest. Since the weight of evidence is generally only known up to a nuisance parameter, it is approximated by replacing the likelihood function with a reduced likelihood function on the interest parameter space. Unlike the Bayes factor and unlike the p-value under interpretations that extend its scope, the weight of evidence is coherent in the sense that it cannot support a hypothesis over any hypothesis that it entails. Further, when comparing the hypothesis that the parameter lies outside a nontrivial interval to the hypothesis that it lies within the interval, the proposed method of weighing evidence almost always asymptotically favors the correct hypothesis.
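The maximized-likelihood-ratio idea can be made concrete with a one-parameter sketch (illustrative setup and names, not the paper's notation): for Normal(mu, 1) data, compare the composite hypotheses mu > 0 and mu <= 0 by maximizing the likelihood within each.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data drawn with a positive mean.
y = rng.normal(0.8, 1.0, size=50)

def log_lik(mu, y):
    return np.sum(-0.5 * np.log(2.0 * np.pi) - 0.5 * (y - mu) ** 2)

mle = y.mean()
# Constrained maximizers: the MLE if it lies inside the hypothesis, else the boundary.
mu_pos = max(mle, 0.0)  # argmax of the likelihood over mu > 0
mu_neg = min(mle, 0.0)  # argmax of the likelihood over mu <= 0

# Log weight of evidence for mu > 0 over mu <= 0; positive values favor mu > 0.
log_weight = log_lik(mu_pos, y) - log_lik(mu_neg, y)
```

Because the sample mean is well above zero, the weight of evidence here strongly favors the correct hypothesis, in line with the asymptotic claim in the abstract.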
Bayesian Model Search and Multilevel Inference for SNP Association Studies
 Annals of Applied Statistics
, 2010
Abstract

Cited by 7 (2 self)
Technological advances in genotyping have given rise to hypothesis-based association studies of increasing scope. As a result, the scientific hypotheses addressed by these studies have become more complex and more difficult to address using existing analytic methodologies. Obstacles to analysis include inference in the face of multiple comparisons, complications arising from correlations among the SNPs (single nucleotide polymorphisms), choice of their genetic parametrization and missing data. In this paper we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method for Multilevel Inference of SNP Associations, MISA, allows computation of multilevel posterior probabilities and Bayes factors at the global, gene and SNP level, with the prior distribution on SNP inclusion in the model providing an intrinsic multiplicity correction. We use simulated data sets to characterize MISA’s statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and have been externally “validated” in independent studies. We examine sensitivity of the NCOCS results to prior choice and method for imputing missing data. MISA is available in an R package on CRAN.
Confidence Limits: What Is The Problem? Is There The Solution?
, 2000
Abstract

Cited by 5 (1 self)
This contribution to the debate on confidence limits focuses mostly on the case of measurements with 'open likelihood', in the sense that it is defined in the text. I will show that, though a prior-free assessment of confidence is, in general, not possible, still a search result can be reported in a mostly unbiased and efficient way, which satisfies some desiderata which I believe are shared by the people interested in the subject. The simpler case of 'closed likelihood' will also be treated, and I will discuss why a uniform prior on a sensible quantity is a very reasonable choice for most applications. In both cases, I think that much clarity will be achieved if we remove from scientific parlance the misleading expressions 'confidence intervals' and 'confidence levels'.
Quantifying probative value
 Boston University Law Review
, 1986
Abstract

Cited by 5 (1 self)
D. Davis and W. C. Follette (2002) purport to show that when “the base rate” for a crime is low, the probative value of “characteristics known to be strongly associated with the crime... will be virtually nil.” Their analysis rests on the choice of an arbitrary and inapposite measure of the probative value of evidence. When a more suitable metric is used (e.g., a likelihood ratio), it becomes clear that evidence they would dismiss as devoid of probative value is relevant and diagnostic. A man and a woman were found in a ditch by their snowmobile. The woman was lying face down in the water. The man was sitting face up, but apparently not breathing. CPR was applied to both. Only the man survived. The state charged him with murder. The prosecution’s theory was that he took the opportunity to kill his wife to recover as the beneficiary of a large insurance policy he had purchased that year. To support this charge, the state introduced evidence that the defendant had repeatedly been unfaithful to his wife (Davis & Follette, 2002; henceforth identified as “DF, 2002”). The defendant hired two psychologists, Deborah Davis and William Follette, to
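The likelihood-ratio point can be shown with hypothetical numbers (not taken from Davis & Follette): the probative force of evidence E for hypothesis H is P(E | H) / P(E | not H), and it multiplies the prior odds by the same factor no matter how low the base rate is.

```python
# Hypothetical probabilities, for illustration only.
p_evidence_given_guilty = 0.30    # P(E | H), e.g., infidelity given guilt
p_evidence_given_innocent = 0.05  # P(E | not H)

# The likelihood ratio measures how diagnostic the evidence is.
likelihood_ratio = p_evidence_given_guilty / p_evidence_given_innocent

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Even with a very low base rate, the evidence shifts the odds sixfold,
# so it is far from "virtually nil" in probative value.
prior_odds = 1e-4
posterior_odds = prior_odds * likelihood_ratio
```

The base rate sets where the odds start, but the likelihood ratio, not the base rate, measures what the evidence itself contributes.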
Probit and logit models: Differences in a multivariate realm. Available at: http://home.gwu.edu/~soyer/mv1h.pdf
, 2007
Abstract

Cited by 4 (0 self)
Summary. Current opinion regarding the selection of link function in binary response models is that the probit and logit links give essentially similar results. This seems to be true for univariate binary response models; however, for multivariate binary response models such advice is misleading. We address a gap in the literature by empirically examining the relationship between link function selection and model fit in two classes of multivariate binary response models. We find clear evidence that model fit can be improved by the selection of the appropriate link even in small data sets. In multivariate link function models, the logit link provides better fit in the presence of extreme independent variable levels. Conversely, model fit in random effects models with moderate size data sets is improved generally by selecting the probit link.
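A quick sketch of why the two links can diverge: rescaled to agree near the center, the logistic function has heavier tails than the normal CDF, so the fitted probabilities differ exactly at extreme values of the linear predictor. (The 1.6 rescaling is the standard textbook approximation, not something from the paper.)

```python
import math

def logistic(x):
    """Logit link inverse: the logistic CDF."""
    return 1.0 / (1.0 + math.exp(-x))

def probit(x):
    """Probit link inverse: the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# With the scaling probit(x / 1.6) ~= logistic(x), the links agree exactly
# at the center but diverge in the tails, where the logistic is heavier.
diff_center = abs(logistic(0.0) - probit(0.0))       # 0: both give 0.5
diff_tail = abs(logistic(4.0) - probit(4.0 / 1.6))   # noticeably nonzero
```

This tail disagreement is one way to see why, as the abstract reports, link choice matters in the presence of extreme independent variable levels.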
Computing the Bayes Factor from a Markov Chain Monte Carlo Simulation of the Posterior Distribution, Bayesian Anal
A Survey of Logic Formalisms to Support Mishap Analysis
Abstract

Cited by 4 (1 self)
Mishap investigations provide important information about adverse events and near miss incidents. They are intended to help avoid any recurrence of previous failures. Over time, they can also yield statistical information about incident frequencies that helps to detect patterns of failure and can validate risk assessments. However, the increasing complexity of many safety-critical systems is posing new challenges for mishap analysis. Similarly, the recognition that many failures have complex, systemic causes has helped to widen the scope of many mishap investigations. These two factors have combined to pose new challenges for the analysis of adverse events. A new generation of formal and semi-formal techniques have been proposed to help investigators address these problems. We introduce the term 'mishap logics' to collectively describe these notations that might be applied to support the analysis of mishaps. The proponents of these notations have argued that they can be used to formally prove that certain events created the necessary and sufficient causes for a mishap to occur. These proofs can be used to reduce the bias that is often perceived to affect the interpretation of adverse events. Others have argued that one cannot use logic formalisms to prove causes in the same way that one might prove propositions or theorems. Such mechanisms cannot accurately capture the wealth of inductive, deductive and statistical forms of inference that investigators must use in their analysis of adverse events. This paper provides an overview of these mishap logics. It also identifies several additional classes of logic that might also be used to support mishap analysis.
Accommodating heterogeneous rates of evolution in molecular divergence dating methods: an example using intercontinental dispersal of Plestiodon (Eumeces) lizards
, 2011
Abstract

Cited by 3 (0 self)
Abstract.—Identifying and dating historical biological events is a fundamental goal of evolutionary biology, and recent analytical advances permit the modeling of factors known to affect both the accuracy and the precision of molecular date estimates. As the use of multilocus data sets becomes increasingly routine, it becomes more important to evaluate the potentially confounding effects of rate heterogeneity both within (e.g., codon positions) and among loci when estimating divergence times. Here, using Plestiodon lizards as a test case, we examine the effects of accommodating rate heterogeneity among data partitions on divergence time estimation. Plestiodon inhabits both East Asia and North America, yet both the