Results 1–10 of 21
Toward evidence-based medical statistics. 2: The Bayes factor
 Annals of Internal Medicine, 1999
Abstract

Cited by 24 (0 self)
Bayesian inference is usually presented as a method for determining how scientific belief should be modified by data. Although Bayesian methodology has been one of the most active areas of statistical development in the past 20 years, medical researchers have been reluctant to embrace what they perceive as a subjective approach to data analysis. It is little understood that Bayesian methods have a data-based core, which can be used as a calculus of evidence. This core is the Bayes factor, which in its simplest form is also called a likelihood ratio. The minimum Bayes factor is objective and can be used in lieu of the P value as a measure of evidential strength. Unlike P values, Bayes factors have a sound theoretical foundation and an interpretation that allows their use in both inference and decision making. Bayes factors show that P values greatly overstate the evidence against the null hypothesis. Most important, Bayes factors require the addition of background knowledge to be transformed into inferences—probabilities that a given conclusion is right or wrong. They make the distinction clear between experimental evidence and inferential conclusions while providing a framework in which to combine prior with current evidence. This paper is also available at
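For a normally distributed test statistic, the minimum Bayes factor the abstract refers to has the simple closed form exp(−z²/2), where z is the z-score matching the P value. A minimal sketch of that conversion (the function name is ours, not from the paper):

```python
import math
from statistics import NormalDist

def min_bayes_factor(p_value: float) -> float:
    """Minimum Bayes factor exp(-z^2/2) for a two-sided P value
    from a normally distributed test statistic."""
    z = NormalDist().inv_cdf(1.0 - p_value / 2.0)
    return math.exp(-z * z / 2.0)

# P = 0.05 gives a minimum Bayes factor of about 0.15: the null is at
# best ~1/7 as likely as the alternative, far weaker evidence than
# "1 in 20" suggests.
```

The gap between P = 0.05 and a best-case Bayes factor near 0.15 is the abstract's point that P values overstate the evidence against the null.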
Nonparametric Methods for Doubly Truncated Data
 Journal of the American Statistical Association, 1999
Abstract

Cited by 13 (0 self)
Truncated data play an important role in the statistical analysis of astronomical ...
A Simulation-Intensive Approach for Checking Hierarchical Models
 TEST, 1998
Abstract

Cited by 8 (0 self)
Recent computational advances have made it feasible to fit hierarchical models in a wide range of serious applications. If one entertains a collection of such models for a given data set, the problems of model adequacy and model choice arise. We focus on the former. While model checking usually addresses the entire model specification, model failures can occur at each hierarchical stage. Such failures include outliers, mean structure errors, dispersion misspecification, and inappropriate exchangeabilities. We propose another approach which is entirely simulation-based. It only requires the model specification and that, for a given data set, one be able to simulate draws from the posterior under the model. By replicating a posterior of interest using data obtained under the model we can "see" the extent of variability in such a posterior. Then, we can compare the posterior obtained under the observed data with this medley of posterior replicates to ascertain whether the former is in agr...
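As a hedged illustration of the replication idea (not the authors' algorithm), the sketch below uses a toy conjugate model, theta ~ N(0, 1) and y_i | theta ~ N(theta, 1), where the posterior mean is available in closed form; all names and the diagnostic summary are our choices:

```python
import random

random.seed(0)

# Toy conjugate model: theta ~ N(0, 1), y_i | theta ~ N(theta, 1).
# The posterior mean of theta given n observations is n * ybar / (n + 1).
def posterior_mean(y):
    return sum(y) / (len(y) + 1)

n = 50
# "Observed" data drawn from a shifted distribution the model does not expect.
observed = [random.gauss(2.0, 1.0) for _ in range(n)]
obs_post = posterior_mean(observed)

# Replicate posteriors: draw theta from the prior, data under the model, refit.
replicate_posts = []
for _ in range(500):
    theta = random.gauss(0.0, 1.0)
    y_rep = [random.gauss(theta, 1.0) for _ in range(n)]
    replicate_posts.append(posterior_mean(y_rep))

# If the observed-data posterior sits in the tail of the replicates,
# the model specification is suspect.
rank = sum(r < obs_post for r in replicate_posts) / len(replicate_posts)
```

Here the observed data were generated with a mean the prior considers unlikely, so the observed-data posterior mean lands in the upper tail of the replicate posteriors, which is the kind of discrepancy the paper's check is designed to reveal.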
Empirical Bayes and Item Clustering Effects in a Latent Variable Hierarchical Model: A case study from the National Assessment of Educational Progress
 Journal of the American Statistical Association, 2002
Abstract

Cited by 6 (4 self)
Empirical Bayes regression procedures are commonly used in educational and psychological testing as extensions to latent variable models. The National Assessment of Educational Progress (NAEP) is an important national survey using such procedures. NAEP applies empirical Bayes methods to models from item response theory in order to calibrate student responses to questions of varying difficulty. Partially due to the limited computing technology that existed when NAEP was first conceived, NAEP analyses are carried out using a two-stage estimation procedure that ignores uncertainty about some model parameters. Furthermore, the item response theory model NAEP uses ignores the effect of item clustering created by the design of a test form. Using Markov chain Monte Carlo, we simultaneously estimate all parameters of an expanded model that considers item clustering in order to investigate the impact of item clustering and ignored uncertainty about model parameters on NAEP's reported outcome me...
Objective Bayesian analysis of contingency tables
, 2002
Abstract

Cited by 6 (2 self)
The statistical analysis of contingency tables is typically carried out with a hypothesis test. In the Bayesian paradigm, default priors for hypothesis tests are typically improper, and cannot be used. Although such priors are available, and proper, for testing contingency tables, we show that for testing independence they can be greatly improved on by so-called intrinsic priors. We also argue that because there is no realistic situation that corresponds to the case of conditioning on both margins of a contingency table, the proper analysis of an a × b contingency table should only condition on either the table total or on only one of the margins. The posterior probabilities from the intrinsic priors provide reasonable answers in these cases. Examples using simulated and real data are given.
Predictive Inference, Rare Events And Hierarchical Models
, 1997
Abstract

Cited by 3 (2 self)
this paper have implicitly assumed a single homogeneous sample. However, they are also applicable in multi-sample problems, in which the parameters of the model are possibly different from one sample to another. Such problems lead to what are usually called empirical Bayes methods of analysis. In recent years it has become more common to solve such problems from a fully Bayesian point of view, using a hierarchical model structure to link together the parameters of the different subsamples. This is the point of view taken, for example, in the excellent recent monograph by Carlin and Louis (1996). Despite the very rapid growth of this field, there has been comparatively little study of the frequentist properties of Bayesian procedures in this setting. Berger and Strawderman (1996) established some admissibility results, which have the advantage of not relying on any kind of asymptotics, and which provide guidance on the choice of prior, particularly where improper priors are concerned. On the other hand, the class of models to which their results apply is restrictive, and admissibility results do not necessarily help to pick out a prior distribution which has good properties under particular conditions. In contrast, the results of the present paper are asymptotic (letting the sample size n → ∞ while the number of samples remains fixed), but they do allow explicit computations to be made under a variety of circumstances. In the present section, these ideas are worked out in some detail for the simplest problem in this class: the case of p normal distributions with unknown means and known common variance. In the next section, a more complicated example is considered. Suppose there are p subgroups and the data in the j'th subgroup follow a N(θ_j, 1) distribution. Here the vector ...
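For the p-normal-means setting this abstract describes, an empirical Bayes analysis shrinks each group mean toward the grand mean. The following method-of-moments sketch (names and estimator choice are ours, not the paper's) assumes theta_j ~ N(mu, tau²) and y_j ~ N(theta_j, 1):

```python
import statistics

def eb_shrinkage(y):
    """Empirical Bayes estimates of theta_j under y_j ~ N(theta_j, 1),
    theta_j ~ N(mu, tau^2), with mu and tau^2 fit by method of moments."""
    mu_hat = statistics.fmean(y)
    # Marginally Var(y_j) = tau^2 + 1; truncate the estimate at zero.
    tau2_hat = max(statistics.variance(y) - 1.0, 0.0)
    w = tau2_hat / (tau2_hat + 1.0)  # posterior weight on the group's own mean
    return [mu_hat + w * (yj - mu_hat) for yj in y]
```

With y = [-2, -1, 1, 2] the sample variance is 10/3, so the estimated tau² is 7/3, the shrinkage weight is 0.7, and the estimates are [-1.4, -0.7, 0.7, 1.4].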
Empirical Bayes and item-clustering effects in a latent variable hierarchical model: A case study from the National Assessment of Educational Progress
 Journal of the American Statistical Association, 2002
Abstract

Cited by 3 (1 self)
Empirical Bayes regression procedures are often used in educational and psychological testing as extensions to latent variables models. The National Assessment of Educational Progress (NAEP) is an important national survey using such procedures. The NAEP applies empirical Bayes methods to models from item response theory to calibrate student responses to questions of varying difficulty. Due partially to the limited computing technology that existed when the NAEP was first conceived, NAEP analyses are carried out using a two-stage estimation procedure that ignores uncertainty about some model parameters. Furthermore, the item response theory model that the NAEP uses ignores the effect of item clustering created by the design of a test form. Using Markov chain Monte Carlo, we simultaneously estimate all parameters of an expanded model that considers item clustering to investigate the impact of item clustering and ignoring uncertainty about model parameters on an important outcome measure that the NAEP reports. Ignoring these two effects causes substantial underestimation of standard errors and induces a modest bias in location estimates.
Resolving Goodman’s Paradox: How to Defuse Inductive Skepticism, Unpublished Manuscript
, 2000
Abstract

Cited by 1 (0 self)
Subjective Bayesian inference is unsuitable as an ideal for learning strategies to approximate, as the arbitrariness in prior probabilities makes claims to Bayesian learning too easily vulnerable to inductive skepticism. An objective Bayesian approach, which determines priors by maximizing information entropy, runs into insurmountable difficulties in conditions where no definite background theory is available. However, this lack of background knowledge makes the maximum entropy argument directly applicable to the process of drawing samples from a population. As a result, evidence can be seen not just as eliminating a number of incompatible hypotheses out of an infinity of possibilities, but as being representative of the true state of affairs. Hence inductive skepticism can be avoided, as demonstrated by a resolution of Goodman’s ‘grue’ paradox. This leads to a clearer understanding of the vital role abductive processes and tools like simple generalization play in learning. Keywords: Goodman’s paradox, induction, inductive skepticism, statistical inference, Bayesian learning, maximum entropy, machine learning, abduction, generalization
Assessing Robustness of Intrinsic Tests of Independence in Two-way Contingency Tables
Abstract

Cited by 1 (1 self)
A condition needed for testing nested hypotheses from a Bayesian viewpoint is that the prior for the alternative model concentrates mass around the smaller, or null, model. For testing independence in contingency tables, the intrinsic priors satisfy this requirement. Further, the degree of concentration of the priors is controlled by a discrete parameter m, the training sample size, which plays an important role in the resulting answer. In this paper we study, for small or moderate sample sizes, robustness of the tests of independence in contingency tables with respect to intrinsic priors with different degrees of concentration around the null. We compare these tests with frequentist tests and the robust Bayes tests of Good and Crook. For large sample sizes robustness is achieved since the intrinsic Bayesian tests are consistent. We also discuss conditioning issues and sampling schemes, and argue that conditioning should be on either one margin or the table total, but not on both margins. Examples using real and simulated data are given.