Results 1–10 of 98
Bayes Factors
, 1995
Abstract
Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology, and psychology.
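The posterior-odds reading of the Bayes factor can be made concrete with a toy calculation. The sketch below is illustrative only (the binomial setup and all numbers are assumptions, not from the paper): it compares a point null p = 0.5 against a uniform prior on p for 60 successes in 100 trials, where the alternative's marginal likelihood has the closed form 1/(n+1).

```python
# Illustrative sketch (not from the paper): Bayes factor for a binomial
# point null H0: p = 0.5 versus H1: p ~ Uniform(0, 1).
from math import comb

k, n = 60, 100  # hypothetical data: 60 successes in 100 trials

# Marginal likelihood under H0 (p fixed at 0.5).
m0 = comb(n, k) * 0.5 ** n

# Marginal likelihood under H1: integrating C(n,k) * p^k * (1-p)^(n-k)
# over a uniform prior on p gives exactly 1 / (n + 1).
m1 = 1.0 / (n + 1)

bf01 = m0 / m1  # Bayes factor in favor of H0
# With prior odds of 1 (prior probability one-half on the null), the
# posterior odds equal bf01.
```

Here bf01 ≈ 1.1: data that a two-sided test would call borderline significant leave the posterior odds on the null nearly even, the kind of contrast the applications in the paper turn on.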
Bayesian Model Averaging for Linear Regression Models
 Journal of the American Statistical Association
, 1997
Abstract
Cited by 184 (13 self)
We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of ...
Bayes factors and model uncertainty
 DEPARTMENT OF STATISTICS, UNIVERSITY OF WASHINGTON
, 1993
Abstract
Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
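The asymptotic approximation mentioned in the abstract is often taken to be the Schwarz (BIC) criterion, computable from the maximized log-likelihoods that standard packages report. A minimal sketch, with made-up fit values:

```python
# Sketch of the Schwarz/BIC approximation to a Bayes factor; the
# log-likelihoods and model dimensions below are hypothetical.
import math

def bic(loglik_max: float, n_params: int, n_obs: int) -> float:
    """Schwarz criterion: -2 * max log-likelihood + k * log(n)."""
    return -2.0 * loglik_max + n_params * math.log(n_obs)

# Hypothetical fits of a null model and a larger alternative, n = 200.
bic0 = bic(loglik_max=-310.2, n_params=2, n_obs=200)
bic1 = bic(loglik_max=-305.9, n_params=4, n_obs=200)

# BIC0 - BIC1 approximates 2 * log(Bayes factor) for the alternative
# over the null; positive values favor the alternative.
two_log_bf10 = bic0 - bic1
```

With these invented numbers two_log_bf10 ≈ -2.0: despite the higher likelihood of the larger model, the log(n) penalty on its two extra parameters leaves weak evidence for the null.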
The practical implementation of Bayesian model selection
 Institute of Mathematical Statistics
, 2001
Abstract
Cited by 85 (3 self)
In principle, the Bayesian approach to model selection is straightforward. Prior probability distributions are used to describe the uncertainty surrounding all unknowns. After observing the data, the posterior distribution provides a coherent post-data summary of the remaining uncertainty which is relevant for model selection. However, the practical implementation of this approach often requires carefully tailored priors and novel posterior calculation methods. In this article, we illustrate some of the fundamental practical issues that arise for two different model selection problems: the variable selection problem for the linear model and the CART model selection problem.
Optimality in human motor performance: ideal control of rapid aimed movements
 Psychological Review
, 1988
Abstract
Cited by 84 (2 self)
A stochastic optimized-submovement model is proposed for Fitts' law, the classic logarithmic tradeoff between the duration and spatial precision of rapid aimed movements. According to the model, an aimed movement toward a specified target region involves a primary submovement and an optional secondary corrective submovement. The submovements are assumed to be programmed such that they minimize average total movement time while maintaining a high frequency of target hits. The programming process achieves this minimization by optimally adjusting the average magnitudes and durations of noisy neuromotor force pulses used to generate the submovements. Numerous results from the literature on human motor performance may be explained in these terms. Two new experiments on rapid wrist rotations yield additional support for the stochastic optimized-submovement model. Experiment 1 revealed that the mean durations of primary submovements and of secondary submovements, not just average total movement times, conform to a square-root approximation of Fitts' law derived from the model. Also, the spatial endpoints of primary submovements have standard deviations that increase linearly with average primary-submovement velocity, and the average primary-submovement velocity influences the relative frequencies of secondary submovements, as predicted by the model. During Experiment 2, these results were replicated and ...
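For readers unfamiliar with the logarithmic tradeoff, Fitts' law in its common form is MT = a + b * log2(2D/W). The sketch below uses hypothetical coefficients a and b (the paper's model instead derives a square-root variant):

```python
# Illustrative Fitts' law calculation; the coefficients are hypothetical.
import math

def fitts_mt(distance: float, width: float,
             a: float = 0.05, b: float = 0.12) -> float:
    """Predicted movement time (s): a + b * log2(2 * distance / width)."""
    return a + b * math.log2(2.0 * distance / width)

# Halving the target width adds one bit to the index of difficulty,
# so predicted movement time grows by exactly b.
mt_wide = fitts_mt(distance=160.0, width=20.0)    # index of difficulty: 4 bits
mt_narrow = fitts_mt(distance=160.0, width=10.0)  # index of difficulty: 5 bits
```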
Domain-Specific Reasoning: Social Contracts, Cheating, and Perspective Change
, 1992
Abstract
Cited by 66 (2 self)
What counts as human rationality: reasoning processes that embody content-independent formal theories, such as propositional logic, or reasoning processes that are well designed for solving important adaptive problems? Most theories of human reasoning have been based on content-independent formal rationality, whereas adaptive reasoning, ecological or evolutionary, has been little explored. We elaborate and test an evolutionary approach, Cosmides’ (1989) social contract theory, using the Wason selection task. In the first part, we disentangle the theoretical concept of a “social contract” from that of a “cheater-detection algorithm.” We demonstrate that the fact that a rule is perceived as a social contract—or a conditional permission or obligation, as Cheng and Holyoak (1985) proposed—is not sufficient to elicit Cosmides’ striking results, which we replicated. The crucial issue is not semantic (the meaning of the rule), but pragmatic: whether a person is cued into the perspective of a party who can be cheated. In the second part, we distinguish between social contracts with bilateral and unilateral cheating options. Perspective change in contracts with bilateral cheating options turns P & not-Q responses into not-P & Q responses. The results strongly support social contract theory, contradict availability theory, and cannot be accounted for by pragmatic reasoning schema theory, which lacks the pragmatic concepts of perspectives and cheating detection.
Bayesian model averaging
 STAT.SCI
, 1999
Abstract
Cited by 42 (0 self)
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to overconfident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of ...
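The averaging the abstract describes weights each model's prediction by its posterior model probability; in practice these weights are often approximated from BIC values. A minimal sketch, with invented candidate models and numbers:

```python
# Sketch of BMA weights computed from BIC values (a common approximation);
# the candidate models and BIC numbers are invented for illustration.
import math

bics = {"x1": 210.3, "x1+x2": 208.1, "x1+x2+x3": 209.5}

# Posterior model probabilities are proportional to exp(-BIC/2);
# subtracting the minimum first keeps the exponentials well scaled.
best = min(bics.values())
raw = {m: math.exp(-0.5 * (b - best)) for m, b in bics.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# A BMA point prediction is then sum over m of weights[m] times the
# prediction of model m, rather than the prediction of one chosen model.
```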
From tools to theories: A heuristic of discovery in cognitive psychology
 Psychological Review
, 1991
Abstract
Cited by 38 (11 self)
The study of scientific discovery—where do new ideas come from?—has long been denigrated by philosophers as irrelevant to analyzing the growth of scientific knowledge. In particular, little is known about how cognitive theories are discovered, and neither the classical accounts of discovery as either probabilistic induction (e.g., Reichenbach, 1938) or lucky guesses (e.g., Popper, 1959), nor the stock anecdotes about sudden “eureka” moments deepen the insight into discovery. A heuristics approach is taken in this review, where heuristics are understood as strategies of discovery less general than a supposed unique logic of discovery but more general than lucky guesses. This article deals with how scientists’ tools shape theories of mind, in particular with how methods of statistical inference have turned into metaphors of mind. The tools-to-theories heuristic explains the emergence of a broad range of cognitive theories, from the cognitive revolution of the 1960s up to the present, and it can be used to detect both limitations and new lines of development in current cognitive theories that investigate the mind as an “intuitive statistician.” Scientific inquiry can be viewed as “an ocean, continuous everywhere and without a break or division” (Leibniz, 1690/1951, p. 73). Hans Reichenbach (1938) nonetheless divided this ocean into two great seas, the context of discovery and the context of justification. Philosophers, logicians, ...
Consequences of prejudice against the null hypothesis
 Psychological Bulletin
, 1975
Abstract
Cited by 36 (8 self)
The consequences of prejudice against accepting the null hypothesis were examined through (a) a mathematical model intended to simulate the research-publication process and (b) case studies of apparent erroneous rejections of the null hypothesis in published psychological research. The input parameters for the model characterize investigators' probabilities of selecting a problem for which the null hypothesis is true, of reporting, following up on, or abandoning research when data do or do not reject the null hypothesis, and they characterize editors' probabilities of publishing manuscripts concluding in favor of or against the null hypothesis. With estimates of the input parameters based on a questionnaire survey of a sample of social psychologists, the model output indicates a dysfunctional research-publication system. Particularly, the model indicates that there may be relatively few publications on problems for which the null hypothesis is (at least to a reasonable approximation) true, and of these, a high proportion will erroneously reject the null hypothesis. The case studies provide additional support for this conclusion. Accordingly, it is ...
Models of the Effects of Prior Knowledge on Category Learning
 Journal of Experimental Psychology: Learning, Memory, and Cognition
, 1994
Abstract
Cited by 32 (7 self)
... this article should be addressed to Evan Heit, Department of Psychology, Northwestern University, 2029 Sheridan Road, Evanston, Illinois 60208. Electronic mail may be sent to heit@nwu.edu