How to improve Bayesian reasoning without instruction: Frequency formats
Psychological Review, 1995
Cited by 220 (21 self)
Abstract:
Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one specifies the information format in which it is designed to operate. The authors show that Bayesian algorithms are computationally simpler in frequency formats than in the probability formats used in previous research. Frequency formats correspond to the sequential way information is acquired in natural sampling, from animal foraging to neural networks. By analyzing several thousand solutions to Bayesian problems, the authors found that when information was presented in frequency formats, statistically naive participants derived up to 50% of all inferences by Bayesian algorithms. Non-Bayesian algorithms included simple versions of Fisherian and Neyman-Pearsonian inference.

Is the mind, by design, predisposed against performing Bayesian inference? The classical probabilists of the Enlightenment, including Condorcet, Poisson, and Laplace, equated probability theory with the common sense of educated people, who were known then as “hommes éclairés.” Laplace (1814/1951) declared that “the theory of probability is at bottom nothing more than good sense reduced to a calculus which evaluates that which good minds know by a sort of instinct,
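The computational simplification the abstract describes can be sketched concretely. Below is an illustrative screening problem (the numbers are hypothetical, not taken from the paper): the probability format requires applying Bayes' theorem, while the frequency format reduces the same inference to counting within nested subsets.

```python
# Natural-frequency vs. probability-format Bayesian inference.
# Hypothetical screening problem: 1% base rate, 80% hit rate,
# 9.6% false-positive rate (numbers chosen for illustration only).

# Probability format: apply Bayes' theorem directly.
p_d = 0.01          # P(disease)
p_pos_d = 0.80      # P(positive | disease)
p_pos_nd = 0.096    # P(positive | no disease)
posterior = (p_pos_d * p_d) / (p_pos_d * p_d + p_pos_nd * (1 - p_d))

# Frequency format: imagine 1000 people sampled naturally and count.
n = 1000
sick = round(n * p_d)                       # 10 people are sick
sick_pos = round(sick * p_pos_d)            # 8 of them test positive
healthy_pos = round((n - sick) * p_pos_nd)  # 95 healthy people test positive
freq_posterior = sick_pos / (sick_pos + healthy_pos)

print(posterior)       # ~0.0776
print(freq_posterior)  # 8 / 103, ~0.0777
```

The frequency version needs only two counts and a division, which is the sense in which the Bayesian algorithm is "computationally simpler" in that format.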
On the Reality of Cognitive Illusions
1996
Cited by 94 (1 self)
Abstract:
The study of heuristics and biases in judgment has been criticized in several publications by G. Gigerenzer, who argues that "biases are not biases" and "heuristics are meant to explain what does not exist" (1991, p. 102). This article responds to Gigerenzer's critique and shows that it misrepresents the authors' theoretical position and ignores critical evidence. Contrary to Gigerenzer's central empirical claim, judgments of frequency—not only subjective probabilities—are susceptible to large and systematic biases. A postscript responds to Gigerenzer's (1996) reply.
Combining probability distributions from dependent information sources
Management Science, 1981
Cited by 33 (1 self)
Statistical Methods for Eliciting Probability Distributions
Journal of the American Statistical Association, 2005
Cited by 32 (1 self)
Abstract:
Elicitation is a key task for subjectivist Bayesians. While skeptics hold that it cannot (or perhaps should not) be done, in practice it brings statisticians closer to their clients and subject-matter-expert colleagues. This paper reviews the state of the art, reflecting the experience of statisticians informed by the fruits of a long line of psychological research into how people represent uncertain information cognitively, and how they respond to questions about that information. In a discussion of the elicitation process, the first issue to address is what it means for an elicitation to be successful, i.e., what criteria should be employed? Our answer is that a successful elicitation faithfully represents the opinion of the person being elicited. It is not necessarily “true” in some objectivistic sense, and cannot be judged that way. We see elicitation as simply part of the process of statistical modeling. Indeed, in a hierarchical model it is ambiguous at which point the likelihood ends and the prior begins. Thus the same kinds of judgment that inform statistical modeling in general also inform elicitation of prior distributions.
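One simple elicitation device (an illustrative sketch, not this paper's specific recommendation) asks an expert for a best-guess proportion and an "equivalent sample size" expressing how much evidence their opinion is worth, then encodes both as a conjugate Beta prior. The numbers below are hypothetical.

```python
# Encode an elicited opinion about a proportion as a Beta prior.
# The expert supplies a best guess (the prior mean) and an equivalent
# sample size reflecting confidence; both inputs are hypothetical.

def elicit_beta(mean: float, equivalent_n: float) -> tuple[float, float]:
    """Return (alpha, beta) of a Beta prior with the given mean and
    effective sample size alpha + beta = equivalent_n."""
    alpha = mean * equivalent_n
    beta = (1.0 - mean) * equivalent_n
    return alpha, beta

# Expert: "about 30% likely, and I'd weight that like 20 observations."
a, b = elicit_beta(0.30, 20.0)
print(a, b)  # 6.0 14.0

# Updating with observed data is then simple conjugate arithmetic:
successes, failures = 7, 13
posterior = (a + successes, b + failures)
print(posterior)  # (13.0, 27.0)
```

Framing the confidence question as a number of notional observations is one way to make the elicited quantity cognitively concrete, in the spirit of the psychological research the abstract cites.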
Frequency Illusions and Other Fallacies
Cited by 25 (0 self)
Abstract:
Cosmides and Tooby (1996) increased performance using a frequency rather than probability frame on a problem known to elicit base-rate neglect. Analogously, Gigerenzer (1994) claimed that the conjunction fallacy disappears when formulated in terms of frequency rather than the more usual single-event probability. These authors conclude that a module or algorithm of mind exists that is able to compute with frequencies but not probabilities. The studies reported here found that base-rate neglect could also be reduced using a clearly stated single-event probability frame and by using a diagram that clarified the critical nested-set relations of the problem; that the frequency advantage could be eliminated in the conjunction fallacy by separating the critical statements so that their nested relation was opaque; and that the large effect of frequency framing on the two problems studied is not stable. Facilitation via frequency is a result of clarifying the probabilistic interpretation of the...
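The nested-set relation at issue can be made explicit with counts (an illustrative sketch with hypothetical numbers): once the conjunction is seen as a subset of either conjunct, the conjunction rule is transparent.

```python
# Nested-set structure behind the conjunction rule (hypothetical counts).
# Out of 100 imagined people fitting some description:
bank_tellers = 15           # people who are bank tellers
feminist_bank_tellers = 5   # the subset who are also active feminists

# The conjunction is nested inside either conjunct, so its frequency
# can never exceed the frequency of the superset category.
assert feminist_bank_tellers <= bank_tellers

p_conjunct = bank_tellers / 100
p_conjunction = feminist_bank_tellers / 100
print(p_conjunction <= p_conjunct)  # True: P(A and B) <= P(A)
```

On this view, frequency framing helps because it makes the subset relation visible, not because the mind can only compute with frequencies.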
A variance explanation paradox: When a little is a lot
Psychological Bulletin, 1985
Cited by 24 (0 self)
Abstract:
Concerning a single major league at bat, the percentage of variance in batting performance attributable to skill differentials among major league baseball players can be calculated statistically. The statistically appropriate calculation is seriously discrepant with intuitions about the influence of skill in batting performance. This paradoxical discrepancy is discussed in terms of habits of thought about the concept of variance explanation. It is argued that percent variance explanation is a misleading index of the influence of systematic factors in cases where there are processes by which individually tiny influences cumulate to produce meaningful outcomes. It is generally accepted that percentage of variance explained is a good measure of the importance of potential explanatory factors. Correlation coefficients of .30 or less are often poor-mouthed as accounting for less than 10% of the variance, a rather feeble performance for the influence of a putatively systematic factor. In analysis of variance contexts, the percentage of variance explanation is embodied in the omega-squared ratio of the systematic variance component to the total of the systematic and chance variance components. It, too, is often small; when it is, this is a source of discouragement for the thoughtful investigator. Psychologists sometimes tend to rely too much on statistical significance tests as the basis for making substantive claims, thereby often disguising low levels of variance explanation. It is usually an effective criticism when one can highlight the explanatory weakness of an investigator's pet variables in percentage terms. Having been trained, like all of us, in the idiom of variance explanation, I have always ...
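The omega-squared calculation behind the paradox can be sketched with hypothetical numbers (not the paper's data): suppose players' true hit probabilities span .200 to .300. The skill (between-player) variance is tiny next to the chance (within-at-bat) variance of a single Bernoulli trial, even though those skill differences cumulate into large differences over a season.

```python
# Abelson-style variance-explanation calculation (illustrative numbers):
# how much of the outcome variance of a SINGLE at bat is attributable
# to skill differences among batters?
import statistics

# Hypothetical true hit probabilities, spread uniformly over [.200, .300].
skills = [0.20 + 0.001 * i for i in range(101)]

# Between-player (skill) variance of the hit probability:
var_skill = statistics.pvariance(skills)

# Within-player (chance) variance of one Bernoulli at bat,
# averaged over players: E[p * (1 - p)].
var_chance = statistics.mean(p * (1 - p) for p in skills)

# Omega-squared: systematic variance over systematic + chance variance.
omega_sq = var_skill / (var_skill + var_chance)
print(f"{omega_sq:.4%}")  # well under 1% of a single at bat's variance
```

The single-trial percentage is minuscule, which is exactly the intuition-violating result: "a little" variance explained per trial is "a lot" once tiny influences accumulate.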
Evaluating and combining subjective probability estimates
Journal of Behavioral Decision Making, 1997
Cited by 18 (4 self)
Abstract:
This paper concerns the evaluation and combination of subjective probability estimates for categorical events. We argue that the appropriate criterion for evaluating individual and combined estimates depends on the type of uncertainty the decision maker seeks to represent, which in turn depends on his or her model of the event space. Decision makers require accurate estimates in the presence of aleatory uncertainty about exchangeable events, diagnostic estimates given epistemic uncertainty about unique events, and some combination of the two when the events are not necessarily unique, but the best equivalence class definition for exchangeable events is not apparent. Following a brief review of the mathematical and empirical literature on combining judgments, we present an approach to the topic that derives from (1) a weak cognitive model of the individual that assumes subjective estimates are a function of underlying judgment perturbed by random error and (2) a classification of judgment contexts in terms of the underlying information structure. In support of our developments, we present new analyses of two sets of subjective probability estimates, one of exchangeable and the other of unique events. As predicted, mean estimates were more accurate than the individual values in the first case and more diagnostic in ...
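The "judgment perturbed by random error" model makes a concrete prediction that is easy to simulate (a hypothetical sketch, not the paper's analysis): if each judge reports the true probability plus independent noise, averaging the judges' estimates cancels error and yields a better Brier score than a typical individual.

```python
# Simulation of the error-perturbation argument (hypothetical data):
# averaging independent noisy probability estimates cancels random
# error, so the mean estimate scores better than a single judge.
import random

random.seed(0)
true_p = 0.7
n_judges, n_events = 10, 2000

def brier(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

indiv_score = 0.0
mean_score = 0.0
for _ in range(n_events):
    outcome = 1 if random.random() < true_p else 0
    # Each judge reports the truth perturbed by independent noise,
    # clipped to [0, 1].
    estimates = [min(1.0, max(0.0, true_p + random.gauss(0, 0.2)))
                 for _ in range(n_judges)]
    indiv_score += brier(estimates[0], outcome)              # one judge
    mean_score += brier(sum(estimates) / n_judges, outcome)  # the average

print(indiv_score / n_events > mean_score / n_events)  # averaging wins
```

This is the "accuracy" half of the paper's argument; the point of the abstract is that for unique events diagnosticity, not accuracy, may be the right criterion, and there averaging can behave differently.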
Judgment dissociation theory: An analysis of differences in causal, counterfactual, and covariational reasoning
Journal of Experimental Psychology: General, 2003
Cited by 14 (7 self)
Abstract:
Research suggests that causal judgment is influenced primarily by counterfactual or covariational reasoning. In contrast, the author of this article develops judgment dissociation theory (JDT), which predicts that these types of reasoning differ in function and can lead to divergent judgments. The actuality principle proposes that causal selections focus on antecedents that are sufficient to generate the actual outcome. The substitution principle proposes that ad hoc categorization plays a key role in counterfactual and covariational reasoning such that counterfactual selections focus on antecedents that would have been sufficient to prevent the outcome or something like it and covariational selections focus on antecedents that yield the largest increase in the probability of the outcome or something like it. The findings of 4 experiments support JDT but not the competing counterfactual and covariational accounts. If causation is the cement of the universe, as the philosopher David Hume (1740/1938) put it, then it is fair to say that causal knowledge is the cement that binds together each person’s representational universe. Causal reasoning—the process that generates this glue—confers many functional advantages. In virtually every sphere of human interest, our abilities to learn and categorize
The cognitive structure of surprise: looking for basic principles
International Review of Philosophy, 2007
Cited by 9 (4 self)
Abstract:
We develop a conceptual and formal clarification of the notion of surprise as a belief-based phenomenon by exploring a rich typology. Each kind of surprise is associated with a particular phase of the cognitive processing and involves particular kinds of epistemic representations (representations and expectations under scrutiny, implicit beliefs, presuppositions). We define two main kinds of surprise: mismatch-based surprise and astonishment. In the central part of the paper we suggest how a formal model of surprise can be integrated with a formal model of belief change. We investigate the role of surprise in triggering the process of belief reconsideration. There are a number of models of surprise developed in the psychology of emotion. We provide several comparisons of our approach with those models.
Prior Information and Generalized Questions
1996
Cited by 7 (4 self)
Abstract:
In learning problems available information is usually divided into two categories: examples of function values (or training data) and prior information (e.g. a smoothness constraint).
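The division the abstract describes can be sketched in code (an illustrative example, not the paper's formalism): fit function values on a grid by penalized least squares, where the data term uses the training examples and a first-difference penalty encodes the smoothness prior. All names and numbers below are hypothetical.

```python
# Combining training data with a smoothness prior via penalized
# least squares on a grid:
#   minimize  sum_i (f[x_i] - y_i)^2 + lam * sum_j (f[j+1] - f[j])^2
# The first term fits the examples; the second is the prior.

def fit_smooth(n: int, data: dict[int, float], lam: float,
               iters: int = 500) -> list[float]:
    """Coordinate-descent fit of f[0..n-1] under a first-difference
    smoothness penalty; `data` maps grid indices to observed values."""
    f = [0.0] * n
    for _ in range(iters):
        for j in range(n):
            # The objective is quadratic in f[j]; solve for its minimum
            # by collecting the coefficient a and linear term b.
            a, b = 0.0, 0.0
            if j in data:           # data term
                a += 1.0
                b += data[j]
            if j > 0:               # smoothness link to left neighbor
                a += lam
                b += lam * f[j - 1]
            if j < n - 1:           # smoothness link to right neighbor
                a += lam
                b += lam * f[j + 1]
            if a > 0:
                f[j] = b / a
    return f

# Two observations; the smoothness prior fills in the unobserved points.
f = fit_smooth(5, {0: 0.0, 4: 4.0}, lam=1.0)
print([round(v, 2) for v in f])  # a smooth, roughly linear ramp
```

With only two examples, the prior alone determines the interior values: the unobserved points are interpolated linearly, which is exactly the extra information the smoothness constraint contributes beyond the training data.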