Results 1–10 of 88
An Analysis of First-Order Logics of Probability
Artificial Intelligence, 1990
Cited by 271 (18 self)
Abstract: We consider two approaches to giving semantics to first-order logics of probability. The first approach puts a probability on the domain, and is appropriate for giving semantics to formulas involving statistical information such as "The probability that a randomly chosen bird flies is greater than .9." The second approach puts a probability on possible worlds, and is appropriate for giving semantics to formulas describing degrees of belief, such as "The probability that Tweety (a particular bird) flies is greater than .9." We show that the two approaches can be easily combined, allowing us to reason in a straightforward way about statistical information and degrees of belief. We then consider axiomatizing these logics. In general, it can be shown that no complete axiomatization is possible. We provide axiom systems that are sound and complete in cases where a complete axiomatization is possible, showing that they do allow us to capture a great deal of interesting reasoning about prob...
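The two semantics contrasted in this abstract can be sketched as a toy computation (a minimal illustration; the bird names, worlds, and weights below are made up, not taken from the paper):

```python
# Statistical ("domain") semantics: the probability lives on the domain.
# "The probability that a randomly chosen bird flies" is the fraction of
# flying birds in a single fixed world.
birds = {"tweety": False, "opus": False, "woodstock": True,
         "polly": True, "iago": True}  # bird -> flies?
stat_prob = sum(birds.values()) / len(birds)

# Degree-of-belief ("possible worlds") semantics: the probability lives on
# a set of worlds. "The probability that Tweety flies" is the total weight
# of the worlds in which Tweety flies.
worlds = [({"tweety": True}, 0.7),   # (facts in the world, weight)
          ({"tweety": False}, 0.3)]
belief_prob = sum(w for facts, w in worlds if facts["tweety"])

print(stat_prob)    # fraction of fliers in the domain: 0.6
print(belief_prob)  # weight of Tweety-flies worlds: 0.7
```

The point of combining the two approaches is that both kinds of statement can then appear in one formula.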
Probabilistic Mental Models: A Brunswikian Theory of Confidence
Psychological Review, 1991
Cited by 148 (22 self)
Abstract: Research on people's confidence in their general knowledge has to date produced two fairly stable effects, many inconsistent results, and no comprehensive theory. We propose such a comprehensive framework, the theory of probabilistic mental models (PMM theory). The theory (a) explains both the overconfidence effect (mean confidence is higher than percentage of answers correct) and the hard-easy effect (overconfidence increases with item difficulty) reported in the literature and (b) predicts conditions under which both effects appear, disappear, or invert. In addition, (c) it predicts a new phenomenon, the confidence-frequency effect, a systematic difference between a judgment of confidence in a single event (i.e., that any given answer is correct) and a judgment of the frequency of correct answers in the long run. Two experiments are reported that support PMM theory by confirming these predictions, and several apparent anomalies reported in the literature are explained and integrated into the present framework. Do people think they know more than they really do? In the last 15 years, cognitive psychologists have amassed a large and apparently damning body of experimental evidence on overconfidence in knowledge, evidence that is in turn part of an even larger and more damning literature on so-called cognitive biases. The cognitive bias research claims that people are naturally prone to making mistakes in reasoning and memory, including the mistake of overestimating their knowledge.
Belief Functions: The Disjunctive Rule of Combination and the Generalized Bayesian Theorem
Cited by 119 (6 self)
Abstract: We generalize Bayes' theorem within the transferable belief model framework. The Generalized Bayesian Theorem (GBT) allows us to compute the belief over a space Θ given an observation x ⊆ X when one knows only the beliefs over X for every θi ∈ Θ. We also discuss the Disjunctive Rule of Combination (DRC) for distinct pieces of evidence. This rule allows us to compute the belief over X from the beliefs induced by two distinct pieces of evidence when one knows only that one of the pieces of evidence holds. The properties of the DRC and GBT and their uses for belief propagation in directed belief networks are analysed. The use of the discounting factors is justified. The application of these rules is illustrated by an example of medical diagnosis.
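Of the two rules named in this abstract, the Disjunctive Rule of Combination has a compact statement: the combined mass of A is the sum of m1(B)·m2(C) over all pairs with B ∪ C = A (union instead of intersection, and no renormalization). A minimal sketch of just the DRC (the frame and mass values below are made up):

```python
from itertools import product

def drc(m1, m2):
    """Disjunctive Rule of Combination: the combined mass of A is the
    total product mass over pairs (B, C) with B | C == A. This is the
    rule for when only one of the two evidence sources is known to hold."""
    out = {}
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b | c
        out[a] = out.get(a, 0.0) + mb * mc
    return out

# Two mass functions on the frame {x, y, z} (illustrative values):
m1 = {frozenset({"x"}): 0.7, frozenset({"x", "y"}): 0.3}
m2 = {frozenset({"x"}): 0.5, frozenset({"z"}): 0.5}
for focal, mass in drc(m1, m2).items():
    print(sorted(focal), round(mass, 3))
```

Unlike Dempster's conjunctive rule, nothing is sent to the empty set here, so the result always sums to 1 without normalization.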
How to make cognitive illusions disappear: Beyond "heuristics and biases"
In W. Stroebe & M. Hewstone (Eds.), European Review of Social Psychology, 1991
Cited by 106 (8 self)
Abstract: Most so-called "errors" in probabilistic reasoning are in fact not violations of probability theory. Examples of such "errors" include overconfidence bias, conjunction fallacy, and base-rate neglect. Researchers have relied on a very narrow normative view, and have ignored conceptual distinctions—for example, single case versus relative frequency—fundamental to probability theory. By recognizing and using these distinctions, however, we can make apparently stable "errors" disappear, reappear, or even invert. I suggest what a reformed understanding of judgments under uncertainty might look like. Two Revolutions: Social psychology was transformed by the "cognitive revolution." Cognitive imperialism has been both praised (e.g., Strack, 1988) and lamented (e.g., Graumann, 1988). But a second revolution has transformed most of the sciences so fundamentally that it is now hard to see that it could have been different before. It has made concepts such as probability, chance, and uncertainty indispensable for understanding nature, society, and the mind. This sweeping conceptual change has been called the "probabilistic revolution" (Gigerenzer et al., 1989; Krüger, Daston, & Heidelberger, 1987; Krüger, Gigerenzer, & Morgan, 1987). The probabilistic revolution differs from the cognitive revolution in its genuine novelty and its interdisciplinary scope. Statistical mechanics, Mendelian genetics, ...
Two views of belief: Belief as generalized probability and belief as evidence
1992
Cited by 72 (12 self)
Abstract: Belief functions are mathematical objects defined to satisfy three axioms that look somewhat similar to the Kolmogorov axioms defining probability functions. We argue that there are (at least) two useful and quite different ways of understanding belief functions. The first is as a generalized probability function (which technically corresponds to the inner measure induced by a probability function). The second is as a way of representing evidence. Evidence, in turn, can be understood as a mapping from probability functions to probability functions. It makes sense to think of updating a belief if we think of it as a generalized probability. On the other hand, it makes sense to combine two beliefs (using, say, Dempster's rule of combination) only if we think of the belief functions as representing evidence. Many previous papers have pointed out problems with the belief function approach; the claim of this paper is that these problems can be explained as a consequence of confounding the...
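Dempster's rule of combination, mentioned in this abstract as the operation that only makes sense under the "belief as evidence" reading, is itself a short computation: pool two mass functions by intersection and renormalize by the conflict K (the mass sent to the empty set). A sketch (the frame and mass values are made up):

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule of combination: conjunctive pooling of two mass
    functions, renormalized by the conflict K (mass on the empty set)."""
    out, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            out[a] = out.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: v / (1.0 - conflict) for a, v in out.items()}

# Two pieces of evidence on a frame {flu, cold} (illustrative values):
FLU, COLD = frozenset({"flu"}), frozenset({"cold"})
m1 = {FLU: 0.6, FLU | COLD: 0.4}
m2 = {COLD: 0.5, FLU | COLD: 0.5}
print(dempster(m1, m2))  # conflict K = 0.3 is renormalized away
```

Note that the rule is symmetric in its two arguments but is not an update of one by the other, which is exactly the distinction the paper presses.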
Domain-Specific Reasoning: Social Contracts, Cheating, and Perspective Change
1992
Cited by 66 (2 self)
Abstract: What counts as human rationality: reasoning processes that embody content-independent formal theories, such as propositional logic, or reasoning processes that are well designed for solving important adaptive problems? Most theories of human reasoning have been based on content-independent formal rationality, whereas adaptive reasoning, ecological or evolutionary, has been little explored. We elaborate and test an evolutionary approach, Cosmides' (1989) social contract theory, using the Wason selection task. In the first part, we disentangle the theoretical concept of a "social contract" from that of a "cheater-detection algorithm." We demonstrate that the fact that a rule is perceived as a social contract—or a conditional permission or obligation, as Cheng and Holyoak (1985) proposed—is not sufficient to elicit Cosmides' striking results, which we replicated. The crucial issue is not semantic (the meaning of the rule), but pragmatic: whether a person is cued into the perspective of a party who can be cheated. In the second part, we distinguish between social contracts with bilateral and unilateral cheating options. Perspective change in contracts with bilateral cheating options turns P & not-Q responses into not-P & Q responses. The results strongly support social contract theory, contradict availability theory, and cannot be accounted for by pragmatic reasoning schema theory, which lacks the pragmatic concepts of perspectives and cheating detection.
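The response patterns named in this abstract (P & not-Q vs. not-P & Q) refer to which cards participants turn over in the Wason selection task for a rule "if P then Q". The contrast can be sketched as follows (the party labels and the coding of perspectives are hypothetical illustration, not the authors' materials):

```python
CARDS = ["P", "not-P", "Q", "not-Q"]

def logic_cards(cards):
    """Formal-logic answer: 'if P then Q' can be falsified only by a
    P card hiding not-Q, or a not-Q card hiding P."""
    return [c for c in cards if c in {"P", "not-Q"}]

def cheating_cards(cards, perspective):
    """Social-contract reading: check the cards on which *this* party
    could have been cheated. With bilateral cheating options, the
    relevant set flips when the cued perspective flips (hypothetical
    coding of the two parties)."""
    targets = {"party_A": {"P", "not-Q"}, "party_B": {"not-P", "Q"}}[perspective]
    return [c for c in cards if c in targets]

print(logic_cards(CARDS))                # ['P', 'not-Q']
print(cheating_cards(CARDS, "party_B"))  # ['not-P', 'Q']
```

The logically incorrect not-P & Q pattern is thus predicted, not anomalous, once the participant is cued into the other party's perspective.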
From tools to theories: A heuristic of discovery in cognitive psychology
Psychological Review, 1991
Cited by 39 (11 self)
Abstract: The study of scientific discovery—where do new ideas come from?—has long been denigrated by philosophers as irrelevant to analyzing the growth of scientific knowledge. In particular, little is known about how cognitive theories are discovered, and neither the classical accounts of discovery as either probabilistic induction (e.g., Reichenbach, 1938) or lucky guesses (e.g., Popper, 1959), nor the stock anecdotes about sudden "eureka" moments deepen the insight into discovery. A heuristics approach is taken in this review, where heuristics are understood as strategies of discovery less general than a supposed unique logic of discovery but more general than lucky guesses. This article deals with how scientists' tools shape theories of mind, in particular with how methods of statistical inference have turned into metaphors of mind. The tools-to-theories heuristic explains the emergence of a broad range of cognitive theories, from the cognitive revolution of the 1960s up to the present, and it can be used to detect both limitations and new lines of development in current cognitive theories that investigate the mind as an "intuitive statistician." Scientific inquiry can be viewed as "an ocean, continuous everywhere and without a break or division" (Leibniz, 1690/1951, p. 73). Hans Reichenbach (1938) nonetheless divided this ocean into two great seas, the context of discovery and the context of justification. Philosophers, logicians, ...
Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
British Journal for the Philosophy of Science, 2006
Cited by 36 (14 self)
Abstract: Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and longstanding problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a metastatistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
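One common numeric rendering of the severity idea, for a one-sided normal test of a mean (H0: μ ≤ μ0 vs. H1: μ > μ0), takes the severity with which observed data x̄ supports the inference μ > μ1 to be Φ((x̄ − μ1)/(σ/√n)), the probability of a less extreme result if μ were only μ1. A sketch under that reading (the numbers are made up, and this is one standard presentation of severity rather than necessarily the authors' exact formulation):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def severity(xbar, mu1, sigma, n):
    """Post-data severity for the inference mu > mu1, given observed
    mean xbar from n draws with known sigma: the probability of a
    result less extreme than xbar if mu were exactly mu1."""
    return phi((xbar - mu1) / (sigma / sqrt(n)))

# Illustration: sigma = 2, n = 100, observed mean 0.4.
# The same data pass "mu > 0" severely but "mu > 0.4" not at all.
for mu1 in (0.0, 0.2, 0.4):
    print(mu1, round(severity(0.4, mu1, 2.0, 100), 3))
```

This exhibits the abstract's point about sensitivity: a rejection warrants only those discrepancies from the null that the test probed with high severity.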
Could Fisher, Jeffreys, and Neyman Have Agreed on Testing?
2002
Cited by 29 (2 self)
Abstract: Ronald Fisher advocated testing using p-values; Harold Jeffreys proposed use of objective posterior probabilities of hypotheses; and Jerzy Neyman recommended testing with fixed error probabilities. Each was quite critical of the other approaches.
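The three positions can be contrasted on a single toy data set (a sketch only; the binomial example and the uniform prior are illustrative choices, not drawn from the paper):

```python
from math import comb

def binom_pvalue(n, k, p0=0.5):
    """Fisher-style one-sided p-value: probability of k or more
    successes in n trials under H0: theta = p0."""
    return sum(comb(n, j) * p0**j * (1 - p0)**(n - j) for j in range(k, n + 1))

def bayes_factor_uniform(n, k, p0=0.5):
    """Jeffreys-style Bayes factor for H1 (theta ~ Uniform(0, 1))
    against H0: theta = p0. Under a uniform prior the marginal
    likelihood of k successes in n trials is exactly 1 / (n + 1)."""
    return (1.0 / (n + 1)) / (comb(n, k) * p0**k * (1 - p0)**(n - k))

n, k = 20, 15
p = binom_pvalue(n, k)
bf = bayes_factor_uniform(n, k)
print(f"Fisher:  p-value = {p:.4f}")        # ≈ 0.0207
print(f"Jeffreys: BF(H1:H0) = {bf:.2f}")    # ≈ 3.22
print("Neyman:  reject H0 at alpha = 0.05" if p <= 0.05
      else "Neyman:  accept H0 at alpha = 0.05")
```

The same data yield a small p-value, a fixed-level rejection, and only modest Bayes-factor evidence, which is the kind of divergence the three men argued over.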
The Theoretical Status of Latent Variables
Psychological Review, 2003
Cited by 28 (3 self)
Abstract: This article examines the theoretical status of latent variables as used in modern test theory models. First, it is argued that a consistent interpretation of such models requires a realist ontology for latent variables. Second, the relation between latent variables and their indicators is discussed. It is maintained that this relation can be interpreted as a causal one but that in measurement models for interindividual differences the relation does not apply to the level of the individual person. To substantiate intraindividual causal conclusions, one must explicitly represent individual level processes in the measurement model. Several research strategies that may be useful in this respect are discussed, and a typology of constructs is proposed on the basis of this analysis. The need to link individual processes to latent variable models for interindividual differences is emphasized. Consider the following sentence: "Einstein would not have been able to come up with his e = mc² had he not possessed such an extraordinary intelligence." What does this sentence express? It relates observable behavior (Einstein's writing e = mc²) to an unobservable attribute (his extraordinary intelligence), and it does so by assigning to the unobservable attribute a causal role in...