Results 1–10 of 163
How to improve Bayesian reasoning without instruction: Frequency formats
Psychological Review, 1995
Cited by 380 (28 self)

Abstract:
Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one specifies the information format in which it is designed to operate. The authors show that Bayesian algorithms are computationally simpler in frequency formats than in the probability formats used in previous research. Frequency formats correspond to the sequential way information is acquired in natural sampling, from animal foraging to neural networks. By analyzing several thousand solutions to Bayesian problems, the authors found that when information was presented in frequency formats, statistically naive participants derived up to 50% of all inferences by Bayesian algorithms. Non-Bayesian algorithms included simple versions of Fisherian and Neyman–Pearsonian inference. The classical probabilists of the Enlightenment, including Condorcet, Poisson, and Laplace, equated probability theory with the common sense of educated people, who were known then as “hommes éclairés.” Laplace (1814/1951) declared that “the theory of probability is at bottom nothing more than good sense reduced to a calculus which evaluates that which good minds know by a sort of instinct…”
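The computational contrast between the two formats can be made concrete in a short sketch. The disease/test numbers below are hypothetical illustrations, not figures from the study:

```python
# Same Bayesian inference in the two information formats (hypothetical numbers).

def bayes_probability_format(base_rate, sensitivity, false_alarm_rate):
    """Posterior via Bayes' rule on single-event probabilities."""
    p_pos = base_rate * sensitivity + (1 - base_rate) * false_alarm_rate
    return base_rate * sensitivity / p_pos

def bayes_frequency_format(n_true_pos, n_false_pos):
    """Posterior from natural frequencies: a single division."""
    return n_true_pos / (n_true_pos + n_false_pos)

# Probability format: P(disease)=0.01, P(pos|disease)=0.8, P(pos|no disease)=0.096
p1 = bayes_probability_format(0.01, 0.8, 0.096)

# Frequency format: of 1000 people, 10 have the disease; 8 of them test
# positive, as do 95 of the 990 healthy people (about 0.096 * 990).
p2 = bayes_frequency_format(8, 95)

print(round(p1, 3), round(p2, 3))  # both about 0.078
```

The frequency version needs no multiplication by the base rate at all, which is the sense in which the Bayesian computation is simpler in that format.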
Similarity and induction
Review of Philosophy and Psychology, 2010
Cited by 249 (10 self)

Abstract:
An argument is categorical if its premises and conclusion are of the form All members of C have property F, where C is a natural category like FALCON or BIRD, and F remains the same across premises and conclusion. An example is: Grizzly bears love onions. Therefore, all bears love onions. Such an argument is psychologically strong to the extent that belief in its premises engenders belief in its conclusion. A subclass of categorical arguments is examined, and the following hypothesis is advanced: the strength of a categorical argument increases with (a) the degree to which the premise categories are similar to the conclusion category and (b) the degree to which the premise categories are similar to members of the lowest-level category that includes both the premise and the conclusion categories. A model based on this hypothesis accounts for 13 qualitative phenomena and the quantitative results of several experiments. The Problem of Argument Strength: Fundamental to human thought is the confirmation relation, joining sentences P1, …, Pn to another sentence C just in case belief in the former leads to belief in the latter. Theories of confirmation may be cast in the terminology of argument strength.
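The similarity-plus-coverage hypothesis can be sketched as a toy scoring function. The category names, similarity table, and mixing weight alpha below are all invented for illustration; they are not the paper's fitted model:

```python
# Toy sketch: argument strength as a weighted mix of (a) similarity of the
# premise categories to the conclusion category and (b) their "coverage" of
# the lowest-level category including premises and conclusion.

SIM = {  # hypothetical symmetric similarities between categories
    ("grizzly", "bear"): 0.9,
    ("grizzly", "polar"): 0.7,
    ("grizzly", "panda"): 0.5,
}

def sim(a, b):
    if a == b:
        return 1.0
    return SIM.get((a, b), SIM.get((b, a), 0.0))

def strength(premises, conclusion, inclusive_members, alpha=0.5):
    # (a) similarity of the premise categories to the conclusion category
    similarity = max(sim(p, conclusion) for p in premises)
    # (b) coverage: average similarity of the premises to each member of the
    # lowest-level category containing both premises and conclusion
    coverage = sum(max(sim(p, m) for p in premises)
                   for m in inclusive_members) / len(inclusive_members)
    return alpha * similarity + (1 - alpha) * coverage

# "Grizzly bears love onions. Therefore, all bears love onions."
bears = ["grizzly", "polar", "panda"]
print(round(strength(["grizzly"], "bear", bears), 2))
```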
Betting on Theories
1993
Cited by 102 (4 self)

Abstract:
Predictions about the future and unrestricted universal generalizations are never logically implied by our observational evidence, which is limited to particular facts in the present and past. Nevertheless, propositions of these and other kinds are often said to be confirmed by observational evidence. A natural place to begin the study of confirmation theory is to consider what it means to say that some evidence E confirms a hypothesis H. Incremental and absolute confirmation. Let us say that E raises the probability of H if the probability of H given E is higher than the probability of H not given E. According to many confirmation theorists, “E confirms H” means that E raises the probability of H. This conception of confirmation will be called incremental confirmation. Let us say that H is probable given E if the probability of H given E is above some threshold. (This threshold remains to be specified but is assumed to be at least one half.) According to some confirmation theorists, “E confirms H” means that H is probable given E. This conception of confirmation will be called absolute confirmation. Confirmation theorists have sometimes failed to distinguish these two concepts. For example, Carl Hempel in his classic “Studies in the Logic of Confirmation” endorsed the following principles: (1) a generalization of the form “All F are G” is confirmed by the evidence that there is an individual that is both F and G; (2) a generalization of that form is also confirmed by the evidence that there is an individual that is neither F nor G; (3) the hypotheses confirmed by a piece of evidence are consistent with one another; (4) if E confirms H then E confirms every logical consequence of H. Principles (1) and (2) are not true of absolute confirmation: observation of a single thing that is F and G cannot in general make it probable that all F are G; likewise for an individual that is neither
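The two conceptions of "E confirms H" can be contrasted in a few lines; the probabilities used are hypothetical:

```python
# Incremental vs. absolute confirmation on hypothetical probabilities.

def incremental(p_h, p_h_given_e):
    """E incrementally confirms H iff E raises H's probability."""
    return p_h_given_e > p_h

def absolute(p_h_given_e, threshold=0.5):
    """E absolutely confirms H iff H is probable given E."""
    return p_h_given_e > threshold

# Evidence that raises P(H) from 0.01 to 0.05 confirms H incrementally
# but not absolutely: H remains improbable given E.
print(incremental(0.01, 0.05), absolute(0.05))  # True False
```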
Reconciling simplicity and likelihood principles in perceptual organization
Psychological Review, 1996
Cited by 86 (17 self)

Abstract:
Two principles of perceptual organization have been proposed. The likelihood principle, following H. L. F. von Helmholtz (1910/1962), proposes that perceptual organization is chosen to correspond to the most likely distal layout. The simplicity principle, following Gestalt psychology, suggests that perceptual organization is chosen to be as simple as possible. The debate between these two views has been a central topic in the study of perceptual organization. Drawing on mathematical results in A. N. Kolmogorov's (1965) complexity theory, the author argues that simplicity and likelihood are not in competition, but are identical. Various implications for the theory of perceptual organization and psychology more generally are outlined. How does the perceptual system derive a complex and structured description of the perceptual world from patterns of activity at the sensory receptors? Two apparently competing theories of perceptual organization have been influential. The first, initiated by Helmholtz (1910/1962), advocates the likelihood principle: sensory input will be organized into the most probable distal object or event consistent with that input. The second, initiated by Wertheimer and developed by other Gestalt psychologists, advocates what Pomerantz and Kubovy (1986) called the simplicity principle: the perceptual system is viewed as finding the simplest, rather than the most likely, perceptual organization consistent with the sensory input. There has been considerable theoretical and empirical controversy concerning whether likelihood or simplicity is the governing principle of perceptual organization (e.g., Hatfield &
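The claimed identity of the two principles turns on weighting hypotheses by description length, roughly p(h) = 2^(-L(h)), so that the shortest description is also the most probable one. A toy sketch, with made-up description lengths for perceptual interpretations:

```python
# If each candidate perceptual organization h gets prior p(h) = 2**(-L(h)),
# where L(h) is its description length in bits, then picking the simplest
# organization and picking the most likely one are the same choice.
# The interpretations and bit counts below are hypothetical.

descriptions = {
    "two overlapping rectangles": 12,   # bits
    "one rectangle occluding another": 9,
    "37 unrelated line segments": 40,
}

def prior(bits):
    return 2.0 ** (-bits)

best_by_simplicity = min(descriptions, key=descriptions.get)
best_by_likelihood = max(descriptions, key=lambda h: prior(descriptions[h]))
print(best_by_simplicity == best_by_likelihood)  # True
```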
Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
British Journal for the Philosophy of Science, 2006
Cited by 50 (21 self)

Abstract:
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test’s (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
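For a one-sample normal testing setup (my own illustrative choice, not an example from the paper), severity can be computed directly: the inference "mu > mu1" passes severely to the degree that a mean as large as the one observed would have been improbable were mu only mu1.

```python
import math

# Post-data severity sketch for a normal mean (setup and numbers hypothetical):
# SEV(mu > mu1) = P(sample mean <= observed mean ; mu = mu1).

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def severity(x_obs, mu1, sigma, n):
    """Severity of the inference 'mu > mu1' given observed mean x_obs."""
    se = sigma / math.sqrt(n)
    return normal_cdf((x_obs - mu1) / se)

# sigma = 2, n = 100, observed mean 0.4
print(round(severity(0.4, 0.0, 2.0, 100), 3))  # "mu > 0" passes with high severity
print(round(severity(0.4, 0.3, 2.0, 100), 3))  # "mu > 0.3" passes with much less
```

The same observed result thus licenses some post-data inferences far more strongly than others, which is the sense in which severity goes beyond the pre-data error probabilities alone.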
The plurality of Bayesian measures of confirmation and the problem of measure sensitivity
Philosophy of Science 66 (Proceedings), S362–S378, 1999
Cited by 47 (12 self)

Abstract:
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of nonequivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to the choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity. 1 Preliminaries. 1.1 Terminology, Notation, and Basic Assumptions. The present paper is concerned with the degree of incremental confirmation provided by evidential propositions E for hypotheses under test H, given background knowledge K, according to relevance measures of degree of confirmation c. We say that c is a relevance measure of degree of confirmation if and only if c satisfies the following constraints, in cases where E confirms, disconfirms, or is confirmationally irrelevant to H, given background knowledge K.
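The measure-sensitivity problem is easy to exhibit numerically. The two measures below (the difference and log-ratio measures) are standard relevance measures from this literature; the probability assignments are hypothetical:

```python
import math

# Two relevance measures can rank the same pair of cases in opposite orders.

def diff(p_h, p_h_given_e):          # difference measure: P(H|E) - P(H)
    return p_h_given_e - p_h

def log_ratio(p_h, p_h_given_e):     # log-ratio measure: log[P(H|E)/P(H)]
    return math.log(p_h_given_e / p_h)

case_a = (0.5, 0.9)    # P(H) = 0.5 raised to P(H|E) = 0.9
case_b = (0.01, 0.10)  # P(H) = 0.01 raised to P(H|E) = 0.10

print(diff(*case_a) > diff(*case_b))            # True: d says A is confirmed more
print(log_ratio(*case_a) > log_ratio(*case_b))  # False: r says B is confirmed more
```

Any argument that tacitly assumes one ordering is therefore hostage to the choice of measure, which is the enthymeme the abstract describes.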
On Universal Prediction and Bayesian Confirmation
Theoretical Computer Science, 2007
Cited by 30 (14 self)

Abstract:
The Bayesian framework is a well-studied and successful framework for inductive reasoning, which includes hypothesis testing and confirmation, parameter estimation, sequence prediction, classification, and regression. But standard statistical guidelines for choosing the model class and prior are not always available or can fail, in particular in complex situations. Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. I discuss in breadth how and in which sense universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. I show that Solomonoff’s model possesses many desirable properties: strong total and future bounds and weak instantaneous bounds; in contrast to most classical continuous prior densities it has no zero p(oste)rior problem, i.e. it can confirm universal hypotheses, is reparametrization and regrouping invariant, and avoids the old-evidence and updating problem. It even performs well…
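The flavor of Bayesian sequence prediction over a model class can be sketched with a finite mixture. Unlike Solomonoff's universal prior, which weights all computable hypotheses by 2^(-program length), the three-hypothesis class and uniform prior below are purely illustrative:

```python
# Toy Bayesian mixture predictor over a small model class (three Bernoulli
# biases with a uniform prior). The mixture's next-bit probability is the
# posterior-weighted average; observing data reweights the hypotheses.

biases = [0.1, 0.5, 0.9]
prior = [1 / 3, 1 / 3, 1 / 3]

def predict_and_update(weights, bit):
    """Return P(next bit = 1) under the mixture, then condition on `bit`."""
    p_one = sum(w * b for w, b in zip(weights, biases))
    likelihoods = [b if bit == 1 else 1 - b for b in biases]
    total = sum(w * l for w, l in zip(weights, likelihoods))
    new_weights = [w * l / total for w, l in zip(weights, likelihoods)]
    return p_one, new_weights

w = prior
for bit in [1, 1, 1, 1, 1, 1, 1, 1]:   # observe a run of ones
    p, w = predict_and_update(w, bit)

print(round(p, 3))  # prediction has converged toward the 0.9 hypothesis
```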
A Bayesian view of covariation assessment
2007
Cited by 23 (2 self)

Abstract:
When participants assess the relationship between two variables, each with levels of presence and absence, the two most robust phenomena are that: (a) observing the joint presence of the variables has the largest impact on judgment and observing joint absence has the smallest impact, and (b) participants’ prior beliefs about the variables’ relationship influence judgment. Both phenomena represent departures from the traditional normative model (the phi coefficient or related measures) and have therefore been interpreted as systematic errors. However, both phenomena are consistent with a Bayesian approach to the task. From a Bayesian perspective: (a) joint presence is normatively more informative than joint absence if the presence of variables is rarer than their absence, and (b) failing to incorporate prior beliefs is a normative error. Empirical evidence is reported showing that joint absence is seen as more informative than joint presence when it is clear that absence of the variables, rather than their presence, is rare.
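The rarity argument can be checked numerically. The two generative models below, a perfectly related model versus an independence model, are my own simplification, with a cell observation's informativeness measured by its log-likelihood ratio between the models:

```python
import math

# With base rate p of a variable being present, compare a "perfectly related"
# model (both present with prob. p, both absent with prob. 1-p) against an
# independence model (cells p*p and (1-p)*(1-p)), via log-likelihood ratios.

def llr_joint_presence(p):
    return math.log(p / p**2)              # related: p; independent: p*p

def llr_joint_absence(p):
    return math.log((1 - p) / (1 - p)**2)  # related: 1-p; independent: (1-p)^2

# When presence is rare (p = 0.1), joint presence is the more diagnostic cell;
# when absence is rare (p = 0.9), joint absence is.
print(llr_joint_presence(0.1) > llr_joint_absence(0.1))  # True
print(llr_joint_presence(0.9) > llr_joint_absence(0.9))  # False
```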
A Bayesian Account of Independent Evidence with Application
Philosophy of Science 68 (Proceedings), S123–S140, 2001
Cited by 22 (5 self)

Abstract:
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C. S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, another application of my account, to the problem of evidential diversity, is also discussed. 1 Terminology, Notation & Basic Assumptions. The present paper is concerned with the degree of incremental confirmation provided by evidential propositions E for hypotheses under test H, given background evidence K, according to relevance measures of degree of confirmation c. We say that c is a relevance measure of degree of confirmation if and only if c satisfies the following constraints, in cases where E confirms, disconfirms, or is confirmationally irrelevant to H, given background evidence K.