Results 1–10 of 85
How to improve Bayesian reasoning without instruction: Frequency formats
 Psychological Review
, 1995
Cited by 220 (21 self)

Abstract
Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one specifies the information format in which it is designed to operate. The authors show that Bayesian algorithms are computationally simpler in frequency formats than in the probability formats used in previous research. Frequency formats correspond to the sequential way information is acquired in natural sampling, from animal foraging to neural networks. By analyzing several thousand solutions to Bayesian problems, the authors found that when information was presented in frequency formats, statistically naive participants derived up to 50% of all inferences by Bayesian algorithms. Non-Bayesian algorithms included simple versions of Fisherian and Neyman–Pearsonian inference.

Is the mind, by design, predisposed against performing Bayesian inference? The classical probabilists of the Enlightenment, including Condorcet, Poisson, and Laplace, equated probability theory with the common sense of educated people, who were known then as “hommes éclairés.” Laplace (1814/1951) declared that “the theory of probability is at bottom nothing more than good sense reduced to a calculus which evaluates that which good minds know by a sort of instinct …”
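The contrast the abstract draws can be sketched numerically. The sketch below uses a textbook-style diagnosis problem with assumed numbers (base rate 1%, hit rate 80%, false-positive rate 9.6%); these figures are illustrative, not data from the paper. The point is that the frequency-format version is the same inference done with simple counts.

```python
# Probability format: apply Bayes' theorem directly.
base_rate = 0.01        # P(disease)  -- assumed, illustrative
hit_rate = 0.80         # P(positive | disease)
false_alarm = 0.096     # P(positive | no disease)

p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm
p_disease_given_positive = base_rate * hit_rate / p_positive

# Frequency format: the same inference as counts out of 1000 people.
sick_and_positive = 10 * 0.80        # 8 of the 10 sick people test positive
healthy_and_positive = 990 * 0.096   # about 95 healthy people test positive
freq_answer = sick_and_positive / (sick_and_positive + healthy_and_positive)

print(round(p_disease_given_positive, 3))  # 0.078
print(round(freq_answer, 3))               # 0.078 -- same value, via counting
```

The frequency version needs only one division over two counts, which is the computational simplification the authors describe.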
Similarity and induction
 Review of Philosophy and Psychology
, 2010
Cited by 164 (8 self)

Abstract
An argument is categorical if its premises and conclusion are of the form All members of C have property F, where C is a natural category like FALCON or BIRD, and F remains the same across premises and conclusion. An example is Grizzly bears love onions. Therefore, all bears love onions. Such an argument is psychologically strong to the extent that belief in its premises engenders belief in its conclusion. A subclass of categorical arguments is examined, and the following hypothesis is advanced: the strength of a categorical argument increases with (a) the degree to which the premise categories are similar to the conclusion category and (b) the degree to which the premise categories are similar to members of the lowest-level category that includes both the premise and the conclusion categories. A model based on this hypothesis accounts for 13 qualitative phenomena and the quantitative results of several experiments.

The Problem of Argument Strength

Fundamental to human thought is the confirmation relation, joining sentences P1, …, Pn to another sentence C just in case belief in the former leads to belief in the latter. Theories of confirmation may be cast in the terminology of argument strength …
Betting on Theories
, 1993
Cited by 70 (4 self)

Abstract
Predictions about the future and unrestricted universal generalizations are never logically implied by our observational evidence, which is limited to particular facts in the present and past. Nevertheless, propositions of these and other kinds are often said to be confirmed by observational evidence. A natural place to begin the study of confirmation theory is to consider what it means to say that some evidence E confirms a hypothesis H.

Incremental and absolute confirmation

Let us say that E raises the probability of H if the probability of H given E is higher than the unconditional probability of H. According to many confirmation theorists, “E confirms H” means that E raises the probability of H. This conception of confirmation will be called incremental confirmation. Let us say that H is probable given E if the probability of H given E is above some threshold. (This threshold remains to be specified but is assumed to be at least one half.) According to some confirmation theorists, “E confirms H” means that H is probable given E. This conception of confirmation will be called absolute confirmation. Confirmation theorists have sometimes failed to distinguish these two concepts. For example, Carl Hempel in his classic “Studies in the Logic of Confirmation” endorsed the following principles: (1) A generalization of the form “All F are G” is confirmed by the evidence that there is an individual that is both F and G. (2) A generalization of that form is also confirmed by the evidence that there is an individual that is neither F nor G. (3) The hypotheses confirmed by a piece of evidence are consistent with one another. (4) If E confirms H then E confirms every logical consequence of H. Principles (1) and (2) are not true of absolute confirmation. Observation of a single thing that is F and G cannot in general make it probable that all F are G; likewise for an individual that is neither F nor G.
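The distinction between incremental and absolute confirmation can be made concrete with one numerical case. The probabilities below are arbitrary assumed values chosen so that the two notions come apart: E raises the probability of H without making H probable.

```python
# Assumed, illustrative probabilities.
p_h = 0.3                # prior P(H)
p_e_given_h = 0.9        # P(E | H)
p_e_given_not_h = 0.5    # P(E | not-H)

# Bayes' theorem via the law of total probability.
p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
p_h_given_e = p_h * p_e_given_h / p_e

incremental = p_h_given_e > p_h   # E raises the probability of H
absolute = p_h_given_e > 0.5      # H is probable given E (threshold 1/2)

print(round(p_h_given_e, 3))  # 0.435: incrementally confirmed, not absolutely
```

Here P(H|E) ≈ 0.435 exceeds the prior 0.3 but falls short of the one-half threshold, so E confirms H in the incremental sense only.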
Distinguishability and Accessible Information in Quantum Theory
, 1996
Cited by 50 (5 self)

Abstract
To my mother Geraldine, who has been with me every day of my life, and to my brother Mike, who gave me a frontier to look toward.

ACKNOWLEDGEMENTS

No one deserves more thanks for the success of this work than my advisor and friend Carlton Caves. Carl is the model of the American work ethic applied to physical thought. The opportunity to watch him in action has fashioned my way of thought, both in the scientific and the secular. He has been a valued teacher, and I hope my three years in Albuquerque have left me with even a few of his qualities. Special thanks go to Greg Comer, my philosophical companion. Greg’s influence on this dissertation was from a distance, but no less great because of that. Much of the viewpoint espoused here was worked out in conversation with him. I thank the home team, Howard Barnum, Sam Braunstein, Gary Herling, Richard Jozsa, Rüdiger Schack, and Ben Schumacher, for patiently and critically listening to so many of my ideas …
Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
 BRITISH JOURNAL FOR THE PHILOSOPHY OF SCIENCE
, 2006
Cited by 35 (14 self)

Abstract
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and longstanding problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test’s (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
The plurality of Bayesian measures of confirmation and the problem of measure sensitivity
 Philosophy of Science 66 (Proceedings), S362–S378
, 1999
Cited by 32 (11 self)

Abstract
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to the choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.

1 Preliminaries

1.1 Terminology, Notation, and Basic Assumptions

The present paper is concerned with the degree of incremental confirmation provided by evidential propositions E for hypotheses under test H, given background knowledge K, according to relevance measures of degree of confirmation c. We say that c is a relevance measure of degree of confirmation if and only if c satisfies the following constraints, in cases where E confirms, disconfirms, or is confirmationally irrelevant to H, given background knowledge K.
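Measure sensitivity is easy to exhibit with a toy case. The probabilities below are assumed values, and only two of the many proposed relevance measures are shown: the difference measure and the log-ratio measure. They rank the same two evidence–hypothesis pairs in opposite orders.

```python
import math

def d(prior, posterior):
    """Difference measure: P(H|E) - P(H)."""
    return posterior - prior

def r(prior, posterior):
    """Log-ratio measure: log [P(H|E) / P(H)]."""
    return math.log(posterior / prior)

# Case A: a large absolute boost to a middling hypothesis (assumed values).
dA, rA = d(0.5, 0.9), r(0.5, 0.9)
# Case B: a small absolute boost to a very improbable hypothesis.
dB, rB = d(0.01, 0.05), r(0.01, 0.05)

print(dA > dB)  # True:  d says A confirms more strongly
print(rA > rB)  # False: r says B confirms more strongly
```

Any argument whose conclusion depends on which of A or B is "more confirmed" is therefore sensitive to the choice between d and r, which is the enthymeme the abstract describes.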
On Universal Prediction and Bayesian Confirmation
 Theoretical Computer Science
, 2007
Cited by 22 (13 self)

Abstract
The Bayesian framework is a well-studied and successful framework for inductive reasoning, which includes hypothesis testing and confirmation, parameter estimation, sequence prediction, classification, and regression. But standard statistical guidelines for choosing the model class and prior are not always available or can fail, in particular in complex situations. Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. I discuss in breadth how and in which sense universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. I show that Solomonoff’s model possesses many desirable properties: strong total and future bounds, and weak instantaneous bounds; in contrast to most classical continuous prior densities, it has no zero p(oste)rior problem, i.e. it can confirm universal hypotheses; it is reparametrization and regrouping invariant; and it avoids the old-evidence and updating problem. It even performs well …
Efficient convergence implies Ockham’s Razor
 Proceedings of the 2002 International Workshop on Computational Models of Scientific Reasoning and Applications, Las Vegas
, 2002
Cited by 17 (14 self)

Abstract
A finite data set is consistent with infinitely many alternative theories. Scientific realists recommend that we prefer the simplest one. Antirealists ask how a fixed simplicity bias could track the truth when the truth might be complex. It is no solution to impose a prior probability distribution biased toward simplicity, for such a distribution merely embodies the bias at issue without explaining its efficacy. In this note, I argue, on the basis of computational learning theory, that a fixed simplicity bias is necessary if inquiry is to converge to the right answer efficiently, whatever the right answer might be. Efficiency is understood in the sense of minimizing the least fixed bound on retractions or errors prior to convergence.

Keywords: learning, induction, simplicity, Ockham’s razor, realism, skepticism
A Bayesian Account of Independent Evidence with Application
 Philosophy of Science 68 (Proceedings): S123S140
, 2001
Cited by 13 (5 self)

Abstract
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C. S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, another application of my account to the problem of evidential diversity is also discussed.

1 Terminology, Notation & Basic Assumptions

The present paper is concerned with the degree of incremental confirmation provided by evidential propositions E for hypotheses under test H, given background evidence K, according to relevance measures of degree of confirmation c. We say that c is a relevance measure of degree of confirmation if and only if c satisfies the following constraints, in cases where E confirms, disconfirms, or is confirmationally irrelevant to H, given background evidence K.
Bayes not Bust! Why Simplicity is no Problem for Bayesians
, 2007
Cited by 13 (10 self)

Abstract
The advent of formal definitions of the simplicity of a theory has important implications for model selection. But what is the best way to define simplicity? Forster and Sober ([1994]) advocate the use of Akaike’s Information Criterion (AIC), a non-Bayesian formalisation of the notion of simplicity. This forms an important part of their wider attack on Bayesianism in the philosophy of science. We defend a Bayesian alternative: the simplicity of a theory is to be characterised in terms of Wallace’s Minimum Message Length (MML). We show that AIC is inadequate for many statistical problems where MML performs well. Whereas MML is always defined, AIC can be undefined. Whereas MML is not known ever to be statistically inconsistent, AIC can be. Even when defined and consistent, AIC performs worse than MML on small sample sizes. MML is statistically invariant under one-to-one reparametrisation, thus avoiding a common criticism of Bayesian approaches. We also show that MML provides answers to many of Forster’s objections to Bayesianism. Hence an important part of the attack on …
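The simplicity trade-off AIC encodes can be shown with one line of arithmetic. AIC = 2k − 2 ln L, where k is the number of free parameters and L the maximized likelihood; the log-likelihoods below are assumed numbers for illustration, not results from the paper.

```python
def aic(k, log_likelihood):
    """Akaike's Information Criterion: 2k - 2 ln L. Lower is better."""
    return 2 * k - 2 * log_likelihood

# Assumed fits: the richer model buys 2 units of log-likelihood
# at the cost of 3 extra parameters.
simple_model = aic(k=2, log_likelihood=-100.0)
richer_model = aic(k=5, log_likelihood=-98.0)

print(simple_model)  # 204.0
print(richer_model)  # 206.0: the extra fit does not pay for 3 extra parameters
```

MML trades off code length for the hypothesis against code length for the data given the hypothesis instead of using a fixed 2k penalty, which is why the two criteria can disagree on cases like this.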