Results 1–10 of 72
Agent-based computational models and generative social science
 Complexity
, 1999
Abstract

Cited by 64 (0 self)
This article argues that the agent-based computational model permits a distinctive approach to social science for which the term “generative” is suitable. In defending this terminology, features distinguishing the approach from both “inductive” and “deductive” science are given. Then, the following specific contributions to social science are discussed: The agent-based computational model is a new tool for empirical research. It offers a natural environment for the study of connectionist phenomena in social science. Agent-based modeling provides a powerful way to address certain enduring—and especially interdisciplinary—questions. It allows one to subject certain core theories—such as neoclassical microeconomics—to important types of stress (e.g., the effect of evolving preferences). It permits one to study how rules of individual behavior give rise—or “map up”—to macroscopic regularities and organizations. In turn, one can employ laboratory behavioral research findings to select among competing agent-based (“bottom up”) models. The agent-based approach may well have the important effect of decoupling individual rationality from macroscopic equilibrium and of separating decision science from social science more generally. Agent-based modeling offers powerful new forms of hybrid theoretical-computational work; these are particularly relevant to the study of non-equilibrium systems. The agent-based approach invites the interpretation of society as a distributed computational device, and in turn the interpretation of social dynamics as a type of computation. This interpretation raises important foundational issues in social science—some related to intractability, and some to undecidability proper. Finally, since “emergence” figures prominently in this literature, I take up the connection between agent-based modeling and classical emergentism, criticizing the latter and arguing that the two are incompatible. © 1999 John Wiley & Sons, Inc.
Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
 BRITISH JOURNAL FOR THE PHILOSOPHY OF SCIENCE
, 2006
Abstract

Cited by 35 (14 self)
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and longstanding problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test’s (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
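The severity computation this abstract alludes to can be sketched numerically. The following is an illustrative Python sketch for a one-sided Normal test with known variance; all numbers are invented for illustration and are not taken from the paper.

```python
# Severity for the post-data inference "mu > mu1" after a one-sided Normal
# test, computed from the same sampling distribution that fixes the
# pre-data error probabilities. Numbers below are illustrative only.
from math import sqrt
from statistics import NormalDist

def severity(xbar, mu1, sigma, n):
    """P(the sample mean would be smaller than the observed xbar; mu = mu1):
    how severely the claim 'mu > mu1' has been tested by this outcome."""
    return NormalDist().cdf((xbar - mu1) / (sigma / sqrt(n)))

# H0: mu <= 0 vs H1: mu > 0, with sigma = 1, n = 100, observed xbar = 0.25.
print(round(severity(0.25, 0.0, 1.0, 100), 3))  # 0.994: 'mu > 0' passes severely
print(round(severity(0.25, 0.2, 1.0, 100), 3))  # 0.691: 'mu > 0.2' passes weakly
```

The same observed result thus warrants some post-data inferences with high severity and others with low severity, which is the meta-statistical role the abstract assigns to error probabilities.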
Philosophy and the practice of Bayesian statistics
, 2010
Abstract

Cited by 13 (5 self)
A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
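A minimal sketch of the model checking this abstract emphasizes (my own illustration, not the authors' example): fit a Normal model to skewed data, then check it by simulating replicated datasets under the fitted model and comparing a test statistic. All numbers here are invented.

```python
# A posterior-predictive-style check with a plug-in fit, for simplicity:
# does the fitted Normal model reproduce the skewness of the data?
import random
from statistics import mean, stdev

def skewness(xs):
    """Standardized third sample moment."""
    m, s = mean(xs), stdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

random.seed(0)
data = [random.expovariate(1.0) for _ in range(200)]   # skewed "observed" data

mu, sd = mean(data), stdev(data)                       # plug-in Normal fit
t_obs = skewness(data)

# How often do datasets replicated under the fitted model look at least
# as skewed as the observed data?
reps = [skewness([random.gauss(mu, sd) for _ in range(200)])
        for _ in range(1000)]
p_value = sum(t >= t_obs for t in reps) / len(reps)
print(p_value)   # close to 0: the Normal model fails the check, so revise it
```

The check falls outside Bayesian confirmation theory in exactly the sense the abstract describes: the model is not compared against a rival, but probed against its own replicated data.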
How Simplicity Helps You Find the Truth Without Pointing at It
 Philosophy of Mathematics and Induction
, 2007
Abstract

Cited by 10 (9 self)
It seems that a fixed bias toward simplicity should help one find the truth, since scientific theorizing is guided by such a bias. But it also seems that a fixed bias toward simplicity cannot indicate or point at the truth, since an indicator has to be sensitive to what it indicates. I argue that both views are correct. It is demonstrated, for a broad range of cases, that the Ockham strategy of favoring the simplest hypothesis, together with the strategy of never dropping the simplest hypothesis until it is no longer simplest, uniquely minimizes reversals of opinion and the times at which the reversals occur prior to convergence to the truth. Thus, simplicity guides one down the straightest path to the truth, even though that path may involve twists and turns along the way. The proof does not appeal to prior probabilities biased toward simplicity. Instead, it is based upon minimization of worst-case cost bounds over complexity classes of possibilities. 0.1 The Simplicity Puzzle: There are infinitely many alternative hypotheses consistent with any finite amount of experience, so how is one entitled to choose among them? Scientists boldly respond with appeals to “Ockham’s razor”, which selects the “simplest” hypothesis among them,
Putting the irrelevance back into the problem of irrelevant conjunction
 Philosophy of Science
, 2002
Abstract

Cited by 9 (4 self)
Naive deductive accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H & X, for any X — even if X is utterly irrelevant to H (and E). Bayesian accounts of confirmation also have this property (in the case of deductive evidence). Several Bayesians have attempted to soften the impact of this fact by arguing that — according to Bayesian accounts of confirmation — E will confirm the conjunction H & X less strongly than E confirms H (again, in the case of deductive evidence). I argue that existing Bayesian “resolutions” of this problem are inadequate in several important respects. In the end, I suggest a new and improved Bayesian account (and understanding) of the problem of irrelevant conjunction.
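The Bayesian claim at issue can be checked with a toy calculation (the numbers below are invented, not the paper's): let E be deductive evidence for H, and X an irrelevant proposition independent of both.

```python
# Toy check that E confirms H & X less strongly than H, on the simple
# difference measure of confirmation. Illustrative probabilities only;
# X is assumed independent of H and E.
p_H, p_X, p_E = 0.3, 0.5, 0.6     # with P(E | H) = 1 (deductive evidence)

p_H_given_E  = p_H / p_E          # Bayes' theorem, using P(E | H) = 1
p_HX         = p_H * p_X          # X independent of H
p_HX_given_E = p_HX / p_E         # H & X entails E as well

print(round(p_H_given_E - p_H, 3))     # 0.2: E confirms H
print(round(p_HX_given_E - p_HX, 3))   # 0.1: E confirms H & X, but less
```

Both conjuncts are confirmed, so the naive problem stands; the Bayesian softening is only that the boost to H & X is smaller, which is the "resolution" the abstract argues is inadequate.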
Ockham’s Razor, Empirical Complexity, and Truth-finding Efficiency
 THEORETICAL COMPUTER SCIENCE
, 2007
Abstract

Cited by 6 (6 self)
The nature of empirical simplicity and its relationship to scientific truth are longstanding puzzles. In this paper, empirical simplicity is explicated in terms of empirical effects, which are defined in terms of the structure of the inference problem addressed. Problem instances are classified according to the number of empirical effects they present. Simple answers are satisfied by simple worlds. An efficient solution achieves optimum worst-case cost over each complexity class, with respect to costs such as the number of retractions or errors prior to convergence and elapsed time to convergence. It is shown that always choosing the simplest theory compatible with experience and hanging onto it while it remains simplest is both necessary and sufficient for efficiency.
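A toy version of the retraction-counting idea can be simulated (my own construction, not the paper's formal framework): the learner must infer how many "empirical effects" a data stream will ever present.

```python
# Toy simulation: infer the total number of effects in a stream, where a 1
# marks a newly appearing effect. Illustrative only; not the paper's model.

def retractions(conjectures):
    """Number of times the learner reverses its opinion."""
    return sum(a != b for a, b in zip(conjectures, conjectures[1:]))

def ockham(stream):
    """Conjecture the simplest answer: the effect count observed so far."""
    out, seen = [], 0
    for bit in stream:
        seen += bit
        out.append(seen)
    return out

def eager(stream):
    """A non-Ockham learner that always guesses one effect ahead."""
    return [c + 1 for c in ockham(stream)]

stream, truth = [0, 1, 0, 0, 1, 0, 1, 0], 3   # a world with 3 effects

# Append the true answer each learner must eventually converge to.
print(retractions(ockham(stream) + [truth]))  # 3: one retraction per effect
print(retractions(eager(stream) + [truth]))   # 4: the extra guess is retracted
```

In a world presenting n effects the Ockham learner retracts exactly n times, while the eager learner's forward guess must eventually be taken back, costing an extra reversal, which is the flavor of the worst-case optimality result.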
An abductive theory of scientific method
 Psychological Methods
, 2005
Abstract

Cited by 5 (0 self)
A broad theory of scientific method is sketched that has particular relevance for the behavioral sciences. This theory of method assembles a complex of specific strategies and methods that are used in the detection of empirical phenomena and the subsequent construction of explanatory theories. A characterization of the nature of phenomena is given, and the process of their detection is briefly described in terms of a multistage model of data analysis. The construction of explanatory theories is shown to involve their generation through abductive, or explanatory, reasoning, their development through analogical modeling, and their fuller appraisal in terms of judgments of the best of competing explanations. The nature and limits of this theory of method are discussed in the light of relevant developments in scientific methodology.
Curve-Fitting, the Reliability of Inductive Inference and the Error-Statistical Approach
 Philosophy of Science (forthcoming)
Abstract

Cited by 4 (4 self)
The main aim of this paper is to revisit the curve-fitting problem using the reliability of inductive inference as a primary criterion for the ‘fittest’ curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve-fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit as the primary criterion for ‘best’, the mathematical approximation perspective undermines the reliability-of-inference objective by giving rise to selection rules which pay insufficient attention to ‘capturing the regularities in the data’. A more appropriate framework is offered by the error-statistical approach, where (i) statistical adequacy provides the criterion for assessing when a curve captures the regularities in the data adequately, and (ii) the relevant error probabilities can be used to assess the reliability of inductive inference. Broadly speaking, the fittest (statistically adequate) curve is not determined by the smallness of its residuals, tempered by simplicity or other pragmatic criteria, but by the non-systematic (e.g., white-noise) nature of its residuals. The advocated error-statistical arguments are illustrated by comparing the Kepler and Ptolemaic models on empirical grounds. ∗ Forthcoming in Philosophy of Science, 2007. † I’m grateful to Deborah Mayo and Clark Glymour for many valuable suggestions and comments on an earlier draft of the paper; estimating the Ptolemaic model was the result of Glymour’s prompting and encouragement.
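The white-noise-residuals criterion can be illustrated with a small sketch (my own toy data, not the paper's Kepler/Ptolemy comparison): fit a straight line to genuinely quadratic data and inspect the residuals' lag-1 autocorrelation.

```python
# A statistically inadequate fit leaves systematic residuals: here a
# least-squares line fitted to exactly quadratic data. Toy illustration.

def linear_fit(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

xs = list(range(10))
ys = [x * x for x in xs]                      # the true curve is quadratic
a, b = linear_fit(xs, ys)                     # a = -12, b = 9 for these data
resid = [y - (a + b * x) for x, y in zip(xs, ys)]

# Lag-1 autocorrelation: near 0 for white-noise residuals, large and
# positive when the fitted curve misses a systematic pattern.
rho = (sum(r1 * r2 for r1, r2 in zip(resid, resid[1:]))
       / sum(r * r for r in resid))
print(round(rho, 2))                          # 0.5: residuals are not white
```

On the error-statistical view sketched in the abstract, it is this systematic structure in the residuals, not their size, that disqualifies the line as an adequate curve.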
Infinitely Many Resolutions of Hempel's Paradox
, 1993
Abstract

Cited by 4 (1 self)
What sorts of observations could confirm the universal hypothesis that all ravens are black? Carl Hempel proposed a number of simple and plausible principles which had the odd ("paradoxical") result that not only do observations of black ravens confirm that hypothesis, but so too do observations of yellow suns, green seas and white shoes. Hempel's response to his own paradox was to call it a psychological illusion; i.e., white shoes do indeed confirm that all ravens are black. Karl Popper, on the other hand, needed no response: he claimed that no observation can confirm any general statement; there is no such thing as confirmation theory. Instead, we should be looking for severe tests of our theories, strong attempts to falsify them. Bayesian philosophers have (in a loose sense) followed the Popperian analysis of Hempel's paradox (while retaining confirmation theory): they have usually judged that observing a white shoe in a shoe store does not qualify as a severe test of the hypothesis and so, while providing Bayesian confirmation, does so to only a minute degree. This rationalizes our common intuition of non-confirmation. All of these responses to the paradox are demonstrably wrong, granting an ordinary Bayesian measure of confirmation. A proper Bayesian analysis reveals that observations of white shoes may provide the raven hypothesis any degree of confirmation whatsoever.
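The dependence on background assumptions can be made concrete with a toy urn model (my construction, not the paper's own analysis): whether, and by how much, a white shoe confirms "all ravens are black" turns on the sampling protocol.

```python
# Two rival hypotheses about a finite world of n objects:
#   H : all ravens are black.     H': exactly one raven is white.
# Exact Bayesian updating under two different sampling protocols.
from fractions import Fraction as F

n, k = 100, 20          # n objects; k non-black non-ravens ("white shoes")
prior = F(1, 2)         # P(H) = P(H') = 1/2

# Protocol 1: sample an object at random; it is non-black and a non-raven.
# Both hypotheses assign this the same probability k/n.
post1 = (prior * F(k, n)) / (prior * F(k, n) + prior * F(k, n))
print(post1)            # 1/2: no confirmation at all

# Protocol 2: sample among the non-black objects; it is a non-raven.
# Under H every non-black object is a non-raven; under H' one of the
# k + 1 non-black objects is the white raven.
post2 = (prior * F(1)) / (prior * F(1) + prior * F(k, k + 1))
print(post2)            # 21/41: a confirmation just above 1/2
```

Varying k, the priors, and the protocol moves the posterior continuously, which illustrates how a white-shoe observation can, as the abstract claims, deliver very different degrees of Bayesian confirmation.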