Results 1–10 of 42
How to improve Bayesian reasoning without instruction: Frequency formats
Psychological Review, 1995
"... Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one s ..."
Abstract

Cited by 220 (21 self)
Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one specifies the information format in which it is designed to operate. The authors show that Bayesian algorithms are computationally simpler in frequency formats than in the probability formats used in previous research. Frequency formats correspond to the sequential way information is acquired in natural sampling, from animal foraging to neural networks. By analyzing several thousand solutions to Bayesian problems, the authors found that when information was presented in frequency formats, statistically naive participants derived up to 50% of all inferences by Bayesian algorithms. Non-Bayesian algorithms included simple versions of Fisherian and Neyman-Pearsonian inference.

Is the mind, by design, predisposed against performing Bayesian inference? The classical probabilists of the Enlightenment, including Condorcet, Poisson, and Laplace, equated probability theory with the common sense of educated people, who were known then as “hommes éclairés.” Laplace (1814/1951) declared that “the theory of probability is at bottom nothing more than good sense reduced to a calculus which evaluates that which good minds know by a sort of instinct,
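The computational claim in this abstract can be illustrated with the classic mammography problem. The specific numbers below (1% base rate, 80% hit rate, 9.6% false-positive rate, a reference population of 1,000) are illustrative, not taken from this listing; the point is that the frequency format needs only two natural frequencies and one division, while the probability format requires multiplying and normalizing three probabilities.

```python
def bayes_probability_format(p_h, p_d_given_h, p_d_given_not_h):
    """Probability format: multiply and normalize three probabilities."""
    return (p_h * p_d_given_h) / (
        p_h * p_d_given_h + (1 - p_h) * p_d_given_not_h
    )

def bayes_frequency_format(true_positives, false_positives):
    """Frequency format: two natural frequencies and a single division."""
    return true_positives / (true_positives + false_positives)

# Probability format: base rate 1%, hit rate 80%, false-positive rate 9.6%.
p1 = bayes_probability_format(0.01, 0.80, 0.096)

# Frequency format: of 1,000 women, 10 have cancer and 8 of them test
# positive; of the 990 without cancer, 95 test positive.
p2 = bayes_frequency_format(8, 95)

print(round(p1, 3), round(p2, 3))  # both ≈ 0.078
```

Both routes give the same posterior, but the frequency-format computation is the one the paper argues naive reasoners can carry out.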
The earth is round (p < .05)
American Psychologist, 1994
"... After 4 decades of severe criticism, the ritual of null hypothesis significance testing—mechanical dichotomous decisions around a sacred.05 criterion—still persists. This article reviews the problems with this practice, including its nearuniversal misinterpretation ofp as the probability that Ho is ..."
Abstract

Cited by 116 (0 self)
After 4 decades of severe criticism, the ritual of null hypothesis significance testing—mechanical dichotomous decisions around a sacred .05 criterion—still persists. This article reviews the problems with this practice, including its near-universal misinterpretation of p as the probability that H0 is false, the misinterpretation that its complement is the probability of successful replication, and the mistaken assumption that if one rejects H0 one thereby affirms the theory that led to the test. Exploratory data analysis and the use of graphic methods, a steady improvement in and a movement toward standardization in measurement, an emphasis on estimating effect sizes using confidence intervals, and the informed use of available statistical methods are suggested. For generalization, psychologists must finally rely, as has been done in all the older sciences,
From tools to theories: A heuristic of discovery in cognitive psychology
Psychological Review, 1991
"... The study of scientific discovery—where do new ideas come from?—has long been denigrated by philosophers as irrelevant to analyzing the growth of scientific knowledge. In particular, little is known about how cognitive theories are discovered, and neither the classical accounts of discovery as eithe ..."
Abstract

Cited by 39 (11 self)
The study of scientific discovery—where do new ideas come from?—has long been denigrated by philosophers as irrelevant to analyzing the growth of scientific knowledge. In particular, little is known about how cognitive theories are discovered, and neither the classical accounts of discovery as either probabilistic induction (e.g., Reichenbach, 1938) or lucky guesses (e.g., Popper, 1959), nor the stock anecdotes about sudden “eureka” moments deepen the insight into discovery. A heuristics approach is taken in this review, where heuristics are understood as strategies of discovery less general than a supposed unique logic of discovery but more general than lucky guesses. This article deals with how scientists’ tools shape theories of mind, in particular with how methods of statistical inference have turned into metaphors of mind. The tools-to-theories heuristic explains the emergence of a broad range of cognitive theories, from the cognitive revolution of the 1960s up to the present, and it can be used to detect both limitations and new lines of development in current cognitive theories that investigate the mind as an “intuitive statistician.”

Scientific inquiry can be viewed as “an ocean, continuous everywhere and without a break or division” (Leibniz, 1690/1951, p. 73). Hans Reichenbach (1938) nonetheless divided this ocean into two great seas, the context of discovery and the context of justification. Philosophers, logicians,
Inference by eye: Confidence intervals and how to read pictures of data
American Psychologist, 2005
"... Wider use in psychology of confidence intervals (CIs), especially as error bars in figures, is a desirable development. However, psychologists seldom use CIs and may not understand them well. The authors discuss the interpretation of figures with error bars and analyze the relationship between CIs a ..."
Abstract

Cited by 27 (9 self)
Wider use in psychology of confidence intervals (CIs), especially as error bars in figures, is a desirable development. However, psychologists seldom use CIs and may not understand them well. The authors discuss the interpretation of figures with error bars and analyze the relationship between CIs and statistical significance testing. They propose 7 rules of eye to guide the inferential use of figures with error bars. These include general principles: Seek bars that relate directly to effects of interest, be sensitive to experimental design, and interpret the intervals. They also include guidelines for inferential interpretation of the overlap of CIs on independent group means. Wider use of interval estimation in psychology has the potential to improve research communication substantially.

Inference by eye is the interpretation of graphically presented data. On first seeing Figure 1, what questions should spring to mind and what inferences are justified? We discuss figures with means and confidence intervals (CIs), and propose rules of eye to guide the interpretation of such figures. We believe it is timely to consider inference by eye because psychologists are now being encouraged to make greater use of CIs. Many who seek reform of psychologists’ statistical practices advocate a change in emphasis from null hypothesis significance testing (NHST) to CIs, among other techniques
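One commonly cited rule of eye for independent group means with similar-width 95% CIs is that intervals overlapping by no more than about half the average margin of error correspond roughly to p ≈ .05, and non-overlapping intervals to roughly p ≤ .01. The sketch below encodes that rough heuristic; it is an illustration of the idea, not the authors' exact seven rules, and the function name and group values are made up.

```python
def ci_overlap_verdict(mean1, moe1, mean2, moe2):
    """Informal significance reading from two 95% CIs on independent means.

    mean1/mean2: group means; moe1/moe2: 95% margins of error
    (half-widths of the CIs). Assumes similar-width intervals.
    """
    gap = abs(mean1 - mean2) - (moe1 + moe2)  # > 0: intervals do not touch
    overlap = -gap
    avg_moe = (moe1 + moe2) / 2
    if gap >= 0:
        return "roughly p <= .01: intervals do not overlap"
    if overlap <= avg_moe / 2:
        return "roughly p <= .05: overlap at most half the average margin of error"
    return "not significant by eye: substantial overlap"

# Hypothetical groups: means 10.0 and 14.5, each with a margin of error of 2.0.
print(ci_overlap_verdict(10.0, 2.0, 14.5, 2.0))
```

Such rules are only a reading aid for figures; they do not replace a proper test, and they break down for paired designs or very unequal interval widths.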
Psychology will be a much better science when we change the way we analyze data
Current Directions in Psychological Science, 1996
"... because I believed that within it dwelt some of the most fundamental and challenging problems of the extant sciences. Who could not be intrigued, for example, by the relation between consciousness and behavior, or the rules guiding interactions in social situations, or the processes that underlie de ..."
Abstract

Cited by 22 (2 self)
because I believed that within it dwelt some of the most fundamental and challenging problems of the extant sciences. Who could not be intrigued, for example, by the relation between consciousness and behavior, or the rules guiding interactions in social situations, or the processes that underlie development from infancy to maturity? Today, in 1996, my fascination with these problems is undiminished. But I've developed a certain angst over the intervening thirty-something years—a constant, nagging feeling that our field spends a lot of time spinning its wheels without really making all that much progress. This problem shows up in obvious ways—for instance, in the regularity with which findings seem not to replicate. It also shows up in subtler ways—for instance, one doesn't often hear psychologists saying, "Well this problem is solved now; let's move on to the next one" (as, for example, Johannes Kepler must have said over three centuries ago, after he had cracked the problem of describing planetary motion).

I've come to believe that at least part of this problem revolves around our tools—particularly the tools that we use in the critical domains of data analysis and data interpretation. What we do, I sometimes feel, is akin to trying to build a violin using a stone mallet and a chainsaw. The tool-to-task fit is not all that good, and as a result, we wind up building a lot of poor-quality violins. My purpose here is to elaborate on these issues. In what follows, I will summarize our major data-analysis and data-interpretation tools, and describe what I believe to be amiss with them. I will then offer some suggestions for change.
Misinterpretations of Significance: A Problem Students Share with Their Teachers?
"... The use of significance tests in science has been debated from the invention of these tests until the present time. Apart from theoretical critiques on their appropriateness for evaluating scientific hypotheses, significance tests also receive criticism for inviting misinterpretations. We presented ..."
Abstract

Cited by 15 (1 self)
The use of significance tests in science has been debated from the invention of these tests until the present time. Apart from theoretical critiques on their appropriateness for evaluating scientific hypotheses, significance tests also receive criticism for inviting misinterpretations. We presented six common misinterpretations to psychologists who work in German universities and found out that they are still surprisingly widespread – even among instructors who teach statistics to psychology students. Although these misinterpretations are well documented among students, until now there has been little research on pedagogical methods to remove them. Rather, they are considered “hard facts” that are impervious to correction. We discuss the roots of these misinterpretations and propose a pedagogical concept to teach significance tests, which involves explaining the meaning of statistical significance in an appropriate way.
The null ritual: What you always wanted to know about null hypothesis testing but were afraid to ask
Handbook on Quantitative Methods in the Social Sciences, Sage, Thousand Oaks, CA, 2004
"... No scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas. (Ronald A. Fisher, 1956, p. 42) It is tempting, if the only tool you have i ..."
Abstract

Cited by 11 (1 self)
No scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas. (Ronald A. Fisher, 1956, p. 42) It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail. (A. H. Maslow, 1966, pp. 15–16)

One of us once had a student who ran an experiment for his thesis. Let us call him Pogo. Pogo had an experimental group and a control group and found that the means of both groups were exactly the same. He believed it would be unscientific to simply state this result; he was anxious to do a significance test. The result of the test was that the two means did not differ significantly, which Pogo reported in his thesis. In 1962, Jacob Cohen reported that the experiments published in a major psychology journal had, on average, only a 50:50 chance of detecting a medium-sized effect if there was one. That is, the statistical power was as low as 50%. This result was widely cited, but did it change researchers’ practice? Sedlmeier and Gigerenzer (1989) checked the studies in the same journal, 24 years later, a time period that should allow for change. Yet only 2 out of 64 researchers mentioned power,
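Cohen's "50:50 chance" figure can be reproduced in outline with a normal approximation to the power of a two-sample test. The sample size of 30 per group below is a made-up but era-typical value, not a number from the chapter; the point is that for a medium effect (d = 0.5) such samples give power near one half.

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def approx_power(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided, alpha = .05 two-sample test.

    Normal approximation: power ≈ P(Z > z_crit - d * sqrt(n/2)),
    where d is the standardized effect size (Cohen's d).
    """
    noncentrality = d * sqrt(n_per_group / 2)
    return 1 - normal_cdf(z_crit - noncentrality)

# Medium effect, 30 participants per group: power is close to a coin flip.
print(round(approx_power(0.5, 30), 2))  # ≈ 0.49
```

Doubling the per-group sample size to around 64 lifts power past .80, which is why power analysis, not larger alphas, is the usual remedy.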
Replications and extensions in marketing: Rarely published but quite contrary
International Journal of Research in Marketing, 1994
"... Replication is rare in marketing. Of 1,120 papers sampled from three major marketing journals, none were replications. Only 1.8 % of the papers were extensions, and they consumed 1.1 % of the journal space. On average, these extensions appeared seven years after the original study. The publication r ..."
Abstract

Cited by 11 (4 self)
Replication is rare in marketing. Of 1,120 papers sampled from three major marketing journals, none were replications. Only 1.8% of the papers were extensions, and they consumed 1.1% of the journal space. On average, these extensions appeared seven years after the original study. The publication rate for such works has been decreasing since the 1970s. Published extensions typically produced results that conflicted with the original studies; of the 20 extensions published, 12 conflicted with the earlier results, and only 3 provided full confirmation. Published replications do not attract as many citations after publication as do the original studies, even when the results fail to support the original studies.
Statistical cognition: Towards evidence-based practice in statistics and statistics education
"... Practitioners and teachers should be able to justify their chosen techniques by taking into account research results: This is evidencebased practice (EBP). We argue that, specifically, statistical practice and statistics education should be guided by evidence, and we propose statistical cognition ( ..."
Abstract

Cited by 7 (3 self)
Practitioners and teachers should be able to justify their chosen techniques by taking into account research results: This is evidence-based practice (EBP). We argue that, specifically, statistical practice and statistics education should be guided by evidence, and we propose statistical cognition (SC) as an integration of theory, research, and application to support EBP. SC is an interdisciplinary research field, and a way of thinking. We identify three facets of SC—normative, descriptive, and prescriptive—and discuss their mutual influences. Unfortunately, the three components are studied by somewhat separate groups of scholars, who publish in different journals. These separations impede the implementation of EBP. SC, however, integrates the facets and provides a basis for EBP in statistical practice and education.
Cumulative research knowledge and social policy formulation: the critical role of meta-analysis
Psychology, Public Policy, and Law, 1996
"... For many years, policymakers expressed increasing frustration with social science research. On every issue there were studies arguing for diametrically opposed conclusions. Methods of metaanalysis that correct for the effects of sampling error have shown that almost all such conflicting results wer ..."
Abstract

Cited by 5 (0 self)
For many years, policymakers expressed increasing frustration with social science research. On every issue there were studies arguing for diametrically opposed conclusions. Methods of meta-analysis that correct for the effects of sampling error have shown that almost all such conflicting results were caused by sampling error. Furthermore, the effects of sampling error are greatly exaggerated by using significance test methodology. In many areas, meta-analysis has now provided dependable answers to the original research questions. Meta-analysis is now increasingly being used by policymakers, by textbook writers, and by theorists to provide the basic facts needed to draw both practical and explanatory conclusions. Sophisticated meta-analysis procedures are now used to correct for the effects of other study imperfections, such as measurement error, range restriction, and artificial dichotomization. In domains where the data on artifacts are available, the effect sizes in necessarily imperfect studies have been found to be considerably understated. Path analysis can be applied to the findings from meta-analysis to yield improved causal analyses that result in both explanation of results and improved generalization of
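The core move described in this abstract, pooling effect sizes and asking how much of their apparent disagreement is just sampling error, can be sketched in a few lines. The study effect sizes and sampling-error variances below are fabricated for illustration; this is a bare fixed-effect pooling, not the authors' full artifact-correction procedure.

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted mean effect (fixed-effect model)."""
    weights = [1 / v for v in variances]
    weighted_sum = sum(w * e for w, e in zip(weights, effects))
    return weighted_sum / sum(weights)

def excess_variance(effects, variances):
    """Observed variance of effects minus mean sampling-error variance.

    A value near (or below) zero suggests the 'conflicting' study results
    are explained by sampling error alone, with no real heterogeneity.
    """
    n = len(effects)
    mean = sum(effects) / n
    observed = sum((e - mean) ** 2 for e in effects) / n
    expected = sum(variances) / n
    return observed - expected

effects = [0.10, 0.35, 0.18, 0.42, 0.25]    # hypothetical study effect sizes
variances = [0.02, 0.03, 0.02, 0.04, 0.03]  # their sampling-error variances

print(round(fixed_effect_meta(effects, variances), 3))
print(round(excess_variance(effects, variances), 3))
```

With these made-up studies the observed spread is smaller than sampling error alone would predict, so the apparently conflicting point estimates are consistent with a single underlying effect.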