Results 11–20 of 73
A Bayesian view of covariation assessment
2007
Cited by 12 (2 self)
Abstract
When participants assess the relationship between two variables, each with levels of presence and absence, the two most robust phenomena are that: (a) observing the joint presence of the variables has the largest impact on judgment and observing joint absence has the smallest impact, and (b) participants' prior beliefs about the variables' relationship influence judgment. Both phenomena represent departures from the traditional normative model (the phi coefficient or related measures) and have therefore been interpreted as systematic errors. However, both phenomena are consistent with a Bayesian approach to the task. From a Bayesian perspective: (a) joint presence is normatively more informative than joint absence if the presence of variables is rarer than their absence, and (b) failing to incorporate prior beliefs is a normative error. Empirical evidence is reported showing that joint absence is seen as more informative than joint presence when it is clear that absence of the variables, rather than their presence, is rare.
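The traditional normative model named above, the phi coefficient, treats all four cells of the 2x2 table symmetrically. A minimal sketch (the a/b/c/d cell labels follow the standard contingency-table convention, which the abstract does not spell out):

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table.
    a: joint presence, b: X present / Y absent,
    c: X absent / Y present, d: joint absence."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Swapping the "presence" and "absence" labels (a <-> d, b <-> c)
# leaves phi unchanged, so the measure cannot treat joint presence
# as more informative than joint absence.
print(phi_coefficient(20, 5, 5, 70))
print(phi_coefficient(70, 5, 5, 20))
```

Because phi is invariant under relabeling presence and absence, it weights joint presence and joint absence equally; that symmetry is exactly what the Bayesian account relaxes when presence is rarer than absence.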
A Probabilistic Model of Syntactic and Semantic Acquisition from Child-Directed Utterances and their Meanings
Cited by 12 (1 self)
Abstract
This paper presents an incremental probabilistic learner that models the acquisition of syntax and semantics from a corpus of child-directed utterances paired with possible representations of their meanings. These meaning representations approximate the contextual input available to the child; they do not specify the meanings of individual words or syntactic derivations. The learner then has to infer the meanings and syntactic properties of the words in the input along with a parsing model. We use the CCG grammatical framework and train a nonparametric Bayesian model of parse structure with online variational Bayesian expectation maximization. When tested on utterances from the CHILDES corpus, our learner outperforms a state-of-the-art semantic parser. In addition, it models such aspects of child acquisition as "fast mapping," while also countering previous criticisms of statistical syntactic learners.
The Role of Causality in Judgment Under Uncertainty
Cited by 9 (0 self)
Abstract
Leading accounts of judgment under uncertainty evaluate performance within purely statistical frameworks, holding people to the standards of classical Bayesian (Tversky & Kahneman, 1974) or frequentist (Gigerenzer & Hoffrage, 1995) norms. We argue that these frameworks have limited ability to explain the success and flexibility of people's real-world judgments, and propose an alternative normative framework based on Bayesian inferences over causal models. Deviations from traditional norms of judgment, such as "base-rate neglect", may then be explained in terms of a mismatch between the statistics given to people and the causal models they intuitively construct to support probabilistic reasoning. Four experiments show that when a clear mapping can be established from given statistics to the parameters of an intuitive causal model, people are more likely to use the statistics appropriately, and that when the classical and causal Bayesian norms differ in their prescriptions, people's judgments are more consistent with causal Bayesian norms.
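As an illustration of the classical Bayesian norm the abstract holds people to, here is Bayes' rule applied to figures in the style of the classic mammography problem; the numbers are hypothetical and are not taken from the paper's four experiments:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """The classical Bayesian norm: P(H | E) by Bayes' rule."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Illustrative figures: a rare condition (1% base rate), a
# sensitive test (80% hit rate), and a modest false-positive
# rate (9.6%). Neglecting the base rate suggests an answer near
# the 80% hit rate; Bayes' rule gives roughly 8%.
print(round(posterior(0.01, 0.80, 0.096), 3))  # -> 0.078
```

The gap between the intuitive answer and the normative one is the "base-rate neglect" the abstract reinterprets in causal-model terms.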
A Bayesian Approach to Diffusion Models of Decision-Making and Response Time
NIPS, 2006
Cited by 7 (2 self)
Abstract
We present a computational Bayesian approach for Wiener diffusion models, which are prominent accounts of response time distributions in decision-making. We first develop a general closed-form analytic approximation to the response time distributions for one-dimensional diffusion processes, and derive the required Wiener diffusion as a special case. We use this result to undertake Bayesian modeling of benchmark data, using posterior sampling to draw inferences about the interesting psychological parameters. With the aid of the benchmark data, we show the Bayesian account has several advantages, including dealing naturally with the parameter variation needed to account for some key features of the data, and providing quantitative measures to guide decisions about model construction.
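The Wiener diffusion process the abstract refers to can be simulated directly. The Euler–Maruyama sketch below (parameter names and values are illustrative, not the paper's) generates the kind of choice and response-time data that the closed-form approximation targets:

```python
import random

def wiener_trial(drift, boundary, noise=1.0, dt=0.001, rng=random):
    """One Wiener diffusion trial via Euler-Maruyama: evidence x
    starts at 0 and accumulates until it crosses +boundary (upper
    response) or -boundary (lower response).
    Returns (response_time, choice)."""
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return t, 1 if x >= boundary else 0

rng = random.Random(0)
trials = [wiener_trial(drift=1.0, boundary=1.0, rng=rng) for _ in range(200)]
upper = sum(choice for _, choice in trials) / len(trials)
print(f"P(upper response) ~ {upper:.2f}")
```

With positive drift most trials terminate at the upper boundary, and the empirical response-time histogram from such samples is what an analytic approximation to the first-passage distribution would replace.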
The role of causal models in analogical inference
Journal of Experimental Psychology: Learning, Memory and Cognition, 2008
Cited by 7 (1 self)
Abstract
Computational models of analogy have assumed that the strength of an inductive inference about the target is based directly on similarity of the analogs and in particular on shared higher order relations. In contrast, work in philosophy of science suggests that analogical inference is also guided by causal models of the source and target. In 3 experiments, the authors explored the possibility that people may use causal models to assess the strength of analogical inferences. Experiments 1–2 showed that reducing analogical overlap by eliminating a shared causal relation (a preventive cause present in the source) from the target increased inductive strength even though it decreased similarity of the analogs. These findings were extended in Experiment 3 to cross-domain analogical inferences based on correspondences between higher order causal relations. Analogical inference appears to be mediated by building and then running a causal model. The implications of the present findings for theories of both analogy and causal inference are discussed.
Categorization as nonparametric Bayesian density estimation
Cited by 7 (2 self)
Abstract
Rational models of cognition aim to explain the structure of human thought and behavior as an optimal solution to the computational problems that are posed by our environment.
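Models in this tradition (Anderson's rational model of categorization is the canonical example) typically place a Chinese restaurant process prior over category assignments, so that the number of categories can grow with the data rather than being fixed in advance. A generic sketch of that prior, not the paper's implementation:

```python
import random

def crp_partition(n, alpha, rng):
    """Assign n items to clusters under a Chinese restaurant process
    prior with concentration alpha: item i joins an existing cluster
    with probability proportional to its size, or opens a new
    cluster with probability proportional to alpha."""
    assignments, counts = [], []
    for i in range(n):
        r = rng.random() * (i + alpha)   # counts sum to i
        acc = 0.0
        k = len(counts)                  # default: a new cluster
        for j, c in enumerate(counts):
            acc += c
            if r < acc:
                k = j
                break
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

print(crp_partition(10, alpha=1.0, rng=random.Random(1)))
```

Pairing this prior with a likelihood for the features of items in each cluster yields the nonparametric Bayesian density estimate named in the title.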
Learning from actions and their consequences: Inferring causal variables from continuous sequences of human action
 Proc. of the 31st Annual Conference of the Cognitive Science Society
, 2009
Cited by 5 (3 self)
Abstract
In the real world causal variables do not come pre-identified or occur in isolation, but instead are embedded within a continuous temporal stream of events. A challenge faced by both human learners and machine learning algorithms is identifying subsequences that correspond to the appropriate variables for causal inference. A specific instance of this problem is action segmentation: dividing a sequence of observed behavior into meaningful actions, and determining which of those actions lead to effects in the world. Here we present two experiments investigating human action segmentation and causal inference, as well as a Bayesian analysis of how statistical and causal cues to segmentation should optimally be combined. We find that both adults and our model are sensitive to statistical regularities and causal structure in continuous action, and are able to combine these sources of information in order to correctly infer both causal relationships and segmentation boundaries.
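The optimal combination of cues the abstract analyzes can be illustrated with a naive-Bayes sketch: assuming the cues are conditionally independent given a boundary, posterior odds equal prior odds times the product of likelihood ratios. All numbers below are hypothetical:

```python
def combine_cues(prior, lh_given_boundary, lh_given_no_boundary):
    """Optimal combination of conditionally independent cues:
    posterior odds = prior odds x product of likelihood ratios."""
    odds = prior / (1 - prior)
    for lh, lnh in zip(lh_given_boundary, lh_given_no_boundary):
        odds *= lh / lnh
    return odds / (1 + odds)

# Hypothetical cues: a statistical cue (a surprising transition in
# the motion stream) and a causal cue (an effect just occurred),
# each individually weak evidence for an action boundary; combined,
# they push a 0.2 prior past even odds.
print(round(combine_cues(0.2, [0.7, 0.8], [0.3, 0.4]), 3))  # -> 0.538
```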
Learning Causal Structure from Reasoning
Cited by 5 (3 self)
Abstract
According to the transitive dynamics model, people can construct causal structures by linking together configurations of force. The predictions of the model were tested in two experiments in which participants generated new causal relationships by chaining together two (Experiment 1) or three (Experiment 2) causal relations. The predictions of the transitive dynamics model were compared against those of Goldvarg and Johnson-Laird's model theory (Goldvarg & Johnson-Laird, 2001). The transitive dynamics model consistently predicted the overall causal relationship drawn by participants for both types of causal chains, and, when compared to the model theory, provided a better fit to the data. The results suggest that certain kinds of causal reasoning may depend on force dynamic, rather than on purely logical or statistical, representations.
Learning to learn causal models
Cited by 5 (0 self)
Abstract
Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning.