Results 1–5 of 5
Bayesian generic priors for causal learning
Psychological Review, 2008
Abstract

Cited by 21 (0 self)
The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading nonnormative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
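The generating function this abstract builds on (Cheng's power PC theory, a noisy-OR over independent causal influences) and a sparse-and-strong style prior can be sketched as follows. The exact prior form, the `alpha` parameter, and all function names are illustrative assumptions, not the authors' implementation:

```python
import math

def p_effect(w_background, w_cause, cause_present):
    """Noisy-OR generating function: background and candidate cause
    influence the effect independently (power PC theory, Cheng 1997)."""
    if cause_present:
        return w_background + w_cause - w_background * w_cause
    return w_background

def ss_log_prior(w_background, w_cause, alpha=5.0):
    """Sparse-and-strong flavored prior: prefer a weak background cause
    (sparseness) and a strong candidate cause. Illustrative form only."""
    return -alpha * w_background - alpha * (1.0 - w_cause)

def log_posterior(w_b, w_c, trials):
    """Unnormalized log posterior over (w_b, w_c) given a list of
    (cause_present, effect_occurred) trials."""
    ll = 0.0
    for c, e in trials:
        p = min(max(p_effect(w_b, w_c, c), 1e-9), 1.0 - 1e-9)
        ll += math.log(p) if e else math.log(1.0 - p)
    return ll + ss_log_prior(w_b, w_c)
```

A grid search over (w_b, w_c) would then give a strength estimate (the posterior mode), and comparing posteriors under link-present versus link-absent hypotheses would give a structure judgment.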
Augmented Rescorla-Wagner and maximum likelihood estimation
In B, 2006
Abstract

Cited by 2 (2 self)
We show that linear generalizations of Rescorla-Wagner can perform maximum likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the augmented model of Rescorla. Our results involve genericity assumptions on the distributions of causes. If these assumptions are violated, for example for the Cheng causal power theory, then we show that a linear Rescorla-Wagner model can estimate the parameters of the model up to a nonlinear transformation. Moreover, a nonlinear Rescorla-Wagner model is able to estimate the parameters directly to within arbitrary accuracy. Previous results can be used to determine convergence and to estimate convergence rates.
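The augmentation strategy described in this abstract can be sketched as a delta-rule learner over cue vectors extended with an explicit conjunction feature; the feature coding, learning rate, and epoch count below are illustrative assumptions, not the paper's construction:

```python
def rescorla_wagner(trials, n_cues, lr=0.1, epochs=200):
    """Linear Rescorla-Wagner: delta-rule updates on squared prediction
    error, i.e. stochastic gradient ascent on a linear-Gaussian likelihood.
    trials: list of (cue_vector, outcome). Returns learned weights."""
    w = [0.0] * n_cues
    for _ in range(epochs):
        for x, outcome in trials:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            delta = outcome - pred          # prediction error
            for i in range(n_cues):
                w[i] += lr * delta * x[i]   # update only active cues
    return w

def augment(c1, c2):
    """Augmented representation: append a conjunction feature c1 AND c2,
    so a linear learner can capture configural (conjunctive) causes."""
    return [c1, c2, c1 * c2]
```

On trials where the outcome occurs only when both cues are present, the plain two-feature coding has no linear solution, while the augmented coding lets the weights converge so that the conjunction feature carries the predictive load.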
A Latent Cause Theory of Classical Conditioning
, 2006
Abstract
Classical conditioning experiments probe what animals learn about their environment. This thesis presents an exploration of the latent cause theory of classical conditioning. According to the theory, animals assume that events within their environment are attributable to a latent cause. Learning is interpreted as an attempt to recover the generative model that gave rise to these observed events. In this thesis, the latent cause theory is applied to three distinct areas of classical conditioning, in each case offering a novel account of empirical phenomena. In the first instance, the effects of inference over an uncertain latent cause model structure are explored. A key property of Bayesian structural inference is the tradeoff between the model complexity and data fidelity. Recognizing the equivalence between this tradeoff and the tradeoff between generalization and discrimination found in configural conditioning suggests a statistical account of these phenomena. By considering model simulations of a number of conditioning paradigms (including some not previously viewed as “configural”), behavioral signs that animals employ model complexity tradeoffs are revealed.
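The trade-off between model complexity and data fidelity that this abstract appeals to can be illustrated with a toy Bayesian score. The BIC approximation and the two candidate structures (one shared outcome rate versus separate rates per context) are illustrative assumptions, not the thesis's actual model:

```python
import math

def bernoulli_loglik(k, n, p):
    """Log-likelihood of k successes in n Bernoulli(p) trials."""
    p = min(max(p, 1e-9), 1.0 - 1e-9)
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def bic_score(log_likelihood, n_params, n_obs):
    """BIC as a marginal-likelihood proxy: fit minus a complexity penalty."""
    return log_likelihood - 0.5 * n_params * math.log(n_obs)

def compare_structures(k1, n1, k2, n2):
    """Shared-rate model (1 latent cause, 1 param) vs. split-rate model
    (2 latent causes, 2 params) for outcomes in two contexts."""
    shared_p = (k1 + k2) / (n1 + n2)
    shared = bic_score(bernoulli_loglik(k1 + k2, n1 + n2, shared_p),
                       1, n1 + n2)
    split = bic_score(bernoulli_loglik(k1, n1, k1 / n1)
                      + bernoulli_loglik(k2, n2, k2 / n2),
                      2, n1 + n2)
    return "split" if split > shared else "shared"
```

With 9/10 outcomes in one context and 1/10 in the other, the extra latent cause buys enough fit to beat its penalty, so the split structure wins (discrimination); with 5/10 in both, the shared structure wins (generalization).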
Submitted to Psychological Review. (First reviews 6/11/2007). Running head: Bayesian Causal Learning
Abstract
We present a Bayesian model of causal learning that incorporates generic priors on distributions of weights representing potential powers to either produce or prevent an effect. These generic priors favor necessary and sufficient causes. The NS power model couples these priors with a causal generating function derived from the power PC theory (Cheng, 1997). We test this and other alternative Bayesian models using the strategy of computational cognitive psychophysics, fitting multiple data sets in which several parameters are varied parametrically across multiple types of judgments. The NS power model accounts for a wide range of data concerning judgments of both causal strength (the power of a cause to produce or prevent an effect) and causal structure (whether or not a causal link exists). For both types of causal judgments, a generic prior favoring a cause that is jointly necessary and sufficient explains interactions involving causal direction (generative versus preventive causes). For structure judgments, an additional prior that a new candidate cause will be deterministic (i.e., sufficient or else ineffective) explains why people’s causal structure judgments are based primarily on causal power and the base rate of the effect, rather than sample size. Alternative Bayesian formulations that lack either causal power ...
Augmented Rescorla-Wagner and Maximum Likelihood Estimation.
Abstract
We show that linear generalizations of Rescorla-Wagner can perform maximum likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the augmented model of Rescorla. Our results involve genericity assumptions on the distributions of causes. If these assumptions are violated, for example for the Cheng causal power theory, then we show that a linear Rescorla-Wagner model can estimate the parameters of the model up to a nonlinear transformation. Moreover, a nonlinear Rescorla-Wagner model is able to estimate the parameters directly to within arbitrary accuracy. Previous results can be used to determine convergence and to estimate convergence rates.