Results 1 - 4 of 4
Bayesian generic priors for causal learning
 Psychological Review
, 2008
"... The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high ..."
Abstract

Cited by 20 (0 self)
The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading nonnormative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
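The causal generating function this abstract attributes to Cheng (1997) — independent, unobservable causal influences combining on a single effect — is the noisy-OR function. A minimal sketch, with illustrative names not taken from the paper:

```python
def noisy_or(powers, present):
    """Probability of the effect under independent generative causes
    (Cheng, 1997): P(e) = 1 - prod(1 - w_i) over the causes present.

    powers  -- causal power w_i of each candidate cause, in [0, 1]
    present -- 0/1 indicator for whether each cause occurs on this trial
    """
    p_no_effect = 1.0
    for w, c in zip(powers, present):
        if c:
            # Each present cause independently fails to produce
            # the effect with probability (1 - w).
            p_no_effect *= (1.0 - w)
    return 1.0 - p_no_effect
```

Under this function, a "sparse and strong" prior simply places most probability mass on vectors of powers in which few entries are nonzero and those entries are near 1.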
Augmented Rescorla-Wagner and maximum likelihood estimation
 In B
, 2006
"... We show that linear generalizations of RescorlaWagner can perform Maximum Likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the agumented model of Rescorla. Our results in ..."
Abstract

Cited by 2 (2 self)
We show that linear generalizations of Rescorla-Wagner can perform maximum likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the augmented model of Rescorla. Our results involve genericity assumptions on the distributions of causes. If these assumptions are violated, for example for the Cheng causal power theory, then we show that a linear Rescorla-Wagner can estimate the parameters of the model up to a nonlinear transformation. Moreover, a nonlinear Rescorla-Wagner is able to estimate the parameters directly to within arbitrary accuracy. Previous results can be used to determine convergence and to estimate convergence rates.
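The linear Rescorla-Wagner rule that this abstract generalizes is the classic delta-rule update. A minimal sketch of that baseline rule (not the paper's augmented variant; names and the learning rate are illustrative):

```python
def rescorla_wagner_update(weights, cues, outcome, lr=0.1):
    """One Rescorla-Wagner trial.

    weights -- current associative strength V_i of each cue
    cues    -- 0/1 indicator for whether each cue is present this trial
    outcome -- observed outcome (lambda), e.g. 1 if the effect occurred

    Each present cue's weight moves in proportion to the prediction
    error: outcome minus the summed prediction of the present cues.
    """
    prediction = sum(w * c for w, c in zip(weights, cues))
    error = outcome - prediction
    return [w + lr * c * error for w, c in zip(weights, cues)]
```

Iterating this update over trials drives the summed prediction toward the outcome's asymptote, which is the sense in which suitable generalizations can track maximum likelihood parameter estimates.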
A LATENT CAUSE THEORY OF CLASSICAL CONDITIONING
, 2006
"... Classical conditioning experiments probe what animals learn about their environment. This thesis presents an exploration of the latent cause theory of classical conditioning. According to the theory, animals assume that events within their environment are attributable to a latent cause. Learning is ..."
Abstract
Classical conditioning experiments probe what animals learn about their environment. This thesis presents an exploration of the latent cause theory of classical conditioning. According to the theory, animals assume that events within their environment are attributable to a latent cause. Learning is interpreted as an attempt to recover the generative model that gave rise to these observed events. In this thesis, the latent cause theory is applied to three distinct areas of classical conditioning, in each case offering a novel account of empirical phenomena. In the first instance, the effects of inference over an uncertain latent cause model structure are explored. A key property of Bayesian structural inference is the tradeoff between model complexity and data fidelity. Recognizing the equivalence between this tradeoff and the tradeoff between generalization and discrimination found in configural conditioning suggests a statistical account of these phenomena. By considering model simulations of a number of conditioning paradigms (including some not previously viewed as "configural"), behavioral signs that animals employ model complexity tradeoffs are revealed.