Results 1 - 7 of 7
An amorphous model for morphological processing in visual comprehension based on naive discriminative learning
 Psychological Review
, 2011
"... A twolayer symbolic network model based on the equilibrium equations of the RescorlaWagner model (Danks, 2003) is proposed. The study starts by presenting two experiments in Serbian, which reveal for sentential reading the inflectional paradigmatic effects previously observed by Milin, Filipović ..."
Abstract

Cited by 39 (13 self)
A two-layer symbolic network model based on the equilibrium equations of the Rescorla-Wagner model (Danks, 2003) is proposed. The study starts by presenting two experiments in Serbian, which reveal for sentential reading the inflectional paradigmatic effects previously observed by Milin, Filipović Đurđević, and Moscoso del Prado Martín (2009) for unprimed lexical decision. The empirical results are successfully modeled without having to assume separate representations for inflections or data structures such as inflectional paradigms. In the next step, the same naive discriminative learning approach is pitted against a wide range of effects documented in the morphological processing literature. Frequency effects for complex words as well as for phrases (Arnon & Snider, 2010) emerge in the model without the presence of whole-word or whole-phrase representations. Family size effects (Schreuder & Baayen, 1997; Moscoso del Prado Martín, Bertram, Häikiö, Schreuder, & Baayen, 2004) emerge in the simulations across simple words, derived words, and compounds, without derived words or compounds being represented as such. It is shown that for pseudo-derived words no special morpho-orthographic segmentation mechanism as posited by Rastle, Davis, and New (2004) is required. The model also replicates the finding of Plag and Baayen (2009) that, on average, words with more productive affixes elicit longer response latencies, while at the same time predicting that productive affixes afford faster response latencies for new words. English phrasal paradigmatic effects modulating isolated word reading are reported and modeled, showing that the paradigmatic effects characterizing Serbian case inflection have cross-linguistic scope.
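The Rescorla-Wagner rule and its Danks (2003) equilibrium can be illustrated with a short simulation. This is a minimal sketch, not the paper's model of the Serbian materials; the cue matrix, learning rate, and trial structure are illustrative:

```python
import numpy as np

def rescorla_wagner(cues, outcomes, alpha=0.1, n_epochs=200):
    """Delta-rule update v += alpha * x * (lambda - x.v); with repeated
    presentation the weights settle at the Danks (2003) equilibrium."""
    v = np.zeros(cues.shape[1])
    for _ in range(n_epochs):
        for x, lam in zip(cues, outcomes):
            v += alpha * x * (lam - x @ v)  # only cues present on the trial update
    return v

# Two trial types: cue 0 alone -> outcome, and cues 0+1 together -> outcome.
# At equilibrium cue 0 absorbs the association; the redundant cue 1 gets none.
cues = np.array([[1.0, 0.0], [1.0, 1.0]])
outcomes = np.array([1.0, 1.0])
weights = rescorla_wagner(cues, outcomes)  # approx. [1.0, 0.0]
```

The point of the equilibrium view is that these asymptotic weights can be read off directly from the co-occurrence statistics, without simulating the trial-by-trial dynamics.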
Bayesian generic priors for causal learning
 Psychological Review
, 2008
"... The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high ..."
Abstract

Cited by 28 (2 self)
The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading nonnormative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
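As a rough sketch (not the authors' code), the noisy-OR generating function of the power PC theory can be combined with an exponential prior favoring sparse and strong causes on a discrete grid. The prior strength `alpha` and the trial counts below are illustrative assumptions:

```python
import numpy as np

def noisy_or(w_b, w_c, c):
    """P(effect) when the background is always present and the candidate
    cause c (0 or 1) acts independently (Cheng's 1997 assumption)."""
    return 1.0 - (1.0 - w_b) * (1.0 - w_c) ** c

def posterior_mean_power(n_e_c, n_c, n_e_nc, n_nc, alpha=5.0, grid=101):
    """Grid-approximate posterior mean of the causal power w_c under a
    sparse-and-strong prior p(w_b, w_c) ~ exp(-alpha*w_b - alpha*(1 - w_c)):
    weak background, strong cause."""
    w = np.linspace(0.0, 1.0, grid)
    wb, wc = np.meshgrid(w, w, indexing="ij")
    prior = np.exp(-alpha * wb - alpha * (1.0 - wc))
    p1 = noisy_or(wb, wc, 1)  # P(e | cause present)
    p0 = noisy_or(wb, wc, 0)  # P(e | cause absent) = w_b
    lik = (p1 ** n_e_c * (1.0 - p1) ** (n_c - n_e_c)
           * p0 ** n_e_nc * (1.0 - p0) ** (n_nc - n_e_nc))
    post = prior * lik
    post /= post.sum()
    return float((post * wc).sum())

# Hypothetical counts: effect on 8/10 cause-present and 2/10 cause-absent trials
p_mean = posterior_mean_power(8, 10, 2, 10)
```

With priors switched off (alpha = 0) the posterior is driven by the likelihood alone, which is one way to see how the generic priors shift structure and strength judgments away from sample size and toward causal power.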
Augmented Rescorla-Wagner and maximum likelihood estimation
 In B
, 2006
"... We show that linear generalizations of RescorlaWagner can perform Maximum Likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the agumented model of Rescorla. Our results in ..."
Abstract

Cited by 5 (2 self)
We show that linear generalizations of Rescorla-Wagner can perform Maximum Likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the augmented model of Rescorla. Our results involve genericity assumptions on the distributions of causes. If these assumptions are violated, for example for the Cheng causal power theory, then we show that a linear Rescorla-Wagner can estimate the parameters of the model up to a nonlinear transformation. Moreover, a nonlinear Rescorla-Wagner is able to estimate the parameters directly to within arbitrary accuracy. Previous results can be used to determine convergence and to estimate convergence rates.
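The connection between the linear Rescorla-Wagner rule and maximum likelihood estimation can be sketched numerically (illustrative data, not the paper's construction): for a linear model with Gaussian noise, the least-squares solution is the ML estimate, and the sequential delta rule (LMS) approaches it:

```python
import numpy as np

rng = np.random.default_rng(0)
# "Generic" distribution over binary cause configurations
X = rng.integers(0, 2, size=(2000, 3)).astype(float)
true_w = np.array([0.8, 0.2, 0.0])
y = X @ true_w + rng.normal(0.0, 0.05, size=2000)  # linear-model outcomes

w = np.zeros(3)
for x_i, y_i in zip(X, y):
    w += 0.01 * x_i * (y_i - x_i @ w)  # linear Rescorla-Wagner (LMS) update

# Batch least squares = ML estimate under Gaussian noise; LMS ends up close
w_ml = np.linalg.lstsq(X, y, rcond=None)[0]
```

The genericity assumption shows up here as the requirement that the cause configurations span the parameter space; with degenerate cause distributions the normal equations become singular and the weights are only identified up to that degeneracy.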
Augmented Rescorla-Wagner and Maximum Likelihood estimation
"... We show that linear generalizations of RescorlaWagner can perform Maximum Likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the agumented model of Rescorla. Our results in ..."
Abstract
We show that linear generalizations of Rescorla-Wagner can perform Maximum Likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the augmented model of Rescorla. Our results involve genericity assumptions on the distributions of causes. If these assumptions are violated, for example for the Cheng causal power theory, then we show that a linear Rescorla-Wagner can estimate the parameters of the model up to a nonlinear transformation. Moreover, a nonlinear Rescorla-Wagner is able to estimate the parameters directly to within arbitrary accuracy. Previous results can be used to determine convergence and to estimate convergence rates.
Submitted to Psychological Review. (First reviews 6/11/2007). Running head: Bayesian Causal Learning
"... We present a Bayesian model of causal learning that incorporates generic priors on distributions of weights representing potential powers to either produce or prevent an effect. These generic priors favor necessary and sufficient causes. The NS power model couples these priors with a causal generati ..."
Abstract
We present a Bayesian model of causal learning that incorporates generic priors on distributions of weights representing potential powers to either produce or prevent an effect. These generic priors favor necessary and sufficient causes. The NS power model couples these priors with a causal generating function derived from the power PC theory (Cheng, 1997). We test this and other alternative Bayesian models using the strategy of computational cognitive psychophysics, fitting multiple data sets in which several parameters are varied parametrically across multiple types of judgments. The NS power model accounts for a wide range of data concerning judgments of both causal strength (the power of a cause to produce or prevent an effect) and causal structure (whether or not a causal link exists). For both types of causal judgments, a generic prior favoring a cause that is jointly necessary and sufficient explains interactions involving causal direction (generative versus preventive causes). For structure judgments, an additional prior that a new candidate cause will be deterministic (i.e., sufficient or else ineffective) explains why people’s causal structure judgments are based primarily on causal power and the base rate of the effect, rather than sample size. Alternative Bayesian formulations that lack either causal power ...
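The power PC quantity that this generating function builds on has a closed form for a generative cause; a one-line sketch with hypothetical example probabilities:

```python
def causal_power(p_e_c, p_e_nc):
    """Cheng's (1997) generative causal power: the contingency delta-P,
    rescaled by the room the background leaves for the cause to act."""
    return (p_e_c - p_e_nc) / (1 - p_e_nc)

power = causal_power(0.8, 0.2)  # (0.8 - 0.2) / (1 - 0.2) = 0.75
```

The rescaling is what distinguishes power from raw contingency: the same delta-P implies a stronger cause when the effect's base rate already leaves little room for the cause to reveal itself.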
A LATENT CAUSE THEORY OF CLASSICAL CONDITIONING
, 2006
"... Classical conditioning experiments probe what animals learn about their environment. This thesis presents an exploration of the latent cause theory of classical conditioning. According to the theory, animals assume that events within their environment are attributable to a latent cause. Learning is ..."
Abstract
Classical conditioning experiments probe what animals learn about their environment. This thesis presents an exploration of the latent cause theory of classical conditioning. According to the theory, animals assume that events within their environment are attributable to a latent cause. Learning is interpreted as an attempt to recover the generative model that gave rise to these observed events. In this thesis, the latent cause theory is applied to three distinct areas of classical conditioning, in each case offering a novel account of empirical phenomena. In the first instance, the effects of inference over an uncertain latent cause model structure are explored. A key property of Bayesian structural inference is the tradeoff between model complexity and data fidelity. Recognizing the equivalence between this tradeoff and the tradeoff between generalization and discrimination found in configural conditioning suggests a statistical account of these phenomena. Model simulations of a number of conditioning paradigms (including some not previously viewed as “configural”) reveal behavioral signs that animals employ model-complexity tradeoffs.
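The complexity-fidelity tradeoff described here is the Bayesian Occam's razor. A toy sketch (my own illustration, not from the thesis) compares one shared latent cause against one cause per training context using Beta-Bernoulli marginal likelihoods with uniform priors:

```python
from math import comb

def marginal_lik(k, n):
    """Marginal likelihood of k effect occurrences in n trials under a
    uniform prior on the cause's rate: integral of p^k (1-p)^(n-k) dp."""
    return 1.0 / ((n + 1) * comb(n, k))

# Heterogeneous contexts (effect on 9/10 vs 1/10 trials): the data reward
# discrimination, so the two-cause model wins despite its extra parameter.
one_cause = marginal_lik(9 + 1, 20)                       # shared rate
two_causes = marginal_lik(9, 10) * marginal_lik(1, 10)    # rate per context

# Homogeneous contexts (5/10 and 5/10): generalization wins instead, and the
# simpler one-cause model has the higher marginal likelihood.
```

Averaging the likelihood over the prior automatically penalizes the extra parameter, which is the statistical mechanism behind the generalization-versus-discrimination tradeoff in configural conditioning.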