

Transcranial Direct Current Stimulation of Right Dorsolateral Prefrontal Cortex Does Not Affect Model-Based or Model-Free Reinforcement Learning in Humans. PloS one 9:e86850 (2014)

by P Smittenaar, G Prichard, TH FitzGerald, J Diedrichsen, RJ Dolan
Results 1 - 2 of 2

Crowdsourcing for cognitive science -the utility of smartphones

by Harriet R Brown, Peter Zeidman, Peter Smittenaar, Rick A Adams, Fiona McNab, Robb B Rutledge, Raymond J Dolan
Abstract not found

Citation Context

...ormance was greater for the “no distraction” condition, although the difference did not reach significance (t(20) = 1.87, p = 0.076, Cohen’s d = 0.370). Selective stop-signal task Our data satisfy a prediction of the independent race model [11], the most widely used method for analysis of stop-signal data: stopFail RTs are shorter than Go RTs and thus represent the fast part of the entire Go RT distribution (stopFail RT < Go RT, t(10772) = 57.8, p < 0.001, Cohen’s d = 0.56). The effect size was considerably lower than that collected in a similar task under laboratory conditions (Cohen’s d = 1.81) [12], possibly reflecting the small number of data points from which the RT measures were derived. We calculated the stop-signal reaction time (SSRT) using the quantile method, which is a robust approach that accounts for inter-individual variability in probability of successful stopping. The SSRT was relatively high compared to the literature (mean (SD): 361.9 (67.7) ms) (fig. 3), indicating participants were relatively slow to inhibit their responses. This potentially reflects the lack of training in our participants, or the uncontrolled environment in which the task was performed. However, the ...
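The quantile method mentioned in the citation context can be sketched as follows: SSRT is estimated as the p(respond | stop signal) quantile of the go-RT distribution minus the mean stop-signal delay. This is a minimal illustration under assumed data; all numbers, variable names, and the fixed stop-signal delay are hypothetical, not the study's.

```python
# Minimal sketch of the quantile (integration) method for estimating the
# stop-signal reaction time (SSRT). Illustrative data only.
import statistics

def ssrt_quantile(go_rts, ssds, stopped):
    """stopped: one bool per stop trial, True if the response was inhibited."""
    rts = sorted(go_rts)
    p_respond = 1.0 - sum(stopped) / len(stopped)  # probability of failing to stop
    # index of the p_respond quantile of the go-RT distribution
    idx = min(int(p_respond * len(rts)), len(rts) - 1)
    return rts[idx] - statistics.mean(ssds)

# Hypothetical data: go RTs spanning 400-598 ms, 50% successful stopping,
# and a fixed 200 ms stop-signal delay.
go = list(range(400, 600, 2))
stopped = [True] * 50 + [False] * 50
ssd = [200.0] * 50
print(ssrt_quantile(go, ssd, stopped))  # → 300.0
```

With 50% successful stopping, the estimate reduces to the median go RT minus the mean delay, which is why the hypothetical example yields 500 − 200 = 300 ms.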

RESEARCH ARTICLE Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task

by Thomas Akam, Rui Costa, Peter Dayan
Abstract
The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free ...
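The successive-trial analysis the abstract refers to is usually a "stay probability" computation: for each trial, record the first-step choice, whether the transition was common or rare, and whether the trial was rewarded, then ask how often the next trial repeats the same choice. A model-based strategy shows a reward × transition interaction, whereas a model-free one shows mainly a reward main effect. The sketch below is a hypothetical illustration of that computation; the function name and the eight-trial dataset are assumptions, not the paper's data or code.

```python
# Minimal sketch of the successive-trial "stay probability" analysis for the
# two-step task. Illustrative data only.
def stay_probabilities(choices, transitions, rewards):
    """choices: first-step action per trial (0/1); transitions: 'common'/'rare';
    rewards: 0/1. Returns stay probability per (reward, transition) cell."""
    cells = {}
    for t in range(len(choices) - 1):
        key = (rewards[t], transitions[t])
        stay = choices[t + 1] == choices[t]       # did the next trial repeat the choice?
        cells.setdefault(key, []).append(stay)
    return {k: sum(v) / len(v) for k, v in cells.items()}

# Eight hypothetical trials:
choices     = [0, 0, 1, 1, 1, 0, 0, 0]
transitions = ['common', 'rare', 'rare', 'common', 'rare', 'common', 'common', 'rare']
rewards     = [1, 1, 0, 1, 0, 0, 1, 0]
print(stay_probabilities(choices, transitions, rewards))
```

In this toy dataset, staying after rewarded common transitions (probability 1.0) but switching after rewarded rare ones (0.0) is the signature usually read as model-based; the paper's point is that such patterns can also arise from model-free strategies under some task variants.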