A Theory of Reaction Time Distributions, 2009
We develop a general theory of reaction time (RT) distributions in psychological experiments, deriving from the distribution of the quotient of two normal random variables, that of the task difficulty (top-down information), and that of the external evidence that becomes available to solve it (bottom-up information). The theory provides a unified account of known changes in the shape of the distributions depending on properties of the task and of the participants, and it predicts additional changes that should be observed. A number of known properties of RT distributions are homogeneously accounted for by variations in the value of two easily interpretable parameters: the coefficients of variation of the two normal variables. The predictions of the theory are compared with those of multiple families of distributions that have been proposed to account for RTs, indicating our theory provides a significantly better account of experimental data. For this purpose, we provide comparisons with four large datasets across tasks and modalities. Finally, we show how the theory links to neurobiological models of response latencies.
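The core construct, RT as the quotient of two normal random variables, can be illustrated with a short simulation. This is a sketch under assumed values: the means and coefficients of variation below are arbitrary choices, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters: the theory is characterized by the coefficients
# of variation (CV = sigma/mu) of the two normal variables.
mu_d, cv_d = 1.0, 0.2   # task difficulty (top-down information)
mu_e, cv_e = 1.0, 0.3   # external evidence (bottom-up information)

n = 100_000
difficulty = rng.normal(mu_d, cv_d * mu_d, n)
evidence = rng.normal(mu_e, cv_e * mu_e, n)

# RT modeled as the quotient difficulty / evidence; keep plausible draws.
rt = difficulty / evidence
rt = rt[(rt > 0) & (rt < 10)]

# The quotient of two normals is right-skewed, as empirical RT
# distributions are: the mean sits above the median.
print(round(float(np.mean(rt)), 3), round(float(np.median(rt)), 3))
```

Varying `cv_d` and `cv_e` changes the skew and tail weight of the simulated distribution, which is the kind of shape change the theory attributes to task and participant properties.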
The Persistent Impact of Incidental Experience
As we perform daily activities—driving to work, unlocking the office door, grabbing a coffee cup—our actions seem automatic and preprogrammed. Nonetheless, routine, well-practiced behavior is continually modulated by incidental experience: in repetitive experimental tasks, recent (~4) trials reliably influence performance and action choice. Psychological theories downplay the significance of sequential effects, explaining them as rapidly decaying perturbations of behavior with no long-term consequences. We challenge this traditional perspective in two studies designed to probe the impact of more distant experience, finding evidence for effects spanning up to a thousand intermediate trials. We present a normative theory in which these persistent effects reflect optimal adaptation to a dynamic environment exhibiting varying rates of change. The theory predicts a heavy-tailed decaying influence of past experience, consistent with our data, and suggests that individual incidental experiences are …

Throughout our daily lives, we encounter an ongoing barrage of mundane stimuli that demand routine responses. This incidental experience forms the fabric of our interaction with the world. Clearly, the sum of this experience determines our behavior, but how long-lasting is the …
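The heavy-tailed decay the theory predicts can arise from averaging over many adaptation timescales. The sketch below illustrates that general point with an assumed mixture of exponential decays; the rates and lag range are illustrative choices, not values fitted to the studies' data.

```python
import numpy as np

# A single learning rate gives an exponentially decaying influence of a
# trial that occurred `lag` trials ago; mixing many timescales (rates of
# environmental change) yields a much heavier tail.
lags = np.arange(1, 1001)

# One timescale: pure exponential decay.
single = np.exp(-0.1 * lags)

# Many timescales, log-spaced decay rates: mixture of exponentials.
rates = np.logspace(-3, 0, 50)
mixture = np.mean(np.exp(-np.outer(rates, lags)), axis=0)

# At lag 1000 the single-rate influence is essentially zero, while the
# mixture still retains measurable influence.
print(float(single[-1]), float(mixture[-1]))
```

The contrast mirrors the paper's claim: effects spanning hundreds of intermediate trials are incompatible with a single rapidly decaying perturbation, but fall out naturally when adaptation operates at multiple rates at once.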
Chapter Eleven: Evaluation and Comparison of Computational Models
"... 2. Conceptual Overview of Model Evaluation and Comparison 288 ..."
Keywords: Expectancy Valence model; Bayesian hierarchical modeling; Cognitive modeling
assumed to jointly determine choice behavior in the Iowa gambling task: weighing of wins versus losses, memory for past payoffs, and response consistency. In this article we explore the statistical properties of the Expectancy Valence model. We first demonstrate the difficulty of applying the model on the level of a single participant, we then propose and implement a Bayesian hierarchical estimation procedure to coherently combine information from different participants, and we finally apply the Bayesian estimation procedure to data from an experiment designed to provide a test of specific influence. © 2008 Elsevier Inc. All rights reserved.

Every neuroscientist knows the tale of Phineas Gage, the railroad worker who suffered an unfortunate accident: in 1848, an explosion drove an iron rod straight through Gage's frontal cortex. Although Gage miraculously survived the accident, the resultant brain trauma did cause a distinct change in his personality. Prior to the accident, Gage was capable and reliable, but after the accident he was described as impatient, stubborn, and impulsive. Gage was no longer able to plan ahead in order to achieve long-term goals. The symptoms of Phineas Gage are characteristic of patients with damage to the ventromedial prefrontal cortex (vmPFC). These patients often make irresponsible decisions and do not seem to learn from their mistakes. The observed real-life decision making deficits are not caused by low intelligence, as vmPFC patients generally perform adequately on standard IQ tests. In order to study the decision making behavior of clinical populations such as vmPFC patients under controlled conditions, Bechara and Damasio developed the now-famous "Iowa gambling task."
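The three components named above (weighing of wins versus losses, memory for past payoffs, and response consistency) can be sketched as a minimal simulation. The parameter values, payoff scheme, and consistency rule below are illustrative assumptions in the spirit of the standard Expectancy Valence formulation, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

w = 0.4    # attention to losses relative to wins
a = 0.2    # recency parameter: memory for past payoffs (learning rate)
c = 0.5    # response consistency: how deterministic choices become

expectancies = np.zeros(4)            # one running expectancy per deck
for t in range(1, 101):
    theta = (t / 10.0) ** c           # consistency grows with experience
    z = theta * (expectancies - expectancies.max())
    p = np.exp(z)
    p /= p.sum()                      # softmax choice rule
    deck = rng.choice(4, p=p)

    # Toy payoffs (scaled): decks differ in wins and occasional losses.
    win = [1.0, 1.0, 0.5, 0.5][deck]
    loss = [2.5, 12.5, 0.5, 2.5][deck] if rng.random() < 0.5 else 0.0

    valence = (1 - w) * win - w * loss                        # weigh wins vs. losses
    expectancies[deck] += a * (valence - expectancies[deck])  # delta-rule update

print(np.round(expectancies, 2))
```

With only 100 trials per participant and three interacting parameters, individual-level estimates of `w`, `a`, and `c` are noisy, which is the estimation difficulty that motivates pooling participants hierarchically.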
A Tutorial on Adaptive Design Optimization, 2013
Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct "smart" experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond.