Results 1–10 of 86
The Bayesian reader: Explaining word recognition as an optimal Bayesian decision process
Psychological Review
Cited by 65 (5 self)
Abstract
This paper presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision and semantic categorization, human readers behave as optimal Bayesian decision-makers. This leads to the development of a computational model of word recognition, the Bayesian Reader. The Bayesian Reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model, and the way the model predicts different patterns of results in different tasks, follow entirely from the assumption that human readers approximate optimal Bayesian decision-makers.
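The decision rule this abstract describes — accumulate noisy perceptual evidence and respond once the posterior over the lexicon crosses a criterion — can be sketched in a few lines. This is a minimal illustration, not the published model; the lexicon, frequency prior, and per-sample log-likelihoods below are hypothetical inputs.

```python
import math

def bayesian_reader(lexicon, freqs, evidence, threshold=0.95):
    """Minimal Bayesian-Reader-style decision loop (illustrative only).

    lexicon:   candidate words
    freqs:     word frequencies, used as the prior P(w)
    evidence:  one dict per perceptual sample, mapping word -> log P(sample | w)
    Responds with (word, samples consumed) once any posterior >= threshold.
    """
    total = sum(freqs.values())
    log_post = {w: math.log(freqs[w] / total) for w in lexicon}
    best = max(log_post, key=log_post.get)
    for t, sample in enumerate(evidence, start=1):
        for w in lexicon:
            log_post[w] += sample[w]
        # normalize in a numerically safe way to get posterior probabilities
        z = max(log_post.values())
        norm = sum(math.exp(v - z) for v in log_post.values())
        post = {w: math.exp(log_post[w] - z) / norm for w in lexicon}
        best = max(post, key=post.get)
        if post[best] >= threshold:
            return best, t
    return best, len(evidence)
```

Because the frequency prior gives high-frequency words a head start, they cross the criterion in fewer samples, which is how a model of this kind accounts for the word-frequency effect on reaction time.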
Shortlist B: A Bayesian model of continuous speech recognition
Psychological Review, 2008
Cited by 60 (1 self)
Abstract
A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; …
Inference, attention, and decision in a Bayesian neural architecture
Advances in Neural Information Processing Systems 17, 2005
Cited by 35 (5 self)
Abstract
We study the synthesis of neural coding, selective attention and perceptual decision making. A hierarchical neural architecture is proposed, which implements Bayesian integration of noisy sensory input and top-down attentional priors, leading to sound perceptual discrimination. The model offers an explicit explanation for the experimentally observed modulation that prior information in one stimulus feature (location) can have on an independent feature (orientation). The network’s intermediate levels of representation instantiate known physiological properties of visual cortical neurons. The model also illustrates a possible reconciliation of cortical and neuromodulatory representations of uncertainty.
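The core computation here — fusing a noisy sensory measurement with a top-down attentional prior — reduces, in the simplest Gaussian case, to precision-weighted averaging. The sketch below assumes that Gaussian simplification (the paper's actual architecture is a hierarchical neural network, and the numbers are arbitrary):

```python
def integrate(prior_mu, prior_var, obs, obs_var):
    """Fuse a Gaussian top-down prior N(prior_mu, prior_var) with a noisy
    observation N(obs, obs_var); returns the posterior mean and variance."""
    gain = prior_var / (prior_var + obs_var)   # weight given to the observation
    mu = prior_mu + gain * (obs - prior_mu)    # precision-weighted mean
    var = (1.0 - gain) * prior_var             # posterior is always sharper
    return mu, var
```

A sharper attentional prior (smaller `prior_var`) pulls the estimate toward the expected value and lowers posterior variance, which is one way to read the attentional modulation the abstract describes.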
What and where: A Bayesian inference theory of attention
2010
Cited by 33 (5 self)
Abstract
In the theoretical framework described in this thesis, attention is part of the inference process that solves the visual recognition problem of what is where. The theory proposes a computational role for attention and leads to a model that predicts some of its main properties at the level of psychophysics and physiology. In our approach, the main goal of the visual system is to infer the identity and the position of objects in visual scenes: spatial attention emerges as a strategy to reduce the uncertainty in shape information, while feature-based attention reduces the uncertainty in spatial information. Featural and spatial attention represent two distinct modes of a computational process solving the problem of recognizing and localizing objects, especially in difficult recognition tasks such as in cluttered natural scenes. We describe a specific computational model and relate it to the known functional anatomy of attention. We show that several well-known attentional phenomena – including bottom-up pop-out effects, multiplicative modulation of neuronal tuning …
Hierarchical Bayesian inference in networks of spiking neurons
Advances in Neural Information Processing Systems 17, 2005
Cited by 23 (0 self)
Abstract
There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons implement approximate belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4.
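The key mapping in this abstract — membrane potentials tracking log posteriors while the network performs belief propagation — can be illustrated with forward filtering in a two-state hidden Markov model, run entirely in the log domain. This is a toy rendering of the idea, not the spiking network itself; exponentiating the normalized log posterior would play the role of the spiking probability.

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x)))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_domain_filter(log_prior, log_trans, log_likes):
    """Log-domain forward filtering for a hidden Markov model.

    log_prior:  log P(state_0)
    log_trans:  log_trans[i][j] = log P(state_t = j | state_{t-1} = i)
    log_likes:  per-step log P(observation_t | state)
    Returns the normalized log posterior ('membrane potential') after each step.
    """
    n = len(log_prior)
    u = list(log_prior)
    trace = []
    for ll in log_likes:
        # prediction through the transition model, plus new evidence
        u = [logsumexp([u[i] + log_trans[i][j] for i in range(n)]) + ll[j]
             for j in range(n)]
        z = logsumexp(u)
        u = [v - z for v in u]  # u[j] = log P(state_j | observations so far)
        trace.append(list(u))
    return trace
```

With evidence consistently favoring one state, its posterior climbs toward 1 over successive steps, the behavior the paper attributes to the membrane-potential dynamics.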
Visual adaptation: Neural, psychological and computational aspects
2007
Cited by 23 (1 self)
Abstract
The term visual adaptation describes the processes by which the visual system alters its operating properties in response to changes in the environment. These continual adjustments in sensory processing are diagnostic as to the computational principles underlying the neural coding of information and can have profound consequences for our perceptual experience. New physiological and psychophysical data, along with emerging statistical and computational models, make this an opportune time to bring together experimental and theoretical perspectives. Here, we discuss functional ideas about adaptation in the light of recent data and identify exciting directions for future research.
Bayesian inference in spiking neurons
Advances in Neural Information Processing Systems (NIPS 2004), vol. 17, 2004
Cited by 18 (1 self)
Abstract
We propose a new interpretation of spiking neurons as Bayesian integrators accumulating evidence over time about events in the external world or the body, and communicating to other neurons their certainties about these events. In this model, spikes signal the occurrence of new information, i.e., what cannot be predicted from the past activity. As a result, firing statistics are close to Poisson, albeit providing a deterministic representation of probabilities. We proceed to develop a theory of Bayesian inference in spiking neural networks, with recurrent interactions implementing a variant of belief propagation. Many perceptual and motor tasks performed by the central nervous system are probabilistic, and can be described in a Bayesian framework [4, 3]. A few important but hidden properties, such as direction of motion, or appropriate motor commands, are inferred from many noisy, local and ambiguous sensory cues. This evidence is combined with priors about the sensory world and body. Importantly, because most of these inferences should …
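The claim that spikes carry only unpredictable information can be caricatured with a unit that tracks a log-odds signal and fires only when the signal exceeds what its previous spikes have already conveyed. The fixed quantum-per-spike scheme below is a deliberate simplification of the paper's integrator neurons:

```python
def predictive_spikes(log_odds, quantum=0.5):
    """Emit a spike only when the running log-odds exceed the value already
    conveyed by past spikes; each spike transmits a fixed quantum.
    Returns (spike train, log-odds readout reconstructed from the spikes)."""
    conveyed = 0.0
    spikes, readout = [], []
    for L in log_odds:
        if L - conveyed > quantum:
            spikes.append(1)          # new information: report it
            conveyed += quantum
        else:
            spikes.append(0)          # nothing unpredictable: stay silent
        readout.append(conveyed)
    return spikes, readout
```

A rising signal triggers spikes; once the input flattens, the unit falls silent because nothing new remains to report, so its output tracks changes rather than levels.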
Belief propagation in networks of spiking neurons
Neural Computation, 2009
Cited by 18 (4 self)
Abstract
From a theoretical point of view, statistical inference is an attractive model of brain operation. However, it is unclear how to implement these inferential processes in neuronal networks. We offer a solution to this problem by showing in detailed simulations how the belief propagation algorithm on a factor graph can be embedded in a network of spiking neurons. We use pools of spiking neurons as the function nodes of the factor graph. Each pool gathers ‘messages’ in the form of population activities from its input nodes and combines them through its network dynamics. The various output messages to be transmitted over the edges of the graph are each computed by a group of readout neurons that feed into their respective destination pools. We use this approach to implement two examples of factor graphs. The first example is drawn from coding theory. It models the transmission of signals through an unreliable channel and demonstrates the principles and generality of our network approach. The second, more applied example is of a psychophysical mechanism in which visual cues are used to resolve hypotheses about the interpretation of an object’s shape and illumination. These two examples, and also a statistical analysis, all demonstrate good agreement between the performance of our networks and the direct numerical evaluation of belief propagation.
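The agreement the authors report — network readouts matching direct numerical belief propagation — is easy to verify in miniature. Below, sum-product message passing on a two-variable factor graph (the computation a pool's population activity stands in for) is checked against brute-force enumeration of the joint; the factor values are arbitrary.

```python
from itertools import product

def marginal_by_bp(fx, fxy, fy):
    """Sum-product on the chain X --[fxy]-- Y with unary factors fx, fy.
    The single message from Y's side into X summarizes that half of the graph."""
    msg = [sum(fxy[x][y] * fy[y] for y in range(2)) for x in range(2)]
    belief = [fx[x] * msg[x] for x in range(2)]
    z = sum(belief)
    return [b / z for b in belief]

def marginal_by_enumeration(fx, fxy, fy):
    """Ground truth: normalize the full joint and sum out Y."""
    joint = {(x, y): fx[x] * fxy[x][y] * fy[y]
             for x, y in product(range(2), repeat=2)}
    z = sum(joint.values())
    return [sum(v for (x, _), v in joint.items() if x == xv) / z
            for xv in range(2)]
```

On tree-structured graphs like this chain, sum-product is exact, so the two routes agree to machine precision; the paper's point is that a spiking network can approximate the same messages.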
Dynamics of Attentional Selection Under Conflict: Toward a Rational Bayesian Account
Cited by 15 (3 self)
Abstract
The brain exhibits remarkable facility in exerting attentional control in most circumstances, but it also suffers apparent limitations in others. The authors’ goal is to construct a rational account for why attentional control appears suboptimal under conditions of conflict and what this implies about the underlying computational principles. The formal framework used is based on Bayesian probability theory, which provides a convenient language for delineating the rationale and dynamics of attentional selection. The authors illustrate these issues with the Eriksen flanker task, a classical paradigm that explores the effects of competing sensory inputs on response tendencies. The authors show how 2 distinctly formulated models, based on compatibility bias and spatial uncertainty principles, can account for the behavioral data. They also suggest novel experiments that may differentiate these models. In addition, they elaborate a simplified model that approximates optimal computation and may map more directly onto the underlying neural machinery. This approximate model uses conflict monitoring, putatively mediated by the anterior cingulate cortex, as a proxy for compatibility representation. The authors also consider how this conflict information might be disseminated and used to control processing.
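One way to see how a compatibility bias produces flanker interference is a sequential log-odds update in which flanker evidence is weighted by the prior probability that flankers match the target. This is a bare-bones caricature with made-up drift values, not the authors' full formulation:

```python
import math

def flanker_posterior(target_drift, flanker_drift, p_compat, steps):
    """Sequential Bayesian-style update for the Eriksen flanker task.
    Per-step log-likelihood-ratio contributions from the target and from the
    flankers (the latter down-weighted by the compatibility prior p_compat)
    accumulate into log odds for response 'A'. Returns P(A) after each step."""
    log_odds = 0.0
    trace = []
    for _ in range(steps):
        log_odds += target_drift + p_compat * flanker_drift
        trace.append(1.0 / (1.0 + math.exp(-log_odds)))
    return trace
```

With incongruent flankers the net drift toward the correct response shrinks, so the posterior climbs more slowly, mirroring the slower, more error-prone responses seen behaviorally.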
Fading memory and time series prediction in recurrent networks with different forms of plasticity
Neural Networks, 2007
Cited by 15 (4 self)
Abstract
We investigate how different forms of plasticity shape the dynamics and computational properties of simple recurrent spiking neural networks. In particular, we study the effect of combining two forms of neuronal plasticity: spike-timing-dependent plasticity (STDP), which changes synaptic strength, and intrinsic plasticity (IP), which changes the excitability of individual neurons to maintain homeostasis of their activity. We find that the interaction of these forms of plasticity gives rise to interesting network dynamics characterized by a comparatively large number of stable limit cycles. We study the response of such networks to external input and find that they exhibit a fading memory of recent inputs. We then demonstrate that the combination of STDP and IP shapes the network structure and dynamics in ways that allow the discovery of patterns in input time series and lead to good performance in time series prediction. Our results underscore the importance of studying the interaction of different forms of plasticity on network behavior.
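The two plasticity rules this abstract combines can be written compactly. Below is a standard pair-based STDP window plus a threshold-based homeostatic rule standing in for intrinsic plasticity; the constants are conventional illustrative values, not the paper's parameters.

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one (dt = t_post - t_pre > 0, in ms), depress otherwise;
    the weight is clipped to [0, 1]."""
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return min(1.0, max(0.0, w))

def ip_update(threshold, rate, target_rate=0.1, eta=0.01):
    """Intrinsic plasticity as threshold homeostasis: firing above the target
    rate raises the threshold (less excitable), firing below lowers it."""
    return threshold + eta * (rate - target_rate)
```

Run together in a recurrent network, the first rule reshapes the weight structure while the second keeps each neuron's average activity near its target, the interaction the paper credits for the limit-cycle dynamics and fading memory.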