Results 1-10 of 46
The Bayesian reader: Explaining word recognition as an optimal Bayesian decision process
 Psychological Review
Cited by 33 (1 self)
This paper presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision-makers. This leads to the development of a computational model of word recognition, the Bayesian Reader. The Bayesian Reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision-makers.
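The core of such a model can be sketched in a few lines: the posterior over words combines a frequency-based prior with a letter-level likelihood of the noisy percept. The mini-lexicon, match probability, and confusion model below are illustrative assumptions, not the Bayesian Reader's actual parameters.

```python
import math

# Hypothetical mini-lexicon: word -> corpus frequency (the prior).
LEXICON = {"care": 50, "core": 10, "cart": 5}
TOTAL = sum(LEXICON.values())

P_MATCH = 0.8                    # assumed per-letter identification accuracy
P_MISMATCH = (1 - P_MATCH) / 25  # confusion spread over the other 25 letters

def log_prior(word):
    return math.log(LEXICON[word] / TOTAL)

def log_likelihood(word, percept):
    # Position-wise independent letter confusions.
    return sum(math.log(P_MATCH if w == p else P_MISMATCH)
               for w, p in zip(word, percept))

def posterior(percept):
    # Bayes' rule: P(word | percept) is proportional to
    # P(percept | word) * P(word), normalised over the lexicon.
    scores = {w: log_prior(w) + log_likelihood(w, percept) for w in LEXICON}
    m = max(scores.values())
    unnorm = {w: math.exp(s - m) for w, s in scores.items()}
    z = sum(unnorm.values())
    return {w: v / z for w, v in unnorm.items()}
```

With a clean percept the likelihood dominates, while ambiguous percepts are resolved by word frequency, which is the kind of interplay behind the model's frequency and neighborhood effects.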
Inference, attention, and decision in a Bayesian neural architecture
 Advances in Neural Information Processing Systems 17, 2005
Cited by 25 (4 self)
We study the synthesis of neural coding, selective attention, and perceptual decision making. A hierarchical neural architecture is proposed that implements Bayesian integration of noisy sensory input and top-down attentional priors, leading to sound perceptual discrimination. The model offers an explicit explanation for the experimentally observed modulation that prior information in one stimulus feature (location) can have on an independent feature (orientation). The network's intermediate levels of representation instantiate known physiological properties of visual cortical neurons. The model also illustrates a possible reconciliation of cortical and neuromodulatory representations of uncertainty.
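The location-prior-modulates-orientation effect can be illustrated with a toy joint-inference calculation. All quantities below (two locations, two orientations, the reliability numbers) are illustrative assumptions, not the paper's network:

```python
LOCS = [0, 1]   # candidate stimulus locations
ORIS = [0, 1]   # candidate orientations

P_CORRECT = 0.8  # assumed reliability of the measurement at the true location

def likelihood(meas, loc, ori):
    # If the stimulus is at `loc` with orientation `ori`, the local
    # measurement there reports `ori` with probability P_CORRECT;
    # measurements elsewhere are pure noise (uniform).
    p = 1.0
    for l in LOCS:
        if l == loc:
            p *= P_CORRECT if meas[l] == ori else 1 - P_CORRECT
        else:
            p *= 0.5
    return p

def orientation_posterior(meas, loc_prior):
    # Marginalise the joint posterior over location:
    # P(ori | meas) is proportional to sum_loc P(loc) P(meas | loc, ori).
    unnorm = {ori: sum(loc_prior[l] * likelihood(meas, l, ori) for l in LOCS)
              for ori in ORIS}
    z = sum(unnorm.values())
    return {o: v / z for o, v in unnorm.items()}

# Conflicting measurements: location 0 reports orientation 0, location 1
# reports orientation 1. A flat location prior leaves orientation ambiguous;
# a prior favouring location 0 resolves it.
flat = orientation_posterior([0, 1], [0.5, 0.5])
cued = orientation_posterior([0, 1], [0.9, 0.1])
```

The point of the sketch: the location prior changes the inferred orientation even though the two features are independent a priori, which is the modulation the abstract describes.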
Shortlist B: A Bayesian model of continuous speech recognition
 Psychological Review, 2008
Cited by 19 (0 self)
A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994;
Hierarchical Bayesian inference in networks of spiking neurons
 Advances in Neural Information Processing Systems 17, 2005
Cited by 18 (0 self)
There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons are used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4.
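The key mechanism, membrane potentials as log-domain messages and spiking probability as the posterior, can be caricatured for a two-state motion variable. The transition matrix and evidence values below are illustrative assumptions, not the paper's network parameters:

```python
import math

# Two direction-selective "neurons", one per motion direction (L, R).
# The membrane potential u[i] plays the role of the log posterior
# log P(direction_i | inputs so far); the spiking probability exp(u[i])
# then approximates the posterior itself.

TRANS = [[0.9, 0.1], [0.1, 0.9]]  # P(next direction | current direction)

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def step(u, log_lik):
    # Prediction through the dynamics (belief propagation in the log
    # domain), then addition of the new log-likelihood evidence.
    pred = [logsumexp([u[j] + math.log(TRANS[j][i]) for j in range(2)])
            for i in range(2)]
    u_new = [pred[i] + log_lik[i] for i in range(2)]
    z = logsumexp(u_new)
    return [x - z for x in u_new]  # keep normalised: exp(u) sums to 1

def spike_prob(u):
    return [math.exp(x) for x in u]

u = [math.log(0.5), math.log(0.5)]
# A stream of noisy evidence weakly favouring rightward motion.
for _ in range(5):
    u = step(u, [math.log(0.3), math.log(0.7)])
p = spike_prob(u)
```

After a few steps of weak rightward evidence, the "rightward" unit's spiking probability dominates, mirroring the motion-detection example in the abstract.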
What and where: A Bayesian inference theory of attention
 2010
Cited by 18 (5 self)
In the theoretical framework described in this thesis, attention is part of the inference process that solves the visual recognition problem of what is where. The theory proposes a computational role for attention and leads to a model that predicts some of its main properties at the level of psychophysics and physiology. In our approach, the main goal of the visual system is to infer the identity and the position of objects in visual scenes: spatial attention emerges as a strategy to reduce the uncertainty in shape information, while feature-based attention reduces the uncertainty in spatial information. Featural and spatial attention represent two distinct modes of a computational process solving the problem of recognizing and localizing objects, especially in difficult recognition tasks such as cluttered natural scenes. We describe a specific computational model and relate it to the known functional anatomy of attention. We show that several well-known attentional phenomena – including bottom-up pop-out effects, multiplicative modulation of neuronal tuning
Bayesian inference in spiking neurons
 Advances in Neural Information Processing Systems 17, 2004
Cited by 12 (1 self)
We propose a new interpretation of spiking neurons as Bayesian integrators accumulating evidence over time about events in the external world or the body, and communicating to other neurons their certainties about these events. In this model, spikes signal the occurrence of new information, i.e., what cannot be predicted from past activity. As a result, firing statistics are close to Poisson, albeit providing a deterministic representation of probabilities. We proceed to develop a theory of Bayesian inference in spiking neural networks, with recurrent interactions implementing a variant of belief propagation. Many perceptual and motor tasks performed by the central nervous system are probabilistic and can be described in a Bayesian framework [4, 3]. A few important but hidden properties, such as direction of motion or appropriate motor commands, are inferred from many noisy, local, and ambiguous sensory cues. These cues are combined with priors about the sensory world and body. Importantly, because most of these inferences should
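The "spike only on unpredicted information" idea can be sketched deterministically: a unit tracks the log-odds of a hidden event from its input and an internal estimate of what a downstream decoder would reconstruct from its own past spikes, firing only when the two diverge by a fixed amount. The leak and the log-odds value carried by one spike are assumed numbers, not the paper's derivation:

```python
# THETA is the log-odds increment signalled by one spike (assumed value).
THETA = 1.0

def run(evidence, leak=0.9):
    L, G, spikes = 0.0, 0.0, []
    for e in evidence:
        L = leak * L + e      # leaky accumulation of input log-odds
        G = leak * G          # the decoder's reconstruction decays too
        if L - G >= THETA:    # enough unpredicted information for a spike
            spikes.append(1)
            G += THETA        # the spike updates the decoder's estimate
        else:
            spikes.append(0)
    return spikes

# Constant strong evidence: the unit spikes, then stays quiet while its
# past spikes already predict the input, then spikes again as the
# decoder's estimate decays.
train = run([0.8] * 20)
```

Even with perfectly constant input the spike train is irregular, echoing the abstract's point that near-Poisson statistics can coexist with a deterministic representation of probability.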
Fading memory and time series prediction in recurrent networks with different forms of plasticity
 Neural Networks, 2007
Cited by 12 (4 self)
We investigate how different forms of plasticity shape the dynamics and computational properties of simple recurrent spiking neural networks. In particular, we study the effect of combining two forms of neuronal plasticity: spike-timing-dependent plasticity (STDP), which changes synaptic strength, and intrinsic plasticity (IP), which changes the excitability of individual neurons to maintain homeostasis of their activity. We find that the interaction of these forms of plasticity gives rise to interesting network dynamics characterized by a comparatively large number of stable limit cycles. We study the response of such networks to external input and find that they exhibit a fading memory of recent inputs. We then demonstrate that the combination of STDP and IP shapes the network structure and dynamics in ways that allow the discovery of patterns in input time series and lead to good performance in time series prediction. Our results underscore the importance of studying the interaction of different forms of plasticity on network behavior.
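The two rules being combined can be sketched in simplified form (assumed textbook-style equations and constants, not the paper's exact formulation): additive STDP on a synapse, plus IP that moves a neuron's threshold to hold its firing rate near a homeostatic target.

```python
import math

def stdp(w, dt, a_plus=0.05, a_minus=0.06, tau=20.0, w_max=1.0):
    # Additive STDP with dt = t_post - t_pre (ms): pre-before-post
    # (dt > 0) potentiates, post-before-pre depresses, and both
    # effects decay exponentially with |dt|. Weight is clipped.
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, 0.0), w_max)

def ip(threshold, fired, target_rate=0.1, eta=0.01):
    # Intrinsic plasticity: nudge the firing threshold up after a
    # spike and down otherwise, so the long-run firing rate settles
    # near target_rate (homeostasis of excitability).
    return threshold + eta * ((1.0 if fired else 0.0) - target_rate)
```

STDP alone reorganizes the weights; IP alone stabilizes rates; the abstract's claim is that their interaction is what produces the limit-cycle dynamics and fading memory.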
Dynamics of Attentional Selection Under Conflict: Toward a Rational Bayesian Account
Cited by 9 (1 self)
The brain exhibits remarkable facility in exerting attentional control in most circumstances, but it also suffers apparent limitations in others. The authors' goal is to construct a rational account for why attentional control appears suboptimal under conditions of conflict and what this implies about the underlying computational principles. The formal framework used is based on Bayesian probability theory, which provides a convenient language for delineating the rationale and dynamics of attentional selection. The authors illustrate these issues with the Eriksen flanker task, a classical paradigm that explores the effects of competing sensory inputs on response tendencies. The authors show how 2 distinctly formulated models, based on compatibility bias and spatial uncertainty principles, can account for the behavioral data. They also suggest novel experiments that may differentiate these models. In addition, they elaborate a simplified model that approximates optimal computation and may map more directly onto the underlying neural machinery. This approximate model uses conflict monitoring, putatively mediated by the anterior cingulate cortex, as a proxy for compatibility representation. The authors also consider how this conflict information might be disseminated and used to control processing.
Visual adaptation: Neural, psychological and computational aspects
 2007
Cited by 6 (0 self)
The term visual adaptation describes the processes by which the visual system alters its operating properties in response to changes in the environment. These continual adjustments in sensory processing are diagnostic as to the computational principles underlying the neural coding of information and can have profound consequences for our perceptual experience. New physiological and psychophysical data, along with emerging statistical and computational models, make this an opportune time to bring together experimental and theoretical perspectives. Here, we discuss functional ideas about adaptation in the light of recent data and identify exciting directions for future research.
Bayesian Filtering in Spiking Neural Networks: Noise, Adaptation, and Multisensory Integration
 2008
Cited by 4 (1 self)
Neural Computation, in press. A key requirement facing organisms acting in uncertain dynamic environments is the real-time estimation and prediction of environmental states, based upon which effective actions can be selected. While it is becoming evident that organisms employ exact or approximate Bayesian statistical calculations for these purposes, it is far less clear how these putative computations are implemented by neural networks in a strictly dynamic setting. In this work we make use of rigorous mathematical results from the theory of continuous-time point process filtering, and show how optimal real-time state estimation and prediction may be implemented in a general setting using simple recurrent neural networks. The framework is applicable to many situations of common interest, including noisy observations, non-Poisson spike trains (incorporating adaptation), multisensory integration, and state prediction. The optimal network properties are shown to relate to the statistical structure of the environment, and the benefits of adaptation are studied and explicitly demonstrated. Finally, we recover several existing results as appropriate limits of our general setting.
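A discrete-time caricature of the filtering problem: estimate a hidden two-state environment from an observed spike train whose rate depends on the state, via a simple recurrent posterior update. The rates, bin width, and transition matrix are illustrative assumptions; the paper works in continuous time with rigorous point-process results.

```python
DT = 0.01                             # time bin (s)
RATES = [5.0, 20.0]                   # spike rate in each hidden state (Hz)
TRANS = [[0.99, 0.01], [0.01, 0.99]]  # per-bin state transition probabilities

def filter_step(p, spike):
    # Predict the state through the Markov dynamics, then weight by
    # the Bernoulli approximation to the Poisson likelihood of seeing
    # (or not seeing) a spike in this bin, and renormalise.
    pred = [sum(p[j] * TRANS[j][i] for j in range(2)) for i in range(2)]
    lik = [(r * DT if spike else 1.0 - r * DT) for r in RATES]
    post = [pred[i] * lik[i] for i in range(2)]
    z = sum(post)
    return [x / z for x in post]

p = [0.5, 0.5]
# A burst of spikes drives the posterior toward the high-rate state.
for spike in [1, 1, 0, 1, 1, 1]:
    p = filter_step(p, spike)
```

The recurrence has the shape the abstract points to: a linear prediction through the dynamics followed by a multiplicative spike-driven correction, which is what a recurrent network has to implement.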