Results 1–10 of 57
Synaptic Basis of Cortical Persistent Activity: the Importance of NMDA Receptors to Working Memory
 J. Neurosci., 1999
Cited by 103 (15 self)

Abstract:
In this paper I present a network model of spiking neurons in which synapses are endowed with realistic gating kinetics, based on experimentally measured dynamical properties of cortical synapses. I will focus on how delay-period activity could be generated by neuronally plausible mechanisms; the issue of memory field formation will be addressed in a separate study. A main problem to be investigated is that of "rate control" for a persistent state: if robust persistent activity necessitates strong recurrent excitatory connections, how can the network be prevented from runaway excitation in spite of the powerful positive feedback, so that neuronal firing rates remain low and comparable to those of PFC cells (10–50 Hz)? Moreover, a persistent state may be destabilized by network dynamics: for example, fast recurrent excitation followed by a slower negative feedback may lead to network instability and a collapse of the persistent state. It is shown that persistent states at low firing rates are usually stable only in the presence of sufficiently slow excitatory synapses of the NMDA type. Functional implications of these results for the role of ...
Received April 14, 1999; revised Aug. 12, 1999; accepted Aug. 12, 1999.
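The rate-control argument above can be illustrated with a minimal mean-field sketch: a single excitatory population whose recurrent feedback passes through one slow (NMDA-like) synaptic gating variable. A brief cue switches the network from a low spontaneous rate into a self-sustained delay state at a moderate firing rate. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def f(x, r_max=50.0, theta=10.0, k=2.0):
    """Sigmoidal rate function (Hz); parameters are illustrative."""
    return r_max / (1.0 + np.exp(-(x - theta) / k))

def simulate(tau_s=0.100, w=50.0, gamma=0.01, dt=0.001, T=3.0):
    """Mean-field rate model with one slow (NMDA-like) gating variable s."""
    n = int(T / dt)
    s = 0.0
    rates = np.zeros(n)
    for i in range(n):
        t = i * dt
        I_ext = 20.0 if 0.5 <= t < 1.0 else 0.0   # transient cue
        r = f(w * s + I_ext)                       # recurrent + external drive
        s += dt * (-s + gamma * r) / tau_s         # slow synaptic gating
        rates[i] = r
    return rates

rates = simulate()
print(f"pre-cue rate {rates[400]:.1f} Hz, delay rate {rates[-1]:.1f} Hz")
```

Because the recurrent drive is filtered through the slow variable s, the positive feedback ramps up gradually and the network settles at a low-rate persistent state instead of running away.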
Bayesian computation in recurrent neural circuits
 Neural Computation, 2004
Cited by 59 (4 self)

Abstract:
A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this paper, we show that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model. We illustrate the approach using an orientation discrimination task and a visual motion detection task. In the case of orientation discrimination, we show that the model network can infer the posterior distribution over orientations and correctly estimate stimulus orientation in the presence of significant noise. In the case of motion detection, we show that the resulting model network exhibits direction selectivity and correctly computes the posterior probabilities over motion direction and position. When used to solve the well-known random-dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. The framework introduced in the paper posits a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world.
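The inference the paper maps onto cortical circuits is the standard recursive Bayesian (forward-filtering) update for a hidden Markov model, which can be sketched directly. The two-state transition and emission probabilities below are illustrative numbers, not the paper's.

```python
import numpy as np

def forward_filter(obs, T, E, prior):
    """Recursive Bayesian filtering for a discrete HMM.
    T[i, j] = p(z_t = j | z_{t-1} = i);  E[j, o] = p(x_t = o | z_t = j)."""
    post = prior.copy()
    history = []
    for o in obs:
        pred = post @ T          # one-step prediction through the transitions
        post = pred * E[:, o]    # multiply by the observation likelihood
        post /= post.sum()       # normalize to a posterior
        history.append(post.copy())
    return np.array(history)

# Two hidden states, noisy binary observations (illustrative numbers).
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])
E = np.array([[0.8, 0.2],
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])

obs = [0, 0, 0, 1, 1, 1]         # evidence first favors state 0, then state 1
post = forward_filter(obs, T, E, prior)
print(post[-1])                  # posterior after all observations
```

The running posterior accumulates evidence gradually, which is the behavior the paper relates to evidence-accumulating neurons in LIP and FEF.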
The Rectified Gaussian Distribution
 Advances in Neural Information Processing Systems 10, 1998
Cited by 33 (2 self)

Abstract:
A simple but powerful modification of the standard Gaussian distribution is studied. The variables of the rectified Gaussian are constrained to be nonnegative, enabling the use of nonconvex energy functions. Two multimodal examples, the competitive and cooperative distributions, illustrate the representational power of the rectified Gaussian. Since the cooperative distribution can represent the translations of a pattern, it demonstrates the potential of the rectified Gaussian for modeling pattern manifolds.
1 INTRODUCTION
The rectified Gaussian distribution is a modification of the standard Gaussian in which the variables are constrained to be nonnegative. This simple modification brings increased representational power, as illustrated by two multimodal examples of the rectified Gaussian, the competitive and the cooperative distributions. The modes of the competitive distribution are well-separated by regions of low probability. The modes of the cooperative distribution are closely spaced ...
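A mode of a rectified Gaussian can be sketched as a nonnegativity-constrained minimum of the quadratic energy E(x) = ½xᵀAx − bᵀx, found by projected gradient descent. The 2×2 "competitive" matrix below (mutual inhibition stronger than the self-interaction, so E is nonconvex on the orthant yet still bounded below there) is an illustrative choice, not the paper's.

```python
import numpy as np

def projected_gradient_mode(A, b, x0, lr=0.05, steps=500):
    """Find a mode of the rectified Gaussian p(x) ∝ exp(-E(x)) on x ≥ 0,
    where E(x) = 0.5 xᵀA x − bᵀx, by projected gradient descent (sketch)."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * (A @ x - b)      # gradient step on the energy
        np.maximum(x, 0.0, out=x)  # project onto the nonnegative orthant
    return x

# Competitive toy example: strong mutual inhibition creates two modes,
# each with only one variable active.
A = np.array([[1.0, 1.5],
              [1.5, 1.0]])
b = np.array([1.0, 1.0])

m1 = projected_gradient_mode(A, b, np.array([0.9, 0.2]))
m2 = projected_gradient_mode(A, b, np.array([0.2, 0.9]))
print(m1, m2)   # two distinct modes, depending on the initial condition
```

Different initial conditions land in different modes, which is the multimodality the competitive distribution is meant to illustrate.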
Learning continuous attractors in recurrent networks
 Advances in Neural Information Processing Systems, 1998
Cited by 29 (5 self)

Abstract:
One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn. From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised learning in terms of regression rather than density estimation. A classic approach to invariant object recognition is to use a recurrent neural network as an associative memory [1]. In spite of the intuitive appeal and biological plausibility of this approach, it has largely been abandoned in practical applications.
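The pattern-completion idea can be illustrated with a hand-built (not learned) linear sketch: the continuous family of patterns cos(x − θ) stored in cosine recurrent weights forms a ring of attractor states, and one pass through the weights fills in the missing half of a cue. All details are illustrative, not the paper's trained network.

```python
import numpy as np

N = 100
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Recurrent weights storing the continuous family of patterns cos(x - theta).
W = (2.0 / N) * np.cos(x[:, None] - x[None, :])

theta = 1.3                      # which member of the family to store
cue = np.cos(x - theta)
cue[N // 2:] = 0.0               # present only half the pattern

completed = W @ cue              # one pass through the recurrent weights
est = np.arctan2(completed @ np.sin(x), completed @ np.cos(x))
print(f"stored angle {theta:.3f}, recovered angle {est:.3f}")
```

Because every θ yields a valid stored pattern, the network represents the whole one-parameter manifold rather than a discrete set of fixed points.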
A recurrent network model of somatosensory parametric working memory in the prefrontal cortex
 Cereb. Cortex, 2003
Cited by 28 (4 self)

Abstract:
A parametric working memory network stores the information of an analog stimulus in the form of persistent neural activity that is monotonically tuned to the stimulus. The family of persistent firing patterns with a continuous range of firing rates must all be realizable under exactly the same external conditions (during the delay, when the transient stimulus has been withdrawn). How this can be accomplished by neural mechanisms remains an unresolved question. Here we present a recurrent cortical network model of irregularly spiking neurons that was designed to simulate a somatosensory working memory experiment with behaving monkeys. Our model reproduces the observed positively and negatively monotonic persistent activity and heterogeneous tuning curves of memory activity. We show that fine-tuning mathematically corresponds to a precise alignment of cusps in the bifurcation diagram of the network. Moreover, we show that the fine-tuned network can integrate stimulus inputs over several seconds. Assuming that such time integration occurs in neural populations downstream from a tonically persistent neural population, our model is able to account for the slow ramping-up and ramping-down behaviors of neurons observed in prefrontal cortex.
The Autapse: A Simple Illustration of Short-Term Analog Memory Storage by Tuned Synaptic Feedback
, 2000
Cited by 16 (2 self)

Abstract:
According to a popular hypothesis, short-term memories are stored as persistent neural activity maintained by synaptic feedback loops. This hypothesis has been formulated mathematically in a number of recurrent network models. Here we study an abstraction of these models: a single neuron with a synapse onto itself, or autapse. This abstraction cannot simulate the way in which persistent activity patterns are distributed over neural populations in the brain. However, with proper tuning of parameters, it does reproduce the continuously graded, or analog, nature of many examples of persistent activity. The conditions for tuning are derived for the dynamics of a conductance-based model neuron with a slow excitatory autapse. The derivation uses the method of averaging to approximate the spiking model with a nonspiking, reduced model. Short-term analog memory storage is possible if the reduced model is approximately linear, and its feedforward bias and autapse strength are precisely ...
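The reduced model described above can be sketched as a non-spiking rate unit with a self-synapse, τ dx/dt = −x + w·x + I(t) (the feedforward bias term is omitted here for brevity). With the feedback precisely tuned to w = 1 the unit behaves as a perfect integrator and holds any graded value; a few percent of mistuning makes the stored value leak away. Parameters are illustrative.

```python
import numpy as np

def run_autapse(w, pulses, dt=0.001, tau=0.1, T=6.0):
    """Reduced autapse model: tau * dx/dt = -x + w*x + I(t).
    pulses is a list of (t_on, t_off, amplitude) input pulses."""
    n = int(T / dt)
    x = 0.0
    trace = np.zeros(n)
    for i in range(n):
        t = i * dt
        I = sum(amp for (t0, t1, amp) in pulses if t0 <= t < t1)
        x += dt * (-x + w * x + I) / tau   # leaky feedback + input
        trace[i] = x
    return trace

pulses = [(1.0, 1.2, 1.0), (3.0, 3.2, 1.0)]  # two brief input pulses
tuned = run_autapse(w=1.0, pulses=pulses)    # perfect integrator
detuned = run_autapse(w=0.95, pulses=pulses) # 5% mistuning: memory leaks
print(f"tuned memory {tuned[-1]:.2f}, detuned memory {detuned[-1]:.2f}")
```

The tuned trace steps up with each pulse and then holds its value indefinitely, while the mistuned trace decays between and after the pulses, illustrating why the tuning condition is so delicate.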
Coordinate Transformations in the Visual System: How to Generate Gain Fields and What to Compute with Them
 In Principles of Neural Ensemble and Distributed Coding in the Nervous System, 2001
Cited by 15 (1 self)

Abstract:
Introduction: Studies of population coding, which explore how the activity of ensembles of neurons represents the external world, normally focus on the accuracy and reliability with which sensory information is represented. However, the encoding strategies used by neural circuits have undoubtedly been shaped by the way the encoded information is used. The point of encoding sensory information is, after all, to generate and guide behavior. The ease and efficiency with which sensory information can be processed to generate motor responses must be an important factor in determining the nature of a neuronal population code. In other words, to understand how populations of neurons encode, we cannot overlook how they compute. Gain modulation, which is seen in many cortical areas, is a change in the response amplitude of a neuron that is not accompanied by a modification of response selectivity. Just as population coding is a ubiquitous form of information representation, gain modulation ...
Existence and Stability of Standing Pulses in Neural Networks
 I. Existence. SIAM Journal on Applied Dynamical Systems, 2003
Cited by 15 (1 self)

Abstract:
We analyze the stability of standing pulse solutions of a neural network integro-differential equation. The network consists of a coarse-grained layer of neurons synaptically connected by lateral inhibition, with a nonsaturating nonlinear gain function. When two standing single-pulse solutions coexist, the small pulse is unstable and the large pulse is stable. The large single pulse is bistable with the “all-off” state. This bistable localized activity may have strong implications for the mechanism underlying working memory. We show that dimple pulses have stability properties similar to those of large pulses, but double pulses are unstable.