Results 1–10 of 52
Mutual information, Fisher information and population coding
Neural Computation, 1998
Cited by 61 (3 self)

Abstract:
In the context of parameter estimation and model selection, it is only quite recently that a direct link between the Fisher information and information-theoretic quantities has been exhibited. We give an interpretation of this link within the standard framework of information theory. We show that in the context of population coding, the mutual information between the activity of a large array of neurons and a stimulus to which the neurons are tuned is naturally related to the Fisher information. In the light of this result we consider the optimization of the tuning-curve parameters in the case of neurons responding to a stimulus represented by an angular variable. To appear in Neural Computation Vol. 10, Issue 7, published by the MIT Press.
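To illustrate the Fisher-information quantity that this abstract relates to mutual information, here is a minimal numerical sketch for an array of independent Poisson neurons with von Mises tuning to an angular stimulus. All parameter values (`kappa`, `rmax`, the number of neurons) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def tuning(theta, centers, kappa=2.0, rmax=10.0):
    """Mean firing rates of neurons with von Mises tuning curves
    centered at `centers`, evaluated at stimulus angle `theta`."""
    return rmax * np.exp(kappa * (np.cos(theta - centers) - 1.0))

def fisher_info(theta, centers, kappa=2.0, rmax=10.0, eps=1e-5):
    """Population Fisher information for independent Poisson neurons:
    J(theta) = sum_i f_i'(theta)^2 / f_i(theta)."""
    f = tuning(theta, centers, kappa, rmax)
    # Central finite difference of the tuning curves w.r.t. theta
    df = (tuning(theta + eps, centers, kappa, rmax)
          - tuning(theta - eps, centers, kappa, rmax)) / (2.0 * eps)
    return np.sum(df ** 2 / f)
```

For a dense uniform array of tuning-curve centers, J is nearly independent of the stimulus angle and grows linearly with the number of neurons, which is the large-population regime in which the link to mutual information emerges.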
Bayesian computation in recurrent neural circuits
Neural Computation, 2004
Cited by 59 (4 self)

Abstract:
A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this paper, we show that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model. We illustrate the approach using an orientation discrimination task and a visual motion detection task. In the case of orientation discrimination, we show that the model network can infer the posterior distribution over orientations and correctly estimate stimulus orientation in the presence of significant noise. In the case of motion detection, we show that the resulting model network exhibits direction selectivity and correctly computes the posterior probabilities over motion direction and position. When used to solve the well-known random-dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. The framework introduced in the paper posits a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world.
On Decoding the Responses of a Population of Neurons from Short Time Windows
1999
Cited by 33 (3 self)

Abstract:
The effectiveness of various stimulus identification (decoding) procedures for extracting the information carried by the responses of a population of neurons to a set of repeatedly presented stimuli is studied analytically, in the limit of short time windows. It is shown that in this limit, the entire information content of the responses can sometimes be decoded, and when this is not the case, the lost information is quantified. In particular, the mutual information extracted by taking into account only the most likely stimulus in each trial turns out to be, if not equal to the true value, much closer to it than that calculated from all the probabilities that each of the possible stimuli in the set was the actual one. The relation between the mutual information extracted by decoding and the percentage of correct stimulus decodings is also derived analytically in the same limit, showing that the metric content index can be estimated reliably from a few cells recorded over brief periods. Computer simulations as well as the activity of real neurons recorded in the primate hippocampus serve to confirm these results and illustrate the utility and limitations of the approach.
Neural Decoding of Cursor Motion Using a Kalman Filter
2003
Cited by 31 (11 self)

Abstract:
The direct neural control of external devices such as computer displays or prosthetic limbs requires the accurate decoding of neural activity representing continuous movement. We develop a real-time control system using the spiking activity of approximately 40 neurons recorded with an electrode array implanted in the arm area of primary motor cortex. In contrast to previous work, we develop a control-theoretic approach that explicitly models the motion of the hand and the probabilistic relationship between this motion and the mean firing rates of the cells in 70 ms bins. We focus on a realistic cursor-control task in which the subject must move a cursor to "hit" randomly placed targets on a computer monitor. Encoding and decoding of the neural data are achieved with a Kalman filter, which has a number of advantages over previous linear filtering techniques. In particular, the Kalman filter reconstructions of hand trajectories in offline experiments are more accurate than previously reported results, and the model provides insights into the nature of the neural coding of movement.
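The state-space decoder described in this abstract can be sketched as a standard Kalman filter: the hand state evolves linearly with Gaussian noise, and binned firing rates are modeled as linear-Gaussian observations of that state. The following minimal implementation is a generic sketch; the matrix dimensions and parameters are hypothetical, not the ones fitted in the paper:

```python
import numpy as np

def kalman_decode(Z, A, Q, H, R, x0, P0):
    """Kalman-filter decoding. Z is a (T x m) array of observations
    (e.g. binned firing rates); returns the (T x n) filtered state estimates.
    Model: x_t = A x_{t-1} + w, w ~ N(0, Q);  z_t = H x_t + v, v ~ N(0, R)."""
    x, P = x0, P0
    estimates = []
    for z in Z:
        # Predict: propagate the state and its covariance forward one step
        x = A @ x
        P = A @ P @ A.T + Q
        # Update: correct the prediction with the new observation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

In the paper's setting the state would hold hand kinematics and H would be the fitted tuning matrix; here both are left generic.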
Common-input models for multiple neural spike-train data
Network: Comput. Neural Syst., 2006
Cited by 30 (17 self)

Abstract:
Recent developments in multi-electrode recordings enable the simultaneous measurement of the spiking activity of many neurons. Analysis of such multi-neuronal data is one of the key challenges in computational neuroscience today. In this work, we develop a multivariate point-process model in which the observed activity of a network of neurons depends on three terms: 1) the experimentally controlled stimulus; 2) the spiking history of the observed neurons; and 3) a latent noise source that corresponds, for example, to “common input” from an unobserved population of neurons that is presynaptic to two or more cells in the observed population. We develop an expectation-maximization algorithm for fitting the model parameters; here the expectation step is based on a continuous-time implementation of the extended Kalman smoother, and the maximization step involves two concave maximization problems which may be solved in parallel. The techniques developed allow us to solve a variety of inference problems in a straightforward, computationally efficient fashion; for example, we may use the model to predict network activity given an arbitrary stimulus, infer a neuron’s firing rate given the stimulus and the activity of the other observed neurons, and perform optimal stimulus decoding and prediction. We present several detailed simulation studies which explore the strengths and limitations of our approach.
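The three-term observation model in this abstract can be caricatured in a few lines. The sketch below is a deliberately simplified single-neuron, one-bin-history version with made-up scalar parameters (`k`, `h`, `c`), not the paper's multivariate model or its EM fitting procedure; it just evaluates the discrete-time point-process log-likelihood given a stimulus, the spike history, and a shared latent input `q`:

```python
import numpy as np

def log_likelihood(spikes, stim, q, k, h, c, dt=0.001):
    """Poisson log-likelihood of a binary spike train under the conditional
    intensity lambda[t] = exp(k*stim[t] + h*spikes[t-1] + c*q[t])."""
    hist = np.concatenate(([0.0], spikes[:-1]))  # one-bin spike history
    lam = np.exp(k * stim + h * hist + c * q)    # conditional intensity (Hz)
    # Discrete-time point-process likelihood:
    # spike term minus the integrated intensity over each bin
    return np.sum(spikes * np.log(lam * dt) - lam * dt)
```

Note that this log-likelihood is concave in each of the coefficients separately, which is the property that makes the M-step maximizations in an EM fit tractable.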
Neuronal Tuning: To Sharpen or Broaden?
1999
Cited by 30 (1 self)

Abstract:
Sensory and motor variables are typically represented by a population of broadly tuned neurons. A coarser representation with broader tuning can often improve coding accuracy, but sometimes the accuracy may also improve with sharper tuning. The theoretical analysis here shows that the relationship between tuning width and accuracy depends crucially on the dimension of the encoded variable. A general rule is derived for how the Fisher information scales with the tuning width, regardless of the exact shape of the tuning function or the probability distribution of spikes, and even allowing some correlated noise between neurons. These results demonstrate a universal dimensionality effect in neural population coding.
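The scaling rule this abstract refers to can be checked numerically in the one-dimensional case, where the population Fisher information of a dense array of independent Poisson neurons with Gaussian tuning scales as 1/σ, so sharper tuning helps when D = 1. A minimal sketch with made-up parameters (`rmax`, center spacing):

```python
import numpy as np

def fisher_1d(sigma, theta=0.0, rmax=10.0, spacing=0.05, extent=5.0):
    """Population Fisher information at stimulus value `theta` for a dense
    uniform array of independent Poisson neurons with Gaussian tuning of
    width `sigma`: J = sum_i f_i'(theta)^2 / f_i(theta)."""
    centers = np.arange(-extent, extent, spacing)
    f = rmax * np.exp(-(theta - centers) ** 2 / (2.0 * sigma ** 2))
    # f'^2 / f written analytically as f * ((c - theta) / sigma^2)^2,
    # which avoids dividing by rates that underflow to zero in the far tails
    return np.sum(f * ((centers - theta) / sigma ** 2) ** 2)
```

For D-dimensional Gaussian tuning the analogous sum scales as σ^(D−2): narrowing helps for D = 1, is neutral for D = 2, and hurts for D ≥ 3, which is the dimensionality effect described above.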
A new look at statespace models for neural data
Journal of Computational Neuroscience, 2010
Cited by 28 (19 self)

Abstract:
State-space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations but nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike-train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially varying firing rates.
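The "bandedness" point at the end of this abstract can be illustrated with a toy example: under a Gaussian random-walk prior with Gaussian observations, the MAP smoother reduces to solving a single tridiagonal (banded) linear system, which costs O(T) rather than O(T³). The sketch below is illustrative only (a quadratic toy objective, not the paper's point-process models), solving (I + λD⊤D)x = y with the classic Thomas algorithm:

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(n) (Thomas algorithm).
    sub[i], diag[i], sup[i] are the sub-, main, and super-diagonal of row i."""
    n = len(diag)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):              # forward elimination
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def smooth(y, lam):
    """MAP smoothing under a random-walk prior: minimize
    sum_t (x_t - y_t)^2 + lam * sum_t (x_{t+1} - x_t)^2,
    i.e. solve the tridiagonal system (I + lam * D'D) x = y."""
    n = len(y)
    diag = np.full(n, 1.0 + 2.0 * lam)
    diag[0] = diag[-1] = 1.0 + lam     # boundary rows of D'D
    sub = np.full(n, -lam); sub[0] = 0.0
    sup = np.full(n, -lam); sup[-1] = 0.0
    return thomas(sub, diag, sup, y)
```

The same O(T) structure is what the direct optimization methods exploit: each Newton step on the full state path only requires a banded solve.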
Dynamic Analyses of Information Encoding in Neural Ensembles
Neural Computation, 2004
Cited by 22 (1 self)

Abstract:
Neural spike-train decoding algorithms and techniques to compute Shannon mutual information are important methods for analyzing how neural systems represent biological signals. Decoding algorithms are also one of several strategies being used to design controls for brain-machine interfaces. Developing optimal strategies to design decoding algorithms and compute mutual information are therefore important problems in computational neuroscience. We present a general recursive filter decoding algorithm based on a point-process model of individual neuron spiking activity and a linear stochastic state-space model of the biological signal. We derive from the algorithm new instantaneous estimates of the entropy, entropy rate, and the mutual information between the signal and the ensemble spiking activity. We assess the accuracy of the algorithm by computing, along with the decoding error, the true coverage probability of the approximate 0.95 confidence regions for the individual signal estimates. We illustrate the new algorithm by reanalyzing the position and ensemble neural spiking activity of CA1 hippocampal neurons from two rats foraging in an open circular environment. We compare the performance of this algorithm with a linear filter constructed by the widely used reverse-correlation method. The median decoding error for Animal 1 (2) during 10 minutes of open foraging was 5.9 (5.5) cm, the median entropy was 6.9 (7.0) bits, the median information was 9.4 (9.4) bits, and the true coverage probability for 0.95 confidence regions was 0.67 (0.75) using 34 (32) neurons. These findings improve significantly on our previous results and suggest an integrated approach to dynamically reading neural codes, measuring their properties, and quantifying the accuracy with which encoded information is extracted.
Multi-Dimensional Encoding Strategy of Spiking Neurons
Neural Computation, 2000
Cited by 21 (5 self)

Abstract:
Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of (i) narrow tuning in the dimension to be encoded, to increase the single-neuron Fisher information, and (ii) broad tuning in all other dimensions, to increase the number of active neurons. Extremely narrow tuning without sufficient receptive-field overlap will severely worsen the coding. This implies the existence of an optimal tuning width for the feature to be encoded. Empirically, only a subset of all stimulus features will normally be accessible. In this case, relative encoding errors can be calculated, which yield a criterion for the function of a neural population based on the measured tuning curves.
Error-Backpropagation in Temporally Encoded Networks of Spiking Neurons
Neurocomputing, 2000
Cited by 20 (1 self)

Abstract:
For a network of spiking neurons that encodes information in the timing of individual spikes, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm, we demonstrate how networks of spiking neurons with biologically reasonable action potentials can perform complex nonlinear classification in fast temporal coding just as well as rate-coded networks. We perform experiments for the classical XOR problem, when posed in a temporal setting, as well as for a number of other benchmark datasets. Comparing the (implicit) number of spiking neurons required for the encoding of the interpolated XOR problem, it is demonstrated that temporal coding requires significantly fewer neurons than instantaneous rate coding. 2000 Mathematics Subject Classification: 82C32, 68T05, 68T10, 68T30, 92B20. 1998 ACM Computing Classification System: C.1.3, F.1.1, I.2.6, I.5.1.