Results 1–10 of 153
Predicting Every Spike: A Model for the Responses of Visual Neurons
, 2001
Abstract

Cited by 109 (2 self)
... generate highly variable spike trains because they receive large numbers of unsynchronized synaptic inputs and because many of these are not under experimental control. By contrast, neurons in the early visual system, from the retina to the lateral geniculate nucleus (LGN) to area V1, can deliver remarkably reproducible spike trains, whose trial-to-trial variability is clearly lower than predicted from the simple firing rate formalism (Berry et al. ...) ... second precision on subsequent stimulus repeats. Here we develop a mathematical description of the firing process that, given the recent visual input, accurately predicts the timing of individual spikes. The formalism is successful in matching the spike trains from retinal ganglion cells in salamander, rabbit, and cat, as well as from lateral geniculate nucleus neurons in cat. It adapts to many different response typ ...
Analyzing neural responses to natural signals: Maximally informative dimensions
 in Advances in Neural Information Processing Systems 15
, 2004
Abstract

Cited by 71 (13 self)
We propose a method that allows for a rigorous statistical analysis of neural responses to natural stimuli which are non-Gaussian and exhibit strong correlations. We have in mind a model in which neurons are selective for a small number of stimulus dimensions out of a high dimensional stimulus space, but within this subspace the responses can be arbitrarily nonlinear. Existing analysis methods are based on correlation functions between stimuli and responses, but these methods are guaranteed to work only in the case of Gaussian stimulus ensembles. As an alternative to correlation functions, we maximize the mutual information between the neural responses and projections of the stimulus onto low dimensional subspaces. The procedure can be done iteratively by increasing the dimensionality of this subspace. Those dimensions that allow the recovery of all of the information between spikes and the full unprojected stimuli describe the relevant subspace. If the dimensionality of the relevant subspace indeed is small, it becomes feasible to map the neuron's input–output function even under fully natural stimulus conditions. These ideas are illustrated in simulations on model visual and auditory neurons responding to natural scenes and sounds, respectively.
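The core idea of this abstract, finding stimulus dimensions by maximizing mutual information between spikes and stimulus projections, can be illustrated numerically. The sketch below is not the authors' algorithm; it only shows, for a hypothetical model neuron selective for one direction of a Gaussian stimulus, that a plug-in information estimate singles out the relevant dimension. All parameter values and the model neuron itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_info_bits(spikes, proj, n_bins=8):
    """Plug-in estimate of I(spike; binned projection) in bits."""
    edges = np.quantile(proj, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, proj, side="right") - 1, 0, n_bins - 1)
    joint = np.zeros((2, n_bins))
    np.add.at(joint, (spikes, bins), 1.0)
    joint /= joint.sum()
    ps, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pb)[nz])).sum())

# Hypothetical model neuron: selective for one direction of a 5-D Gaussian stimulus.
d, n = 5, 40000
v_true = np.zeros(d); v_true[0] = 1.0
x = rng.standard_normal((n, d))
p_spike = 1.0 / (1.0 + np.exp(-3.0 * (x @ v_true)))   # sigmoidal nonlinearity
spikes = (rng.random(n) < p_spike).astype(int)

# The relevant dimension carries far more information than an orthogonal one.
v_orth = np.zeros(d); v_orth[1] = 1.0
info_true = mutual_info_bits(spikes, x @ v_true)
info_orth = mutual_info_bits(spikes, x @ v_orth)
```

In the full method one would search over candidate directions to maximize this quantity, then add dimensions until no spike information remains unexplained.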
Prediction and Decoding of Retinal Ganglion Cell Responses with a Probabilistic Spiking Model
, 2005
Abstract

Cited by 66 (21 self)
... generation. We show that the stimulus selectivity, reliability, and timing precision of primate retinal ganglion cell (RGC) light responses can be reproduced accurately with a simple model consisting of a leaky integrate-and-fire spike generator driven by a linearly filtered stimulus, a post-spike current, and a Gaussian noise current. We fit model parameters for individual RGCs by maximizing the likelihood of observed spike responses to a stochastic visual stimulus. Although compact, the fitted model predicts the detailed time structure of responses to novel stimuli, accurately capturing the interaction between the spiking history and sensory stimulus selectivity. The model also accounts for the variability in responses to repeated stimuli, even when fit to data from a single (nonrepeating) stimulus sequence. Finally, the model can be used to derive an explicit, maximum-likelihood decoding rule for neural spike trains, thus providing a tool for assessing the limitations that spiking variability imposes on sensory performance.
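The model class in this abstract, a linearly filtered stimulus driving a leaky integrate-and-fire spike generator with a post-spike current and noise, can be simulated in a few lines. The parameters below (filter time constants, threshold, after-current amplitude) are arbitrary illustrative choices, not fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps = 0.001, 5000                 # 1 ms steps, 5 s of simulation
tau_m, v_thresh, v_reset = 0.010, 1.0, 0.0

# Linearly filtered stimulus: white noise convolved with a 30 ms exponential.
t_k = np.arange(0, 0.090, dt)
kernel = np.exp(-t_k / 0.030)
stim = rng.standard_normal(n_steps)
filtered = np.convolve(stim, kernel)[:n_steps] / kernel.sum()
drive = 1.2 + 0.5 * filtered              # suprathreshold mean drive

# Post-spike current kernel: a brief after-hyperpolarization (illustrative).
h_k = -2.0 * np.exp(-t_k / 0.020)

v, spikes = 0.0, []
i_hist = np.zeros(n_steps + len(h_k))     # accumulated post-spike current
for t in range(n_steps):
    noise = 0.2 * np.sqrt(dt / tau_m) * rng.standard_normal()
    v += (dt / tau_m) * (-v + drive[t] + i_hist[t]) + noise
    if v >= v_thresh:                     # threshold crossing: emit a spike
        spikes.append(t * dt)
        v = v_reset
        i_hist[t + 1:t + 1 + len(h_k)] += h_k   # inject post-spike current
```

Fitting, as the abstract describes, would then maximize the likelihood of recorded spike times over the filter, post-spike kernel, and noise parameters; the simulation above only shows the forward model.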
Negative Interspike Interval Correlations Increase the Neuronal Capacity for Encoding Time-Dependent Stimuli
 J. Neurosci
, 2001
Abstract

Cited by 58 (16 self)
this paper, we show that negative interspike interval (ISI) correlations, i.e., the tendency for long ISIs to be followed by short ISIs (and vice versa), reduce spike count variability, whereas positive ISI correlations increase spike count variability. Together, these effects lead to an optimal spike counting time at which discriminability is maximal ...
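The central effect claimed here, that negative serial ISI correlations reduce spike count variability, can be checked with a toy process: AR(1) interspike intervals with a negative coefficient, compared against the same intervals shuffled. All parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, phi = 200000, 0.020, -0.6   # 20 ms mean ISI, negative serial correlation

# AR(1) interspike intervals: a long ISI tends to be followed by a short one.
eps = 0.004 * rng.standard_normal(n)
isi = np.empty(n)
isi[0] = mu
for i in range(1, n):
    isi[i] = mu + phi * (isi[i - 1] - mu) + eps[i]

def window_variance(intervals, per_window=20):
    """Variance of summed ISIs over non-overlapping windows: a proxy for
    spike-count variability at a fixed counting time."""
    m = len(intervals) // per_window
    return intervals[:m * per_window].reshape(m, per_window).sum(axis=1).var()

var_corr = window_variance(isi)                     # negative correlations intact
var_shuf = window_variance(rng.permutation(isi))    # correlations destroyed
```

Shuffling preserves the ISI distribution but destroys the serial correlations, so the comparison isolates exactly the effect the abstract describes: with phi < 0 the windowed variance is markedly smaller.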
Synergy, Redundancy, and Independence in Population Codes
 The Journal of Neuroscience
, 2003
Abstract

Cited by 58 (0 self)
A key issue in understanding the neural code for an ensemble of neurons is the nature and strength of correlations between neurons and how these correlations are related to the stimulus. The issue is complicated by the fact that there is not a single notion of independence or lack of correlation. We distinguish three kinds: (1) activity independence; (2) conditional independence; and (3) information independence. Each notion is related to an information measure: the information between cells, the information between cells given the stimulus, and the synergy of cells about the stimulus, respectively. We show that these measures form an interrelated framework for evaluating contributions of signal and noise correlations to the joint information conveyed about the stimulus and that at least two of the three measures must be calculated to characterize a population code. This framework is compared with others recently proposed in the literature. In addition, we distinguish questions about how information is encoded by a population of neurons from how that information can be decoded. Although information theory is natural and powerful for questions of encoding, it is not sufficient for characterizing the process of decoding. Decoding fundamentally requires an error measure that quantifies the importance of the deviations of estimated stimuli from actual stimuli. Because there is no a priori choice of error measure, questions about decoding cannot be put on the same level of generality as for encoding.
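The three information measures distinguished in this abstract can be computed exactly on a tiny discrete example. The XOR population below is a standard textbook case, not taken from the paper: it is activity-independent yet fully synergistic, so the three notions visibly come apart.

```python
import numpy as np

def H(p):
    """Entropy in bits of a probability array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy population: two binary cells, stimulus S = R1 XOR R2,
# all four response pairs equally likely.  Indices of p are (s, r1, r2).
p = np.zeros((2, 2, 2))
for r1 in (0, 1):
    for r2 in (0, 1):
        p[r1 ^ r2, r1, r2] = 0.25

p_s, p_r1, p_r2 = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))
p_r12, p_sr1, p_sr2 = p.sum(0), p.sum(2), p.sum(1)

# (1) activity independence: I(R1; R2)
I_act = H(p_r1) + H(p_r2) - H(p_r12.ravel())
# (2) conditional independence: I(R1; R2 | S)
I_cond = (H(p_sr1.ravel()) - H(p_s)) + (H(p_sr2.ravel()) - H(p_s)) \
         - (H(p.ravel()) - H(p_s))
# (3) information independence: synergy = I(R1,R2; S) - I(R1; S) - I(R2; S)
I_joint = H(p_s) + H(p_r12.ravel()) - H(p.ravel())
synergy = I_joint - (H(p_s) + H(p_r1) - H(p_sr1.ravel())) \
                  - (H(p_s) + H(p_r2) - H(p_sr2.ravel()))
```

Here I(R1;R2) = 0 while I(R1;R2|S) = 1 bit and synergy = 1 bit: neither cell alone carries any stimulus information, but the pair carries it all, which is why, as the abstract argues, no single measure suffices.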
Predictability, Complexity, and Learning
, 2001
Abstract

Cited by 50 (2 self)
We define predictive information Ipred(T) as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large observation times T: Ipred(T) can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then Ipred(T) grows logarithmically with a coefficient that counts the dimensionality of the model space. In contrast, power-law growth is associated, for example, with the learning of infinite parameter (or nonparametric) models such as continuous functions with smoothness constraints. There are connections between the predictive information and measures of complexity that have been defined both in learning theory and the analysis of physical systems through statistical mechanics and dynamical systems theory. Furthermore, in the same way that entropy provides the unique measure of available information consistent with some simple and plausible conditions, we argue that the divergent part of Ipred(T) provides the unique measure for the complexity of dynamics underlying a time series. Finally, we discuss how these ideas may be useful in problems in physics, statistics, and biology.
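The "remains finite" regime is easy to verify for a stationary first-order Markov chain: past and future are conditionally independent given the present, so Ipred(T) saturates at the one-step information I(x_t; x_{t+1}). The sketch below computes this exactly for a symmetric two-state chain; the chain and its parameter are an illustrative assumption, not an example from the paper.

```python
import numpy as np

def h2(q):
    """Binary entropy in bits."""
    return float(-q * np.log2(q) - (1 - q) * np.log2(1 - q))

stay = 0.9                                   # probability of staying in a state
T = np.array([[stay, 1 - stay],
              [1 - stay, stay]])
pi = np.array([0.5, 0.5])                    # stationary distribution (symmetric)

joint = pi[:, None] * T                      # p(x_t, x_{t+1})
px, py = joint.sum(1), joint.sum(0)
I_pred_limit = float((joint * np.log2(joint / np.outer(px, py))).sum())

# Analytic value of the same quantity: I = H(x_{t+1}) - H(x_{t+1} | x_t).
I_analytic = 1.0 - h2(stay)
```

The two computations agree exactly: for this chain Ipred saturates at 1 - h2(0.9), about 0.53 bits, a finite limit, in contrast to the logarithmic and power-law regimes the abstract describes for richer processes.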
Natural Image Statistics and Divisive Normalization: Modeling Nonlinearities and Adaptation in Cortical Neurons
, 2001
Abstract

Cited by 49 (7 self)
Understanding the functional role of neurons and neural systems is a primary goal of systems neuroscience. A longstanding hypothesis states that sensory systems are matched to the statistical properties of the signals to which they are exposed [e.g., 4, 6]. In particular, Barlow has proposed that the role of early sensory systems is to remove redundancy in the sensory input, resulting in a set of neural responses that are statistically independent. Variants of this hypothesis have been formulated by a number of other authors [e.g., 2, 52] (see [47] for a review). The basic version assumes a fixed environmental model, but Barlow and Földiák later augmented the theory by suggesting that adaptation in neural systems might be thought of as an adjustment to remove redundancies in the responses to recently presented stimuli [8, 7]. There are ...
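The divisive normalization model named in this title can be written in one line: each squared filter output is divided by a weighted sum of squared outputs plus a semisaturation constant. The function below is a generic sketch with assumed parameter values, not the paper's fitted model; it demonstrates the basic gain-control effect of increasing surround activity.

```python
import numpy as np

def divisive_norm(filter_outputs, sigma=1.0):
    """Divisive normalization: each squared filter output divided by the sum
    of all squared outputs plus a semisaturation constant sigma^2
    (equal pooling weights assumed; sigma is an illustrative choice)."""
    e = np.asarray(filter_outputs, dtype=float) ** 2
    return e / (sigma ** 2 + e.sum())

# Gain control: identical center drive, increasingly active surround.
weak_surround = divisive_norm([2.0, 0.1, 0.1])[0]
strong_surround = divisive_norm([2.0, 3.0, 3.0])[0]
```

The same center input yields a smaller normalized response when its neighbors are strongly driven, which is the nonlinearity and contrast-adaptation behavior the paper links to natural image statistics.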
Neural coding and decoding: communication channels and quantization
 Network: Computation in Neural Systems
, 2001
Abstract

Cited by 40 (8 self)
We present a novel analytical approach for studying neural encoding. As a first step we model a neural sensory system as a communication channel. Using the method of typical sequences in this context, we show that a coding scheme is an almost bijective relation between equivalence classes of stimulus/response pairs. The analysis allows a quantitative determination of the type of information encoded in neural activity patterns and, at the same time, identification of the code with which that information is represented. Due to the high dimensionality of the sets involved, such a relation is extremely difficult to quantify. To circumvent this problem, and to use whatever limited data set is available most efficiently, we use another technique from information theory: quantization. We quantize the neural responses to a reproduction set of small finite size. Among many possible quantizations, we choose one which preserves as much of the informativeness of the original stimulus/response relation as possible, through the use of an information-based distortion function. This method allows us to study coarse but highly informative approximations of a coding scheme model, and then to refine them automatically when more data become available.
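The quantization step described here can be sketched as a greedy merge of response classes under an information-based criterion: repeatedly combine the two classes whose merger loses the least I(S;R). This is a simplification of the optimization actually used in this line of work, and the joint distribution below is random toy data, not neural recordings.

```python
import numpy as np

def mi_bits(joint):
    """Mutual information in bits of a 2-D joint probability table."""
    joint = joint / joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def quantize_responses(joint_sr, n_classes):
    """Greedily merge response columns, at each step keeping the merge that
    preserves the most stimulus information I(S; R)."""
    cols = [joint_sr[:, j] for j in range(joint_sr.shape[1])]
    history = [mi_bits(np.column_stack(cols))]
    while len(cols) > n_classes:
        best = None
        for i in range(len(cols)):
            for j in range(i + 1, len(cols)):
                merged = [c for k, c in enumerate(cols) if k not in (i, j)]
                merged.append(cols[i] + cols[j])       # pool the two classes
                info = mi_bits(np.column_stack(merged))
                if best is None or info > best[0]:
                    best = (info, merged)
        history.append(best[0])
        cols = best[1]
    return np.column_stack(cols), history

rng = np.random.default_rng(3)
joint = rng.random((4, 8))            # toy p(s, r): 4 stimuli, 8 response classes
joint /= joint.sum()
quantized, info_history = quantize_responses(joint, n_classes=3)
```

Because merging can only coarsen the response partition, the retained information is non-increasing along the merge sequence, which is exactly the trade-off a small reproduction set must manage.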
Dynamic Analyses of Information Encoding in Neural Ensembles
 Neural Computation
, 2004
Abstract

Cited by 39 (3 self)
Neural spike train decoding algorithms and techniques to compute Shannon mutual information are important methods for analyzing how neural systems represent biological signals. Decoding algorithms are also one of several strategies being used to design controls for brain-machine interfaces. Developing optimal strategies to design decoding algorithms and compute mutual information are therefore important problems in computational neuroscience. We present a general recursive filter decoding algorithm based on a point process model of individual neuron spiking activity and a linear stochastic state-space model of the biological signal. We derive from the algorithm new instantaneous estimates of the entropy, entropy rate, and the mutual information between the signal and the ensemble spiking activity. We assess the accuracy of the algorithm by computing, along with the decoding error, the true coverage probability of the approximate 0.95 confidence regions for the individual signal estimates. We illustrate the new algorithm by reanalyzing the position and ensemble neural spiking activity of CA1 hippocampal neurons from two rats foraging in an open circular environment. We compare the performance of this algorithm with a linear filter constructed by the widely used reverse correlation method. The median decoding error for Animal 1 (2) during 10 minutes of open foraging was 5.9 (5.5) cm, the median entropy was 6.9 (7.0) bits, the median information was 9.4 (9.4) bits, and the true coverage probability for 0.95 confidence regions was 0.67 (0.75) using 34 (32) neurons. These findings improve significantly on our previous results and suggest an integrated approach to dynamically reading neural codes, measuring their properties, and quantifying the accuracy with which encoded information is extracted.
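The recursive decoding idea can be illustrated with a grid-based Bayesian filter for a 1-D position signal: a random-walk prediction step followed by a Poisson point-process likelihood update each time bin. This is simpler than the paper's recursive filter, which uses a Gaussian approximation to the posterior, and every parameter below (place-field shapes, rates, walk step size) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_bins, n_grid = 25, 400, 100
grid = np.linspace(0.0, 1.0, n_grid)

# Gaussian "place fields" tiling [0, 1]; rates are expected spikes per time bin.
centers = np.linspace(0.0, 1.0, n_cells)
def rates(pos):
    return 0.02 + 4.0 * np.exp(-0.5 * ((pos - centers) / 0.08) ** 2)

# Simulate a random-walk position signal and Poisson spike counts.
pos = np.empty(n_bins)
pos[0] = 0.5
for t in range(1, n_bins):
    pos[t] = np.clip(pos[t - 1] + 0.02 * rng.standard_normal(), 0.0, 1.0)
counts = rng.poisson(np.array([rates(p) for p in pos]))      # (n_bins, n_cells)

# Recursive Bayes on a grid: random-walk prediction (deliberately broader than
# the true walk, for robustness), then a Poisson point-process update.
trans = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.05) ** 2)
trans /= trans.sum(axis=0, keepdims=True)
lam = np.array([rates(g) for g in grid])                     # (n_grid, n_cells)
posterior = np.full(n_grid, 1.0 / n_grid)
decoded = np.empty(n_bins)
for t in range(n_bins):
    prior = trans @ posterior
    loglik = counts[t] @ np.log(lam).T - lam.sum(axis=1)
    posterior = prior * np.exp(loglik - loglik.max())
    posterior /= posterior.sum()
    decoded[t] = grid @ posterior                            # posterior mean

median_err = np.median(np.abs(decoded - pos))
```

The per-bin posterior is also what instantaneous entropy and information estimates of the kind described above would be computed from; here we only read out the posterior mean and its median error.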
The Frequency Dependence of Spike Timing Reliability in Cortical Pyramidal Cells and Interneurons.
, 2001
Abstract

Cited by 36 (9 self)
... for interneurons. The observed differences in the intrinsic frequency preference between pyramidal cells and interneurons have implications for rhythmogenesis and information transmission between populations of cortical neurons. Introduction: Recent analysis of the neuronal spike trains in the lateral geniculate nucleus suggests that the information from the retina to the visual cortex may be transmitted by precise spike times in addition to that carried by the firing rate (1). Cortical neurons are capable of precisely initiating spikes in response to broadband fluctuating stimuli both in vitro (2; 3) and in vivo (4), although the significance of single spike timing in the cortex is debated. This raises the issue of how the cortex could take advantage of this information (5; 6). The rhythmic activity observed in the cortex may reflect internal cortical mechanisms for synchronizing populations of cortical neurons (7; 8). In this study the spike-time reliability of ...
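A common way to quantify the spike-timing reliability this study addresses is a correlation-based measure: smooth each trial's spike train with a Gaussian and average the pairwise correlations across trials. The sketch below applies such a measure to synthetic "reliable" (jittered template) and "unreliable" (independent) trials; it is a generic illustration of the metric, not the authors' protocol, and all parameters are assumptions.

```python
import numpy as np

def reliability(trains, n_steps, sigma_bins=5):
    """Mean pairwise correlation of Gaussian-smoothed spike trains
    (a correlation-based reliability measure)."""
    t = np.arange(-3 * sigma_bins, 3 * sigma_bins + 1)
    k = np.exp(-0.5 * (t / sigma_bins) ** 2)
    smoothed = []
    for spike_bins in trains:
        x = np.zeros(n_steps)
        x[spike_bins] = 1.0                       # binary spike train
        smoothed.append(np.convolve(x, k, mode="same"))
    c = np.corrcoef(np.array(smoothed))
    iu = np.triu_indices(len(trains), k=1)
    return float(c[iu].mean())

rng = np.random.default_rng(5)
n_steps, n_trials, n_spikes = 2000, 10, 40
template = np.sort(rng.choice(n_steps, n_spikes, replace=False))

# "Reliable" trials: the same spike times with small (+/-2 bin) jitter.
reliable = [np.clip(template + rng.integers(-2, 3, n_spikes), 0, n_steps - 1)
            for _ in range(n_trials)]
# "Unreliable" trials: independent random spike times at the same rate.
unreliable = [np.sort(rng.choice(n_steps, n_spikes, replace=False))
              for _ in range(n_trials)]

r_reliable = reliability(reliable, n_steps)
r_unreliable = reliability(unreliable, n_steps)
```

Sweeping the frequency of a sinusoidal drive and computing this measure per frequency is one way to expose the frequency preferences the abstract contrasts between pyramidal cells and interneurons.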