Results 1–10 of 272
Orientation Tuning of Input Conductance, Excitation, and Inhibition in Cat Primary Visual Cortex
 J. NEUROPHYSIOL
, 2000
Maximum likelihood estimation of a stochastic integrate-and-fire neural model
 NIPS
, 2003
Abstract

Cited by 63 (21 self)
We examine a cascade encoding model for neural response in which a linear filtering stage is followed by a noisy, leaky, integrate-and-fire spike generation mechanism. This model provides a biophysically more realistic alternative to models based on Poisson (memoryless) spike generation, and can effectively reproduce a variety of spiking behaviors seen in vivo. We describe the maximum likelihood estimator for the model parameters, given only extracellular spike train responses (not intracellular voltage data). Specifically, we prove that the log likelihood function is concave and thus has an essentially unique global maximum that can be found using gradient ascent techniques. We develop an efficient algorithm for computing the maximum likelihood solution, demonstrate the effectiveness of the resulting estimator with numerical simulations, and discuss a method of testing the model’s validity using time-rescaling and density evolution techniques. (Paninski et al., November 30, 2004)
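The cascade structure described above — a linear stimulus filter feeding a noisy, leaky integrate-and-fire spike generator — can be sketched in a few lines. All parameters and the exponential filter shape here are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters for a minimal cascade sketch.
dt = 1e-3            # time step (s)
tau = 0.02           # membrane time constant (s)
v_reset, v_thresh = 0.0, 1.0
sigma = 0.05         # per-step noise scale
T = 5000             # number of time steps

# Stage 1: linear filtering of a white-noise stimulus.
stim = rng.standard_normal(T)
t_filt = np.arange(0, 0.1, dt)
k = np.exp(-t_filt / 0.02)                     # assumed filter shape
drive = np.convolve(stim, k, mode="full")[:T] * dt

def simulate_lif(drive, gain=60.0, bias=1.2):
    """Stage 2: Euler simulation of a noisy leaky integrate-and-fire neuron."""
    v = v_reset
    spikes = np.zeros(T, dtype=bool)
    for t in range(T):
        v += (-v + bias + gain * drive[t]) * dt / tau
        v += sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_thresh:                      # threshold crossing -> spike
            spikes[t] = True
            v = v_reset                        # reset: the model has memory
    return spikes

spikes = simulate_lif(drive)
rate = spikes.sum() / (T * dt)
print(f"mean firing rate: {rate:.1f} Hz")
```

The reset after each spike is what gives the model memory, in contrast to memoryless Poisson spike generation.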
Negative Interspike Interval Correlations Increase the Neuronal Capacity for Encoding Time-Dependent Stimuli
 J. Neurosci
, 2001
Abstract

Cited by 44 (15 self)
In this paper, we show that negative interspike interval (ISI) correlations, i.e., the tendency for long ISIs to be followed by short ISIs (and vice versa), reduce spike count variability, whereas positive ISI correlations increase spike count variability. Together, these effects lead to an optimal spike counting time at which discriminability is maximal.
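The core effect is easy to reproduce with a toy generator: draw ISIs from an AR(1) process (not the paper's neuron model) so the marginal ISI variance is the same in both conditions, and compare the variance of spike counts in fixed windows:

```python
import numpy as np

rng = np.random.default_rng(1)

def isi_train(n, mean=0.02, sd=0.005, rho=0.0):
    """ISIs from an AR(1) process; rho < 0 gives negative serial correlation.
    Noise is scaled so the stationary ISI variance is sd**2 for any rho."""
    eps = rng.standard_normal(n) * sd * np.sqrt(1 - rho**2)
    isi = np.empty(n)
    isi[0] = mean
    for i in range(1, n):
        isi[i] = mean + rho * (isi[i - 1] - mean) + eps[i]
    return np.clip(isi, 1e-4, None)            # keep intervals positive

def count_variance(isi, window=0.5):
    """Variance of spike counts in non-overlapping windows."""
    spike_times = np.cumsum(isi)
    n_win = int(spike_times[-1] // window)
    counts = np.histogram(spike_times, bins=n_win,
                          range=(0, n_win * window))[0]
    return counts.var()

v_neg = count_variance(isi_train(20000, rho=-0.6))
v_ind = count_variance(isi_train(20000, rho=0.0))
print(v_neg, v_ind)   # negative ISI correlations give the smaller count variance
```

Because a long interval tends to be followed by a short one, fluctuations cancel within a counting window, so `v_neg` comes out well below `v_ind` even though the single-ISI variability is identical.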
Impact of correlated synaptic input on output firing rate and variability in simple neuronal models
 Journal of Neuroscience
, 2000
Abstract

Cited by 41 (1 self)
Cortical neurons are typically driven by thousands of synaptic inputs. The arrival of a spike from one input may or may not be correlated with the arrival of other spikes from different inputs. How does this interdependence alter the probability that the postsynaptic neuron will fire? We constructed a simple random walk model in which the membrane potential of a target neuron fluctuates stochastically, driven by excitatory and inhibitory spikes arriving at random times. An analytic expression was derived for the mean output firing rate as a function of the firing rates and pairwise correlations of the inputs. This stochastic model made three quantitative predictions. (1) Correlations between pairs of excitatory or inhibitory inputs increase the fluctuations in synaptic drive, whereas correlations between excitatory–inhibitory pairs decrease them. (2) When excitation and inhibition are fully balanced (the mean net synaptic drive is zero) ...
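Prediction (1) can be checked with a crude common-source construction (an illustrative device, not the paper's exact model): each binary input train copies a shared source with mixing probability c, which induces a pairwise correlation of roughly c² between trains, and the variance of the summed drive is compared against the independent case:

```python
import numpy as np

rng = np.random.default_rng(2)

def summed_drive(n_inputs=100, p=0.02, c=0.0, steps=20000):
    """Total synaptic drive per time bin from n_inputs binary spike trains.
    Each train copies a shared 'common' train with probability c per bin,
    which correlates the inputs (pairwise correlation ~ c**2)."""
    common = rng.random(steps) < p
    drive = np.zeros(steps)
    for _ in range(n_inputs):
        own = rng.random(steps) < p            # private spikes, rate p per bin
        mix = rng.random(steps) < c            # bins that copy the common source
        drive += np.where(mix, common, own)
    return drive

var_corr = summed_drive(c=0.2).var()
var_ind = summed_drive(c=0.0).var()
print(var_corr, var_ind)   # correlated excitatory inputs fluctuate more
```

With independent inputs the variance of the sum grows like N; shared-source correlations add an N(N-1) cross term, so `var_corr` exceeds `var_ind` by a large factor, consistent with the prediction that correlated excitatory inputs amplify fluctuations in the drive.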
Spike-Frequency Adaptation of a Generalized Leaky Integrate-and-Fire Model Neuron
 JOURNAL OF COMPUTATIONAL NEUROSCIENCE
, 2001
Abstract

Cited by 31 (2 self)
Although spike-frequency adaptation is a commonly observed property of neurons, its functional implications are still poorly understood. In this work, using a leaky integrate-and-fire neural model that includes a Ca2+-activated K+ current (I_AHP), we develop a quantitative theory of adaptation temporal dynamics and compare our results with recent in vivo intracellular recordings from pyramidal cells in the cat visual cortex. Experimentally testable relations between the degree and the time constant of spike-frequency adaptation are predicted. We also contrast the I_AHP model with an alternative adaptation model based on a dynamical firing threshold. Possible roles of adaptation in temporal computation are explored, as a time-delayed neuronal self-inhibition mechanism. Our results include the following: (1) given the same firing rate, the variability of interspike intervals (ISIs) is either reduced or enhanced by adaptation, depending on whether the I_AHP dynamics is fast or slow compared with the mean ISI in the output spike train; (2) when the inputs are Poisson-distributed (uncorrelated), adaptation generates temporal anticorrelation between ISIs; we suggest that measurement of this negative correlation provides a probe to assess the strength of I_AHP in vivo; (3) the forward masking effect produced by the slow dynamics of I_AHP is nonlinear and effective at selecting the strongest input among competing sources of input signals.
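A minimal sketch of the mechanism, with assumed constants: a spike-triggered adaptation variable `a` stands in for the slow I_AHP current and is simply subtracted from the drive, so successive ISIs lengthen under constant input:

```python
import numpy as np

# Assumed parameters for a leaky integrate-and-fire neuron with a
# spike-triggered, slowly decaying adaptation variable (I_AHP-like).
dt, tau_v, tau_a = 1e-3, 0.02, 0.1
v_th, v_reset = 1.0, 0.0
delta_a = 0.5        # adaptation increment per spike
I = 2.0              # constant suprathreshold drive

v, a = 0.0, 0.0
spike_times = []
for step in range(3000):
    v += dt * (-v + I - a) / tau_v   # membrane: leak + drive minus adaptation
    a += dt * (-a) / tau_a           # adaptation decays slowly (tau_a >> tau_v)
    if v >= v_th:
        spike_times.append(step * dt)
        v = v_reset
        a += delta_a                 # each spike strengthens the self-inhibition

isis = np.diff(spike_times)
print(isis[:3], isis[-1])   # ISIs lengthen: spike-frequency adaptation
```

Because `a` decays much more slowly than the membrane, each spike leaves a lingering negative feedback — the time-delayed self-inhibition referred to in the abstract — and the firing rate relaxes from an initial transient to a lower adapted rate.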
On the Complexity of Computing and Learning with Multiplicative Neural Networks
 NEURAL COMPUTATION
Abstract

Cited by 30 (3 self)
In a great variety of neuron models neural inputs are combined using the summing operation. We introduce the concept of multiplicative neural networks that contain units which multiply their inputs instead of summing them and, thus, allow inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well studied network types as higher-order networks and product unit networks. We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo dimension is bounded from above by a polynomial with the same order of magnitude as the currently best known bound for purely sigmoidal networks. Moreover, we show that this bound holds even in the case when the unit type, product or sigmoidal, may be learned. Crucial for these results are calculations of solution set components bounds for new network classes. As to lower bounds we construct product unit networks of fixed depth with superlinear VC dimension. For sigmoidal networks of higher order we establish polynomial bounds that, in contrast to previous results, do not involve any restriction of the network order. We further consider various classes of higher-order units, also known as sigma-pi units, that are characterized by connectivity constraints. In terms of these we derive some asymptotically tight bounds.
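For concreteness, a product unit raises each input to a real-valued weight power and multiplies the results, rather than forming a weighted sum. The tiny network below (all weights chosen by hand for illustration) feeds one product unit into a sigmoidal output:

```python
import numpy as np

def product_unit(x, w):
    """Product unit: prod_i x_i ** w_i. Inputs interact multiplicatively.
    Assumes positive inputs so that fractional weight powers are defined."""
    return float(np.prod(np.power(x, w)))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative hand-picked weights, not learned values.
x = np.array([2.0, 3.0])
w_prod = np.array([1.0, 2.0])        # computes x1 * x2**2 = 2 * 9 = 18
h = product_unit(x, w_prod)
y = sigmoid(0.1 * h - 1.0)           # sigmoidal output unit on top
print(h, y)
```

A single summing unit cannot represent the interaction term `x1 * x2**2` exactly; the product unit computes it in one step, which is the expressiveness the VC-dimension analysis above quantifies.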
A new look at state-space models for neural data
 Journal of Computational Neuroscience
, 2010
Abstract

Cited by 30 (20 self)
State-space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially-varying firing rates.
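The bandedness point can be illustrated on the simplest case (a sketch, not the paper's algorithm): MAP smoothing under a Gaussian random-walk prior reduces to solving a tridiagonal system, which the O(n) Thomas algorithm handles directly instead of a generic O(n³) solve:

```python
import numpy as np

def thomas_solve(lower, diag, upper, b):
    """Solve a tridiagonal system in O(n) (Thomas algorithm) --
    the banded-matrix trick behind fast state-space smoothers."""
    n = len(diag)
    c, d = upper.astype(float).copy(), b.astype(float).copy()
    dd = diag.astype(float).copy()
    for i in range(1, n):                      # forward elimination
        m = lower[i - 1] / dd[i - 1]
        dd[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / dd[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / dd[i]
    return x

# Illustrative problem: minimize ||y - x||^2 + lam * sum (x[t+1] - x[t])^2.
# The normal equations (I + lam * D'D) x = y are tridiagonal.
rng = np.random.default_rng(3)
n, lam = 200, 10.0
truth = np.sin(np.linspace(0, 3 * np.pi, n))
y = truth + 0.3 * rng.standard_normal(n)

diag = 1.0 + lam * np.r_[1.0, 2.0 * np.ones(n - 2), 1.0]
off = -lam * np.ones(n - 1)
x_map = thomas_solve(off, diag, off, y)
print(np.mean((x_map - truth) ** 2), np.mean((y - truth) ** 2))
```

For non-Gaussian observations the same structure appears inside each Newton step of the direct optimization, which is why the approach keeps the efficiency of the approximate recursions.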
SelfOrganizing Continuous Attractor Networks and Motor Function
 Network: Computation in Neural Systems
, 2002
Abstract

Cited by 29 (10 self)
Motor skill learning may involve training a neural system to automatically perform sequences of movements, with the training signals provided by a different system, used mainly during training to perform the movements, that operates under visual sensory guidance. We use a dynamical systems perspective to show how complex motor sequences could be learned by the automatic system. The network uses a continuous attractor network architecture to perform path integration on an efference copy of the motor signal to keep track of the current state, and selection of which motor cells to activate by a movement selector input where the selection depends on the current state being represented in the continuous attractor network. After training, the correct motor sequence may be selected automatically by a single movement selection signal. A feature of the model presented is the use of 'trace' learning rules which incorporate a form of temporal average of recent cell activity. This form of temporal learning underlies the ability of the networks to learn temporal sequences of behaviour. We show that the continuous attractor network models developed here are able to demonstrate the key features of motor function. That is, (i) the movement can occur at arbitrary speeds; (ii) the movement can occur with arbitrary force; (iii) the agent spends the same relative proportions of its time in each part of the motor sequence; (iv) the agent applies the same relative force in each part of the motor sequence; and (v) the actions always occur in the same sequence.
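The 'trace' rule mentioned above is the one concrete mechanism here, and it is compact enough to sketch: the weight update uses an exponential temporal average of recent postsynaptic activity instead of the instantaneous activity, so successive inputs in a sequence become associated. All constants below, and the linear response, are illustrative assumptions:

```python
import numpy as np

def trace_update(w, x_seq, lr=0.1, eta=0.8):
    """Hebbian learning with a 'trace': an exponential average of past
    postsynaptic activity replaces the instantaneous activity, so inputs
    that follow each other in time get associated with the same weights."""
    trace = 0.0
    for x in x_seq:
        y = float(w @ x)                       # simple linear response
        trace = (1 - eta) * y + eta * trace    # temporal average of activity
        w = w + lr * trace * x                 # Hebb rule on the trace
    return w

rng = np.random.default_rng(4)
w0 = rng.standard_normal(5) * 0.1
x_seq = [rng.standard_normal(5) for _ in range(20)]
w1 = trace_update(w0, x_seq)
print(w1)
```

With `eta = 0` this collapses to an ordinary Hebbian rule; the memory parameter `eta` is what lets the network bind temporally adjacent steps of a behavioural sequence.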
Characterization of Subthreshold Voltage Fluctuations in Neuronal Membranes
, 2003
Abstract

Cited by 29 (13 self)
Synaptic noise due to intense network activity can have a significant impact on the electrophysiological properties of individual neurons. This is the case for the cerebral cortex, where ongoing activity leads to strong barrages of synaptic inputs, which act as the main source of synaptic noise affecting neuronal dynamics. Here, we characterize the subthreshold behavior of neuronal models in which synaptic noise is represented by either additive or multiplicative noise, described by Ornstein-Uhlenbeck processes. We derive and solve the Fokker-Planck equation for this system, which describes the time evolution of the probability density function for the membrane potential. We obtain an analytic expression for the membrane potential distribution at steady state and compare this expression with the subthreshold activity obtained in Hodgkin-Huxley-type models with stochastic synaptic inputs. The differences between multiplicative and additive noise models suggest that multiplicative noise is adequate to describe the high-conductance states similar to in vivo conditions. Because the steady-state membrane potential distribution is easily obtained experimentally, this approach provides a possible method to estimate the mean and variance of synaptic conductances in real neurons.
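In the additive-noise case the steady-state solution of the Fokker-Planck equation is a Gaussian with mean mu and variance sigma² τ / 2, which is easy to confirm by Euler-Maruyama simulation. The parameter values below are illustrative, not fitted:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ornstein-Uhlenbeck membrane potential: dV = -(V - mu)/tau dt + sigma dW.
# Assumed parameters (mV, s); Euler-Maruyama integration.
dt, tau, mu, sigma = 1e-4, 0.02, -65.0, 30.0
steps = 200_000
v = np.empty(steps)
v[0] = mu
noise = rng.standard_normal(steps - 1)
for t in range(steps - 1):
    v[t + 1] = v[t] - (v[t] - mu) * dt / tau + sigma * np.sqrt(dt) * noise[t]

# Fokker-Planck steady state (additive case): Gaussian, variance sigma^2*tau/2.
burn = steps // 5                               # discard the transient
emp_mean, emp_var = v[burn:].mean(), v[burn:].var()
print(emp_mean, emp_var, sigma**2 * tau / 2)
```

The empirical mean and variance match the analytic steady-state values, which is the property the abstract proposes to exploit: reading synaptic conductance statistics off an experimentally measured membrane potential distribution.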