Results 1–10 of 20
Maximum likelihood estimation of a stochastic integrate-and-fire neural model
NIPS, 2003
Abstract

Cited by 59 (20 self)
We examine a cascade encoding model for neural response in which a linear filtering stage is followed by a noisy, leaky, integrate-and-fire spike generation mechanism. This model provides a biophysically more realistic alternative to models based on Poisson (memoryless) spike generation, and can effectively reproduce a variety of spiking behaviors seen in vivo. We describe the maximum likelihood estimator for the model parameters, given only extracellular spike train responses (not intracellular voltage data). Specifically, we prove that the log likelihood function is concave and thus has an essentially unique global maximum that can be found using gradient ascent techniques. We develop an efficient algorithm for computing the maximum likelihood solution, demonstrate the effectiveness of the resulting estimator with numerical simulations, and discuss a method of testing the model's validity using time-rescaling and density evolution techniques.
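The central computational claim of this abstract, that a concave log-likelihood makes plain gradient ascent globally convergent, can be illustrated with a much simpler log-concave spiking model. The sketch below is not the paper's integrate-and-fire estimator; it uses a Poisson GLM with exponential nonlinearity (whose log-likelihood is also concave) on simulated data, purely to show why concavity makes maximum likelihood tractable.

```python
# Illustration only: gradient ascent on a concave point-process log-likelihood.
# This is a Poisson GLM stand-in, NOT the paper's integrate-and-fire model;
# all data and dimensions below are made up for the demonstration.
import numpy as np

rng = np.random.default_rng(0)

T, D = 2000, 5
X = rng.normal(size=(T, D))          # stimulus design matrix (simulated)
w_true = rng.normal(size=D) * 0.5    # hypothetical "true" linear filter
y = rng.poisson(np.exp(X @ w_true))  # spike counts per time bin

def loglik(w):
    eta = X @ w
    return np.sum(y * eta - np.exp(eta))  # Poisson log-likelihood (up to const)

def grad(w):
    return X.T @ (y - np.exp(X @ w))

# Concavity guarantees any stationary point is the global maximum,
# so fixed-step gradient ascent suffices here.
w = np.zeros(D)
for _ in range(500):
    w += 1e-4 * grad(w)
```

With enough data the ascent recovers the filter closely; the same logic underlies the paper's estimator, where the concavity proof is the hard part.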
Remarks concerning graphical models for time series and point processes
Revista de Econometria, 1996
Abstract

Cited by 21 (3 self)
A statistical network is a collection of nodes representing random variables and a set of edges that connect the nodes. A probabilistic model for such is called a graphical model. These models, graphs and networks, are particularly useful for examining statistical dependencies based on conditioning, as often occurs in economics and statistics. In this paper the nodal random variables will be time series or point processes. The cases of undirected and directed graphs are focused on.
Model-based decoding, information estimation, and change-point detection in multi-neuron spike trains
Under review, Neural Computation, 2007
Abstract

Cited by 19 (12 self)
Understanding how stimulus information is encoded in spike trains is a central problem in computational neuroscience. Decoding methods provide an important tool for addressing this problem, by allowing us to explicitly read out the information contained in spike responses. Here we introduce several decoding methods based on point-process neural encoding models (i.e. "forward" models that predict spike responses to novel stimuli). These models have concave log-likelihood functions, allowing for efficient fitting via maximum likelihood. Moreover, we may use the likelihood of the observed spike trains under the model to perform optimal decoding. We present: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, i.e. the most probable stimulus to have generated the observed single- or multiple-spike train response, given some prior distribution over the stimulus; (2) a Gaussian approximation to the posterior distribution, which allows us to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the response; and (4) a framework for the detection of change-point times (e.g. the time at which the stimulus undergoes a change in mean or variance), by marginalizing over the posterior distribution of stimuli. We show several examples illustrating the performance of these estimators with simulated data.
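Item (1) of this abstract, MAP decoding under a log-concave posterior, can be sketched in a few lines. The encoder, filter matrix, and prior below are all invented for illustration; the point is only that with a Poisson-GLM likelihood and a Gaussian prior the log-posterior over stimuli is concave, so a simple gradient ascent reaches the MAP stimulus estimate.

```python
# Illustration only: MAP stimulus decoding under an assumed Poisson-GLM
# encoding model with known filters K and a Gaussian prior on the stimulus.
# All quantities are simulated; this is a toy version of the paper's item (1).
import numpy as np

rng = np.random.default_rng(1)

T, N = 50, 20                        # stimulus dimension, number of neurons
K = rng.normal(size=(N, T)) * 0.1    # hypothetical known encoding filters
s_true = rng.normal(size=T)          # stimulus to be decoded
y = rng.poisson(np.exp(K @ s_true))  # observed spike counts

prior_prec = 1.0                     # Gaussian prior precision (assumed)

def log_post(s):
    # Log-posterior up to a constant: Poisson likelihood + Gaussian log-prior.
    eta = K @ s
    return np.sum(y * eta - np.exp(eta)) - 0.5 * prior_prec * s @ s

def grad(s):
    return K.T @ (y - np.exp(K @ s)) - prior_prec * s

# Log-concavity of the posterior means gradient ascent finds the MAP estimate.
s_map = np.zeros(T)
for _ in range(1500):
    s_map += 0.02 * grad(s_map)
```

A Gaussian (Laplace) approximation around `s_map`, item (2) of the abstract, would then use the negative Hessian of `log_post` at the optimum as the posterior precision.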
Sequential optimal design of neurophysiology experiments
2008
Abstract

Cited by 18 (6 self)
Adaptively optimizing experiments has the potential to significantly reduce the number of trials needed to build parametric statistical models of neural systems. However, application of adaptive methods to neurophysiology has been limited by severe computational challenges. Since most neurons are high-dimensional systems, optimizing neurophysiology experiments requires computing high-dimensional integrations and optimizations in real time. Here we present a fast algorithm for choosing the most informative stimulus by maximizing the mutual information between the data and the unknown parameters of a generalized linear model (GLM) which we want to fit to the neuron's activity. We rely on important log-concavity and asymptotic normality properties of the posterior to facilitate the required computations. Our algorithm requires only low-rank matrix manipulations and a two-dimensional search to choose the optimal stimulus. The average running time of these operations scales quadratically with the dimensionality of the GLM, making real-time adaptive experimental design feasible even for high-dimensional stimulus and parameter spaces. For example, we ...
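The selection step this abstract describes can be caricatured very simply. Under a Gaussian approximation to the GLM posterior with covariance C, a common surrogate for a stimulus's informativeness is the posterior variance of the filter output, the quadratic form x·C·x. The sketch below scores a pool of candidate stimuli by that criterion; the posterior, candidates, and power constraint are all made up, and the paper's low-rank updates and two-dimensional search are omitted.

```python
# Illustration only: a stripped-down "most informative stimulus" rule.
# C is a hypothetical Gaussian-approximation posterior covariance over GLM
# filter weights; candidates are random unit-power stimuli. The real
# algorithm's efficient low-rank machinery is not shown here.
import numpy as np

rng = np.random.default_rng(2)
D = 10

A = rng.normal(size=(D, D))
C = A @ A.T / D                      # assumed posterior covariance (PSD)

candidates = rng.normal(size=(100, D))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)  # unit power

# Score each candidate x by x @ C @ x, the posterior variance of its
# projection onto the unknown filter; pick the maximizer.
scores = np.einsum('id,de,ie->i', candidates, C, candidates)
best = candidates[np.argmax(scores)]
```

Intuitively, this probes the stimulus direction about which the posterior is most uncertain, which is why the criterion favors informative trials.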
Efficient Markov Chain Monte Carlo methods for decoding population spike trains
To appear, Neural Computation, 2010
Abstract

Cited by 16 (11 self)
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log-concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly non-Gaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for Gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the "hit-and-run" algorithm performed better than other MCMC methods. Using these ...
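The hybrid (Hamiltonian) Monte Carlo method favored in this abstract has a compact skeleton: leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept-reject step. The sketch below applies it to a toy 2-D Gaussian "posterior" of my own choosing rather than the paper's GLM posterior over stimuli; only the algorithmic skeleton carries over.

```python
# Illustration only: minimal hybrid/Hamiltonian Monte Carlo on a toy 2-D
# Gaussian target. The precision matrix, step size, and trajectory length
# are arbitrary choices for the demo, not values from the paper.
import numpy as np

rng = np.random.default_rng(3)
Prec = np.array([[2.0, 0.5], [0.5, 1.0]])  # toy posterior precision

def U(q):         # potential energy = negative log-density (up to const)
    return 0.5 * q @ Prec @ q

def gradU(q):
    return Prec @ q

def hmc_step(q, eps=0.2, L=10):
    p = rng.normal(size=q.shape)              # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * gradU(q_new)         # leapfrog: initial half step
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * gradU(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * gradU(q_new)         # final half step
    # Metropolis accept-reject on the change in total energy.
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.random()) < -dH else q

q = np.zeros(2)
samples = []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
```

Posterior expectations (the "general Bayesian estimators" of the abstract) are then just averages over `samples`.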
Inferring neuronal network connectivity from spike data: A temporal data-mining approach. Scientific Programming
2008
Abstract

Cited by 13 (11 self)
Understanding the functioning of a neural system in terms of its underlying circuitry is an important problem in neuroscience. Recent developments in electrophysiology and imaging allow one to simultaneously record activities of hundreds of neurons. Inferring the underlying neuronal connectivity patterns from such multineuronal spike train data streams is a challenging statistical and computational problem. This task involves finding significant temporal patterns from vast amounts of symbolic time series data. In this paper we show that the frequent episode mining methods from the field of temporal data mining can be very useful in this context. In the frequent episode discovery framework, the data is viewed as a sequence of events, each of which is characterized by an event type and its time of occurrence, and episodes are certain types of temporal patterns in such data. Here we show that, using the set of discovered frequent episodes from multineuronal data, one can infer different types of connectivity patterns in the neural system that generated it. For this purpose, we introduce the notion of mining for frequent episodes under certain temporal constraints; the structure of these temporal constraints is motivated by the application. We present algorithms for discovering serial and parallel episodes under these temporal constraints. Through extensive simulation studies we demonstrate that these methods are useful for unearthing patterns of neuronal network connectivity.
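The core primitive this abstract builds on, counting occurrences of a temporal episode under a delay constraint, is easy to sketch. The toy below counts non-overlapped occurrences of the simplest serial episode, A followed by B within a maximum delay, in a symbolic event sequence; the paper's algorithms handle general serial and parallel episodes and richer constraints.

```python
# Illustration only: counting non-overlapped occurrences of the serial
# episode A -> B with an inter-event delay constraint. This is a toy version
# of temporally constrained frequent-episode counting; the event sequence
# below is invented.

def count_serial_episode(events, a, b, max_delay):
    """events: list of (time, event_type) pairs, sorted by time."""
    count, pending_a = 0, None
    for t, e in events:
        if pending_a is not None and e == b and t - pending_a <= max_delay:
            count += 1
            pending_a = None          # non-overlapped: consume the matched A
        elif e == a:
            pending_a = t             # remember the most recent unmatched A
    return count

seq = [(0, 'A'), (1, 'B'), (2, 'A'), (9, 'B'), (10, 'A'), (11, 'B')]
print(count_serial_episode(seq, 'A', 'B', max_delay=3))  # -> 2
```

In the neuronal setting, a frequent A -> B episode with a short delay is the kind of pattern the paper uses as evidence for a putative excitatory connection from neuron A to neuron B.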
Statistical models of spike trains
In C. Liang & G. Lord (Eds.), Stochastic methods in neuroscience, 2009
Abstract

Cited by 5 (3 self)
Spiking neurons make inviting targets for analytical methods based on stochastic processes: spike trains carry information in their temporal patterning, yet they are often highly irregular across time and across experimental replications. The bulk of this volume is devoted to mathematical and biophysical models useful in understanding neurophysiological processes. In this chapter we consider statistical models for analyzing spike train data. Strictly speaking, what we would call a statistical model for spike trains is simply a probabilistic description of the sequence of spikes. But it is somewhat misleading to ignore the data-analytical context of these models. In particular, we want to make use of these probabilistic tools for the purpose of scientific inference. The leap from simple descriptive uses of probability to inferential applications is worth emphasizing for two reasons. First, this leap was one of the great conceptual advances in science, taking roughly two hundred years. It was not until the late 1700s that there emerged any clear notion of inductive (or what we would now call statistical) reasoning; it was not until the first half of the twentieth century that modern methods began to be developed systematically; and it was only in the second half of the twentieth century that these methods ...
Assessment of synchrony in multiple neural spike trains using log-linear point process models. Annals of Applied Statistics
2011
Abstract

Cited by 4 (2 self)
Neural spike trains, which are sequences of very brief jumps in voltage across the cell membrane, were one of the motivating applications for the development of point process methodology. Early work required the assumption of stationarity, but contemporary experiments often use time-varying stimuli and produce time-varying neural responses. More recently, many statistical methods have been developed for nonstationary neural point process data. There has also been much interest in identifying synchrony, meaning events across two or more neurons that are nearly simultaneous at the time scale of the recordings. A natural statistical approach is to discretize time, using short time bins, and to introduce log-linear models for dependency among neurons, but previous use of log-linear modeling technology has assumed stationarity. We introduce a succinct yet powerful class of time-varying log-linear models by (a) allowing individual-neuron effects (main effects) to involve time-varying intensities; (b) also allowing the individual-neuron effects to involve autocovariation effects (history effects) due to past spiking; (c) assuming excess synchrony effects (interaction effects) do not depend on history; and (d) assuming all effects vary smoothly across time. Using data from the primary visual cortex of an anesthetized monkey, we give two examples in which the rate of synchronous spiking cannot be explained by stimulus-related changes in individual-neuron effects. In one example, the excess synchrony disappears when slow-wave "up" states are taken into account as history effects, while in the second example it does not. Standard point process theory explicitly rules out synchronous events. To justify our use of continuous-time methodology, we introduce a framework that incorporates synchronous events and provides continuous-time log-linear point process approximations to discrete-time log-linear models.
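The interaction ("excess synchrony") effect in a binned two-neuron log-linear model has a simple stationary special case: for binary bin outcomes, the interaction coefficient is the log odds ratio of the 2x2 table of joint outcomes. The sketch below computes that quantity; it is only the static skeleton of the paper's models, which make all effects time-varying and history-dependent, and the Haldane correction is my own choice to avoid empty cells.

```python
# Illustration only: the stationary log-linear interaction (log odds ratio)
# between two binned, binarized spike trains. The paper's models generalize
# this with time-varying and history-dependent effects.
import numpy as np

def synchrony_beta(y1, y2):
    """y1, y2: binary arrays (spike/no-spike per time bin)."""
    y1, y2 = np.asarray(y1), np.asarray(y2)
    # 2x2 table of joint bin outcomes, with a +0.5 Haldane correction
    # (an assumed smoothing choice) so no cell is zero.
    n11 = np.sum((y1 == 1) & (y2 == 1)) + 0.5
    n10 = np.sum((y1 == 1) & (y2 == 0)) + 0.5
    n01 = np.sum((y1 == 0) & (y2 == 1)) + 0.5
    n00 = np.sum((y1 == 0) & (y2 == 0)) + 0.5
    return np.log(n11 * n00 / (n10 * n01))
```

A positive value indicates joint spiking in excess of what the two neurons' individual rates predict; the paper's contribution is letting that excess, and the main effects it is measured against, vary smoothly over time.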
Assessing Connections in Networks of Biological Neurons
1997
Abstract

Cited by 3 (1 self)
In this work spike trains of firing times of neurons recorded from various locations in the cat's auditory thalamus are studied. A goal is making inferences concerning connections amongst different regions of the thalamus in both the presence and the absence of a stimulus. Both second-order moment (frequency domain) and full likelihood analyses (a threshold crossing model) are carried through.

1 Introduction

The sequence of spikes of a neuron, referred to as a "spike train", may carry important information processed by the brain and thus may underlie cognitive functions and sensory perception [1]. The data studied are recorded stretches of point processes corresponding to the firing times of neurons mea...

[Author affiliations: Statistics Department, University of California, Berkeley; Institute of Physiology, University of Lausanne, Switzerland.]
[Figure 1: A block diagram of the auditory regions of the cat's brain.]