Results 1–10 of 72
The Time-Rescaling Theorem and Its Application to Neural Spike Train Data Analysis
 Neural Computation
, 2001
Cited by 72 (15 self)
Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model’s validity prior to using it to make inferences about a particular neural system. Assessing goodness-of-fit is a challenging problem for point process neural spike train models, especially for histogram-based models such as peri-stimulus time histograms (PSTH) and rate functions estimated by spike train smoothing. The time-rescaling theorem is a well-known result in probability theory, which states that any point process with an integrable conditional intensity function may be transformed into a Poisson process with unit rate. We describe how the theorem may be used to develop goodness-of-fit tests for both parametric and histogram-based point process models of neural spike trains. We apply these tests in two examples: a comparison of PSTH, inhomogeneous Poisson, and inhomogeneous Markov interval models of neural spike trains from the sup ...
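The rescaling idea in this abstract can be sketched in a few lines: integrate the model's conditional intensity between successive spikes and test the rescaled intervals against a unit-rate Poisson process. The intensity function and all parameters below are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical model under test: lambda(t) = 10 + 8*sin(2*pi*t) spikes/s on [0, 20] s.
def lam(t):
    return 10.0 + 8.0 * np.sin(2 * np.pi * t)

# Simulate spikes from lambda(t) by thinning a rate-lam_max Poisson process.
T, lam_max = 20.0, 18.0
cand = np.cumsum(rng.exponential(1.0 / lam_max, size=2000))
cand = cand[cand < T]
spikes = cand[rng.uniform(size=cand.size) < lam(cand) / lam_max]

# Time-rescaling: z_k = integral of lambda between successive spikes.
# If the model is correct, the z_k are i.i.d. Exponential(1).
grid = np.linspace(0, T, 200001)
Lam = np.concatenate(([0.0], np.cumsum(lam(grid[:-1]) * np.diff(grid))))
Lam_at_spikes = np.interp(spikes, grid, Lam)
z = np.diff(np.concatenate(([0.0], Lam_at_spikes)))

# Map to uniforms and apply a Kolmogorov-Smirnov test against U(0, 1).
u = 1.0 - np.exp(-z)
ks_stat, p_value = stats.kstest(u, "uniform")
print(ks_stat, p_value)
```

Under a correct model the transformed intervals are i.i.d. Exponential(1), so the KS statistic stays small; under a misspecified intensity it grows and the test rejects.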
Recursive Bayesian Decoding of Motor Cortical Signals
 Journal of Neurophysiology
, 2004
Cited by 61 (8 self)
The population vector (PV) algorithm and optimal linear estimation (OLE) have been used to reconstruct movement by combining signals from multiple neurons in the motor cortex. While these linear methods are effective, recursive Bayesian decoding schemes, which are nonlinear, can be more powerful when probability model assumptions are satisfied. We have implemented a recursive Bayesian algorithm for reconstructing hand movement from neurons in the motor cortex. The algorithm uses a recently developed numerical method known as “particle filtering”, and follows the same general strategy as that used by Brown et al. (1998) to reconstruct the path of a foraging rat from hippocampal place cells. We investigated the method in a numerical simulation study in which neural firing rate was assumed to be positive, but otherwise a linear function of movement velocity, and preferred directions were not uniformly distributed. In terms of mean-squared error, the approach was roughly ten times more efficient than the PV algorithm and five times more efficient than OLE. Thus, use of recursive Bayesian decoding can achieve the accuracy of the PV algorithm (or OLE) with roughly ten times (or five times) fewer neurons. The method was also used to reconstruct hand movement in an ellipse-drawing task from 258 cells in the ventral premotor cortex. Recursive Bayesian decoding was again more efficient than the PV and OLE methods, by factors of roughly seven and three, respectively.
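A minimal bootstrap particle filter in the spirit of this abstract; the rectified-linear rate model, all tuning parameters, and the simulation setup are invented, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented setup: N neurons whose rate is a rectified-linear function of 2-D
# hand velocity v, lambda_i = max(0, b_i + c_i . v).
N, dt, steps = 30, 0.033, 200
b = rng.uniform(5, 15, N)                       # baseline rates (Hz)
theta = rng.uniform(0, np.pi, N)                # preferred directions in a half-plane (non-uniform)
C = 8.0 * np.column_stack([np.cos(theta), np.sin(theta)])

def rates(v):
    return np.maximum(0.0, b + v @ C.T)         # (P, N) for P velocity samples

# Simulate a smooth AR(1) velocity path and Poisson spike counts.
v_true = np.zeros((steps, 2))
for t in range(1, steps):
    v_true[t] = 0.95 * v_true[t - 1] + rng.normal(0, 0.1, 2)
counts = rng.poisson(rates(v_true) * dt)

# Bootstrap particle filter over velocity.
P = 500
parts = np.zeros((P, 2))
v_hat = np.zeros_like(v_true)
for t in range(steps):
    parts = 0.95 * parts + rng.normal(0, 0.1, (P, 2))           # propagate prior
    lam = rates(parts) * dt                                     # (P, N) expected counts
    logw = (counts[t] * np.log(lam + 1e-12) - lam).sum(axis=1)  # Poisson log-likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    v_hat[t] = w @ parts                                        # posterior-mean estimate
    parts = parts[rng.choice(P, P, p=w)]                        # resample
mse = np.mean((v_hat - v_true) ** 2)
print(mse)
```

The likelihood weighting is where the nonlinearity of the rate model enters; a linear method like PV or OLE cannot use it.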
Bayesian Population Decoding of Motor Cortical Activity Using a Kalman Filter
, 2005
Cited by 48 (7 self)
Effective neural motor prostheses require a method for decoding neural activity representing desired movement. In particular, the accurate reconstruction of a continuous motion signal is necessary for the control of devices such as computer cursors, robots, or a patient's own paralyzed limbs. For such applications we developed a real-time system that uses Bayesian inference techniques to estimate hand motion from the firing rates of multiple neurons. In this study, we used recordings that were previously made in the arm area of primary motor cortex in awake behaving monkeys using a chronically implanted multielectrode microarray. Bayesian inference involves computing the posterior probability of the hand motion conditioned on a sequence of observed firing rates; this is formulated in terms of the product of a likelihood and a prior. The likelihood term models the probability of firing rates given a particular hand motion. We found that a linear Gaussian model could be used to approximate this likelihood and could be readily learned from a small amount of training data. The prior term defines a probabilistic model of hand kinematics and was also taken to be a linear Gaussian model. Decoding was performed using a Kalman filter which gives an efficient recursive method for Bayesian inference when the likelihood and prior are linear and Gaussian. In offline experiments, the Kalman-filter reconstructions of hand trajectory were more accurate than previously reported results. The resulting decoding algorithm provides a principled probabilistic model of motor cortical coding, decodes hand motion in real time, provides an estimate of uncertainty, and is straightforward to implement. Additionally the formulation unifies and extends previous models of neural coding while prov...
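The Kalman recursion described here is standard; a compact sketch, with an invented toy model in place of real motor-cortical data:

```python
import numpy as np

# State x_t = hand kinematics, observation z_t = vector of firing rates:
# x_t = A x_{t-1} + w,  z_t = H x_t + q,  with w ~ N(0, W), q ~ N(0, Q).
def kalman_decode(z, A, W, H, Q, x0, P0):
    x, P = x0, P0
    out = []
    for zt in z:
        # Predict with the kinematic prior.
        x = A @ x
        P = A @ P @ A.T + W
        # Update with the linear-Gaussian firing-rate likelihood.
        S = H @ P @ H.T + Q
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (zt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        out.append(x)
    return np.array(out)

# Toy usage: 2-D position observed through 10 noisy "neurons" (parameters invented).
rng = np.random.default_rng(2)
A = np.eye(2); W = 0.01 * np.eye(2)
H = rng.normal(size=(10, 2)); Q = 0.5 * np.eye(10)
x_true = np.cumsum(rng.normal(0, 0.1, (100, 2)), axis=0)
z = x_true @ H.T + rng.multivariate_normal(np.zeros(10), Q, 100)
x_hat = kalman_decode(z, A, W, H, Q, np.zeros(2), np.eye(2))
```

Because both model terms are linear and Gaussian, each time step costs one small matrix inversion, which is what makes real-time operation feasible.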
Estimating a State-Space Model from Point Process Observations
, 2003
Cited by 39 (4 self)
A widely used signal processing paradigm is the state-space model. The state-space model is defined by two equations: an observation equation that describes how the hidden state or latent process is observed and a state equation that defines the evolution of the process through time. Inspired by neurophysiology experiments in which neural spiking activity is induced by an implicit (latent) stimulus, we develop an algorithm to estimate a state-space model observed through point process measurements. We represent the latent process modulating the neural spiking activity as a Gaussian autoregressive model driven by an external stimulus. Given the latent process, neural spiking activity is characterized as a general point process defined by its conditional intensity function. We develop an approximate expectation-maximization (EM) algorithm to estimate the unobservable state-space process, its parameters, and the parameters of the point process. The EM algorithm combines a point process recursive nonlinear filter algorithm, the fixed interval smoothing algorithm, and the state-space covariance algorithm to compute the complete data log likelihood efficiently. We use a Kolmogorov-Smirnov test based on the time-rescaling theorem to evaluate agreement between the model and point process data. We illustrate the model with two simulated data examples: an ensemble of Poisson neurons driven by a common stimulus and a single neuron whose conditional intensity function is approximated as a local Bernoulli process.
Neural Decoding of Cursor Motion Using a Kalman Filter
, 2003
Cited by 31 (11 self)
The direct neural control of external devices such as computer displays or prosthetic limbs requires the accurate decoding of neural activity representing continuous movement. We develop a real-time control system using the spiking activity of approximately 40 neurons recorded with an electrode array implanted in the arm area of primary motor cortex. In contrast to previous work, we develop a control-theoretic approach that explicitly models the motion of the hand and the probabilistic relationship between this motion and the mean firing rates of the cells in 70 ms bins. We focus on a realistic cursor control task in which the subject must move a cursor to "hit" randomly placed targets on a computer monitor. Encoding and decoding of the neural data is achieved with a Kalman filter which has a number of advantages over previous linear filtering techniques. In particular, the Kalman filter reconstructions of hand trajectories in offline experiments are more accurate than previously reported results and the model provides insights into the nature of the neural coding of movement.
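The linear-Gaussian parameters of such a decoder are typically learned from training data by least squares; a sketch with invented simulated data (variable names are mine, not the paper's):

```python
import numpy as np

# X: (T, d) hand kinematics per bin, Z: (T, n) binned firing rates.
def fit_kalman_params(X, Z):
    X0, X1 = X[:-1], X[1:]
    A = np.linalg.lstsq(X0, X1, rcond=None)[0].T      # x_t ≈ A x_{t-1}
    W = np.cov((X1 - X0 @ A.T).T)                     # state-noise covariance
    H = np.linalg.lstsq(X, Z, rcond=None)[0].T        # z_t ≈ H x_t
    Q = np.cov((Z - X @ H.T).T)                       # observation-noise covariance
    return A, W, H, Q

# Toy check: recover known parameters from simulated training data.
rng = np.random.default_rng(3)
A_true = 0.9 * np.eye(2)
H_true = rng.normal(size=(5, 2))
X = np.zeros((2000, 2))
for t in range(1, 2000):
    X[t] = A_true @ X[t - 1] + rng.normal(0, 0.1, 2)
Z = X @ H_true.T + rng.normal(0, 0.1, (2000, 5))
A, W, H, Q = fit_kalman_params(X, Z)
```

This closed-form fit is one reason the approach needs only a small amount of training data compared with methods that require iterative optimization.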
Statistical models for neural encoding, decoding, and optimal stimulus design
 Computational Neuroscience: Progress in Brain Research
, 2006
Cited by 31 (15 self)
There are two basic problems in the statistical analysis of neural data. The “encoding” problem concerns how information is encoded in neural spike trains: can we predict the spike trains of a neuron (or population of neurons), given an arbitrary stimulus or observed motor response? Conversely, the “decoding” problem concerns how much information is in a spike train: in particular, how well can we estimate the stimulus that gave rise to the spike train? This chapter describes statistical model-based techniques that in some cases provide a unified solution to these two coding problems. These models can capture stimulus dependencies as well as spike history and interneuronal interaction effects in population spike trains, and are intimately related to biophysically based models of integrate-and-fire type. We describe flexible, powerful likelihood-based methods for fitting these encoding models and then for using the models to perform optimal decoding. Each of these (apparently quite difficult) tasks turns out to be highly computationally tractable, due to a key concavity property of the model likelihood. Finally, we return to the encoding problem to describe how to use these models to adaptively optimize the stimuli presented to the cell on a trial-by-trial basis, in order that we may infer the optimal model parameters as efficiently as possible.
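The concavity property mentioned at the end is what makes a Poisson GLM encoding model easy to fit; a minimal gradient-ascent sketch on simulated data (stimulus filter only, no spike-history terms, all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated data: spike counts y_t ~ Poisson(exp(x_t . k_true)).
T, d = 5000, 8
X = rng.normal(size=(T, d))                    # stimulus design matrix
k_true = rng.normal(0, 0.3, d)                 # invented "true" stimulus filter
y = rng.poisson(np.exp(X @ k_true))

# With the exponential nonlinearity the log-likelihood
#   sum_t [ y_t (x_t . k) - exp(x_t . k) ]
# is concave in k, so plain gradient ascent finds the global maximum.
k = np.zeros(d)
for _ in range(500):
    lam = np.exp(X @ k)
    grad = X.T @ (y - lam) / T                 # gradient of mean log-likelihood
    k += 0.1 * grad
```

In practice one would use Newton or quasi-Newton steps, but concavity guarantees that any ascent method converges to the same unique optimum.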
Common-input models for multiple neural spike-train data
 Network: Comput. Neural Syst.
, 2006
Cited by 30 (17 self)
Recent developments in multielectrode recordings enable the simultaneous measurement of the spiking activity of many neurons. Analysis of such multi-neuronal data is one of the key challenges in computational neuroscience today. In this work, we develop a multivariate point-process model in which the observed activity of a network of neurons depends on three terms: 1) the experimentally controlled stimulus; 2) the spiking history of the observed neurons; and 3) a latent noise source that corresponds, for example, to “common input” from an unobserved population of neurons that is presynaptic to two or more cells in the observed population. We develop an expectation-maximization algorithm for fitting the model parameters; here the expectation step is based on a continuous-time implementation of the extended Kalman smoother, and the maximization step involves two concave maximization problems which may be solved in parallel. The techniques developed allow us to solve a variety of inference problems in a straightforward, computationally efficient fashion; for example, we may use the model to predict network activity given an arbitrary stimulus, infer a neuron’s firing rate given the stimulus and the activity of the other observed neurons, and perform optimal stimulus decoding and prediction. We present several detailed simulation studies which explore the strengths and limitations of our approach.
Synergy, Redundancy, and Independence in Population Codes
 The Journal of Neuroscience
, 2003
Cited by 29 (0 self)
A key issue in understanding the neural code for an ensemble of neurons is the nature and strength of correlations between neurons and how these correlations are related to the stimulus. The issue is complicated by the fact that there is not a single notion of independence or lack of correlation. We distinguish three kinds: (1) activity independence; (2) conditional independence; and (3) information independence. Each notion is related to an information measure: the information between cells, the information between cells given the stimulus, and the synergy of cells about the stimulus, respectively. We show that these measures form an interrelated framework for evaluating contributions of signal and noise correlations to the joint information conveyed about the stimulus and that at least two of the three measures must be calculated to characterize a population code. This framework is compared with others recently proposed in the literature. In addition, we distinguish questions about how information is encoded by a population of neurons from how that information can be decoded. Although information theory is natural and powerful for questions of encoding, it is not sufficient for characterizing the process of decoding. Decoding fundamentally requires an error measure that quantifies the importance of the deviations of estimated stimuli from actual stimuli. Because there is no a priori choice of error measure, questions about decoding cannot be put on the same level of generality as for encoding.
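The three measures can be computed directly for a small discrete example; the joint distribution p(s, r1, r2) below is invented (an XOR-like code) to show positive synergy together with zero single-cell information:

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Invented joint distribution p[s, r1, r2] over a binary stimulus and two
# binary responses; the stimulus is (approximately) the XOR of the responses.
p = np.array([[[0.20, 0.05],
               [0.05, 0.20]],
              [[0.05, 0.20],
               [0.20, 0.05]]])

ps = p.sum(axis=(1, 2))                        # p(s)
pr = p.sum(axis=0)                             # p(r1, r2)
pr1 = p.sum(axis=(0, 2))                       # p(r1)
pr2 = p.sum(axis=(0, 1))                       # p(r2)

I_joint = H(ps) + H(pr.ravel()) - H(p.ravel())            # I(S; R1, R2)
I1 = H(ps) + H(pr1) - H(p.sum(axis=2).ravel())            # I(S; R1)
I2 = H(ps) + H(pr2) - H(p.sum(axis=1).ravel())            # I(S; R2)
synergy = I_joint - I1 - I2
print(I_joint, I1, I2, synergy)
```

Here each cell alone carries no information about the stimulus, yet the pair carries a positive amount, which is exactly the situation where single-cell analyses mislead.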
A new look at state-space models for neural data
 Journal of Computational Neuroscience
, 2010
Cited by 28 (19 self)
State-space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially varying firing rates.
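The bandedness point can be illustrated with the simplest case: MAP smoothing of a latent path under a Gaussian random-walk prior with direct Gaussian observations reduces to one tridiagonal solve in O(T) (the model and noise levels below are invented):

```python
import numpy as np
from scipy.linalg import solveh_banded

rng = np.random.default_rng(5)
T, sig2, tau2 = 400, 0.5, 0.01
x_true = np.cumsum(rng.normal(0, np.sqrt(tau2), T))       # random-walk latent path
y = x_true + rng.normal(0, np.sqrt(sig2), T)              # noisy observations

# Posterior precision J = I/sig2 + D^T D / tau2 (D = first differences) is
# tridiagonal; store it in scipy's upper-banded format for solveh_banded.
diag = 1.0 / sig2 + 2.0 / tau2 * np.ones(T)
diag[0] -= 1.0 / tau2
diag[-1] -= 1.0 / tau2
off = -np.ones(T) / tau2
ab = np.vstack([off, diag])        # row 0: superdiagonal (ab[0, 0] unused)
x_map = solveh_banded(ab, y / sig2)

mse_smooth = np.mean((x_map - x_true) ** 2)
mse_raw = np.mean((y - x_true) ** 2)
print(mse_raw, mse_smooth)
```

With non-Gaussian (e.g. point process) observations the same banded structure appears in the Hessian at each Newton step, which is what keeps direct optimization as cheap as approximate filtering.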
Dynamic Analyses of Information Encoding in Neural Ensembles
 Neural Computation
, 2004
Cited by 22 (1 self)
Neural spike train decoding algorithms and techniques to compute Shannon mutual information are important methods for analyzing how neural systems represent biological signals. Decoding algorithms are also one of several strategies being used to design controls for brain-machine interfaces. Developing optimal strategies to design decoding algorithms and compute mutual information are therefore important problems in computational neuroscience. We present a general recursive filter decoding algorithm based on a point process model of individual neuron spiking activity and a linear stochastic state-space model of the biological signal. We derive from the algorithm new instantaneous estimates of the entropy, entropy rate, and the mutual information between the signal and the ensemble spiking activity. We assess the accuracy of the algorithm by computing, along with the decoding error, the true coverage probability of the approximate 0.95 confidence regions for the individual signal estimates. We illustrate the new algorithm by reanalyzing the position and ensemble neural spiking activity of CA1 hippocampal neurons from two rats foraging in an open circular environment. We compare the performance of this algorithm with a linear filter constructed by the widely used reverse correlation method. The median decoding error for Animal 1 (2) during 10 minutes of open foraging was 5.9 (5.5) cm, the median entropy was 6.9 (7.0) bits, the median information was 9.4 (9.4) bits, and the true coverage probability for 0.95 confidence regions was 0.67 (0.75) using 34 (32) neurons. These findings improve significantly on our previous results and suggest an integrated approach to dynamically reading neural codes, measuring their properties, and quantifying the accuracy with which encoded information is extracted.
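The recursive point-process filter underlying this line of work can be sketched for a one-dimensional signal and log-linear Poisson neurons (all parameters are invented; this is the Gaussian-approximation update, not the paper's full algorithm):

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented setup: N neurons with tuning lambda_i(x) = exp(alpha_i + beta_i x),
# latent 1-D signal x following an AR(1) state model, 1 ms bins.
N, steps, dt = 20, 1000, 0.001
alpha = np.log(rng.uniform(5, 20, N))          # baseline log-rates
beta = rng.normal(0, 1.0, N)                   # tuning slopes
a, w = 0.999, 1e-4                             # state model x_t = a x_{t-1} + noise

x_true = np.zeros(steps)
for t in range(1, steps):
    x_true[t] = a * x_true[t - 1] + rng.normal(0, np.sqrt(w))
dN = rng.poisson(np.exp(alpha + np.outer(x_true, beta)) * dt)   # spike counts

# Point-process filter: predict under the state model, then a Gaussian-
# approximation update from the point-process likelihood.
x, P = 0.0, 1.0
x_hat = np.zeros(steps)
for t in range(steps):
    x, P = a * x, a * a * P + w                          # predict
    lam = np.exp(alpha + beta * x) * dt                  # expected counts at x
    P = 1.0 / (1.0 / P + np.sum(lam * beta ** 2))        # posterior variance
    x = x + P * np.sum(beta * (dN[t] - lam))             # posterior mean
    x_hat[t] = x
mse = np.mean((x_hat - x_true) ** 2)
print(mse)
```

The running posterior variance P is also what yields the instantaneous entropy and confidence-region estimates the abstract describes.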