Results 1–10 of 43
Dynamic causal modelling of evoked potentials: a reproducibility study
NeuroImage, 2007
"... Dynamic causal modelling (DCM) has been applied recently to eventrelated responses (ERPs) measured with EEG/MEG. DCM attempts to explain ERPs using a network of interacting cortical sources and waveform differences in terms of coupling changes among sources. The aim of this work was to establish the ..."
Abstract

Cited by 31 (5 self)
Dynamic causal modelling (DCM) has been applied recently to event-related responses (ERPs) measured with EEG/MEG. DCM attempts to explain ERPs using a network of interacting cortical sources, and waveform differences in terms of coupling changes among sources. The aim of this work was to establish the validity of DCM by assessing its reproducibility across subjects. We used an oddball paradigm to elicit mismatch responses. Sources of cortical activity were modelled as equivalent current dipoles, using a biophysically informed spatiotemporal forward model that included connections among neuronal subpopulations in each source. Bayesian inversion provided estimates of changes in coupling among sources and the marginal likelihood of each model. By specifying different connectivity models we were able to evaluate three different hypotheses: differences in the ERPs to rare and frequent events are mediated by changes in forward connections (F-model), backward connections (B-model) or both (FB-model). The results were remarkably consistent over subjects. In all but one subject, the forward model was better than the backward model. This is an important result because these models have the same number of parameters (i.e., the same complexity). Furthermore, the FB-model was significantly better than both, in 7 out of 11 subjects. This is another important result because it shows that a more complex model (that can fit the data more accurately) is not necessarily the most likely model. At the group level the FB-model supervened. We discuss these findings in terms of the validity and usefulness of DCM in characterising EEG/MEG data and its ability to model ERPs in a mechanistic fashion. © 2007 Elsevier Inc. All rights reserved.
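The model comparison described in this abstract rests on each model's log marginal likelihood (log evidence). A minimal Python sketch of a fixed-effects group comparison, with purely illustrative log-evidence values (not figures from the study), might look like:

```python
import numpy as np

# Hypothetical per-subject log model evidences for the three models
# (F, B, FB); the numbers are illustrative, not taken from the paper.
log_evidence = {
    "F":  np.array([-120.0, -118.5, -121.2]),
    "B":  np.array([-123.4, -119.9, -124.0]),
    "FB": np.array([-117.8, -118.1, -119.5]),
}

def group_log_bayes_factor(m1, m2, log_ev):
    """Fixed-effects group comparison: sum the per-subject
    log-evidence differences between two models."""
    return float(np.sum(log_ev[m1] - log_ev[m2]))

lbf_f_vs_b = group_log_bayes_factor("F", "B", log_evidence)
# A log Bayes factor above ~3 is conventionally read as strong evidence.
print(lbf_f_vs_b)
print(lbf_f_vs_b > 3.0)
```

Because log evidence penalises complexity as well as rewarding fit, this comparison can favour a simpler model over one that fits the data more accurately, which is exactly the point the abstract makes about the FB-model.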
Dynamic causal modelling for fMRI: A two-state model
NeuroImage, 2008
"... Dynamical causal modelling (DCM) for functional magnetic resonance imaging (fMRI) is a technique to infer directed connectivity among brain regions. These models distinguish between a neuronal level, which models neuronal interactions among regions, and an observation level, which models the hemodyn ..."
Abstract

Cited by 21 (1 self)
Dynamic causal modelling (DCM) for functional magnetic resonance imaging (fMRI) is a technique to infer directed connectivity among brain regions. These models distinguish between a neuronal level, which models neuronal interactions among regions, and an observation level, which models the hemodynamic responses in each region. The original DCM formulation considered only one neuronal state per region. In this work, we adopt a more plausible and less constrained neuronal model, using two neuronal states (populations) per region. Critically, this gives us an explicit model of intrinsic (between-population) connectivity within a region. In addition, by using positivity constraints, the model conforms to the organization of real cortical hierarchies, whose extrinsic connections are excitatory (glutamatergic). By incorporating two populations within each region we can model selective changes in both extrinsic and intrinsic connectivity. Using synthetic data, we show that the two-state model is internally consistent and identifiable. We then apply the model to real data, explicitly modelling intrinsic connections. Using model comparison, we found that the two-state model is better than the single-state model. Furthermore, using the two-state model we find that it is possible to disambiguate between subtle changes in coupling; we were able to show that attentional gain, in the context of visual motion processing, is accounted for sufficiently by an increased sensitivity of excitatory populations of neurons in V5 to forward afferents from earlier visual areas.
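The two ideas this abstract combines, two populations per region and positivity-constrained (hence excitatory) extrinsic connections, can be illustrated with a toy simulation. This is a minimal sketch with made-up linear dynamics and parameter values, not the published model equations:

```python
import numpy as np

# Two regions, each with an excitatory (E) and inhibitory (I) population.
# The extrinsic (between-region) connection is parameterised on a log
# scale, so exp(theta) is strictly positive, i.e. excitatory, mirroring
# the positivity constraint described in the abstract.
theta_fwd = 0.2                      # log-scale forward connection (illustrative)
a_fwd = np.exp(theta_fwd)            # always > 0, whatever theta_fwd is

def step(x, dt=1e-3, u=0.0):
    """One Euler step of a toy linear network with two states per region."""
    x1e, x1i, x2e, x2i = x
    dx1e = -x1e - 0.5 * x1i + u              # intrinsic E-I coupling, region 1
    dx1i = 0.8 * x1e - x1i
    dx2e = -x2e - 0.5 * x2i + a_fwd * x1e    # excitatory extrinsic drive to region 2
    dx2i = 0.8 * x2e - x2i
    return x + dt * np.array([dx1e, dx1i, dx2e, dx2i])

x = np.zeros(4)
for _ in range(1000):                # drive region 1 with a constant input
    x = step(x, u=1.0)
print(x)
```

Separating intrinsic (within-region E-I) from extrinsic (between-region E-to-E) parameters is what lets a model of this form attribute an experimental effect to one kind of connection or the other.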
Dynamic causal modelling of induced responses
NeuroImage, 2008
"... This paper describes a dynamic causal model (DCM) for induced or spectral responses as measured with the electroencephalogram (EEG) or the magnetoencephalogram (MEG). We model the timevarying power, over a range of frequencies, as the response of a distributed system of coupled electromagnetic sour ..."
Abstract

Cited by 18 (4 self)
This paper describes a dynamic causal model (DCM) for induced or spectral responses as measured with the electroencephalogram (EEG) or the magnetoencephalogram (MEG). We model the time-varying power, over a range of frequencies, as the response of a distributed system of coupled electromagnetic sources to a spectral perturbation. The model parameters encode the frequency response to exogenous input and the coupling among sources and different frequencies. The Bayesian inversion of this model, given data, enables inferences about the parameters of a particular model and allows us to compare different models, or hypotheses. One key aspect of the model is that it differentiates between linear and nonlinear coupling, which correspond to within- and between-frequency coupling respectively. To establish the face validity of our approach, we generate synthetic data and test the identifiability of various parameters to ensure they can be estimated accurately, under different levels of noise. We then apply our model to EEG data from a face-perception experiment, to ask whether there is evidence for nonlinear coupling between early visual cortex and fusiform areas.
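The distinction between linear (within-frequency) and nonlinear (between-frequency) coupling can be pictured in terms of the coupling matrix acting on a vector of spectral power. The sketch below uses invented matrices and a tiny number of frequency bins purely for illustration:

```python
import numpy as np

# Spectral power over F frequency bins in a source. The coupling matrix
# from one source to another is diagonal for linear (within-frequency)
# coupling; off-diagonal entries implement nonlinear (between-frequency)
# coupling. All values here are illustrative.
F = 4

linear_coupling = np.diag([0.5, 0.4, 0.3, 0.2])   # bin f -> same bin f only
nonlinear_coupling = linear_coupling.copy()
nonlinear_coupling[2, 0] = 0.3                    # power at bin 0 also drives bin 2

g1 = np.zeros(F)
g1[0] = 1.0                                       # source 1: power in lowest bin only

print(linear_coupling @ g1)       # within-frequency transfer only
print(nonlinear_coupling @ g1)    # a cross-frequency term appears
```

Model comparison between the two forms of coupling matrix is then what supports an inference like the one the abstract describes between early visual cortex and fusiform areas.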
Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks. Biological Cybernetics
, 2012
"... Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it ..."
Abstract

Cited by 3 (0 self)
Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an oversimplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction-error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
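The core idea, an RNN as generative model plus prediction-error-driven recognition, can be sketched very crudely. This toy is not the paper's derivation: it uses a hand-picked contractive weight matrix, equal precisions, and a simple gradient loop on sensory and dynamical prediction errors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generative RNN: x[t+1] = tanh(W @ x[t]); W is illustrative and chosen
# contractive so the toy is well behaved.
W = np.array([[0.4, -0.3, 0.0],
              [0.3,  0.4, 0.0],
              [0.0,  0.0, 0.5]])

def f(x):
    return np.tanh(W @ x)

# Simulate hidden dynamics and noisy observations of them.
T = 50
x = np.zeros((T, 3))
x[0] = rng.normal(size=3)
for t in range(T - 1):
    x[t + 1] = f(x[t])
y = x + rng.normal(scale=0.05, size=x.shape)

# Recognition: refine a state estimate by descending on two prediction
# errors, a sensory one (y - est) and a dynamical one (pred - est).
# With equal weights the loop converges to their midpoint, i.e. an
# equal-precision average of evidence and prediction.
x_hat = np.zeros_like(x)
for t in range(1, T):
    pred = f(x_hat[t - 1])
    est = pred.copy()
    for _ in range(20):                  # inner inference iterations
        err_sens = y[t] - est
        err_dyn = pred - est
        est = est + 0.2 * (err_sens + err_dyn)
    x_hat[t] = est

mse_rrnn = np.mean((x_hat[5:] - x[5:]) ** 2)
mse_obs = np.mean((y[5:] - x[5:]) ** 2)
print(mse_rrnn < mse_obs)
```

Even this crude scheme recovers the hidden states more accurately than the raw observations, because each estimate pools the current observation with a dynamical prediction.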
, 2010
"... We suggested recently that attention can be understood as inferring the level of uncertainty or precision during hierarchical perception. In this paper, we try to substantiate this claim using neuronal simulations of directed spatial attention and biased competition. These simulations assume that ne ..."
Abstract

Cited by 2 (0 self)
We suggested recently that attention can be understood as inferring the level of uncertainty or precision during hierarchical perception. In this paper, we try to substantiate this claim using neuronal simulations of directed spatial attention and biased competition. These simulations assume that neuronal activity encodes a probabilistic representation of the world that optimizes free-energy in a Bayesian fashion. Because free-energy bounds surprise or the (negative) log-evidence for internal models of the world, this optimization can be regarded as evidence accumulation or (generalized) predictive coding. Crucially, both predictions about the state of the world generating sensory data and the precision of those data have to be optimized. Here, we show that if the precision depends on the states, one can explain many aspects of attention. We illustrate this in the context of the Posner paradigm, using the simulations to generate both psychophysical and electrophysiological responses. These simulated responses are consistent with attentional bias or gating, competition for attentional resources, attentional capture and associated speed-accuracy trade-offs. Furthermore, if we present both attended and non-attended stimuli simultaneously, biased competition for neuronal representation emerges as a principled and straightforward property of Bayes-optimal perception.
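The "attention as precision" idea reduces, in its simplest Gaussian form, to precision-weighted averaging: the weight on sensory evidence is its precision (inverse variance), and attending to a location amounts to assigning it higher precision. A minimal sketch, with illustrative numbers:

```python
# Toy Gaussian posterior: combine a prior belief with a sensory sample,
# each weighted by its precision (inverse variance). An attended location
# gets higher sensory precision; values are illustrative.
def posterior_mean(prior_mu, prior_pi, y, sensory_pi):
    """Precision-weighted combination of prior and sensory evidence."""
    return (prior_pi * prior_mu + sensory_pi * y) / (prior_pi + sensory_pi)

y = 1.0            # sensory sample at some location
prior_mu = 0.0     # prior expectation
prior_pi = 1.0     # prior precision

attended = posterior_mean(prior_mu, prior_pi, y, sensory_pi=4.0)
unattended = posterior_mean(prior_mu, prior_pi, y, sensory_pi=0.5)
print(attended, unattended)
```

The attended sample pulls the posterior much further from the prior than the unattended one does, which is the gating or biasing effect the simulations in the paper elaborate in a full hierarchical setting.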
The Brain Connectivity Workshops: Moving the frontiers of computational systems neuroscience
"... ..."
(Show Context)
NeuroImage 48 (2009) 269–279
"... journal homepage: www.elsevier.com/locate/ynimg ..."
(Show Context)
K.E. Stephan, R.B. Reilly
, 2007
"... We present a neural mass model of steadystate membrane potentials measured with local field potentials or electroencephalography in the frequency domain. This model is an extended version of previous dynamic causal models for investigating eventrelated potentials in the timedomain. In this paper, ..."
Abstract
We present a neural mass model of steady-state membrane potentials measured with local field potentials or electroencephalography in the frequency domain. This model is an extended version of previous dynamic causal models for investigating event-related potentials in the time domain. In this paper, we augment the previous formulation with parameters that mediate spike-rate adaptation and recurrent intrinsic inhibitory connections. We then use linear systems analysis to show how the model's spectral response changes with its neurophysiological parameters. We demonstrate that much of the interesting behaviour depends on the nonlinearity which couples mean membrane potential to mean spiking rate. This nonlinearity is analogous, at the population level, to the firing rate–input curves often used to characterize single-cell responses. This function depends on the model's gain and adaptation currents which, neurobiologically, are influenced by the activity of modulatory neurotransmitters. The key contribution of this paper is to show how neuromodulatory effects can be modelled by adding adaptation currents to a simple phenomenological model of EEG. Critically, we show that these effects are expressed in a systematic way in the spectral density of EEG recordings. Inversion of the model, given such non-invasive recordings, should allow one to quantify pharmacologically induced changes in adaptation currents. In short, this work establishes a forward or generative model of electrophysiological recordings for psychopharmacological studies.
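The nonlinearity this abstract emphasises is typically a sigmoid mapping mean membrane potential to mean firing rate, and linear systems analysis works with its slope (gain) at the operating point. The sketch below shows a generic sigmoid of this kind and its local gain; the functional form and the role of the slope parameter `r` are standard, but the specific values are illustrative, not the paper's:

```python
import numpy as np

def sigmoid_rate(v, r, v0=0.0):
    """Mean population firing rate as a sigmoid function of mean
    depolarisation v; r is the slope (gain) parameter, v0 the threshold."""
    return 1.0 / (1.0 + np.exp(-r * (v - v0)))

def gain_at_operating_point(r, v=0.0, v0=0.0):
    """Local gain dS/dv: the slope that a linearisation of the neural
    mass model, and hence its predicted spectrum, depends on."""
    s = sigmoid_rate(v, r, v0)
    return r * s * (1.0 - s)

# Increasing r (standing in for a neuromodulatory effect on gain)
# steepens the curve at threshold: dS/dv there equals r/4.
print(gain_at_operating_point(r=1.0))
print(gain_at_operating_point(r=2.0))
```

Because the linearised transfer function inherits this slope, a change in gain or adaptation shifts the predicted spectral density in a systematic way, which is what makes the model invertible from EEG spectra.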