A theory of cortical responses
, 2005
"... This article concerns the nature of evoked brain responses and the principles underlying their generation. We start with the premise that the sensory brain has evolved to represent or infer the causes of changes in its sensory inputs. The problem of inference is well formulated in statistical terms. ..."
Abstract

Cited by 101 (21 self)
This article concerns the nature of evoked brain responses and the principles underlying their generation. We start with the premise that the sensory brain has evolved to represent or infer the causes of changes in its sensory inputs. The problem of inference is well formulated in statistical terms. The statistical fundaments of inference may therefore afford important constraints on neuronal implementation. By formulating the original ideas of Helmholtz on perception in terms of modern-day statistical theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. It turns out that the problems of inferring the causes of sensory input (perceptual inference) and learning the relationship between input and cause (perceptual learning) can be resolved using exactly the same principle. Specifically, both inference and learning rest on minimizing the brain’s free energy, as defined in statistical physics. Furthermore, inference and learning can proceed in a biologically plausible fashion. Cortical responses can be seen as the brain’s attempt to minimize the free energy induced by a stimulus and thereby encode the most likely cause of that stimulus. Similarly, learning emerges from changes in synaptic efficacy that minimize the free energy, averaged over all stimuli encountered. The underlying scheme rests on empirical Bayes and hierarchical models.
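The idea that perceptual inference minimizes a free energy can be sketched numerically. Everything below is an illustrative assumption rather than the paper's implementation: a one-dimensional hidden cause, a quadratic generative mapping g(u) = u², Gaussian noise, and plain gradient descent on the resulting free-energy (negative log joint) function.

```python
# Toy predictive-coding sketch (assumed setup, not the paper's scheme):
# infer a hidden cause u from a noisy observation y under the generative
# model y = g(u) + noise, with g(u) = u**2, by gradient descent on
#   F(u) = (y - g(u))**2 / (2*s_y) + (u - p)**2 / (2*s_p)
# where p is the prior mean and s_y, s_p are noise/prior variances.

def infer(y, p=1.0, s_y=0.1, s_p=1.0, lr=0.005, steps=2000):
    u = p  # start inference at the prior mean
    for _ in range(steps):
        eps_y = y - u**2        # sensory prediction error
        eps_p = u - p           # prior prediction error
        # dF/du = -(eps_y / s_y) * g'(u) + eps_p / s_p, with g'(u) = 2u
        u -= lr * (-(eps_y / s_y) * 2 * u + eps_p / s_p)
    return u

u_hat = infer(y=4.0)
print(u_hat)  # settles close to 2, the cause most consistent with y = u**2
```

The fixed point balances the precision-weighted sensory error against the prior error, which is the sense in which the inferred cause "explains" the stimulus.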
Dynamic causal modelling of evoked potentials: a reproducibility study
 NeuroImage
, 2007
"... Dynamic causal modelling (DCM) has been applied recently to eventrelated responses (ERPs) measured with EEG/MEG. DCM attempts to explain ERPs using a network of interacting cortical sources and waveform differences in terms of coupling changes among sources. The aim of this work was to establish the ..."
Abstract

Cited by 17 (5 self)
Dynamic causal modelling (DCM) has recently been applied to event-related responses (ERPs) measured with EEG/MEG. DCM attempts to explain ERPs using a network of interacting cortical sources, and waveform differences in terms of coupling changes among those sources. The aim of this work was to establish the validity of DCM by assessing its reproducibility across subjects. We used an oddball paradigm to elicit mismatch responses. Sources of cortical activity were modelled as equivalent current dipoles, using a biophysically informed spatiotemporal forward model that included connections among neuronal subpopulations in each source. Bayesian inversion provided estimates of changes in coupling among sources and the marginal likelihood of each model. By specifying different connectivity models we were able to evaluate three hypotheses: differences in the ERPs to rare and frequent events are mediated by changes in forward connections (F-model), backward connections (B-model) or both (FB-model). The results were remarkably consistent over subjects. In all but one subject, the forward model was better than the backward model. This is an important result because these models have the same number of parameters (i.e., the same complexity). Furthermore, the FB-model was significantly better than both in 7 out of 11 subjects. This is another important result because it shows that a more complex model (one that can fit the data more accurately) is not necessarily the most likely model. At the group level the FB-model supervened. We discuss these findings in terms of the validity and usefulness of DCM in characterising EEG/MEG data and its ability to model ERPs in a mechanistic fashion.
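The model comparison described above rests on each model's marginal likelihood (evidence), which penalizes complexity automatically. A minimal stand-in, assuming BIC as a crude approximation to the log-evidence (DCM itself uses a variational free-energy bound), is:

```python
import numpy as np

# Sketch of Bayesian model comparison via an approximate log-evidence
# (BIC: max log-likelihood minus (k/2) * log n). The data and the two
# regression "models" are toy assumptions, not the paper's DCMs.

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.1, x.size)   # data with a genuine slope

def bic_log_evidence(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / n                              # ML noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)    # Gaussian log-lik
    return loglik - 0.5 * k * np.log(n)                     # complexity penalty

F0 = bic_log_evidence(y, np.ones((x.size, 1)))                    # intercept only
F1 = bic_log_evidence(y, np.column_stack([np.ones_like(x), x]))   # intercept + slope
log_bayes_factor = F1 - F0
print(log_bayes_factor > 0)  # the model with the slope term wins decisively
```

A positive log Bayes factor favours the first model; by convention a value above about 3 is taken as strong evidence.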
Dynamic causal modelling of induced responses
 NeuroImage
, 2008
"... This paper describes a dynamic causal model (DCM) for induced or spectral responses as measured with the electroencephalogram (EEG) or the magnetoencephalogram (MEG). We model the timevarying power, over a range of frequencies, as the response of a distributed system of coupled electromagnetic sour ..."
Abstract

Cited by 13 (4 self)
This paper describes a dynamic causal model (DCM) for induced or spectral responses as measured with the electroencephalogram (EEG) or the magnetoencephalogram (MEG). We model the time-varying power, over a range of frequencies, as the response of a distributed system of coupled electromagnetic sources to a spectral perturbation. The model parameters encode the frequency response to exogenous input and the coupling among sources and different frequencies. The Bayesian inversion of this model, given data, enables inferences about the parameters of a particular model and allows us to compare different models, or hypotheses. One key aspect of the model is that it differentiates between linear and nonlinear coupling, which correspond to within- and between-frequency coupling, respectively. To establish the face validity of our approach, we generate synthetic data and test the identifiability of various parameters to ensure they can be estimated accurately under different levels of noise. We then apply our model to EEG data from a face-perception experiment, to ask whether there is evidence for nonlinear coupling between early visual cortex and fusiform areas.
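The time-varying spectral power that such a model is fit to can be illustrated with a standard complex Morlet wavelet transform. The signal, sampling rate and wavelet parameters below are all toy assumptions of this sketch, not the paper's data features.

```python
import numpy as np

# Illustration of extracting time-varying power at one frequency with a
# complex Morlet wavelet -- the kind of induced-response feature a
# spectral DCM would model. Signal and parameters are assumed for the demo.

fs = 200.0                                    # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) * (t > 1)    # 10 Hz burst in the 2nd second

def morlet_power(sig, fs, freq, n_cycles=5):
    dur = n_cycles / freq                     # half-width of wavelet support, s
    tw = np.arange(-dur, dur, 1 / fs)
    gauss = np.exp(-tw**2 / (2 * (dur / 4) ** 2))
    wavelet = np.exp(2j * np.pi * freq * tw) * gauss
    analytic = np.convolve(sig, wavelet, mode="same")
    return np.abs(analytic) ** 2              # instantaneous power at freq

power = morlet_power(sig, fs, freq=10.0)
early = power[t < 0.5].mean()                 # before the burst
late = power[(t > 1.25) & (t < 1.75)].mean()  # during the burst
print(late > early)  # power is concentrated where the 10 Hz burst is
```

In the DCM setting, such power time series at several frequencies and sources become the data features explained by the coupled-source model.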
A Novel Extended Granger Causal Model Approach Demonstrates Brain Hemispheric Differences During Face Recognition Learning
, 2009
"... Two main approaches in exploring causal relationships in biological systems using timeseries data are the application of Dynamic Causal model (DCM) and Granger Causal model (GCM). These have been extensively applied to brain imaging data and are also readily applicable to a wide range of temporal c ..."
Abstract

Cited by 10 (3 self)
Two main approaches to exploring causal relationships in biological systems using time-series data are the Dynamic Causal Model (DCM) and the Granger Causal Model (GCM). These have been extensively applied to brain imaging data and are also readily applicable to a wide range of temporal changes involving genes, proteins or metabolic pathways. However, the two approaches have always been considered radically different from each other and have therefore been used independently. Here we present a novel approach that extends the Granger Causal Model while also sharing features of the bilinear approximation of the Dynamic Causal Model. We first tested the efficacy of the extended GCM by applying it extensively to toy models in both the time and frequency domains, and then applied it to local field potential recordings collected from in vivo multielectrode array experiments. We demonstrate changes, induced by face discrimination learning, in inter- and intra-hemispheric connectivity and in the hemispheric predominance of theta- and gamma-frequency oscillations in sheep inferotemporal cortex. The results provide the first evidence for connectivity …
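The classical linear core of Granger causality, which the extended GCM builds on, is simple to demonstrate: a signal x "Granger-causes" y if x's past reduces the residual variance of an autoregressive model of y. The simulated system and lag order below are assumptions of this sketch; the paper's extension (bilinear, DCM-like terms) is not shown.

```python
import numpy as np

# Textbook pairwise Granger causality on a simulated system where x
# drives y with a one-step lag. Only the standard linear GCM core is shown.

rng = np.random.default_rng(1)
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x -> y coupling

def granger(target, source, p=1):
    # restricted model: target's own past; full model: plus source's past
    m = target.size
    Y = target[p:]
    own = np.column_stack([target[p - k - 1 : m - k - 1] for k in range(p)])
    full = np.column_stack(
        [own] + [source[p - k - 1 : m - k - 1] for k in range(p)]
    )
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    return np.log(rss(own) / rss(full))  # > 0: source's past helps predict target

gc_x_to_y = granger(y, x)
gc_y_to_x = granger(x, y)
print(gc_x_to_y > gc_y_to_x)  # recovers the simulated direction x -> y
```

In practice the log variance ratio is tested against a null distribution (or an F-test) rather than compared by eye.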
Axonal velocity distributions in neural field equations
 PLoS Comput Biol
"... By modelling the average activity of large neuronal populations, continuum mean field models (MFMs) have become an increasingly important theoretical tool for understanding the emergent activity of cortical tissue. In order to be computationally tractable, longrange propagation of activity in MFMs ..."
Abstract

Cited by 2 (0 self)
By modelling the average activity of large neuronal populations, continuum mean field models (MFMs) have become an increasingly important theoretical tool for understanding the emergent activity of cortical tissue. In order to be computationally tractable, long-range propagation of activity in MFMs is often approximated with partial differential equations (PDEs). However, the PDE approximations in current use correspond to underlying axonal velocity distributions that are incompatible with experimental measurements. To rectify this deficiency, we introduce novel propagation PDEs that give rise to smooth, unimodal distributions of axonal conduction velocities. We also argue that velocities estimated from fibre diameters in slices and from latency measurements, respectively, relate quite differently to such distributions, a significant point for any phenomenological description. Our PDEs are then successfully fit to fibre diameter data from human corpus callosum and rat subcortical white matter. This allows, for the first time, the simulation of long-range conduction in the mammalian brain with realistic, convenient PDEs. Furthermore, the results suggest that the propagation of activity in rat and human differs significantly beyond mere scaling. The dynamical consequences of our new formulation are investigated in the context of a well-known neural field model. On the basis of Turing instability analyses, we conclude that pattern formation is more easily initiated with our more realistic propagator. By increasing characteristic conduction velocities, a smooth transition can occur from self-sustaining bulk oscillations to travelling waves of various …
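One reason diameter-based and latency-based velocity estimates relate differently to a distribution can be shown numerically: over a distributed velocity population, the mean conduction delay exceeds the delay computed at the mean velocity (Jensen's inequality applied to 1/v). The gamma distribution and parameter values below are assumptions chosen for illustration, not the paper's fitted distributions.

```python
import numpy as np

# Toy Monte Carlo: for a unimodal (here gamma, an assumed form) axonal
# velocity distribution, E[d / v] exceeds d / E[v], so latency-derived
# and diameter-derived "characteristic velocities" need not agree.

rng = np.random.default_rng(0)
v = rng.gamma(shape=4.0, scale=2.0, size=100_000)  # velocities, m/s (mean 8)
d = 0.1                                            # path length, m

mean_delay = np.mean(d / v)     # what averaging measured latencies gives
naive_delay = d / np.mean(v)    # delay implied by the mean velocity
print(mean_delay > naive_delay)
```

For this gamma family the effect is exact and analytic (E[1/v] = 1/(scale · (shape − 1))), so the gap does not shrink with more samples.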
A mesostate-space model for EEG and MEG
, 2007
"... We present a multiscale generative model for EEG, that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number o ..."
Abstract

Cited by 1 (0 self)
We present a multiscale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources; (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates; (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other); and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of mesosources specifies the model, the model evidence can be used to compare models and find the optimum number of mesosources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of mesosources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of mesosources while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov-process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.
, 2010
"... We suggested recently that attention can be understood as inferring the level of uncertainty or precision during hierarchical perception. In this paper, we try to substantiate this claim using neuronal simulations of directed spatial attention and biased competition. These simulations assume that ne ..."
Abstract

Cited by 1 (0 self)
We suggested recently that attention can be understood as inferring the level of uncertainty or precision during hierarchical perception. In this paper, we try to substantiate this claim using neuronal simulations of directed spatial attention and biased competition. These simulations assume that neuronal activity encodes a probabilistic representation of the world that optimizes free energy in a Bayesian fashion. Because free energy bounds surprise, or the (negative) log-evidence for internal models of the world, this optimization can be regarded as evidence accumulation or (generalized) predictive coding. Crucially, both predictions about the state of the world generating sensory data and the precision of those data have to be optimized. Here, we show that if the precision depends on the states, one can explain many aspects of attention. We illustrate this in the context of the Posner paradigm, using the simulations to generate both psychophysical and electrophysiological responses. These simulated responses are consistent with attentional bias or gating, competition for attentional resources, attentional capture and associated speed-accuracy trade-offs. Furthermore, if we present both attended and non-attended stimuli simultaneously, biased competition for neuronal representation emerges as a principled and straightforward property of Bayes-optimal perception.
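The core mechanism, attention as increased sensory precision, has a one-line Gaussian illustration: the posterior mean is a precision-weighted average of prior and data, so boosting the precision assigned to an input pulls inference toward it. The numbers below are arbitrary assumptions for the demo, not simulation parameters from the paper.

```python
# Gaussian prior x likelihood: the posterior mean is a precision-weighted
# average. Raising sensory precision pi_y (a stand-in for attention)
# shifts the estimate toward the attended observation y.

def posterior_mean(y, prior_mu, pi_y, pi_mu):
    return (pi_y * y + pi_mu * prior_mu) / (pi_y + pi_mu)

unattended = posterior_mean(y=2.0, prior_mu=0.0, pi_y=1.0, pi_mu=1.0)  # 1.0
attended = posterior_mean(y=2.0, prior_mu=0.0, pi_y=9.0, pi_mu=1.0)    # 1.8
print(attended > unattended)
```

In the paper's full scheme the precisions are themselves state-dependent and optimized, which is what yields the attentional dynamics; this sketch only shows the static weighting.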
Forward and backward connections in the brain: A DCM study of functional asymmetries
 NeuroImage 45 (2009) 453–462