Results 11–20 of 109
Optimizing Experimental Design for Comparing Models of Brain Function
, 2011
Abstract

Cited by 7 (3 self)
This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is an experimentally induced change in a connection that is known to be present. Finally, we …
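The idea of scoring candidate designs by their expected model selection accuracy can be illustrated with a deliberately simple, hypothetical sketch (not the paper's Laplace-Chernoff derivation, and nothing here comes from the DCM framework): two nested linear models are simulated under each candidate design, both are fit, and each design is scored by how often BIC recovers the true model. All names and the two toy designs are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def bic(y, X):
    # Bayesian Information Criterion for a Gaussian linear model (lower is better).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + k * np.log(n)

def selection_accuracy(X_full, n_sim=200, noise=1.0):
    # Simulate data under a randomly chosen true model (full vs. reduced),
    # fit both, and count how often BIC picks the right one.
    X_red = X_full[:, :1]                     # reduced model: first regressor only
    correct = 0
    for _ in range(n_sim):
        truth = rng.integers(2)               # 1 = full model generated the data
        X_true = X_full if truth else X_red
        b = rng.normal(size=X_true.shape[1])
        y = X_true @ b + noise * rng.normal(size=X_true.shape[0])
        pick = int(bic(y, X_full) < bic(y, X_red))
        correct += (pick == truth)
    return correct / n_sim

n = 60
t = np.arange(n)
# Design A: nearly collinear regressors; Design B: orthogonal regressors.
design_A = np.column_stack([np.sin(t / 5), np.sin(t / 5 + 0.1)])
design_B = np.column_stack([np.sin(t / 5), np.cos(t / 5)])
acc_A = selection_accuracy(design_A)
acc_B = selection_accuracy(design_B)
print(acc_A, acc_B)   # the orthogonal design separates the models far better
```

The orthogonal design makes the second regressor's contribution identifiable, so model selection succeeds much more often; this is the intuition behind preferring designs that maximize the separability of the candidate models' predictions.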
Concepts of connectivity and human epileptic activity.
, 2011
Abstract

Cited by 5 (1 self)
We are motivated by the following two perceived needs: firstly, to relate the various measures of connectivity found in the field of epilepsy research to the more general language of functional and effective connectivity as used in neuroscience (neuroimaging) and, secondly, to gauge the potential benefits of applying state-of-the-art connectivity methods to answer scientific questions raised within the field of human epilepsy research. We begin by reviewing basic principles of connectivity, followed by a description of connectivity measurement and quantification methodologies. We then review some of the main findings of basic studies of connectivity in epilepsy, focusing on human data but making essential links to animal studies. The last part of this review describes the latest developments in models of coupled (distributed) generators of EEG/MEG and fMRI signals, with the view of scrutinizing their possible role as a bridge between scales of understanding in epilepsy.
Definitions, Principles, and the Characterization of Connectivity in Epilepsy
Because of the history of brain connectivity analysis, the functional/effective dichotomy is a convenient starting point for our discussion. As we shall see, a priori, both forms of connectivity are aimed at identifying the presence and strength of connections between network nodes and, when possible, their directionality. However, a further ambition of effective connectivity is to allow the inference of (biophysical) mechanisms by which causal links are expressed in measured neuroimaging signals. The study of effective connectivity is, therefore, usually a more model-based (or hypothesis-driven) approach than that of functional connectivity. It is worth noting that the term functional connectivity is not commonly encountered in the field of epilepsy, particularly in relation to EEG data, although its use has increased recently in view of growing interest in resting-state fMRI data.
Introduction
The brain is essentially an electrochemical network. Connectivity is at the center of the problem of epilepsy, since its defining element is the occurrence of seizures, which are essentially periods of abnormal interneuronal synchrony. Unanswered questions that are central to an improved understanding of the mechanisms of epilepsy include some which implicate connectivity directly, such as: Why does ictal activity spread? Why do seizures persist in some patients following surgical resection? Why do focal insults often give rise to recurrent seizures, i.e., epilepsy? And some which do so less directly: Why do spike and wave discharges and seizures occur when they do? Why does the spatial relationship between the generators of interictal discharges and seizures vary between patients? Answers to these questions would fundamentally improve our ability to eliminate seizures. The difficulty of pinning down the concept of brain connectivity has already been noted. Here we focus on connectivity of neuronal activity, reflected in electrophysiological (LFP, EEG, MEG) and hemodynamic (functional MRI, fMRI) signals measured in humans and animals, but with reference to structural connectivity when possible. In the following, we will focus on connectivity assessed in relation to events or to (transient) brain states. This review attempts to place the concept of connectivity from increasingly sophisticated neuroimaging data analysis methodologies within the field of epilepsy research. We introduce the more principled connectivity terminology developed recently in neuroimaging and review some of the key concepts related to the characterization of propagation of epileptic activity using what may be called traditional correlation-based studies based on EEG. We then show how essentially similar methodologies, and more recently models addressing causality, have been used to characterize whole-brain and regional networks using functional MRI data.
Following a discussion of our current understanding of the neuronal system aspects of the onset and propagation of epileptic discharges and seizures, we discuss the most advanced and ambitious framework to attempt to fully characterize epileptic networks based on neuroimaging data.
Embodied Inference: or “I think therefore I am, if I am what I think”
Abstract

Cited by 5 (0 self)
This chapter considers situated and embodied cognition in terms of the free-energy principle. The free-energy formulation starts with the premise that biological agents must actively resist a natural tendency to disorder. It appeals to the idea that agents are essentially inference machines that …
Gauss-Markov-Potts Priors for Images in Computer Tomography Resulting to Joint Reconstruction and Segmentation
, 2007
Abstract

Cited by 4 (2 self)
In many applications of Computed Tomography (CT), we may know that the object under test is composed of a finite number of materials, meaning that the images to be reconstructed are composed of a finite number of homogeneous areas. To account for this prior knowledge, we propose a family of Gauss-Markov fields with hidden Potts label fields. Then, using these models in a Bayesian inference framework, we are able to jointly reconstruct the images and segment them in an optimal way. In this paper, we first present these prior models, then propose appropriate MCMC or variational methods to compute the posterior mean estimators. We finally show a few results demonstrating the efficiency of the proposed methods for CT with a limited angle and number of projections. Keywords: Computed Tomography; Gauss-Markov-Potts Priors; Bayesian computation; MCMC; Joint Segmentation and Reconstruction. This discretized presentation of CT makes it possible to analyse the most classical methods of image reconstruction [3, 4]. For example, it is very easy to see that the solution $\hat{f} = H^t g = \sum_l H_l^t g_l$ (5) corresponds to the classical Backprojection (BP); the minimum-norm solution of $Hf = g$, $\hat{f} = H^t (H H^t)^{-1} g = \sum_l H_l^t (H_l H_l^t)^{-1} g_l$ (6), can be identified with the classical Filtered Backprojection (FBP); and the least-squares (LS) solution $\hat{f} = (H^t H)^{-1} H^t g$ (7) can be identified with Backprojection and Filtering (BPF). Also, defining the LS criterion
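Equations (5)-(7) can be checked on a toy discretized system. In this sketch, the projection matrix H is a small random stand-in for a real Radon operator (an assumption made purely for illustration); the least-squares solution (7) applies to an overdetermined system, while the minimum-norm solution (6) applies to an underdetermined one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined toy system (more projections than pixels): the LS
# solution (7) recovers f exactly in the noiseless case.
H = rng.normal(size=(40, 16))
f_true = rng.normal(size=16)
g = H @ f_true
f_bp = H.T @ g                                # (5) plain backprojection
f_ls = np.linalg.solve(H.T @ H, H.T @ g)      # (7) least squares (BPF analogue)

# Underdetermined system (fewer projections than pixels): the
# minimum-norm solution (6) reproduces the projection data exactly.
H2 = rng.normal(size=(10, 16))
g2 = H2 @ f_true
f_mn = H2.T @ np.linalg.solve(H2 @ H2.T, g2)  # (6) FBP analogue

print(np.allclose(f_ls, f_true), np.allclose(H2 @ f_mn, g2))
```

Plain backprojection (5) alone does not invert the system; (6) and (7) are the two pseudo-inverse solutions, each well posed in its own regime.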
Variational Bayes with Gauss-Markov-Potts prior models for joint image restoration and segmentation
in proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP)
, 2008
Abstract

Cited by 3 (3 self)
In this paper, we propose a family of non-homogeneous Gauss-Markov fields with Potts region label models for images, to be used in a Bayesian estimation framework in order to jointly restore and segment images degraded by a known point spread function and additive noise. The joint posterior law of all the unknowns (the unknown image, its segmentation hidden variable and all the hyperparameters) is approximated by separable probability laws via the variational Bayes technique. This approximation makes it possible to obtain a practically implementable joint restoration and segmentation algorithm. We present some preliminary results and a comparison with an MCMC Gibbs-sampling-based algorithm.
VBA: a probabilistic treatment of nonlinear models for neurobiological and behavioural data
 PLoS Comput. Biol
, 2014
Abstract

Cited by 3 (1 self)
This work is in line with an ongoing effort tending toward a computational (quantitative and refutable) understanding of human neurocognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work presents a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization.
Variational Bayes and Mean Field Approximations for Markov field unsupervised estimation
Abstract

Cited by 2 (1 self)
We consider the problem of parameter estimation of Markovian models where the exact computation of the partition function is not possible or computationally too expensive with MCMC methods. The main idea is then to approximate the expression of the likelihood by a simpler one where we can either have an analytical expression or compute it more efficiently. We consider two approaches: Variational Bayes Approximation (VBA) and Mean Field Approximation (MFA) and study the properties of such approximations and their effects on the estimation of the parameters.
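The mean-field idea can be shown in miniature on the simplest Markov field of all, a homogeneous Ising model (a toy of our own choosing, not the paper's estimation problem): each neighbour's spin is replaced by its mean m, turning the intractable partition-function computation into a scalar self-consistency equation.

```python
import numpy as np

def mean_field_magnetization(beta, n_iter=200):
    # Mean-field approximation for a homogeneous 2-D Ising field with
    # coupling beta on a 4-neighbour lattice: each neighbour spin is
    # replaced by its mean m, giving the fixed point m = tanh(4*beta*m).
    m = 0.5                        # positive initial guess
    for _ in range(n_iter):
        m = np.tanh(4.0 * beta * m)
    return m

# Below the mean-field critical coupling (beta_c = 1/4) the only fixed
# point is m = 0; above it a nonzero magnetization appears.
print(round(mean_field_magnetization(0.2), 4))   # -> 0.0
print(mean_field_magnetization(0.5))             # ordered phase, m near 0.96
```

This is the MFA trade-off in one line: the coupled sum over all spin configurations is replaced by a cheap scalar iteration, at the cost of ignoring fluctuations around the mean.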
Reviewed by:
, 2010
Abstract

Cited by 2 (0 self)
We suggested recently that attention can be understood as inferring the level of uncertainty or precision during hierarchical perception. In this paper, we try to substantiate this claim using neuronal simulations of directed spatial attention and biased competition. These simulations assume that neuronal activity encodes a probabilistic representation of the world that optimizes free-energy in a Bayesian fashion. Because free-energy bounds surprise or the (negative) log-evidence for internal models of the world, this optimization can be regarded as evidence accumulation or (generalized) predictive coding. Crucially, both predictions about the state of the world generating sensory data and the precision of those data have to be optimized. Here, we show that if the precision depends on the states, one can explain many aspects of attention. We illustrate this in the context of the Posner paradigm, using the simulations to generate both psychophysical and electrophysiological responses. These simulated responses are consistent with attentional bias or gating, competition for attentional resources, attentional capture and associated speed-accuracy trade-offs. Furthermore, if we present both attended and non-attended stimuli simultaneously, biased competition for neuronal representation emerges as a principled and straightforward property of Bayes-optimal perception.
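The core mechanism, precision acting as a gain on prediction errors, can be caricatured in a few lines (a much-reduced toy of our own, not the paper's generalized predictive-coding simulations): a unit tracks a hidden cause from noisy samples, and its effective learning rate is the relative precision (inverse variance) assigned to the sensory stream, so "attended" high-precision input drives faster belief updates.

```python
import numpy as np

def track(signal, pi_sensory, pi_prior=1.0):
    # Precision-weighted prediction-error updating: the update gain is the
    # sensory precision relative to the total precision.
    mu, trace = 0.0, []
    for s in signal:
        eps = s - mu                                    # sensory prediction error
        mu += (pi_sensory / (pi_sensory + pi_prior)) * eps
        trace.append(mu)
    return np.array(trace)

rng = np.random.default_rng(3)
signal = 1.0 + 0.1 * rng.normal(size=50)                # noisy samples of cause = 1
attended = track(signal, pi_sensory=10.0)               # high precision: fast updates
unattended = track(signal, pi_sensory=0.5)              # low precision: sluggish updates
print(np.mean(np.abs(attended[:10] - 1.0)),
      np.mean(np.abs(unattended[:10] - 1.0)))
```

The high-precision stream locks onto the cause within a couple of samples, while the low-precision stream converges slowly, mirroring the attentional gating described in the abstract.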
Weight Uncertainty in Neural Networks
Abstract

Cited by 1 (0 self)
We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in nonlinear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.
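The reparameterisation trick at the heart of Bayes by Backprop can be sketched for a single weight (a hypothetical scalar example, not the paper's network implementation): keep a Gaussian posterior q(w) = N(mu, sigma^2), sample w = mu + sigma*eps, and descend a Monte Carlo estimate of the variational free energy KL[q(w) || N(0,1)] - E_q[log p(D | w)].

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 3.0 * x + 0.1 * rng.normal(size=200)        # linear data, true weight = 3

mu, rho = 0.0, -3.0                             # sigma = log(1 + e^rho) stays positive
lr = 0.01
for _ in range(2000):
    sigma = np.log1p(np.exp(rho))
    eps = rng.normal()
    w = mu + sigma * eps                        # reparameterised weight sample
    g_w = np.sum((w * x - y) * x) / 0.01        # d(neg log-lik)/dw (noise var 0.01)
    g_kl_mu = mu                                # d KL[q || N(0,1)] / d mu
    g_kl_sigma = sigma - 1.0 / sigma            # d KL[q || N(0,1)] / d sigma
    mu  -= lr * (g_w + g_kl_mu) / len(x)
    # chain rule through sigma(rho): d sigma / d rho = sigmoid(rho)
    rho -= lr * ((g_w * eps + g_kl_sigma) / len(x)) / (1.0 + np.exp(-rho))
```

After training, mu sits close to the true weight while sigma contracts toward the posterior uncertainty; the same two gradients, applied per weight, are what the full backpropagation-compatible algorithm computes.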