Results 1–10 of 73
A unified Bayesian framework for MEG/EEG source imaging
NeuroImage, 2009
Cited by 27 (2 self)
Abstract
The ill-posed nature of the MEG (or related EEG) source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian approaches are useful in this capacity because they allow these assumptions to be explicitly quantified using postulated prior distributions. However, the means by which these priors are chosen, as well as the estimation and inference procedures that are subsequently adopted to affect localization, have led to a daunting array of algorithms with seemingly very different properties and assumptions. From the vantage point of a simple Gaussian scale mixture model with flexible covariance components, this paper analyzes and extends several broad categories of Bayesian inference directly applicable to source localization, including empirical Bayesian approaches, standard MAP estimation, and multiple variational Bayesian (VB) approximations. Theoretical properties related to convergence, global and local minima, and localization bias are analyzed, and fast algorithms are derived that improve upon existing methods. This perspective leads to explicit connections between many established algorithms and suggests natural extensions for handling unknown dipole orientations, extended source configurations, correlated sources, temporal smoothness, and computational expediency. Specific imaging methods elucidated under this paradigm include weighted minimum ℓ2-norm, FOCUSS, MCE, VESTAL, sLORETA, ReML and covariance component estimation, beamforming, variational Bayes, the Laplace approximation, and automatic relevance determination (ARD). Perhaps surprisingly, all of these methods can be formulated as particular cases of covariance component estimation using different concave regularization terms and optimization rules, making general theoretical analyses and algorithmic extensions/improvements particularly relevant.
Comparing hemodynamic models with DCM
NeuroImage, 2007
Cited by 22 (1 self)
Abstract
The classical model of blood oxygen level-dependent (BOLD) responses by Buxton et al. [Buxton, R.B., Wong, E.C., Frank, L.R., 1998. Dynamics of blood flow and oxygenation changes during brain activation: the Balloon model. Magn. Reson. Med. 39, 855–864] has been very important in providing a biophysically plausible framework for explaining different aspects of hemodynamic responses. It also plays an important role in the hemodynamic forward model for dynamic causal modeling (DCM) of fMRI data. A recent study by Obata et al. [Obata, T., Liu, T.T., Miller, K.L., Luh, W.M., Wong, E.C., Frank, L.R., Buxton, R.B., 2004. Discrepancies between BOLD and flow dynamics in primary and supplementary motor areas: application of the Balloon model to the interpretation of BOLD transients. NeuroImage 21, 144–153] linearized the BOLD signal equation and suggested a revised form for the model coefficients. In this paper, we show that the classical and revised models are special …
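A minimal sketch of the BOLD signal equation under the classical (Buxton et al.) coefficients and an Obata et al.-style revised coefficient set. The parameter values (V0, E0, TE, nu0, r0, eps) are typical illustrative assumptions, not taken from the paper:

```python
V0 = 0.04   # resting venous blood volume fraction (illustrative)
E0 = 0.4    # resting oxygen extraction fraction (illustrative)

def bold_classical(v, q):
    """BOLD signal change as a function of normalised venous volume v and
    deoxyhemoglobin content q, with classical Buxton et al. (1998) coefficients."""
    k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2
    return V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))

def bold_revised(v, q, TE=0.04, nu0=40.3, r0=25.0, eps=1.0):
    """Same signal equation with revised, Obata et al. (2004)-style coefficients;
    TE, nu0, r0 and eps values here are illustrative assumptions."""
    k1 = 4.3 * nu0 * E0 * TE
    k2 = eps * r0 * E0 * TE
    k3 = 1.0 - eps
    return V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
```

Both variants predict zero signal change at rest (v = q = 1); they differ in how changes in deoxyhemoglobin and blood volume are weighted.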
Variational filtering
2008
Cited by 8 (4 self)
Abstract
This note presents a simple Bayesian filtering scheme, using variational calculus, for inference on the hidden states of dynamic systems. Variational filtering is a stochastic scheme that propagates particles over a changing variational energy landscape, such that their sample density approximates the conditional density of hidden states and inputs. The key innovation, on which variational filtering rests, is a formulation in generalised coordinates of motion. This renders the scheme much simpler and more versatile than existing approaches, such as those based on particle filtering. We demonstrate variational filtering using simulated and real data from hemodynamic systems studied in neuroimaging and provide comparative evaluations using particle filtering and the fixed-form homologue of variational filtering, namely dynamic expectation maximisation.
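The idea of propagating particles over a changing energy landscape can be caricatured with a one-dimensional linear-Gaussian system: at each time step an ensemble of particles takes a few Langevin (gradient-plus-noise) steps on the instantaneous negative log-posterior. This toy omits generalised coordinates entirely and is only meant to convey the flavour of the scheme; all names and constants are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy linear-Gaussian system: x_t = a x_{t-1} + w_t,  y_t = x_t + v_t
a, qvar, rvar, T = 0.9, 0.1, 0.2, 50
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t-1] + np.sqrt(qvar) * rng.standard_normal()
    y[t] = x[t] + np.sqrt(rvar) * rng.standard_normal()

# particles diffuse on the time-varying energy
# E_t(p) = (p - y_t)^2 / (2 rvar) + (p - a * prev_mean)^2 / (2 qvar)
n_p, step = 200, 0.05
particles = rng.standard_normal(n_p)
means = []
for t in range(T):
    pred = a * particles.mean()          # prediction from the previous ensemble
    for _ in range(20):                  # a few Langevin steps per observation
        grad = (particles - y[t]) / rvar + (particles - pred) / qvar
        particles += -step * grad + np.sqrt(2 * step) * rng.standard_normal(n_p)
    means.append(particles.mean())
means = np.array(means)
```

The ensemble mean at each time then tracks the (here Gaussian) conditional density of the hidden state.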
Bayesian decoding of brain images
2008
Cited by 7 (0 self)
Abstract
This paper introduces a multivariate Bayesian (MVB) scheme to decode or recognise brain states from neuroimages. It resolves the ill-posed many-to-one mapping, from voxel values or data features to a target variable, using a parametric empirical or hierarchical Bayesian model. This model is inverted using standard variational techniques, in this case expectation maximisation, to furnish the model evidence and the conditional density of the model’s parameters. This allows one to compare different models or hypotheses about the mapping from functional or structural anatomy to perceptual and behavioural consequences (or their deficits). We frame this approach in terms of decoding measured brain states to predict or classify outcomes, using the rhetoric established in pattern classification of neuroimaging data. However, the aim of MVB is not to predict (because the outcomes are known) but to enable inference on different models of structure–function mappings, such as distributed and sparse representations. This allows …
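Because MVB compares models of the voxel-to-target mapping by their evidence rather than by prediction accuracy, a stripped-down analogue is Bayesian model comparison between two Gaussian priors over regression weights, one "distributed" and one "sparse". Everything below (data, dimensions, prior variances) is a hypothetical illustration, not the paper's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_voxels = 40, 10
X = rng.standard_normal((n_scans, n_voxels))     # voxel data features
sigma2 = 0.5                                     # observation noise variance

# target generated from a sparse pattern: only voxels 1 and 7 matter
w_true = np.zeros(n_voxels)
w_true[[1, 7]] = 2.0
y = X @ w_true + np.sqrt(sigma2) * rng.standard_normal(n_scans)

def log_evidence(prior_var):
    """log p(y | model) for a zero-mean Gaussian prior over the weights."""
    C = X @ np.diag(prior_var) @ X.T + sigma2 * np.eye(n_scans)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y) + n_scans * np.log(2 * np.pi))

# "distributed": equal prior variance everywhere; "sparse": mass on the true support
le_distributed = log_evidence(np.full(n_voxels, 0.8))
sparse_pv = np.full(n_voxels, 1e-6)
sparse_pv[[1, 7]] = 4.0
le_sparse = log_evidence(sparse_pv)
```

With the target generated from a sparse pattern, the sparse prior typically attains the higher log evidence, which is the sense in which evidence adjudicates between structure–function mappings.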
Post hoc Bayesian model selection
Cited by 6 (3 self)
Canonical Source Reconstruction for MEG
2007
Cited by 5 (3 self)
Abstract
We describe a simple and efficient solution to the problem of reconstructing electromagnetic sources into a canonical or standard anatomical space. Its simplicity rests upon incorporating subject-specific anatomy into the forward model in a way that eschews the need for cortical surface extraction. The forward model starts with a canonical cortical mesh, defined in a standard stereotactic space. The mesh is warped, in a nonlinear fashion, to match the subject’s anatomy. This warping is the inverse of the transformation derived from spatial normalization of the subject’s structural MRI, using fully automated procedures that have been established for other imaging modalities. Electromagnetic lead fields are computed using the warped mesh, in conjunction with a spherical head model (which does not rely on individual anatomy). The ensuing forward model is inverted using an empirical Bayesian scheme that we have described previously in several publications. Critically, because anatomical information enters the forward model, there is no need to spatially normalize the reconstructed source activity. In other words, each source, comprising the mesh, has a predetermined and unique anatomical attribution within standard stereotactic space. This enables the pooling of data from multiple subjects and the reporting of results in stereotactic coordinates. Furthermore, it allows the graceful fusion of fMRI and MEG data within the same anatomical framework.
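The key geometric step, warping the canonical mesh into subject space with the inverse of the spatial-normalisation transform, can be sketched with a toy affine normalisation (real normalisation is nonlinear; the matrix and vertices below are invented):

```python
import numpy as np

# canonical cortical mesh vertices in a standard stereotactic space (toy values)
canonical = np.array([[0.0, 0.0, 0.0],
                      [10.0, 0.0, 0.0],
                      [0.0, 20.0, 0.0],
                      [0.0, 0.0, 30.0]])

# spatial normalisation maps subject space -> standard space; here a toy affine A
A = np.array([[0.9, 0.0, 0.0,  2.0],
              [0.0, 1.1, 0.0, -3.0],
              [0.0, 0.0, 1.0,  1.0],
              [0.0, 0.0, 0.0,  1.0]])

# warping the canonical mesh into the subject's anatomy uses the inverse transform
A_inv = np.linalg.inv(A)
homog = np.c_[canonical, np.ones(len(canonical))]
subject_mesh = (homog @ A_inv.T)[:, :3]

# round trip: normalising the subject mesh recovers the canonical vertices,
# which is why each vertex keeps a fixed anatomical attribution in standard space
back = (np.c_[subject_mesh, np.ones(len(subject_mesh))] @ A.T)[:, :3]
```

Lead fields would then be computed on `subject_mesh`, while results are reported at the corresponding `canonical` coordinates.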
Joint NDT image restoration and segmentation using Gauss–Markov–Potts prior models and variational Bayesian computation
IEEE Transactions on Image Processing, 2010
Cited by 4 (1 self)
Abstract
In this paper, we propose a method to simultaneously restore and segment piecewise homogeneous images degraded by a known point spread function (PSF) and additive noise. For this purpose, we propose a family of non-homogeneous Gauss–Markov fields with Potts region labels as prior models for the images, to be used in a Bayesian estimation framework. The joint posterior law of all the unknowns (the unknown image, its segmentation (hidden variable) and all the hyperparameters) is approximated by a separable probability law via the variational Bayes technique. This approximation makes it possible to obtain a practically implementable joint restoration and segmentation algorithm. We present some preliminary results and a comparison with an MCMC Gibbs-sampling-based algorithm. We note that the prior models proposed in this work are particularly appropriate for images of scenes or objects that are composed of a finite set of homogeneous materials. This is the case for many images obtained in non-destructive testing (NDT) applications.
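A heavily simplified, mean-field-flavoured analogue of the joint estimation: alternately updating separable soft labels and class means for a 1-D two-class signal. Unlike the paper's model, this toy has no Potts spatial coupling, no PSF, and no hyperparameter posteriors; all values below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# noisy piecewise-constant 1-D "image" with two homogeneous classes
labels_true = np.repeat([0, 1, 0], 30)
means_true = np.array([0.0, 3.0])
y = means_true[labels_true] + 0.5 * rng.standard_normal(labels_true.size)

# mean-field style alternation: q(labels) and the class means kept separable
mu = np.array([-1.0, 1.0])          # initial class means
noise_var = 0.25
for _ in range(20):
    # soft label update (no spatial Potts coupling in this toy)
    logp = -0.5 * (y[:, None] - mu[None, :]) ** 2 / noise_var
    q = np.exp(logp - logp.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)
    # class-mean update from the responsibilities
    mu = (q * y[:, None]).sum(axis=0) / q.sum(axis=0)

seg = q.argmax(axis=1)              # hard segmentation from the soft labels
```

The full model would add a Potts prior on `labels_true`'s analogue and a convolution in the likelihood, but the alternating separable updates have the same shape.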
Gauss–Markov–Potts Priors for Images in Computer Tomography Resulting to Joint Reconstruction and Segmentation
2007
Cited by 4 (2 self)
Abstract
In many applications of Computed Tomography (CT), we may know that the object under test is composed of a finite number of materials, meaning that the images to be reconstructed are composed of a finite number of homogeneous areas. To account for this prior knowledge, we propose a family of Gauss–Markov fields with hidden Potts label fields. Then, using these models in a Bayesian inference framework, we are able to jointly reconstruct the images and segment them in an optimal way. In this paper, we first present these prior models, then propose appropriate MCMC or variational methods to compute the posterior mean estimators. We finally show a few results demonstrating the efficiency of the proposed methods for CT with a limited angle and number of projections.

Keywords: Computed Tomography; Gauss–Markov–Potts priors; Bayesian computation; MCMC; joint segmentation and reconstruction

This discretized presentation of CT makes it possible to analyse the most classical methods of image reconstruction [3, 4]. For example, it is easy to see that the solution

    f̂ = H^T g = Σ_l H_l^T g_l        (5)

corresponds to the classical Backprojection (BP); the minimum-norm solution of H f = g,

    f̂ = H^T (H H^T)^{-1} g = Σ_l H_l^T (H_l H_l^T)^{-1} g_l        (6)

can be identified with the classical Filtered Backprojection (FBP); and the least-squares (LS) solution

    f̂ = (H^T H)^{-1} H^T g        (7)

can be identified with Backprojection and Filtering (BPF). Also, defining the LS criterion …
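The three classical estimators recalled in the excerpt (backprojection, the minimum-norm solution, and least squares) are easy to verify numerically on a tiny random system; the matrices, dimensions, and data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# underdetermined system (fewer measurements than pixels): g = H f
H = rng.standard_normal((4, 8))
f_true = rng.standard_normal(8)
g = H @ f_true

f_bp = H.T @ g                               # backprojection (BP): f = H^T g
f_mn = H.T @ np.linalg.solve(H @ H.T, g)     # minimum-norm: f = H^T (H H^T)^-1 g

# overdetermined system: least squares, f = (H^T H)^-1 H^T g
H2 = rng.standard_normal((8, 4))
g2 = H2 @ rng.standard_normal(4)
f_ls = np.linalg.solve(H2.T @ H2, H2.T @ g2)
```

The minimum-norm solution reproduces the data exactly (H f = g), whereas plain backprojection does not; for overdetermined systems the LS formula coincides with `np.linalg.lstsq`.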
Comparing dynamic causal models using AIC, BIC and free energy
NeuroImage, 2012
Variational Bayes with Gauss–Markov–Potts prior models for joint image restoration and segmentation
In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), 2008
Cited by 3 (3 self)
Abstract
In this paper, we propose a family of non-homogeneous Gauss–Markov fields with Potts region labels as prior models for images, to be used in a Bayesian estimation framework, in order to jointly restore and segment images degraded by a known point spread function and additive noise. The joint posterior law of all the unknowns (the unknown image, its segmentation hidden variable and all the hyperparameters) is approximated by separable probability laws via the variational Bayes technique. This approximation makes it possible to obtain a practically implementable joint restoration and segmentation algorithm. We present some preliminary results and a comparison with an MCMC Gibbs-sampling-based algorithm.