Results 1-10 of 19
A fully Bayesian approach to the parcel-based detection-estimation of brain activity in fMRI
, 2008
Signal modeling and classification using a robust latent space model based on t distributions
 IEEE Transactions on Signal Processing
, 2008
Abstract

Cited by 3 (2 self)
Factor analysis is a statistical covariance modeling technique based on the assumption of normally distributed data. A mixture of factor analyzers can hence be viewed as a special case of Gaussian (normal) mixture models, providing a mathematically sound framework for attribute-space dimensionality reduction. A significant shortcoming of mixtures of factor analyzers is the vulnerability of normal distributions to outliers. Recently, the replacement of normal distributions with the heavier-tailed Student's-t distributions has been proposed as a way to mitigate these shortcomings, and the resulting model has been treated within an expectation-maximization (EM) framework. In this paper we develop a Bayesian approach to factor analysis modelling based on Student's-t distributions. We derive a tractable variational inference algorithm for this model by expressing the Student's-t distributed factor analyzers as a marginalization over additional latent variables. Our approach provides an efficient and more robust alternative to EM-based methods, resolving their proneness to singularities and overfitting while allowing for the automatic determination of the optimal model size. We demonstrate the superiority of the proposed model over well-known covariance modeling techniques in a wide range of signal processing applications.
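The marginalization the abstract mentions - expressing a Student's-t variable as a Gaussian whose precision is scaled by a Gamma-distributed latent variable - can be sketched in a few lines (an illustrative NumPy snippet, not code from the paper; the degrees-of-freedom value and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n = 3.0, 200_000  # degrees of freedom and sample count (illustrative)

# Student's-t draws via the Gaussian scale-mixture construction:
# u ~ Gamma(nu/2, rate=nu/2), then x | u ~ N(0, 1/u).
u = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
x = rng.normal(0.0, 1.0 / np.sqrt(u))

g = rng.normal(0.0, 1.0, size=n)  # plain Gaussian for comparison

# The heavier tails show up as a much larger fraction of extreme draws.
print(np.mean(np.abs(x) > 4), np.mean(np.abs(g) > 4))
```

This is exactly why EM or variational treatment is tractable: conditioned on the latent scale u, the model is an ordinary Gaussian factor analyzer.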
Dynamic causal models and physiological inference: a validation study using isoflurane anaesthesia in rodents. preprint
, 2011
Abstract

Cited by 1 (0 self)
Generative models of neuroimaging and electrophysiological data present new opportunities for accessing hidden or latent brain states. Dynamic causal modeling (DCM) uses Bayesian model inversion and selection to infer the synaptic mechanisms underlying empirically observed brain responses. DCM for electrophysiological data, in particular, aims to estimate the relative strength of synaptic transmission at different cell types and via specific neurotransmitters. Here, we report a DCM validation study concerning inference on excitatory and inhibitory synaptic transmission, using different doses of a volatile anaesthetic agent (isoflurane) to parametrically modify excitatory and inhibitory synaptic processing while recording local field potentials (LFPs) from primary auditory cortex (A1) and the posterior auditory field (PAF) in the auditory belt region in rodents. We test whether DCM can infer, from the LFP measurements, the expected drug-induced changes in synaptic transmission mediated via fast ionotropic receptors, i.e., excitatory (glutamatergic) AMPA and inhibitory GABA-A receptors. Cross- and auto-spectra from the two regions were used to optimise three DCMs based on biologically plausible neural mass models and specific network architectures. Consistent with known extrinsic connectivity patterns in sensory hierarchies, we found that a model comprising forward connections from A1 to PAF and backward connections from PAF to A1 outperformed a model with forward connections from PAF to A1 and backward connections from A1 to PAF, as well as a model with reciprocal lateral connections. The parameter estimates from the most plausible model indicated that the
Inequality in Life Spans and Mortality Convergence Across Industrialized Countries
, 2005
Abstract
The second half of the twentieth century witnessed much convergence in life expectancy around the world. Closer inspection of mortality trends in advanced countries reveals that inequality in adult life spans, which we measure with the standard deviation of ages at death above age 10, S10, is increasingly responsible for the remaining divergence in mortality. We report striking differences in the level and trend of S10 across industrialized countries since 1960, which cannot be explained by aggregate socioeconomic inequality or differential external-cause mortality. Rather, S10 reflects both within- and between-group inequalities in life spans and conveys new information about their combined magnitudes and trends. These findings suggest that the challenge for health policies in this century is to reduce inequality, not just lengthen life. The human condition has improved tremendously during the course of modern development. At the beginning of the nineteenth century, life expectancy at birth, e0, hovered between 25 and 40 years (Maddison, 2001). Industrialization and unprecedented growth in per-capita incomes coincided with significant gains in e0, which by 1960 reached roughly 70 years among
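The S10 measure defined above is simply a standard deviation restricted to deaths after age 10; a minimal sketch with toy numbers (not data from the paper):

```python
import numpy as np

def s10(ages_at_death):
    """Standard deviation of ages at death above age 10 (the S10 measure)."""
    a = np.asarray(ages_at_death, dtype=float)
    a = a[a > 10]          # restrict to adult deaths
    return a.std(ddof=0)   # population standard deviation

# Hypothetical toy cohorts: similar mean adult life span, different spread.
low_inequality = [68, 70, 72, 74, 76]
high_inequality = [40, 55, 72, 85, 98]
print(s10(low_inequality), s10(high_inequality))
```

Two populations can share the same life expectancy while differing sharply in S10, which is why the measure conveys information that e0 alone does not.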
LOWER AND UPPER BOUNDS FOR APPROXIMATION OF THE KULLBACK-LEIBLER DIVERGENCE BETWEEN GAUSSIAN MIXTURE MODELS
Abstract
Many speech technology systems rely on Gaussian Mixture Models (GMMs). The need for a comparison between two GMMs arises in applications such as speaker verification, model selection or parameter estimation. For this purpose, the Kullback-Leibler (KL) divergence is often used. However, since there is no closed-form expression to compute it, it can only be approximated. We propose lower and upper bounds for the KL divergence, which lead to a new approximation and interesting insights into previously proposed approximations. An application to the comparison of speaker models also shows how such approximations can be used to validate assumptions on the models. Index Terms — Gaussian Mixture Model (GMM), Kullback-Leibler divergence, speaker comparison, speech processing.
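Because KL(f||g) between two GMMs has no closed form, bounds and approximations like those in the abstract are usually judged against plain Monte Carlo sampling from f. A minimal 1-D sketch (illustrative mixture parameters of our own choosing, not from the paper):

```python
import numpy as np

def gmm_sample(rng, w, mu, sigma, n):
    """Draw n samples from a 1-D Gaussian mixture."""
    k = rng.choice(len(w), size=n, p=w)
    return rng.normal(mu[k], sigma[k])

def gmm_logpdf(x, w, mu, sigma):
    """Log density of a 1-D Gaussian mixture, via log-sum-exp for stability."""
    x = np.asarray(x)[:, None]
    logc = (np.log(w) - 0.5 * np.log(2 * np.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))
    m = logc.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logc - m).sum(axis=1, keepdims=True))).ravel()

rng = np.random.default_rng(0)
# f: bimodal mixture; g: single broad Gaussian (toy parameters)
wf, muf, sf = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([1.0, 1.0])
wg, mug, sg = np.array([1.0]), np.array([0.0]), np.array([3.0])

x = gmm_sample(rng, wf, muf, sf, 100_000)
kl_mc = np.mean(gmm_logpdf(x, wf, muf, sf) - gmm_logpdf(x, wg, mug, sg))
print(kl_mc)  # Monte Carlo estimate of KL(f || g)
```

The Monte Carlo estimate is unbiased but expensive and noisy, which is precisely the motivation for cheap deterministic bounds.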
Research Article Iterative Estimation Algorithms Using Conjugate Function Lower Bound and Minorization-Maximization with Applications in Image Denoising
Abstract
A fundamental problem in signal processing is to estimate a signal from noisy observations. This is usually formulated as an optimization problem. Optimizations based on variational lower bounds and minorization-maximization have been widely used in machine learning research, signal processing, and statistics. In this paper, we study iterative algorithms based on the conjugate function lower bound (CFLB) and minorization-maximization (MM) for a class of objective functions. We propose a generalized version of these two algorithms and show that they are equivalent when the objective function is convex and differentiable. We then develop a CFLB/MM algorithm for solving MAP estimation problems under a linear Gaussian observation model. We modify this algorithm for wavelet-domain image denoising. Experimental results show that, using a single wavelet representation, the proposed algorithms perform better than the bishrinkage algorithm, which is arguably one of the best in recent publications. Using complex wavelet representations, the performance of the proposed algorithm is very competitive with that of state-of-the-art algorithms. Copyright © 2008 G. Deng and W.Y. Ng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
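As a toy instance of the MM idea, the non-smooth term in a Laplacian-prior MAP objective lam*|x| + 0.5*(x - y)^2 can be majorized by a quadratic touching it at the current iterate, giving a closed-form update whose fixed point is the soft-threshold solution (an illustrative sketch under these assumptions, not the paper's CFLB/MM denoiser):

```python
import numpy as np

def mm_soft(y, lam, iters=100, eps=1e-12):
    """MM minimization of lam*|x| + 0.5*(x - y)^2.

    |x| is majorized at x_t by x^2/(2|x_t|) + |x_t|/2, so each M-step
    minimizes a quadratic, yielding x_{t+1} = y / (1 + lam/|x_t|)."""
    x = float(y) if y != 0 else eps
    for _ in range(iters):
        x = y / (1.0 + lam / (abs(x) + eps))
    return x

# The MM fixed point should match the analytic soft-threshold solution.
y, lam = 3.0, 1.0
print(mm_soft(y, lam), np.sign(y) * max(abs(y) - lam, 0.0))
```

Each update decreases the majorizer and hence the objective, which is the monotone-descent property MM schemes share with EM.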
Variational Learning for Gaussian Mixture Models
Abstract
Abstract—This paper proposes a joint maximum likelihood and Bayesian methodology for estimating Gaussian mixture models. In Bayesian inference, the distributions of parameters are modeled, characterized by hyperparameters. In the case of Gaussian mixtures, the distributions of parameters are considered as Gaussian for the mean, Wishart for the covariance, and Dirichlet for the mixing probability. The learning task consists of estimating the hyperparameters characterizing these distributions. The integration in the parameter space is decoupled using an unsupervised variational methodology entitled variational expectation–maximization (VEM). This paper introduces a hyperparameter initialization procedure for the training algorithm. In the first stage, distributions of parameters resulting from successive runs of the expectation–maximization algorithm are formed. Afterward, maximum-likelihood estimators are applied to find appropriate initial values for the hyperparameters. The proposed initialization provides faster convergence, more accurate hyperparameter estimates, and better generalization for the VEM training algorithm. The proposed methodology is applied in blind signal detection and in color image segmentation. Index Terms—Bayesian inference, expectation–maximization algorithm, Gaussian mixtures, maximum log-likelihood estimation,
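The "successive runs of the expectation–maximization algorithm" that the initialization builds on amount to repeating plain EM fits like the one below (a self-contained NumPy sketch for a 1-D mixture, not the authors' VEM code; the quantile-based initialization and toy data are our own choices):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200):
    """Plain EM for a 1-D Gaussian mixture; one of the runs whose
    parameter estimates the paper pools to initialize hyperparameters."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: responsibilities via stabilized log densities
        logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
                - (x[:, None] - mu) ** 2 / (2 * var))
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, mu, var = em_gmm_1d(x)
print(np.sort(mu))  # component means near -3 and 3
```

Pooling (w, mu, var) across many such runs yields empirical parameter distributions from which maximum-likelihood hyperparameter estimates can be read off.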
Mixture-Based Extension of the AR Model and its Recursive Bayesian Identification
Abstract
is studied, which allows transformations and distortions on the regressor to be handled. Many important signal processing problems are amenable to this Extended AR (i.e. EAR) model. It is shown that Bayesian identification and prediction of the EAR model can be performed recursively, in common with the AR model itself. The EAR model does, however, require that the transformation be known. When it is unknown, the associated transformation space is represented by a finite set of candidates. What follows is a Mixture-based EAR model, i.e. the MEAR model. An approximate identification algorithm for MEAR is developed, using a restricted Variational Bayes (VB) procedure. It preserves the efficient recursive update of sufficient statistics. The MEAR model is applied to the robust identification of AR processes corrupted by outliers and burst noise respectively, and to click removal for speech. Index Terms — Bayesian identification, probabilistic mixtures, sufficient statistics, recursive identification, Variational Bayes,
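The recursive sufficient-statistics update that the abstract emphasizes can be illustrated for the plain AR case (a hypothetical NumPy sketch; the MEAR mixture machinery and VB restriction are not reproduced here):

```python
import numpy as np

def recursive_ar_fit(y, p=2):
    """Recursive least-squares identification of an AR(p) model.

    The sufficient statistics (V = sum phi phi^T, b = sum phi*y) are
    updated one sample at a time; with a conjugate prior, Bayesian
    identification only changes the initial values of V and b."""
    V = 1e-6 * np.eye(p)   # tiny regularizer standing in for a vague prior
    b = np.zeros(p)
    for t in range(p, len(y)):
        phi = y[t - p:t][::-1]        # regressor: the p previous samples
        V += np.outer(phi, phi)
        b += phi * y[t]
    return np.linalg.solve(V, b)      # posterior-mean / LS coefficients

# Simulate a stable AR(2) process with toy coefficients and recover them.
rng = np.random.default_rng(0)
a = np.array([1.5, -0.7])
y = np.zeros(2000)
for t in range(2, len(y)):
    y[t] = a @ y[t - 2:t][::-1] + 0.1 * rng.normal()
print(recursive_ar_fit(y))
```

Because the statistics are additive per sample, no data need be stored; this is the property the MEAR algorithm is designed to preserve under the mixture extension.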
VARIATIONAL SEGMENTATION OF COLOR IMAGES
Abstract
A variational Bayesian framework is employed in this paper for image segmentation using color clustering. A Gaussian mixture model is used to represent color distributions. The variational expectation-maximization (VEM) algorithm takes into account the uncertainty in the parameter estimation, ensuring a lower bound on the approximation error. In the variational Bayesian approach we integrate over distributions of parameters. The processing task in this case consists of estimating the hyperparameters of these distributions. We propose a maximum log-likelihood initialization approach for the VEM algorithm. The proposed algorithm is applied to image segmentation using color clustering when representing the images in the L*u*v color coordinate system.