Results 1–10 of 10
A fully Bayesian approach to the parcel-based detection-estimation of brain activity in fMRI
, 2008
"... ..."
Signal modeling and classification using a robust latent space model based on t distributions
 IEEE Transactions on Signal Processing
, 2008
"... Factor analysis is a statistical covariance modeling technique based on the assumption of normally distributed data. A mixture of factor analyzers can be hence viewed as a special case of Gaussian (normal) mixture models providing a mathematically sound framework for attribute space dimensionality r ..."
Abstract

Cited by 2 (1 self)
Factor analysis is a statistical covariance modeling technique based on the assumption of normally distributed data. A mixture of factor analyzers can hence be viewed as a special case of Gaussian (normal) mixture models, providing a mathematically sound framework for attribute-space dimensionality reduction. A significant shortcoming of mixtures of factor analyzers is the vulnerability of normal distributions to outliers. Recently, the replacement of normal distributions with the heavier-tailed Student's-t distributions has been proposed as a way to mitigate these shortcomings, and the resulting model has been treated within an expectation-maximization (EM) framework. In this paper we develop a Bayesian approach to factor analysis modeling based on Student's-t distributions. We derive a tractable variational inference algorithm for this model by expressing the Student's-t distributed factor analyzers as a marginalization over additional latent variables. Our approach provides an efficient and more robust alternative to EM-based methods, resolving their proneness to singularities and overfitting while allowing for the automatic determination of the optimal model size. We demonstrate the superiority of the proposed model over well-known covariance modeling techniques in a wide range of signal processing applications.
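The marginalization mentioned in this abstract, writing a Student's-t variable as a Gaussian whose precision is a gamma-distributed latent variable, can be checked numerically. The sketch below is illustrative only (not the paper's code; function names are invented here): it verifies that integrating the conditional Gaussian over the gamma mixing density reproduces the Student's-t density.

```python
import numpy as np
from scipy import stats, integrate

def t_pdf_via_mixture(x, nu):
    # Student's-t as a Gaussian scale mixture: x | u ~ N(0, 1/u)
    # with u ~ Gamma(shape=nu/2, rate=nu/2), i.e. scale = 2/nu.
    integrand = lambda u: stats.norm.pdf(x, scale=1.0 / np.sqrt(u)) \
        * stats.gamma.pdf(u, a=nu / 2, scale=2.0 / nu)
    val, _ = integrate.quad(integrand, 0, np.inf)
    return val

nu = 4.0
for x in (0.0, 1.5, -3.0):
    # Marginalizing out the latent precision recovers the t density.
    assert abs(t_pdf_via_mixture(x, nu) - stats.t.pdf(x, df=nu)) < 1e-8
```

This latent-variable form is what makes the conditional distributions tractable for EM and for the variational updates the abstract describes.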
Variational Expectation-Maximization Training for Gaussian Networks
 Proc. IEEE Workshop on Neural Networks for Signal Processing
, 2003
"... This paper introduces variational expectationmaximization (VEM) algorithm for training Gaussian networks. Hyperparameters model distributions of parameters characterizing Gaussian mixture densities. The proposed algorithm employs a hierarchical learning strategy for estimating a set of hyperparamet ..."
Abstract
This paper introduces a variational expectation-maximization (VEM) algorithm for training Gaussian networks. Hyperparameters model the distributions of the parameters characterizing Gaussian mixture densities. The proposed algorithm employs a hierarchical learning strategy for estimating a set of hyperparameters and the number of Gaussian mixture components. A dual EM algorithm serves as the initialization stage of the VEM-based learning: in the first stage, EM is applied to the given data set, while in the second stage, EM is applied to the distributions of parameters resulting from several runs of the first-stage EM. Appropriate maximum log-likelihood estimators are considered for all the parameter distributions involved.
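The first-stage EM on the data set that this abstract describes is the standard EM algorithm for a Gaussian mixture. A minimal one-dimensional sketch (illustrative only, not the authors' VEM implementation), assuming two well-separated components:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data from two well-separated Gaussians.
x = np.concatenate([rng.normal(-4, 1, 500), rng.normal(4, 1, 500)])

# EM for a two-component Gaussian mixture (weights, means, variances).
mu, var, w = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities r[n, k] proportional to w_k N(x_n | mu_k, var_k)
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted maximum-likelihood updates
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.sort(mu))  # means recovered close to -4 and 4
```

The VEM scheme in the paper then places distributions over such parameters; the sketch above only shows the point-estimate EM used for initialization.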
Inequality in Life Spans and Mortality Convergence Across Industrialized Countries
, 2005
"... The second half of the twentieth century witnessed much convergence in life expectancy around the world. Closer inspection of mortality trends in advanced countries reveals that inequality in adult life spans, which we measure with the standard deviation of ages at death above age 10, S10, is increa ..."
Abstract
The second half of the twentieth century witnessed much convergence in life expectancy around the world. Closer inspection of mortality trends in advanced countries reveals that inequality in adult life spans, which we measure with the standard deviation of ages at death above age 10, S10, is increasingly responsible for the remaining divergence in mortality. We report striking differences in the level and trend of S10 across industrialized countries since 1960, which cannot be explained by aggregate socioeconomic inequality or differential external-cause mortality. Rather, S10 reflects both within- and between-group inequalities in life spans and conveys new information about their combined magnitudes and trends. These findings suggest that the challenge for health policies in this century is to reduce inequality, not just lengthen life. The human condition has improved tremendously during the course of modern development. At the beginning of the nineteenth century, life expectancy at birth, e0, hovered between 25 and 40 years (Maddison, 2001). Industrialization and unprecedented growth in per-capita incomes coincided with significant gains in e0, which by 1960 reached roughly 70 years among …
LOWER AND UPPER BOUNDS FOR APPROXIMATION OF THE KULLBACK-LEIBLER DIVERGENCE BETWEEN GAUSSIAN MIXTURE MODELS
"... Many speech technology systems rely on Gaussian Mixture Models (GMMs). The need for a comparison between two GMMs arises in applications such as speaker verification, model selection or parameter estimation. For this purpose, the KullbackLeibler (KL) divergence is often used. However, since there i ..."
Abstract
Many speech technology systems rely on Gaussian Mixture Models (GMMs). The need to compare two GMMs arises in applications such as speaker verification, model selection, and parameter estimation. For this purpose, the Kullback-Leibler (KL) divergence is often used. However, since there is no closed-form expression to compute it, it can only be approximated. We propose lower and upper bounds for the KL divergence, which lead to a new approximation and to interesting insights into previously proposed approximations. An application to the comparison of speaker models also shows how such approximations can be used to validate assumptions on the models. Index Terms — Gaussian Mixture Model (GMM), Kullback-Leibler divergence, speaker comparison, speech processing.
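In the absence of a closed form, the simplest baseline is Monte Carlo estimation of KL(p||q) = E_p[log p(x) - log q(x)] by sampling from p. A sketch for one-dimensional GMMs (an illustrative baseline, not the bounds proposed in the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def gmm_logpdf(x, weights, means, sds):
    # Log-density of a 1-D Gaussian mixture via log-sum-exp over components.
    comp = np.log(weights) + stats.norm.logpdf(x[:, None], means, sds)
    return np.logaddexp.reduce(comp, axis=1)

def kl_mc(p, q, n=200_000):
    # Draw from mixture p, then average log p(x) - log q(x).
    w, m, s = p
    k = rng.choice(len(w), size=n, p=w)
    x = rng.normal(np.array(m)[k], np.array(s)[k])
    return np.mean(gmm_logpdf(x, *p) - gmm_logpdf(x, *q))

p = ([0.5, 0.5], [-2.0, 2.0], [1.0, 1.0])
q = ([0.3, 0.7], [-1.0, 2.5], [1.5, 1.0])
assert abs(kl_mc(p, p)) < 1e-3   # KL(p || p) = 0
assert kl_mc(p, q) > 0           # KL is non-negative
```

Monte Carlo is unbiased but noisy and expensive, which is precisely why deterministic bounds and approximations of the kind the paper proposes are useful in practice.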
NeuroImage 59 (2012) 1261–1274
Iterative Estimation Algorithms Using Conjugate Function Lower Bound and Minorization-Maximization with Applications in Image Denoising
"... A fundamental problem in signal processing is to estimate signal from noisy observations. This is usually formulated as an optimization problem. Optimizations based on variational lower bound and minorizationmaximization have been widely used in machine learning research, signal processing, and sta ..."
Abstract
A fundamental problem in signal processing is to estimate a signal from noisy observations. This is usually formulated as an optimization problem. Optimizations based on variational lower bounds and minorization-maximization have been widely used in machine learning research, signal processing, and statistics. In this paper, we study iterative algorithms based on the conjugate function lower bound (CFLB) and minorization-maximization (MM) for a class of objective functions. We propose a generalized version of these two algorithms and show that they are equivalent when the objective function is convex and differentiable. We then develop a CFLB/MM algorithm for solving MAP estimation problems under a linear Gaussian observation model, and modify this algorithm for wavelet-domain image denoising. Experimental results show that, using a single wavelet representation, the proposed algorithms outperform the bi-shrinkage algorithm, which is arguably one of the best in recent publications. Using complex wavelet representations, the performance of the proposed algorithm is very competitive with that of state-of-the-art algorithms. Copyright © 2008 G. Deng and W.Y. Ng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
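The MM idea, replacing the objective at each step with a majorizing surrogate that is tight at the current iterate and easy to minimize, can be illustrated on a scalar MAP problem with an l1 penalty. This is a generic MM example, not the paper's CFLB/MM wavelet algorithm:

```python
# MM for minimizing f(x) = 0.5*(x - y)**2 + lam*|x|, using the quadratic
# majorizer |x| <= x**2 / (2*|x_k|) + |x_k| / 2, which touches |x| at x_k.
def mm_soft_threshold(y, lam, iters=100):
    x = y  # initialize at the observation (nonzero, so |x_k| > 0 throughout)
    for _ in range(iters):
        # Minimizing the quadratic surrogate has a closed form:
        # (x - y) + lam * x / |x_k| = 0  =>  x = y / (1 + lam/|x_k|)
        x = y / (1.0 + lam / abs(x))
    return x

y, lam = 3.0, 1.0
x = mm_soft_threshold(y, lam)
# For |y| > lam the exact minimizer is sign(y) * (|y| - lam) = 2 here.
assert abs(x - 2.0) < 1e-6
```

Each iterate decreases the true objective because the surrogate lies above f everywhere and agrees with it at the current point; the iteration converges linearly to the soft-threshold solution (and only asymptotically to 0 when |y| <= lam, a known weakness of this particular majorizer).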
VARIATIONAL METHODS FOR SPECTRAL UNMIXING OF HYPERSPECTRAL IMAGES
"... This paper studies a variational Bayesian unmixing algorithm for hyperspectral images based on the standard linear mixing model. Each pixel of the image is modeled as a linear combination of endmembers whose corresponding fractions or abundances are estimated by a Bayesian algorithm. This approach r ..."
Abstract
This paper studies a variational Bayesian unmixing algorithm for hyperspectral images based on the standard linear mixing model. Each pixel of the image is modeled as a linear combination of endmembers whose corresponding fractions, or abundances, are estimated by a Bayesian algorithm. This approach requires defining prior distributions for the parameters of interest and the related hyperparameters. After defining appropriate priors for the abundances (uniform priors on the interval (0, 1)), the joint posterior distribution of the model parameters and hyperparameters is derived. The complexity of this distribution is handled by using variational methods that allow the joint distribution of the unknown parameters and hyperparameters to be approximated. Simulation results on synthetic and real data show performance similar to that of a previously published unmixing algorithm based on Markov chain Monte Carlo methods, at a significantly reduced computational cost. Index Terms — Bayesian inference, variational methods, spectral unmixing, hyperspectral images.
Variational methods for spectral unmixing of hyperspectral images
"... This paper studies a variational Bayesian unmixing algorithm for hyperspectral images based on the standard linear mixing model. Each pixel of the image is modeled as a linear combination of endmembers (assumed to be known in this paper) weighted by corresponding fractions or abundances. These mixtu ..."
Abstract
This paper studies a variational Bayesian unmixing algorithm for hyperspectral images based on the standard linear mixing model. Each pixel of the image is modeled as a linear combination of endmembers (assumed to be known in this paper) weighted by corresponding fractions, or abundances. These mixture coefficients, constrained to be positive and to sum to one, are estimated within a Bayesian framework. This approach requires defining prior distributions for the parameters of interest and the related hyperparameters. This paper assumes the abundances are positive and lower than one, thus ignoring the sum-to-one constraint; the abundance coefficients are therefore independent, and the corresponding prior distribution is chosen to enforce these new constraints. The complexity of the joint posterior distribution is handled by using variational methods that allow one to approximate the joint distribution of the unknown parameters and hyperparameters. An iterative algorithm successively updates the parameter estimates. Simulations show performance similar to that of previous Markov chain Monte Carlo unmixing algorithms but with a significantly lower computational cost.
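For the positivity constraint alone (the constraint this second formulation keeps), a simple non-Bayesian baseline is non-negative least squares. The sketch below uses a random hypothetical endmember matrix and is only a point-estimate baseline: the paper's variational approach instead places priors on the abundances and yields an approximate posterior, not just a single estimate.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
# Linear mixing model: pixel = M @ a + noise, where M stacks the (known)
# endmember spectra as columns (bands x endmembers) and a holds the
# positive abundances. M and a_true here are synthetic, for illustration.
M = rng.uniform(0, 1, size=(20, 3))          # 20 spectral bands, 3 endmembers
a_true = np.array([0.6, 0.3, 0.1])
pixel = M @ a_true + rng.normal(0, 1e-4, 20)  # low-noise observation

# Non-negative least squares recovers the abundances under positivity.
a_hat, _ = nnls(M, pixel)
assert np.allclose(a_hat, a_true, atol=1e-2)
```

With realistic noise levels and correlated endmember spectra the least-squares point estimate degrades, which is where the prior information and uncertainty quantification of the Bayesian formulation pay off.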
Variational learning for Generalized Associative Functional Networks in modeling
"... journal homepage: www.elsevier.com/locate/ecolinf ..."