Results 1 - 4 of 4
Dealing with label switching in mixture models
Journal of the Royal Statistical Society, Series B, 2000
Abstract

Cited by 109 (0 self)
In a Bayesian analysis of finite mixture models, parameter estimation and clustering are sometimes less straightforward than might be expected. In particular, the common practice of estimating parameters by their posterior mean, and summarising joint posterior distributions by marginal distributions, often leads to nonsensical answers. This is due to the so-called "label-switching" problem, which is caused by symmetry in the likelihood of the model parameters. A frequent response to this problem is to remove the symmetry using artificial identifiability constraints. We demonstrate that this fails in general to solve the problem, and describe an alternative class of approaches, relabelling algorithms, which arise from attempting to minimise the posterior expected loss under a class of loss functions. We describe in detail one particularly simple and general relabelling algorithm, and illustrate its success in dealing with the label-switching problem on two examples.
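The relabelling idea the abstract describes can be illustrated with a minimal sketch (not the paper's own algorithm): for each MCMC draw of the component means, choose the label permutation that minimises squared distance to a fixed reference draw, so that draws whose labels have switched are mapped back to a common labelling. The data and reference values below are invented for illustration.

```python
# Minimal sketch of loss-based relabelling of MCMC draws (illustrative,
# not the paper's code): each draw's component labels are permuted to
# minimise squared distance to a fixed reference draw.
from itertools import permutations

def relabel(draws, reference):
    """Permute each draw's components to best match the reference."""
    K = len(reference)
    out = []
    for draw in draws:
        # Search all K! permutations; feasible only for small K.
        best = min(permutations(range(K)),
                   key=lambda p: sum((draw[p[k]] - reference[k]) ** 2
                                     for k in range(K)))
        out.append([draw[k] for k in best])
    return out

# Two draws of two component means; the second draw's labels have switched.
draws = [[0.1, 5.2], [5.0, 0.0]]
fixed = relabel(draws, reference=[0.0, 5.0])
# The second draw is relabelled to [0.0, 5.0], restoring a common labelling.
```

In practice the reference is often refined iteratively (relabel all draws, recompute the reference as their mean, and repeat), which is closer in spirit to the loss-minimisation algorithms the paper develops.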
Bayesian Methods for Hidden Markov Models: Recursive Computing in the 21st Century
Journal of the American Statistical Association, 2002
Abstract

Cited by 86 (8 self)
Markov chain Monte Carlo (MCMC) sampling strategies can be used to simulate hidden Markov model (HMM) parameters from their posterior distribution given observed data. Some MCMC methods (for computing likelihood, conditional probabilities of hidden states, and the most likely sequence of states) used in practice can be improved by incorporating established recursive algorithms. The most important is a set of forward-backward recursions calculating conditional distributions of the hidden states given observed data and model parameters. We show how to use the recursive algorithms in an MCMC context and demonstrate mathematical and empirical results showing that a Gibbs sampler using the forward-backward recursions mixes more rapidly than another sampler often used for HMMs. We introduce an augmented-variables technique for obtaining unique state labels in HMMs and finite mixture models. We show how recursive computing allows statistically efficient use of MCMC output when estimating the hidden states. We directly calculate the posterior distribution of the hidden chain's state-space size by MCMC, circumventing asymptotic arguments underlying the Bayesian information criterion, which is shown to be inappropriate for a frequently analyzed data set in the HMM literature. The use of log-likelihood for assessing MCMC convergence is illustrated, and posterior predictive checks are used to investigate application-specific questions of model adequacy.
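The forward-backward step inside such a Gibbs sampler can be sketched as follows, under simplifying assumptions not taken from the paper: a two-state HMM with discrete emissions, a forward filtering pass computing the conditional state probabilities, then a backward pass sampling the hidden states given the filtered probabilities. All parameter values are invented for illustration.

```python
# Sketch of forward filtering / backward sampling for a discrete HMM
# (illustrative assumptions; not the paper's implementation).
import random

def forward_backward_sample(obs, trans, emit, init, rng=random.Random(0)):
    T, K = len(obs), len(init)
    # Forward filtering: alpha[t][k] is proportional to p(s_t = k | y_1..y_t).
    alpha = []
    prev = [init[k] * emit[k][obs[0]] for k in range(K)]
    s = sum(prev)
    alpha.append([p / s for p in prev])
    for t in range(1, T):
        cur = [sum(alpha[-1][j] * trans[j][k] for j in range(K)) * emit[k][obs[t]]
               for k in range(K)]
        s = sum(cur)
        alpha.append([p / s for p in cur])
    # Backward sampling: draw s_T from the final filter, then each s_t
    # given the already-sampled s_{t+1}.
    states = [None] * T
    states[-1] = rng.choices(range(K), weights=alpha[-1])[0]
    for t in range(T - 2, -1, -1):
        w = [alpha[t][k] * trans[k][states[t + 1]] for k in range(K)]
        states[t] = rng.choices(range(K), weights=w)[0]
    return states

path = forward_backward_sample(
    obs=[0, 0, 1, 1],
    trans=[[0.9, 0.1], [0.1, 0.9]],
    emit=[[0.9, 0.1], [0.1, 0.9]],  # state k mostly emits symbol k
    init=[0.5, 0.5])
```

Drawing the whole hidden sequence jointly in this way, rather than one state at a time, is what gives the forward-backward Gibbs sampler its faster mixing.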
Dealing With Multimodal Posteriors and Non-Identifiability in Mixture Models
, 1999
Abstract

Cited by 3 (0 self)
In a Bayesian analysis of finite mixture models, the lack of identifiability of the parameters often leads to a posterior distribution which is highly multimodal and symmetric, making it difficult to interpret or summarize. A common approach to this problem is to make the parameters identifiable by imposing artificial constraints. We demonstrate that this may fail to solve the problem, and describe and illustrate an alternative solution which involves post-processing the results of a Markov chain Monte Carlo (MCMC) scheme. Our method can be viewed either as a method of searching for a reasonable summary of the posterior distribution, or as a method of revising the prior distribution.
KEYWORDS: Bayesian, Classification, Clustering, Identifiability, MCMC, Mixture model, Multimodal posterior
1 Introduction
In this paper we consider problems which arise when taking a Bayesian approach to classification and clustering using mixture models. We consider the setting where we have observation...
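The artificial-constraint approach this abstract argues against can be sketched in a few lines, under invented data: each MCMC draw of (mean, weight) pairs is relabelled so that the means satisfy an ordering constraint. This removes the label symmetry, but when components overlap the constraint can split a single posterior mode incorrectly, which motivates the post-processing alternative.

```python
# Sketch of constraint-based relabelling (the approach the abstract
# shows can fail): sort each draw's components by their mean, so the
# ordering constraint mu_1 < mu_2 < ... holds in every draw.
def order_constrain(draws):
    """Relabel each draw's (mean, weight) components by ascending mean."""
    return [sorted(draw, key=lambda comp: comp[0]) for draw in draws]

# Two draws of (mean, weight) pairs; the first draw's labels have switched.
draws = [[(5.1, 0.7), (0.2, 0.3)], [(0.0, 0.3), (4.9, 0.7)]]
constrained = order_constrain(draws)
# Component 1 now always has the smaller mean.
```

When the true means are well separated this works, but with overlapping components the constraint can still mislabel draws, which is the failure mode that post-processing methods are designed to address.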
Label Switch in Mixture Model and Relabeling Algorithm (Project for Reading Course, prepared by Fanfu Xie and ZhengFei Chen)
Abstract
When MCMC is used to perform Bayesian analysis for mixture models, the so-called label-switching problem affects the clustering analysis. If the problem is not handled properly, the ergodic average of the MCMC samples is not appropriate for the estimation of the parameters. In this paper, we review the label-switching problem in mixture models, and discuss and implement the relabelling algorithm suggested by Stephens. To illustrate the problem, we apply a data-augmentation Gaussian mixture model to the galaxy data with different numbers of components.