Results 1 - 9 of 9
Ascent-based Monte Carlo EM
, 2004
Abstract

Cited by 12 (7 self)
The EM algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high-dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence. Even if this sample size were known at the onset of implementation of MCEM, its use throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The proposed algorithm improves on similar existing methods by: (i) recovering EM’s ascent (i.e., likelihood-increasing) property with high probability, (ii) being more robust to the impact of user-defined inputs, and (iii) handling classical Monte Carlo and Markov chain Monte Carlo within a common framework. Because of (i) we refer to the algorithm as “Ascent-based MCEM”. We apply Ascent-based MCEM to a variety of examples, including one where it is used to dramatically accelerate the convergence of deterministic EM.
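The MCEM idea in the abstract above can be sketched in a few lines: the E-step expectation is replaced by an average over simulated draws of the missing data, with the Monte Carlo sample size grown across iterations. The toy model (a normal mean with missing observations), the geometric growth rule, and all variable names below are illustrative stand-ins, not the paper's data-driven ascent-based rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(mu, 1), with 30 of 100 observations missing.
y = rng.normal(2.0, 1.0, size=100)
observed, n_missing = y[:70], 30

mu = 0.0   # deliberately poor starting value
m = 10     # initial Monte Carlo sample size
for t in range(50):
    # Monte Carlo E-step: impute the missing values by drawing m
    # samples each from their conditional distribution N(mu, 1).
    z = rng.normal(mu, 1.0, size=(m, n_missing))
    # M-step: maximise the Monte Carlo estimate of the Q-function;
    # here that is the mean of the completed data, averaged over draws.
    mu = (observed.sum() + z.sum(axis=1).mean()) / (len(observed) + n_missing)
    # Crude stand-in for the paper's data-driven rule: spend more
    # Monte Carlo effort as the iterates approach convergence.
    m = int(m * 1.2) + 1
```

In this toy case the exact E-step is available in closed form, so one can check that the Monte Carlo iterates settle near the exact EM fixed point (the observed-data mean); in a realistic application the conditional draws would come from MCMC and the sample-size schedule from the ascent check.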
The EM Algorithm, Its Stochastic Implementation and Global Optimization: Some Challenges and Opportunities for OR
, 2006
Abstract

Cited by 10 (4 self)
The EM algorithm is a very powerful optimization method and has reached popularity in many fields. Unfortunately, EM is only a local optimization method and can get stuck in suboptimal solutions. While more and more contemporary data/model combinations yield more than one optimum, there have been only very few attempts at making EM suitable for global optimization. In this paper we review the basic EM algorithm, its properties and challenges, and we focus in particular on its stochastic implementation. The stochastic EM implementation promises relief from some of the contemporary data/model challenges and it is particularly well-suited for a wedding with global optimization ideas, since most global optimization paradigms are also based on the principles of stochasticity. We review some of the challenges of the stochastic EM implementation and propose a new algorithm that combines the principles of EM with those of the Genetic Algorithm. While this new algorithm shows some promising results for clustering of an online auction database of functional objects, the primary goal of this work is to bridge a gap between the field of statistics, which is home to extensive research on the EM algorithm, and the field of operations research, in which work on global optimization thrives, and to stir new ideas for joint research between the two.
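To illustrate the local-optimum problem this abstract describes, here is a deliberately minimal two-component Gaussian mixture fitted by EM, together with the simplest stochastic safeguard: random restarts that keep the run with the highest log-likelihood. This sketch assumes unit variances and equal weights throughout; it is not the EM/Genetic Algorithm hybrid proposed in the paper, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated clusters: EM started badly can merge the two
# means into a suboptimal solution, so starting values matter.
x = np.concatenate([rng.normal(-4, 1, 200), rng.normal(4, 1, 200)])

def em_gmm(x, mu, n_iter=100):
    """Two-component EM with unit variances and equal weights."""
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        d = np.stack([np.exp(-0.5 * (x - m) ** 2) for m in mu])
        r = d / d.sum(axis=0)
        # M-step: responsibility-weighted means.
        mu = (r * x).sum(axis=1) / r.sum(axis=1)
    d = np.stack([np.exp(-0.5 * (x - m) ** 2) for m in mu])
    loglik = np.log(0.5 * d.sum(axis=0) / np.sqrt(2 * np.pi)).sum()
    return mu, loglik

# Random restarts: the cheapest stochastic guard against local optima.
best_mu, best_ll = max((em_gmm(x, rng.normal(0, 5, size=2))
                        for _ in range(10)), key=lambda t: t[1])
```

Restarts are the baseline that population-based schemes like the EM/GA hybrid aim to improve on: instead of running independent chains and discarding all but one, they let candidate solutions exchange information.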
Crossed Random Effect Models for Multiple Outcomes in a Study of Teratogenesis
Abstract

Cited by 3 (0 self)
... This article proposes the use of a logistic regression model with crossed random effect structures to address all three questions simultaneously. We use the proposed models to analyze data from a study investigating the effects of in utero antiepileptic drug exposure on fetal development.
Monotonicity properties of the Monte Carlo EM algorithm and connections with simulated likelihood. Available from http://www2.warwick.ac.uk/fac/sci/statistics/crism/research/2007/paper0724. Paper No
, 2007
Abstract

Cited by 2 (1 self)
In this note we show that the Monte Carlo EM algorithm, appropriately constructed with importance reweighting, monotonically increases a corresponding simulated likelihood. This result is formally proved but also intuitively explained by a formulation of the problem using auxiliary variables.
Bayesian Statistical Methods for Audio and Music Processing
, 2008
Abstract

Cited by 1 (1 self)
Bayesian statistical methods provide a formalism for arriving at solutions to various problems faced in audio processing. In real environments, acoustical conditions and sound sources are highly variable, yet audio signals often possess significant statistical structure. There is a great deal of prior knowledge available about why this statistical structure is present. This includes knowledge of the physical mechanisms by which sounds are generated, the cognitive processes by which sounds are perceived and, in the context of music, the abstract mechanisms by which high-level sound structure is compiled. Bayesian hierarchical techniques provide a natural means for unification of these bodies of prior knowledge, allowing the formulation of highly structured models for observed audio data and latent processes at various levels of abstraction. They also permit the inclusion of desirable modelling components such as change-point structures and model-order specifications. The resulting models exhibit complex statistical structure and in practice, highly adaptive and powerful computational techniques are needed to perform inference. In this chapter, we review some of the statistical models and associated inference methods developed recently for ...
Bayesian Inference for Nonnegative Matrix Factorisation Models (doi:10.1155/2009/785152)
Abstract
We describe nonnegative matrix factorisation (NMF) with a Kullback-Leibler (KL) error measure in a statistical framework, with a hierarchical generative model consisting of an observation and a prior component. Omitting the prior leads to the standard KL-NMF algorithms as special cases, where maximum likelihood parameter estimation is carried out via the Expectation-Maximisation (EM) algorithm. Starting from this view, we develop full Bayesian inference via variational Bayes or Monte Carlo. Our construction retains conjugacy and enables us to develop more powerful models while retaining attractive features of standard NMF such as monotonic convergence and easy implementation. We illustrate our approach on model order selection and image reconstruction. Copyright © 2009 Ali Taylan Cemgil. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
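The maximum likelihood special case mentioned in the abstract above, KL-NMF fitted by EM, can be sketched with the classical multiplicative updates; each sweep is one EM step under a Poisson observation model and never increases the generalised KL error. The data matrix, rank, and iteration count below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary positive data matrix and a rank-2 factorisation X ≈ W @ H.
X = rng.random((8, 6)) + 0.1
W = rng.random((8, 2)) + 0.1
H = rng.random((2, 6)) + 0.1

def kl_div(X, WH):
    # Generalised Kullback-Leibler divergence D(X || WH).
    return np.sum(X * np.log(X / WH) - X + WH)

d0 = kl_div(X, W @ H)
for _ in range(200):
    # Multiplicative KL-NMF updates, equivalent to EM under the
    # observation model X_ij ~ Poisson((W @ H)_ij).
    W *= (X / (W @ H)) @ H.T / H.sum(axis=1)
    H *= W.T @ (X / (W @ H)) / W.sum(axis=0)[:, None]
d_final = kl_div(X, W @ H)
```

The monotonic convergence the abstract says the Bayesian extensions retain shows up here as `d_final` being strictly below the initial divergence `d0`.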
unknown title
, 2004
Abstract
The EM algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high-dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence. Even if this sample size were known at the onset of implementation of MCEM, its use throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The proposed algorithm improves on similar existing methods by: (i) recovering EM’s ascent (i.e., likelihood-increasing) property with high probability, (ii) being more robust to the impact of user-defined inputs, and (iii) handling classical Monte Carlo and Markov chain Monte Carlo within a common framework. Because of (i) we refer to the algorithm as “Ascent-based MCEM”. We apply Ascent-based MCEM to a variety of examples, including one where it is used to dramatically accelerate the convergence of deterministic EM.
maximization
, 2003
Abstract
Summary. The expectation–maximization (EM) algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence. Even if this sample size were known at the onset of implementation of MCEM, its use throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The algorithm proposed improves on similar existing methods by recovering EM’s ascent (i.e. likelihood increasing) property with high probability, being more robust to the effect of user-defined inputs and handling classical Monte Carlo and Markov chain Monte Carlo methods within a common framework. Because of the first of these properties we refer to the algorithm as ‘ascent-based MCEM’. We apply ascent-based MCEM to a variety of examples, including one where it is used to accelerate the convergence of deterministic EM dramatically.
Bayesian Inference for Nonnegative Matrix Factorisation Models
, 2008
Abstract
We describe nonnegative matrix factorisation (NMF) with a Kullback-Leibler error measure in a statistical framework, with a hierarchical generative model consisting of an observation and a prior component. Omitting the prior leads to standard NMF algorithms as special cases, where maximum likelihood parameter estimation is carried out via the Expectation-Maximisation (EM) algorithm. Starting from this view, we develop Bayesian extensions that facilitate more powerful modelling and allow full Bayesian inference via variational Bayes or Monte Carlo. Our construction retains conjugacy and enables us to develop models that fit better to real data while retaining attractive features of standard NMF such as fast convergence and easy implementation. We illustrate our approach on model order selection and image reconstruction.