Results 1–10 of 1,556
Accelerating expectation-maximization algorithms with frequent updates
in CLUSTER. IEEE, 2012
"... Expectation-maximization is a popular approach for parameter estimation in many applications such as image understanding, document classification, or genome data analysis. Despite the popularity of EM algorithms, it is challenging to efficiently implement these algorithms in a distributed e ..."
Cited by 5 (4 self)
An expectation-maximization algorithm for analysis of evolution of exon-intron structure of eukaryotic genes
Lect. Notes in Comput. Sci., 2005
"... We propose a detailed model of evolution of exon-intron structure of eukaryotic genes that takes into account gene-specific intron gain and loss rates, branch-specific gain and loss coefficients, invariant sites incapable of intron gain, and rate variability of both gain and loss which is gamma-distributed across sites. We develop an expectation-maximization algorithm to estimate the parameters of this model, and study its performance using simulated data. ..."
Cited by 8 (2 self)
Maximum-entropy expectation-maximization algorithm for image processing and sensor networks
in Proc. SPIE Electronic Imaging: Science and Technology Conf. Visual Communications and Image Processing, 2007
"... In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm. We use the proposed algorithm for density estimation. The maximum-entropy constraint is imposed for smoothness of the estimated density function. The derivation of the MEEM algorithm requires determinatio ..."
Cited by 2 (2 self)
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
IEEE Transactions on Medical Imaging, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limi ..."
"... methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into an HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly ..."
Cited by 639 (15 self)
Blobworld: Image segmentation using Expectation-Maximization and its application to image querying
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999
"... Retrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation which provides a transformation from the raw pixel data to a small set of image regions which are coherent in color and texture. This "Blobworld" representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection ..."
Cited by 438 (10 self)
Hierarchical mixtures of experts and the EM algorithm
1993
"... We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an online learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. ..."
Cited by 885 (21 self)
Algorithms for Non-negative Matrix Factorization
in NIPS, 2001
"... Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. ..."
Cited by 1246 (5 self)
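The multiplicative update rules the snippet above refers to can be sketched as follows for the least-squares variant; this is a minimal illustration in common notation (V ≈ WH, with names, iteration count, and the small epsilon guard chosen here for illustration, not taken from the paper text):

```python
import numpy as np

def nmf_frobenius(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Multiplicative-update NMF minimizing ||V - WH||_F (a sketch)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # Each update multiplies by a ratio of nonnegative terms, so W and H
        # stay nonnegative; under these rules the objective is nonincreasing.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The KL-divergence variant mentioned in the abstract differs only in the multiplicative factor, which is the point the paper analyzes via an EM-style auxiliary function.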
A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants
Learning in Graphical Models, 1998
"... The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the d ..."
"... estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible. ... The Expectation-Maximization (EM) algorithm finds maximum likelihood parameter estimates in problems ..."
Cited by 993 (18 self)
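The function the snippet alludes to can be written, in standard notation (not quoted from the paper), for observed data x, hidden variables z, and any distribution q over z:

```latex
F(q, \theta) = \mathbb{E}_{q(z)}\!\left[\log p(x, z \mid \theta)\right] + H(q)
             = \log p(x \mid \theta) - \mathrm{KL}\!\left(q(z) \,\|\, p(z \mid x, \theta)\right)
```

The E step maximizes F over q (attained at q(z) = p(z | x, θ)) and the M step maximizes F over θ with q fixed, so neither step can decrease F; this is the property that licenses the incremental and sparse variants the abstract mentions.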
Text Classification from Labeled and Unlabeled Documents using EM
Machine Learning, 1999
"... This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents ..."
Cited by 1033 (15 self)
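The EM-plus-naive-Bayes scheme described above can be sketched as: fit multinomial naive Bayes on the labeled word-count vectors, then alternate assigning soft class labels to the unlabeled documents (E step) and refitting with those fractional counts (M step). Variable names and the Laplace smoothing constant below are illustrative assumptions, not from the paper:

```python
import numpy as np

def em_naive_bayes(X_lab, y_lab, X_unl, n_classes, n_iter=10, alpha=1.0):
    """Semi-supervised EM with a multinomial naive Bayes model (a sketch)."""
    R_lab = np.eye(n_classes)[y_lab]                     # fixed one-hot labels
    R_unl = np.full((len(X_unl), n_classes), 1.0 / n_classes)
    for _ in range(n_iter):
        R = np.vstack([R_lab, R_unl])
        X = np.vstack([X_lab, X_unl])
        # M step: class priors and Laplace-smoothed word probabilities
        # from both hard (labeled) and fractional (unlabeled) counts.
        prior = R.sum(axis=0) / R.sum()
        word_counts = R.T @ X + alpha                    # shape (C, vocab)
        theta = word_counts / word_counts.sum(axis=1, keepdims=True)
        # E step: posterior class responsibilities for unlabeled docs only.
        log_post = X_unl @ np.log(theta).T + np.log(prior)
        log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
        R_unl = np.exp(log_post)
        R_unl /= R_unl.sum(axis=1, keepdims=True)
    return prior, theta
```

A new document with count vector x would then be classified by argmax over `x @ np.log(theta).T + np.log(prior)`.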
Space-Alternating Generalized Expectation-Maximization Algorithm
IEEE Trans. Signal Processing, 1994
"... The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intra ..."
Cited by 193 (28 self)
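The classical EM iteration that this snippet contrasts against can be sketched for a one-dimensional two-component Gaussian mixture; the model choice and quantile initialization are illustrative (the paper itself concerns the SAGE generalization, not this example):

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Classical EM for a 1-D mixture of two Gaussians (a sketch)."""
    mu = np.quantile(x, [0.25, 0.75])        # spread the initial means apart
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E step: responsibilities under the current parameters.
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M step: weighted maximum-likelihood updates.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var
```

Each iteration maximizes the expected complete-data log-likelihood over all parameters jointly; SAGE instead alternates over smaller "hidden-data spaces" for subsets of the parameters to speed convergence.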