Results 1–10 of 783,557
A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants
 Learning in Graphical Models
, 1998
"... . The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the d ..."
Abstract

Cited by 988 (18 self)
 Add to MetaCart
. The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect
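The function this abstract alludes to can be written out in standard notation (a reconstruction of the free-energy view in my own symbols, not a quote from the paper):

```latex
% For observed x, unobserved z, parameters \theta, and any distribution q(z):
F(q, \theta) = \mathbb{E}_{q}\left[\log p(x, z \mid \theta)\right] + H(q)
             = \log p(x \mid \theta) - \mathrm{KL}\left(q(z) \,\|\, p(z \mid x, \theta)\right)
% E step: maximize F over q, giving q(z) = p(z | x, \theta) (the KL term vanishes).
% M step: maximize F over \theta with q held fixed.
```

Both steps increase (or leave unchanged) the same objective F, which is what licenses the incremental and sparse variants in the title.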
Text Classification from Labeled and Unlabeled Documents using EM
 Machine Learning
, 1999
"... This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large qua ..."
Abstract

Cited by 1033 (19 self)
 Add to MetaCart
quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of ExpectationMaximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents
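The labeled-then-unlabeled loop described in the abstract can be sketched as follows (a minimal illustration with a toy bag-of-words setup; all function names, the smoothing constant, and the data shapes are my own assumptions, not the paper's code):

```python
import numpy as np

def train_nb(X, R, alpha=1.0):
    """M step: fit naive Bayes priors and word probabilities from (soft) labels.
    X: (n_docs, n_words) word counts; R: (n_docs, n_classes) responsibilities."""
    prior = (R.sum(0) + alpha) / (R.sum() + alpha * R.shape[1])
    word = R.T @ X + alpha                      # smoothed class-word counts
    word /= word.sum(1, keepdims=True)
    return prior, word

def posteriors(X, prior, word):
    """E step: P(class | doc) under the multinomial naive Bayes model."""
    log_p = np.log(prior) + X @ np.log(word).T
    log_p -= log_p.max(1, keepdims=True)        # stabilize before exponentiating
    p = np.exp(log_p)
    return p / p.sum(1, keepdims=True)

def semi_supervised_em(X_l, y_l, X_u, n_classes=2, iters=20):
    R_l = np.eye(n_classes)[y_l]                # labeled docs: fixed hard labels
    prior, word = train_nb(X_l, R_l)            # initial fit on labeled data only
    for _ in range(iters):
        R_u = posteriors(X_u, prior, word)      # E step on the unlabeled pool
        X = np.vstack([X_l, X_u])
        R = np.vstack([R_l, R_u])
        prior, word = train_nb(X, R)            # M step on labeled + unlabeled
    return prior, word
```

The design choice that matters is that labeled documents keep their hard labels across iterations while unlabeled ones get fractional class memberships from the current model.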
A gentle tutorial on the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models
, 1997
"... We describe the maximumlikelihood parameter estimation problem and how the Expectationform of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) fi ..."
Abstract

Cited by 678 (4 self)
 Add to MetaCart
We describe the maximumlikelihood parameter estimation problem and how the Expectationform of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2
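The tutorial's first application, a Gaussian mixture fit by EM, can be condensed to a few lines (an illustrative sketch for a two-component 1-D mixture with a crude initialisation of my choosing, not the paper's derivation verbatim):

```python
import numpy as np

def em_gmm(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture on data x (1-D array)."""
    mu = np.array([x.min(), x.max()], dtype=float)  # spread-based init
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E step: responsibility of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) \
             * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(1, keepdims=True)
        # M step: re-estimate weights, means, variances from responsibilities
        n = r.sum(0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(0) / n
    return pi, mu, var
```

The second application (hidden Markov models) replaces the per-point responsibilities with the forward-backward posteriors of the Baum-Welch procedure, but the E/M alternation is the same.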
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
 IEEE Transactions on Medical Imaging
, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogrambased model, the FM has an intrinsic limi ..."
Abstract

Cited by 619 (14 self)
 Add to MetaCart
methods are limited to using MRF as a general prior in an FM modelbased approach. To fit the HMRF model, an EM algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into a HMRFEM framework, an accurate and robust segmentation can be achieved. More importantly
Probabilistic Principal Component Analysis
 Journal of the Royal Statistical Society, Series B
, 1999
"... Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximumlikelihood estimation of paramet ..."
Abstract

Cited by 703 (5 self)
 Add to MetaCart
of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss, with illustrative examples, the advantages conveyed by this probabilistic approach
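Besides the iterative EM route mentioned in the abstract, the PPCA likelihood also admits a closed-form maximum when solved via an eigendecomposition of the sample covariance; a sketch of that route (function name and interface are mine):

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form ML fit of probabilistic PCA.
    X: (n, d) data; q: latent dimension. Returns (W, sigma2, mu)."""
    mu = X.mean(0)
    S = np.cov(X - mu, rowvar=False)        # sample covariance (d, d)
    vals, vecs = np.linalg.eigh(S)          # eigenvalues ascending
    vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder descending
    sigma2 = vals[q:].mean()                # noise = average discarded variance
    W = vecs[:, :q] * np.sqrt(vals[:q] - sigma2)
    return W, sigma2, mu
```

The key probabilistic ingredient is the isotropic noise term sigma2: it is the average variance the q-dimensional subspace fails to explain, which is what turns plain PCA into a proper density model.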
Mixtures of Probabilistic Principal Component Analysers
, 1998
"... Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a com ..."
Abstract

Cited by 537 (6 self)
 Add to MetaCart
maximumlikelihood framework, based on a specific form of Gaussian latent variable model. This leads to a welldefined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context
Probabilistic Latent Semantic Analysis
 In Proc. of Uncertainty in Artificial Intelligence, UAI’99
, 1999
"... Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of twomode and cooccurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Sema ..."
Abstract

Cited by 760 (9 self)
 Add to MetaCart
to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments.
The geometry of graphs and some of its algorithmic applications
 Combinatorica
, 1995
"... In this paper we explore some implications of viewing graphs as geometric objects. This approach offers a new perspective on a number of graphtheoretic and algorithmic problems. There are several ways to model graphs geometrically and our main concern here is with geometric representations that r ..."
Abstract

Cited by 543 (20 self)
 Add to MetaCart
that respect the metric of the (possibly weighted) graph. Given a graph G we map its vertices to a normed space in an attempt to (i) Keep down the dimension of the host space and (ii) Guarantee a small distortion, i.e., make sure that distances between vertices in G closely match the distances between
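The distortion the abstract asks to keep small can be computed directly for any concrete embedding; a sketch using the Euclidean norm (the toy graph and embedding in the test are invented for illustration):

```python
import itertools
import math

def distortion(dist, emb):
    """Bi-Lipschitz distortion of a vertex embedding.
    dist[u][v]: graph distance between vertices u, v;
    emb[u]: point in R^k for vertex u (Euclidean norm assumed)."""
    expand = contract = 1.0
    for u, v in itertools.combinations(dist, 2):
        e = math.dist(emb[u], emb[v])    # distance in the host space
        g = dist[u][v]                   # distance in the graph
        expand = max(expand, e / g)      # worst-case stretch
        contract = max(contract, g / e)  # worst-case shrinkage
    return expand * contract
```

For example, embedding the 4-cycle as the unit square gives distortion sqrt(2): adjacent vertices land at distance 1 as required, but opposite vertices at graph distance 2 land only sqrt(2) apart.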
Measuring individual differences in implicit cognition: The implicit association test
 Journal of Personality and Social Psychology 74:1464–1480
, 1998
"... An implicit association test (IAT) measures differential association of 2 target concepts with an attribute. The 2 concepts appear in a 2choice task (e.g., flower vs. insect names), and the attribute in a 2nd task (e.g., pleasant vs. unpleasant words for an evaluation attribute). When instructions ..."
Abstract

Cited by 937 (63 self)
 Add to MetaCart
An implicit association test (IAT) measures differential association of 2 target concepts with an attribute. The 2 concepts appear in a 2choice task (e.g., flower vs. insect names), and the attribute in a 2nd task (e.g., pleasant vs. unpleasant words for an evaluation attribute). When instructions
Self-discrepancy: A theory relating self and affect
 Psychological Review
, 1987
"... This article presents a theory of how different types of discrepancies between selfstate representations are related to different kinds of emotional vulnerabilities. One domain of the self (actual; ideal; ought) and one standpoint on the self (own; significant other) constitute each type of selfs ..."
Abstract

Cited by 567 (7 self)
 Add to MetaCart
state representation. It is proposed that different types of selfdiscrepancies represent different types of negative psychological situations that are associated with different kinds of discomfort. Discrepancies between the actual/own selfstate (i.e., the selfconcept) and ideal selfstales (i.e., representations