Results 1 – 7 of 7
Space-Alternating Generalized Expectation-Maximization Algorithm
IEEE Trans. Signal Processing, 1994
Abstract

Cited by 193 (28 self)
The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. This paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. We prove that the sequence of estimates monotonically increases the penalized-likelihood objective, we derive asymptotic convergence rates, and we provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, our SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.
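The "classical EM paradigm" that the abstract contrasts with SAGE can be sketched as a simultaneous update of all parameters from one complete-data space. The minimal two-component 1-D Gaussian-mixture EM below (the model choice, initialization, and all names are illustrative, not taken from the paper) shows the E-step/M-step structure in which every parameter is refreshed at once:

```python
import math

def em_gmm_1d(data, n_iter=50):
    """Minimal EM for a two-component 1-D Gaussian mixture.

    Illustrates the classical paradigm: every parameter (weights,
    means, variances) is updated simultaneously in each M-step.
    """
    # crude illustrative initialization
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibilities (posterior component probabilities)
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: simultaneous update of all parameters
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var
```

On two well-separated clusters this recovers the component means; the paper's point is that such simultaneous updates converge slowly and couple badly with smoothness penalties, which SAGE's sequential hidden-data updates avoid.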
Penalized Maximum-Likelihood Image Reconstruction using Space-Alternating Generalized EM Algorithms
IEEE Trans. Image Processing, 1995
Abstract

Cited by 102 (31 self)
Most expectation-maximization (EM) type algorithms for penalized maximum-likelihood image reconstruction converge slowly, particularly when one incorporates additive background effects such as scatter, random coincidences, dark current, or cosmic radiation. In addition, regularizing smoothness penalties (or priors) introduce parameter coupling, rendering intractable the M-steps of most EM-type algorithms. This paper presents space-alternating generalized EM (SAGE) algorithms for image reconstruction, which update the parameters sequentially using a sequence of small "hidden" data spaces, rather than simultaneously using one large complete-data space. The sequential update decouples the M-step, so the maximization can typically be performed analytically. We introduce new hidden-data spaces that are less informative than the conventional complete-data space for Poisson data and that yield significant improvements in convergence rate. This acceleration is due to statistical considerations, not numerical overrelaxation methods, so monotonic increases in the objective function are guaranteed. We provide a general global convergence proof for SAGE methods with nonnegativity constraints.
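The key idea, decoupling a penalty-coupled M-step by updating parameters one at a time, can be illustrated on a toy smoothness-penalized quadratic objective. This is a sketch of sequential coordinate-wise maximization in the spirit of SAGE's sequential updates, not the paper's Poisson reconstruction algorithm; all names are ours:

```python
def sequential_penalized_update(y, beta=1.0, n_sweeps=100):
    """Sequential (one-parameter-at-a-time) maximization of
    Phi(t) = -sum_i (y_i - t_i)^2 - beta * sum_i (t_{i+1} - t_i)^2.

    The smoothness penalty couples neighboring parameters, so a joint
    M-step would need a full linear solve; updating one t_j at a time
    has a closed form, and Phi increases monotonically at every step.
    """
    t = list(y)  # start from the unpenalized estimate
    n = len(y)
    for _ in range(n_sweeps):
        for j in range(n):
            nb = []  # current values of the neighboring parameters
            if j > 0:
                nb.append(t[j - 1])
            if j < n - 1:
                nb.append(t[j + 1])
            # exact maximizer of Phi in t_j with the others held fixed
            t[j] = (y[j] + beta * sum(nb)) / (1 + beta * len(nb))
    return t

def phi(y, t, beta=1.0):
    """The penalized objective, for checking monotone improvement."""
    return (-sum((yi - ti) ** 2 for yi, ti in zip(y, t))
            - beta * sum((t[i + 1] - t[i]) ** 2 for i in range(len(t) - 1)))
```

Because each inner step exactly maximizes the objective over one coordinate, the sequence of sweeps can never decrease Phi, mirroring the monotonicity guarantee the abstract emphasizes.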
Penalized Maximum Likelihood Estimator for Normal Mixtures
2000
Abstract

Cited by 22 (3 self)
The estimation of the parameters of a mixture of Gaussian densities is considered within the framework of maximum likelihood. Because the likelihood function is unbounded, the maximum likelihood estimator fails to exist. We adopt a solution to the likelihood degeneracy that consists of penalizing the likelihood function. The resulting penalized likelihood function is bounded over the parameter space, which guarantees the existence of the penalized maximum likelihood estimator. As an original contribution, we provide asymptotic properties, and in particular a consistency proof, for the penalized maximum likelihood estimator. Numerical examples for the finite-data case show the performance of the penalized estimator compared with the standard one.
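The degeneracy described here, and the way a penalty restores boundedness, can be demonstrated numerically. In the sketch below, centering one component's mean on a data point and shrinking its variance drives the plain mixture log-likelihood toward infinity, while an inverse-variance-type penalty keeps the objective bounded. The penalty is one common illustrative choice, not necessarily the paper's exact form, and all names are ours:

```python
import math

def gmm_loglik(data, mu, sigma, w=0.5):
    """Log-likelihood of a two-component mixture w*N(mu, sigma^2) +
    (1-w)*N(0, 1). Unbounded as sigma -> 0 with mu at a data point."""
    ll = 0.0
    for x in data:
        p1 = (w / (math.sqrt(2 * math.pi) * sigma)
              * math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)))
        p2 = ((1 - w) / math.sqrt(2 * math.pi)
              * math.exp(-x ** 2 / 2))
        ll += math.log(p1 + p2)
    return ll

def penalized_loglik(data, mu, sigma, lam=1.0):
    """Penalized version: the -lam*(1/sigma^2 + log sigma^2) term
    (an inverse-gamma-type penalty, used here for illustration)
    diverges to -infinity faster than the likelihood diverges to
    +infinity, so the penalized objective stays bounded above."""
    return (gmm_loglik(data, mu, sigma)
            - lam * (1.0 / sigma ** 2 + math.log(sigma ** 2)))
```

Evaluating both objectives at mu equal to a data point and sigma shrinking toward zero makes the contrast concrete: the unpenalized log-likelihood keeps growing, the penalized one collapses.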
Space-Alternating Generalized EM Algorithms for Penalized Maximum-Likelihood Image Reconstruction
1994
Abstract

Cited by 11 (8 self)
Most expectation-maximization (EM) type algorithms for penalized maximum-likelihood image reconstruction converge particularly slowly when one incorporates additive background effects such as scatter, random coincidences, dark current, or cosmic radiation. In addition, regularizing smoothness penalties (or priors) introduce parameter coupling, rendering intractable the M-steps of most EM-type algorithms. This report presents space-alternating generalized EM (SAGE) algorithms for image reconstruction, which update the parameters sequentially using a sequence of small "hidden" data spaces, rather than simultaneously using one large complete-data space. The sequential update decouples the M-step, so the maximization can typically be performed analytically. We introduce new hidden-data spaces that are less informative than the conventional complete-data space for Poisson data and that yield significant improvements in convergence rate. This acceleration is due to statistical considerations, not numerical overrelaxation methods, so monotonic increases in the objective function are guaranteed.
Penalized Maximum Likelihood Estimation for Normal Mixture Distributions
Scandinavian Journal of Statistics, 2002
Abstract

Cited by 3 (0 self)
Mixture models form the essential basis of data clustering within a statistical framework. Here, the estimation of the parameters of a mixture of Gaussian densities is considered. In this particular context, it is well known that the maximum likelihood approach is statistically ill-posed: the likelihood function is not bounded above, because of singularities at the boundary of the parameter domain. We show that such degeneracy can be avoided by penalizing the likelihood function with a suitable penalty function.
Space-Alternating Generalized Expectation-Maximization Algorithm
Abstract
The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. This paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. We prove that the sequence of estimates monotonically increases the penalized-likelihood objective, we derive asymptotic convergence rates, and we provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, our SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.
Speeding Up Iterative Ontology Alignment using Block-Coordinate Descent
Abstract
 Add to MetaCart
(Show Context)
In domains such as biomedicine, ontologies are prominently utilized for annotating data. Consequently, aligning ontologies facilitates integrating data. Several algorithms exist for automatically aligning ontologies with diverse levels of performance. As alignment applications evolve and exhibit online run-time constraints, performing the alignment in a reasonable amount of time without compromising the quality of the alignment is a crucial challenge. A large class of alignment algorithms is iterative and often consumes more time than others in delivering solutions of high quality. We present a novel and general approach for speeding up the multivariable optimization process utilized by these algorithms. Specifically, we use the technique of block-coordinate descent (BCD), which exploits the subdimensions of the alignment problem identified using a partitioning scheme. We integrate this approach into multiple well-known alignment algorithms and show that the enhanced algorithms generate similar or improved alignments in significantly less time on a comprehensive testbed of ontology pairs. Because BCD does not overly constrain how we partition or order the parts, we vary the partitioning and ordering schemes in order to empirically determine the best schemes for each of the selected algorithms. As biomedicine represents a key application domain for ontologies, we introduce a comprehensive biomedical ontology testbed for the community in order to evaluate alignment algorithms. Because biomedical ontologies tend to be large, default iterative techniques find it difficult to produce a good-quality alignment within a reasonable amount of time. We align a significant number of ontology pairs from this testbed using BCD-enhanced algorithms. Our contributions represent an important step toward making a significant class of alignment techniques computationally feasible.
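Block-coordinate descent itself is easy to sketch: partition the optimization variables into blocks, then sweep the blocks in some order, improving only the current block's coordinates while the rest stay fixed. The partition, ordering, toy objective, and all names below are illustrative, not the paper's alignment formulation:

```python
def bcd_minimize(x0, blocks, grad, step=0.1, n_rounds=200):
    """Generic block-coordinate descent sketch.

    On each round, sweep `blocks` in a fixed order (one possible
    ordering scheme) and take a gradient step only on the coordinates
    in the current block, holding all other coordinates fixed.
    """
    x = list(x0)
    for _ in range(n_rounds):
        for block in blocks:
            g = grad(x)          # gradient at the current point
            for i in block:      # update only this block's coordinates
                x[i] -= step * g[i]
    return x

# Toy coupled objective for demonstration:
#   f(x) = sum_i (x_i - t_i)^2 + 0.5 * (x_0 - x_2)^2
def toy_grad(x, t=(1.0, 2.0, 3.0)):
    g = [2.0 * (x[i] - t[i]) for i in range(3)]
    g[0] += x[0] - x[2]          # coupling term links blocks
    g[2] -= x[0] - x[2]
    return g
```

Because the coupling term links x_0 and x_2 across blocks, the sweeps must iterate to reach the joint minimizer, which is exactly the regime where the choice of partitioning and ordering schemes, the knobs the paper varies empirically, matters.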