Results 1–10 of 22
A tutorial on MM algorithms
Amer. Statist., 2004
Abstract
Cited by 122 (4 self)
Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the log-likelihood. Iterative optimization of a surrogate function, as exemplified by an EM algorithm, does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. The current article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described. In addition to surveying previous work on MM algorithms, this article introduces some new material on constrained optimization and standard error estimation. Key words and phrases: constrained optimization, EM algorithm, majorization, minorization, Newton–Raphson.
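As a toy illustration of the majorization idea (our own example, not one taken from the paper): the sum of absolute deviations f(θ) = Σ|xᵢ − θ| is majorized at the current iterate θₖ by the quadratic Σ(xᵢ − θ)²/(2|xᵢ − θₖ|) plus a constant; the quadratic touches f at θₖ, lies above it elsewhere, and has a closed-form minimizer (a weighted mean). Iterating drives f downhill toward the sample median.

```python
import numpy as np

def mm_median(x, theta, n_iter=50, eps=1e-12):
    """Toy MM sketch (not from the paper): minimize f(theta) = sum |x_i - theta|
    by repeatedly minimizing the quadratic majorizer
    g(theta | theta_k) = sum (x_i - theta)^2 / (2 |x_i - theta_k|) + const."""
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(x - theta), eps)  # majorizer weights
        theta = np.sum(w * x) / np.sum(w)             # closed-form surrogate minimum
    return theta

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0])
theta_hat = mm_median(x, theta=x.mean())  # converges to the sample median, 3.0
```

Each update minimizes the surrogate rather than f itself, yet f is guaranteed not to increase, which is the descent property the abstract describes (mirrored for maximization in EM).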
Statistical inference for discretely observed Markov jump processes
 Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Abstract
Cited by 28 (4 self)
Likelihood inference for discretely observed Markov jump processes with finite state space is investigated. The existence and uniqueness of the maximum likelihood estimator of the intensity matrix are examined; this topic is closely related to the imbedding problem for Markov chains. It is demonstrated that the maximum likelihood estimator can be found either by the EM algorithm or by a Markov chain Monte Carlo procedure. When the maximum likelihood estimator does not exist, an estimator can be obtained by using a penalized likelihood function or by the MCMC procedure with a suitable prior. The theory is illustrated by a simulation study.
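The link between the intensity matrix and what is actually observed at discrete times can be sketched as follows (a hypothetical two-state example of the standard relation, not code from the paper): over an interval of length t, the transition probability matrix is the matrix exponential P(t) = exp(Qt), and the imbedding problem asks when an observed stochastic matrix can be written this way for some valid Q.

```python
import numpy as np
from scipy.linalg import expm

# A hypothetical 2-state intensity matrix: non-negative off-diagonal rates,
# each row summing to zero.
Q = np.array([[-0.5,  0.5],
              [ 0.3, -0.3]])

# Transition probabilities between observations one time unit apart:
# P = exp(Q * t).  Maximum likelihood estimation of Q from discretely
# observed data must invert this map, which is the imbedding problem.
P = expm(Q * 1.0)
```

For this two-state Q there is a closed form, P[0, 0] = 0.375 + 0.625·e^(−0.8), which the matrix exponential reproduces.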
A survey of Monte Carlo algorithms for maximizing the likelihood of a two-stage hierarchical model
, 2001
Abstract
Cited by 17 (9 self)
Likelihood inference with hierarchical models is often complicated by the fact that the likelihood function involves intractable integrals. Numerical integration (e.g. quadrature) is an option if the dimension of the integral is low but quickly becomes unreliable as the dimension grows. An alternative approach is to approximate the intractable integrals using Monte Carlo averages. Several different algorithms based on this idea have been proposed. In this paper we discuss the relative merits of simulated maximum likelihood, Monte Carlo EM, Monte Carlo Newton–Raphson and stochastic approximation. Key words and phrases: Efficiency, Monte Carlo EM, Monte Carlo Newton–Raphson, Rate of convergence, Simulated maximum likelihood, Stochastic approximation. All three authors partially supported by NSF Grant DMS-0072827.
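The core device shared by these algorithms is replacing the intractable integral with a Monte Carlo average. A minimal sketch under an assumed toy model (Poisson counts sharing a normal random intercept; this model and all names are ours, not the paper's):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

def mc_likelihood(y, beta, sigma, n_draws=50_000):
    """Toy sketch: the two-stage likelihood
    L(beta, sigma) = E_u[ prod_j Poisson(y_j | exp(beta + u)) ],  u ~ N(0, sigma^2),
    has no closed form, so approximate it by a plain Monte Carlo average."""
    u = rng.normal(0.0, sigma, size=n_draws)              # simulate the random effect
    lam = np.exp(beta + u)[:, None]                       # conditional Poisson means
    fact = np.array([factorial(int(k)) for k in y], float)
    cond = (lam ** y * np.exp(-lam) / fact).prod(axis=1)  # f(y | u) for each draw
    return cond.mean()                                    # the Monte Carlo average

y = np.array([2, 3, 1])
L = mc_likelihood(y, beta=0.7, sigma=0.5)
```

Simulated maximum likelihood maximizes such an approximation directly; Monte Carlo EM and Monte Carlo Newton–Raphson instead approximate the E-step or the score and Hessian with similar averages.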
Estimation of inferential uncertainty in assessing expert segmentation performance from STAPLE
 IEEE Trans. Med. Imag., 2010
Abstract
Cited by 7 (1 self)
The evaluation of the quality of segmentations of an image, and the assessment of intra- and inter-expert variability in segmentation performance, has long been recognized as a difficult task. Recently an Expectation-Maximization (EM) algorithm for Simultaneous Truth and Performance Level Estimation (STAPLE) was developed to compute both an estimate of the reference standard segmentation and performance parameters from a set of segmentations of an image. The performance is characterized by the rate of detection of each segmentation label by each expert in comparison to the estimated reference standard. This previous work provides estimates of performance parameters, but does not provide any information regarding their uncertainty. An estimate of this inferential uncertainty, if available, would allow estimation of confidence intervals for the values of the parameters, aid in the interpretation of the performance of segmentation generators, and help determine if sufficient data size and number of segmentations have been …
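The EM scheme described above can be sketched for the binary case (a heavily simplified reduction of ours, not the authors' implementation): the E-step computes the posterior that each voxel is truly foreground given each expert's current sensitivity p and specificity q, and the M-step re-estimates p and q against that posterior reference.

```python
import numpy as np

def staple_binary(D, n_iter=20, prior=0.5):
    """Simplified binary STAPLE-style EM sketch.  D is an (experts x voxels)
    0/1 array of segmentations.  Returns the posterior reference standard W
    and each expert's sensitivity p and specificity q."""
    p = np.full(D.shape[0], 0.9)   # initial sensitivities
    q = np.full(D.shape[0], 0.9)   # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate performance parameters (clipped for stability)
        p = np.clip((D * W).sum(axis=1) / W.sum(), 1e-3, 1 - 1e-3)
        q = np.clip(((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum(), 1e-3, 1 - 1e-3)
    return W, p, q

# Three hypothetical experts labelling five voxels
D = np.array([[1, 1, 0, 0, 1],
              [1, 1, 1, 0, 1],
              [1, 0, 0, 0, 1]])
W, p, q = staple_binary(D)
```

The paper's contribution sits on top of such point estimates: quantifying the uncertainty of p and q, which this sketch does not attempt.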
Efficient estimation of transition rates between credit ratings from observations at discrete time points
MUS81 Generates a Subset of MLH1–MLH3–Independent Crossovers in Mammalian Meiosis
, 2008
Abstract
Cited by 4 (0 self)
Two eukaryotic pathways for processing double-strand breaks (DSBs) as crossovers have been described, one dependent on the MutL homologs Mlh1 and Mlh3, and the other on the structure-specific endonuclease Mus81. Mammalian MUS81 has been implicated in maintenance of genomic stability in somatic cells; however, little is known about its role during meiosis. Mus81-deficient mice were originally reported as being viable and fertile, with normal meiotic progression; however, a more detailed examination of meiotic progression in Mus81-null animals and WT controls reveals significant meiotic defects in the mutants. These include smaller testis size, a depletion of mature epididymal sperm, significantly upregulated accumulation of MLH1 on chromosomes from pachytene meiocytes in an interference-independent fashion, and a subset of meiotic DSBs that fail to be repaired. Interestingly, chiasmata numbers in spermatocytes from Mus81−/− animals are normal, suggesting additional integrated mechanisms controlling the two distinct crossover pathways. This study is the first in-depth analysis of meiotic progression in Mus81-nullizygous mice, and our results implicate the MUS81 pathway as a regulator of crossover …
Regressograms and mean–covariance models for incomplete longitudinal data. The American Statistician (revision submitted)
, 2011
Abstract
Cited by 1 (1 self)
Longitudinal studies are prevalent in biological and social sciences where subjects are measured repeatedly over time. Modeling the correlations and handling missing data are among the most challenging problems in analyzing such data. There are various methods for handling missing data, but data-based and graphical methods for modeling the covariance matrix of longitudinal data are relatively new. We adopt an approach based on the modified Cholesky decomposition of the covariance matrix which handles both challenges simultaneously. It amounts to formulating parametric models for the regression coefficients of the conditional mean and variance of each measurement given its predecessors. We demonstrate the roles of profile plots and regressograms in formulating joint mean–covariance models for (in)complete longitudinal data. Applying these graphical tools to a case study of Fruit Fly Mortality data, which has 22% missing values, reveals an “S-shape” or logistic curve for the mean function and cubic polynomial models for the two factors of the modified Cholesky decomposition of the sample covariance matrix. A likelihood-based method for estimating the parameters of the mean and covariance models using the EM algorithm is proposed. A simulation study and application to real data demonstrate that the estimation method works well for incomplete longitudinal data.
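The modified Cholesky decomposition mentioned above factors a covariance matrix as T Σ Tᵀ = D, where T is unit lower triangular and D diagonal; the below-diagonal entries of −T are the coefficients from regressing each measurement on its predecessors, and D holds the innovation variances. A minimal sketch (the 3 × 3 Σ is a hypothetical example of ours):

```python
import numpy as np

def modified_cholesky(Sigma):
    """Sketch of the modified Cholesky decomposition T Sigma T' = D:
    T is unit lower triangular; row t of -T (below the diagonal) holds the
    coefficients from regressing measurement t on its predecessors, and
    D holds the innovation (prediction-error) variances."""
    L = np.linalg.cholesky(Sigma)        # Sigma = L L'
    d = np.diag(L)
    T = np.diag(d) @ np.linalg.inv(L)    # unit lower triangular by construction
    return T, np.diag(d ** 2)

# A hypothetical 3 x 3 longitudinal covariance matrix
Sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.6],
                  [0.3, 0.6, 1.2]])
T, D = modified_cholesky(Sigma)
```

Because the entries of T and log-diagonal of D are unconstrained, parametric models (the paper fits cubic polynomials) can be placed on them without risking an invalid covariance matrix.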
Modular Open-Source Software for Item Factor Analysis
Abstract
Cited by 1 (1 self)
This paper introduces an Item Factor Analysis (IFA) module for OpenMx, a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation and manipulation of models. Modular organization of the source code facilitates the easy addition of item models, item parameter estimation algorithms, optimizers, test scoring algorithms, and fit diagnostics, all within an integrated framework. Three short example scripts are presented for fitting item parameters, latent distribution parameters, and a multiple group model. The availability of both IFA and structural equation modeling in the same software is a step toward the unification of these two methodologies.
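A representative example of the item models such software fits is the standard two-parameter logistic (2PL) item response function; the sketch below illustrates the model itself and is not OpenMx's API.

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function: probability of
    endorsing an item given latent trait theta, discrimination a, and
    difficulty b.  Illustrative only; not OpenMx code."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# At theta == b the endorsement probability is exactly 0.5; larger a makes
# the curve steeper around the difficulty point.
p = irf_2pl(theta=0.0, a=1.5, b=0.0)
```

Item parameter estimation then maximizes the marginal likelihood of observed responses under this curve, integrating over the latent trait distribution.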
Grouped mixed proportional hazards models with spatial dependence. Working paper, Department of Economics
, 2005
Abstract
Cited by 1 (1 self)
In this paper we develop mixed proportional hazard models in discrete time when there is cross-sectional duration dependence because of social interactions. To capture the cross-sectional dependence, we use the lagged binary indicators of neighbors, weighted by the spatial weight matrix. We investigate nonparametric specifications for the baseline hazard and the unobserved heterogeneity. We use the EM algorithm to estimate the duration model with unobserved heterogeneity and derive the observed information matrix for statistical inference. Key words and phrases: spatial dependence, social interactions, grouped duration, proportional hazard, unobserved heterogeneity, EM algorithm, observed information.
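The spatially lagged hazard described above can be sketched with the usual grouped-duration complementary log-log link (our notation and toy numbers, not the paper's): the linear index adds ρ times the spatially weighted lag of neighbors' event indicators.

```python
import numpy as np

def discrete_hazard(x_beta, rho, W, d_prev):
    """Sketch of a grouped-duration proportional hazard with spatial
    dependence: complementary log-log link, where W @ d_prev is the
    spatially weighted lag of neighbors' event indicators from the
    previous period.  Notation is ours, not the paper's."""
    eta = x_beta + rho * (W @ d_prev)
    return 1.0 - np.exp(-np.exp(eta))

# Three units with a row-normalized spatial weight matrix over neighbors.
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
d_prev = np.array([1.0, 0.0, 1.0])   # neighbors' events in the previous period
h = discrete_hazard(x_beta=np.array([-1.0, -1.0, -1.0]), rho=0.4, W=W, d_prev=d_prev)
```

Unit 1, both of whose neighbors had the event last period, gets a larger spatial lag and hence a higher hazard than unit 0; adding a multiplicative unobserved-heterogeneity term to eta would give the mixed model the paper estimates by EM.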