## A tutorial on MM algorithms (2004)

Venue: Amer. Statist.

Citations: 65 (3 self)

### BibTeX

@ARTICLE{Hunter04atutorial,
  author  = {David R. Hunter and Kenneth Lange},
  title   = {A tutorial on {MM} algorithms},
  journal = {Amer. Statist.},
  year    = {2004},
  pages   = {30--37}
}

### Abstract

Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the loglikelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. The current article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described. In addition to surveying previous work on MM algorithms, this article introduces some new material on constrained optimization and standard error estimation.

Key words and phrases: constrained optimization, EM algorithm, majorization, minorization, Newton-Raphson
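As an illustrative sketch of the majorization idea the abstract describes (not code from the article itself), consider minimizing the least-absolute-deviations objective f(θ) = Σᵢ |xᵢ − θ|. Using the standard quadratic majorizer |r| ≤ r²/(2|rₖ|) + |rₖ|/2, which touches |r| at the current residual rₖ, each MM step reduces to minimizing a weighted sum of squares — a weighted mean — and the iterates descend monotonically toward the sample median. Function and variable names below are our own choices:

```python
import numpy as np

def mm_median(x, theta0=None, tol=1e-8, max_iter=200):
    """MM minimization of f(theta) = sum_i |x_i - theta|.

    Each |x_i - theta| is majorized by the quadratic
    (x_i - theta)^2 / (2 |x_i - theta_k|) + |x_i - theta_k| / 2,
    which is tangent to the objective at the current iterate
    theta_k.  Minimizing the surrogate gives a weighted mean,
    so every step drives f downhill (the MM descent property).
    """
    x = np.asarray(x, dtype=float)
    theta = float(np.mean(x)) if theta0 is None else float(theta0)
    for _ in range(max_iter):
        # Majorizer weights; small floor guards against division by
        # zero when an observation coincides with the current iterate.
        w = 1.0 / np.maximum(np.abs(x - theta), 1e-12)
        theta_new = np.sum(w * x) / np.sum(w)  # surrogate minimizer
        if abs(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta
```

Because the surrogate lies above f and agrees with it at θₖ, each update can only decrease f; for this objective the limit is the sample median, which is why the iterates of `mm_median([1, 2, 3, 4, 100])` settle at 3 rather than at the outlier-sensitive mean.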