Results 1 – 4 of 4
MM algorithms for generalized Bradley–Terry models
 The Annals of Statistics
, 2004
Abstract

Cited by 29 (1 self)
The Bradley–Terry model for paired comparisons is a simple and much-studied means to describe the probabilities of the possible outcomes when individuals are judged against one another in pairs. Among the many studies of the model in the past 75 years, numerous authors have generalized it in several directions, sometimes providing iterative algorithms for obtaining maximum likelihood estimates for the generalizations. Building on a theory of algorithms known by the initials MM, for minorization–maximization, this paper presents a powerful technique for producing iterative maximum likelihood estimation algorithms for a wide class of generalizations of the Bradley–Terry model. While algorithms for problems of this type have tended to be custom-built in the literature, the techniques in this paper enable their mass production. Simple conditions are stated that guarantee that each algorithm described will produce a sequence that converges to the unique maximum likelihood estimator. Several of the algorithms and convergence results herein are new.
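For the basic (ungeneralized) Bradley–Terry model, the MM iteration has a well-known closed form: with n_ij the number of comparisons between items i and j and W_i the total wins of item i, the strength parameters update as γ_i ← W_i / Σ_{j≠i} n_ij/(γ_i + γ_j). A minimal sketch in Python; the matrix layout, function name, and stopping rule are illustrative assumptions, not from the paper:

```python
import numpy as np

def bradley_terry_mm(wins, n_iter=500, tol=1e-10):
    """MM iterations for the basic Bradley-Terry model.

    wins[i, j] = number of times item i beat item j (zero diagonal).
    Returns strength parameters gamma, normalized to sum to 1.
    """
    n = wins.shape[0]
    pairings = wins + wins.T           # n_ij: total comparisons of i and j
    total_wins = wins.sum(axis=1)      # W_i: total wins of item i
    gamma = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        # MM update: gamma_i <- W_i / sum_j n_ij / (gamma_i + gamma_j)
        denom = (pairings / (gamma[:, None] + gamma[None, :])).sum(axis=1)
        new_gamma = total_wins / denom
        new_gamma /= new_gamma.sum()   # fix the scale (likelihood is invariant)
        if np.max(np.abs(new_gamma - gamma)) < tol:
            return new_gamma
        gamma = new_gamma
    return gamma
```

Each update provably increases the likelihood because the surrogate minorizes it; convergence to the unique MLE additionally requires the comparison graph to be suitably connected, in the spirit of the "simple conditions" the abstract alludes to.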
Quantile regression via an MM algorithm
 J. Comput. Graphical Stat
, 2000
Abstract

Cited by 10 (1 self)
Quantile regression is an increasingly popular method for estimating the quantiles of a distribution conditional on the values of covariates. Regression quantiles are robust against the influence of outliers, and taken several at a time, they give a more complete picture of the conditional distribution than a single estimate of the center. The current paper first presents an iterative algorithm for finding sample quantiles without sorting and then explores a generalization of the algorithm to nonlinear quantile regression. Our quantile regression algorithm is termed an MM, or Majorize-Minimize, algorithm because it entails majorizing the objective function by a quadratic function followed by minimizing that quadratic. The algorithm is conceptually simple and easy to code, and our numerical tests suggest that it is computationally competitive with a recent interior point algorithm for most problems.
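The majorization step the abstract describes can be made concrete for the linear model: near current residuals r⁽ᵏ⁾, the check loss ρ_τ(r) is majorized, up to a constant, by (1/4)[r²/(ε + |r⁽ᵏ⁾|) + (4τ − 2)r], so each MM step reduces to a weighted least-squares solve. A minimal sketch assuming the ε-perturbed objective; the function name, starting values, and tolerances are our own illustrative choices:

```python
import numpy as np

def quantile_regression_mm(X, y, tau=0.5, eps=1e-6, n_iter=1000):
    """Linear quantile regression by an MM (majorize-minimize) scheme.

    Each iteration majorizes the check loss by a quadratic in the residuals,
    turning the minimization step into a weighted least-squares solve.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / (eps + np.abs(r))               # majorizer weights
        # Normal equations of the quadratic surrogate:
        # X^T W X beta = X^T (W y + (2 tau - 1) 1)
        A = X.T @ (w[:, None] * X)
        b = X.T @ (w * y + (2.0 * tau - 1.0))
        new_beta = np.linalg.solve(A, b)
        if np.max(np.abs(new_beta - beta)) < 1e-9:
            return new_beta
        beta = new_beta
    return beta
```

Because the surrogate majorizes the (perturbed) objective, each iteration drives the objective downhill, which is the monotonicity property that distinguishes MM schemes from generic Newton-type updates.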
An Optimization Transfer Algorithm for Quantile Regression
, 1998
Abstract

Cited by 2 (2 self)
The qth quantile of an integrable random variable solves a minimization problem involving a certain expectation. This optimality principle suggests an algorithm for finding a sample quantile without sorting. The current paper explores generalizations of the algorithm to nonlinear quantile regression. In particular, we devise a quantile regression algorithm that operates by transferring optimization of the nondifferentiable objective function to a quadratic function. It is conceptually simple and easy to code, and our numerical tests suggest that it is computationally competitive with the interior point algorithm of Koenker and Park (1996).
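In the one-dimensional sample case, the quadratic transfer yields a closed-form update: with weights w_i = ε + |x_i − θ⁽ᵏ⁾|, the next iterate is θ⁽ᵏ⁺¹⁾ = (Σ x_i/w_i + (2τ − 1)n) / Σ 1/w_i, which is how a sample quantile can be located without sorting. A small sketch assuming the ε-perturbed criterion; the function name and tolerances are our own:

```python
import numpy as np

def quantile_no_sort(x, tau, eps=1e-9, n_iter=500):
    """Approximate tau-th sample quantile by optimization transfer, no sorting."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta = x.mean()                       # any starting value works
    for _ in range(n_iter):
        w = eps + np.abs(x - theta)        # majorizer weights
        new_theta = ((x / w).sum() + (2.0 * tau - 1.0) * n) / (1.0 / w).sum()
        if abs(new_theta - theta) < 1e-12:
            return new_theta
        theta = new_theta
    return theta
```

The update is just a weighted average of the data shifted by a term encoding τ, so each iteration costs O(n); the ε perturbation keeps the weights finite when an iterate lands on a data point.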
Computing Estimates in the Proportional Odds Model
, 1999
Abstract
In this paper we present an algorithm for maximum likelihood estimation in the proportional odds model. The algorithm is an example of optimization transfer, also known as the method of iterative majorization. We discuss optimization transfer, present the proportional odds algorithm, and give a means for accelerating the convergence of the algorithm. The algorithm is stable and guaranteed to converge to the maximum likelihood estimate when it exists, regardless of the starting point. For large problems, both the algorithm and an accelerated version of it outperform the Newton-Raphson method. Key Words: iterative majorization, proportional odds, optimization transfer, Newton-Raphson, quasi-Newton. 1 Introduction In a survival analysis setting with right-censored data, we observe n i.i.d. copies of X = (T ∧ C, δ), where T is the time until the occurrence of some event of interest, C is a censoring time, and δ is the censoring indicator: δ = 0 if T > C, δ = 1 if T ≤ C. The propo...
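The right-censored data setup in the introduction can be illustrated directly: only the minimum T ∧ C is observed, together with the indicator δ of whether the event occurred before censoring. A toy sketch using the standard convention δ = 1 when T ≤ C; the simulation distributions are arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.exponential(scale=2.0, size=8)   # latent event times
C = rng.exponential(scale=3.0, size=8)   # latent censoring times
X_obs = np.minimum(T, C)                 # observed time: T ^ C
delta = (T <= C).astype(int)             # 1 = event observed, 0 = censored
# The likelihood only ever sees the pairs (X_obs, delta), never T and C jointly.
```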