Results 1 – 10 of 10
Condition Measures and Properties of the Central Trajectory of a Linear Program
 Mathematical Programming
, 1997
"... Given a data instance d = (A; b; c) of a linear program, we show that certain properties of solutions along the central trajectory of the linear program are inherently related to the condition number C(d) of the data instance d = (A; b; c), where C(d) is a scaleinvariant reciprocal of a closelyrel ..."
Abstract

Cited by 34 (15 self)
 Add to MetaCart
Given a data instance d = (A, b, c) of a linear program, we show that certain properties of solutions along the central trajectory of the linear program are inherently related to the condition number C(d) of the data instance, where C(d) is a scale-invariant reciprocal of a closely related measure ρ(d) called the "distance to ill-posedness." (The distance to ill-posedness essentially measures how close the data instance d = (A, b, c) is to being primal or dual infeasible.) We present lower and upper bounds on sizes of optimal solutions along the central trajectory, and on rates of change of solutions along the central trajectory, as either the barrier parameter µ or the data d = (A, b, c) of the linear program is changed. These bounds are all linear or polynomial functions of certain natural parameters associated with the linear program, namely the condition number C(d), the distance to ill-posedness ρ(d), the norm of the data ‖d‖, and the dimensions m and n.
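For reference, the central trajectory this abstract studies is, in standard interior-point notation (a sketch of the textbook definition, not an excerpt from the paper), the set of solutions of the µ-perturbed optimality conditions of the LP min { cᵀx : Ax = b, x ≥ 0 }:

```latex
% For each barrier parameter \mu > 0, the central-path point
% (x(\mu), y(\mu), s(\mu)) solves
\begin{aligned}
A x &= b, & x &> 0,\\
A^{\top} y + s &= c, & s &> 0,\\
x_i s_i &= \mu, & i &= 1, \dots, n.
\end{aligned}
```

As µ → 0 these points converge to an optimal primal-dual pair, which is why bounds on x(µ) and its rate of change translate into statements about the linear program itself.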
A feasible BFGS interior point algorithm for solving strongly convex minimization problems
 SIAM J. OPTIM
, 2000
"... We propose a BFGS primaldual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists in computing approximate solutions of the optimality conditions perturbed by a sequence of posit ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists of computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters µ converging to zero. We prove that it converges q-superlinearly for each fixed µ. We also show that it is globally convergent to the analytic center of the primal-dual optimal set when µ tends to 0 and strict complementarity holds.
On the convergence of the Newton/log-barrier method
 Preprint ANL/MCS-P681-0897, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill.
, 1997
"... Abstract. In the Newton/logbarrier method, Newton steps are taken for the logbarrier function for a xed value of the barrier parameter until a certain convergence criterion is satis ed. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newt ..."
Abstract

Cited by 12 (2 self)
 Add to MetaCart
In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood of the minimizer. By partitioning according to the subspace of active constraint gradients, however, we show that this neighborhood is actually quite large, thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the convergence criterion for each Newton process.
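The two-level scheme this abstract describes (inner Newton iterations for a fixed barrier parameter, then a decrease of the parameter) can be sketched on a toy one-variable problem, minimize c·x subject to x ≥ 0; the function and parameter names below are illustrative, not taken from the paper:

```python
def newton_log_barrier(c=2.0, x0=1.0, mu0=1.0, sigma=0.1, tol=1e-8):
    """Minimize c*x subject to x >= 0 via the Newton/log-barrier method.

    For each fixed barrier parameter mu, Newton steps are taken on the
    log-barrier function phi(x) = c*x - mu*ln(x) until the gradient is
    small; mu is then decreased by the factor sigma.  The minimizer of
    phi is mu/c, so the iterates track mu/c -> 0 as mu -> 0.
    """
    x, mu = x0, mu0
    while mu > tol:
        # Inner Newton loop for the current barrier subproblem.
        for _ in range(50):
            grad = c - mu / x        # phi'(x)
            hess = mu / x**2         # phi''(x), positive on x > 0
            step = grad / hess
            # Damp the step to keep the iterate strictly feasible.
            while x - step <= 0:
                step *= 0.5
            x -= step
            if abs(c - mu / x) < 1e-10:
                break
        mu *= sigma                  # decrease the barrier parameter
    return x
```

The damping line is what the naive analysis worries about: far from the subproblem minimizer, a full Newton step can leave the feasible region, and the paper's contribution is to show that the region where full (superlinearly convergent) steps are accepted is larger than it first appears.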
Degeneracy in Interior Point Methods for Linear Programming
, 1991
"... ... In this paper, we survey the various theoretical and practical issues related to degeneracy in IPM's for linear programming. We survey results which for the most part already appeared in the literature. Roughly speaking, we shall deal with four topics: the effect of degeneracy on the convergence ..."
Abstract

Cited by 11 (1 self)
 Add to MetaCart
In this paper, we survey the various theoretical and practical issues related to degeneracy in IPMs for linear programming. We survey results which for the most part have already appeared in the literature. Roughly speaking, we deal with four topics: the effect of degeneracy on the convergence of IPMs, on the trajectories followed by the algorithms, on numerical performance, and on finding basic solutions.
Polynomiality of Primal-Dual Affine Scaling Algorithms for Nonlinear Complementarity Problems
, 1995
"... This paper provides an analysis of the polynomiality of primaldual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappi ..."
Abstract

Cited by 10 (4 self)
 Add to MetaCart
This paper provides an analysis of the polynomiality of primal-dual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision ε) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial of n, ln(1/ε), and a condition number. If the mapping is linear, then the results in this paper coincide with the ones in [13].
Fast ℓ1-minimization algorithms and an application in robust face recognition: a review
, 2010
"... We provide a comprehensive review of five representative ℓ1minimization methods, i.e., gradient projection, homotopy, iterative shrinkagethresholding, proximal gradient, and augmented Lagrange multiplier. The repository is intended to fill in a gap in the existing literature to systematically bench ..."
Abstract

Cited by 10 (4 self)
 Add to MetaCart
We provide a comprehensive review of five representative ℓ1-minimization methods, i.e., gradient projection, homotopy, iterative shrinkage-thresholding, proximal gradient, and augmented Lagrange multiplier. The repository is intended to fill a gap in the existing literature by systematically benchmarking the performance of these algorithms using a consistent experimental setting. The experiments focus on the application of face recognition, where a sparse representation framework has recently been developed to recover human identities from facial images that may be affected by illumination change, occlusion, and facial disguise. The paper also provides useful guidelines to practitioners working in similar fields.
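To make the problem class concrete, here is a minimal sketch of one of the five families named above, iterative shrinkage-thresholding (ISTA), applied to the standard Lasso-type objective min_x ½‖Ax − b‖² + λ‖x‖₁; this is an illustrative textbook version, not the benchmark code from the review:

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)      # gradient of the smooth part
        z = x - g / L              # gradient step
        # Soft-thresholding: the proximal operator of the l1 term.
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

Each iteration is one gradient step on the smooth term followed by a closed-form shrinkage, which is why this family scales to the large, dense sensing matrices that arise in sparse-representation face recognition.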
A Short Survey on Ten Years Interior Point Methods
, 1995
"... The introduction of Karmarkar's polynomial algorithm for linear programming (LP) in 1984 has influenced wide areas in the field of optimization. While in 80s emphasis was on developing and implementing efficient variants of interior point methods for LP, the nineties have shown applicability to c ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
The introduction of Karmarkar's polynomial algorithm for linear programming (LP) in 1984 has influenced wide areas in the field of optimization. While in the 1980s the emphasis was on developing and implementing efficient variants of interior point methods for LP, the 1990s have shown applicability to certain structured nonlinear programming and combinatorial problems. We give a historical account of the developments and outline the major contributions to the field in the last decade. An important class of problems to which interior point methods are applicable is semidefinite optimization, which has recently gained much attention. It has many applications in various fields (such as control and system theory, combinatorial optimization, algebra, statistics, and structural design) and can be efficiently solved with interior point methods.
Computing Maximum Likelihood Estimators of Convex Density Functions
, 1995
"... We consider the problem of estimating a density function that is known in advance to be convex. The maximum likelihood estimator is then the solution of a linearly constrained convex minimization problem. This problem turns out to be numerically difficult. We show that interior point algorithms p ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
We consider the problem of estimating a density function that is known in advance to be convex. The maximum likelihood estimator is then the solution of a linearly constrained convex minimization problem. This problem turns out to be numerically difficult. We show that interior point algorithms perform well on this class of optimization problems, though for large samples numerical difficulties are still encountered. To eliminate those difficulties, we propose a clustering scheme that is reasonable from a statistical point of view. We display results for problems with up to 40000 observations. We also give a typical picture of the estimated density: a piecewise linear function with only very few pieces. Key words: interior-point method, convex estimation, maximum likelihood estimation, logarithmic-barrier method, primal-dual method.
Degeneracy in Interior Point Methods for Linear Programming
, 1991
"... this paper, we survey the various theoretical and practical issues related to degeneracy in IPM's for linear programming. We survey results which for the most part already appeared in the literature. Roughly speaking, we shall deal with four topics: the effect of degeneracy on the convergence of IPM ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
In this paper, we survey the various theoretical and practical issues related to degeneracy in IPMs for linear programming. We survey results which for the most part have already appeared in the literature. Roughly speaking, we deal with four topics: the effect of degeneracy on the convergence of IPMs, on the trajectories followed by the algorithms, on numerical performance, and on finding basic solutions. Key words: linear programming, interior point methods, degeneracy, polynomial algorithms, global and local convergence, basis recovery, numerical performance, sensitivity analysis.
Monotonicity Of The Lagrangian Function In The Parametric Interior Point Methods Of Convex Programming
, 1992
"... . Monotonicity of the Lagrangian function corresponding to the general root quasibarrier as well as to the general inverse barrier function of convex programming is proved. It is shown that monotonicity generally need not take place. On the other hand for LPproblems with some special structure mono ..."
Abstract
 Add to MetaCart
Monotonicity of the Lagrangian function corresponding to the general root quasi-barrier as well as to the general inverse barrier function of convex programming is investigated. It is shown that monotonicity generally need not take place. On the other hand, for LP problems with some special structure, monotonicity is proved for a very general class of interior point transformation functions. 1. Introduction Current interest in interior point methods for linear programming was sparked by the 1984 algorithm of N. Karmarkar [7], which is claimed to be much faster than the simplex method for practical problems. The equivalence of Karmarkar's projective scaling method to interior point methods was pointed out by Gill and others in 1986 [4]. Since then the interior point methods for convex programming have been intensively studied again. Recently some attention has been given to the monotonicity of the corresponding Lagrangian function (with respect to the considered parameter). The known results co...