Results 1–6 of 6
Interior-Point Methods for Linear Optimization
, 2000
Abstract

Cited by 20 (7 self)
Everyone with some background in Mathematics knows how to solve a system of linear equalities, since it is the basic subject of Linear Algebra. In many practical problems, however, inequalities also play a role. For example, a budget usually may not be larger than some specified amount. In such situations one may end up with a system of linear relations that contains not only equalities but also inequalities. Solving such a system requires methods and theory that go beyond standard Mathematical knowledge. Nevertheless the topic has a rich history and is tightly related to the important topic of Linear Optimization, where the objective is to find the optimal (minimal or maximal) value of a linear function subject to linear constraints on the variables; the constraints may be either equality or inequality constraints. From both a theoretical and a computational point of view the two topics are equivalent. In this chapter we describe the ideas underlying a new class of solution methods...
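A minimal sketch of the linear optimization setting described above, on a made-up toy problem (the LP and all numbers are illustrative, not from the chapter): the logarithmic-barrier "central point" of a tiny LP with one equality constraint and nonnegativity inequalities, traced toward the optimum as the barrier parameter shrinks.

```python
# Toy LP (assumed data): minimize x1 + 2*x2  s.t.  x1 + x2 = 1,  x1, x2 >= 0.
# For barrier parameter mu > 0, the central point minimizes
#   x1 + 2*x2 - mu*(log x1 + log x2)  over the feasible segment.
# Eliminating x2 = 1 - x1 gives the stationarity condition
#   h(x1) = -1 - mu/x1 + mu/(1 - x1) = 0,  with h increasing on (0, 1).

def central_point(mu, tol=1e-12):
    """Return x1 on the central path for barrier parameter mu (bisection)."""
    lo, hi = tol, 1.0 - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if -1.0 - mu / mid + mu / (1.0 - mid) < 0.0:
            lo = mid          # h(mid) < 0: the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# As mu -> 0 the central point converges to the optimal vertex x = (1, 0).
for mu in (1.0, 0.1, 0.01, 1e-6):
    x1 = central_point(mu)
    print(f"mu={mu:g}  x1={x1:.6f}  x2={1 - x1:.6f}")
```

For mu = 1 the stationarity condition reduces to x1^2 + x1 - 1 = 0, whose root (sqrt(5) - 1)/2 the bisection reproduces; for tiny mu the point is pushed against the inequality constraint x2 >= 0.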
New Complexity Analysis of the Primal-Dual Newton Method for Linear Optimization
, 1998
Abstract

Cited by 12 (7 self)
We deal with the primal-dual Newton method for linear optimization (LO). Nowadays, this method is the workhorse in all efficient interior point algorithms for LO, and its analysis is the basic element in all polynomiality proofs of such algorithms. At present there is still a gap between the practical behavior of the algorithms and the theoretical performance results, in favor of the practical behavior. This is especially true for so-called large-update methods. We present some new analysis tools, based on a proximity measure introduced by Jansen et al. in 1994, that may help to close this gap. This proximity measure has not been used in the analysis of large-update methods before. Our new analysis not only provides a unified way to analyze both large-update and small-update methods, but also improves the known iteration bounds.
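The primal-dual Newton step the abstract refers to can be sketched on made-up toy data (the LP below and the starting point are assumptions for illustration; the linear system is the standard one, A dx = 0, A^T dy + ds = 0, S dx + X ds = mu*e - XSe):

```python
# Toy LP (assumed): min c^T x, Ax = b, x >= 0 with A = [1 1], b = 1, c = (1, 2).
# One full Newton centering step toward x_i * s_i = mu for all i.

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x, y, s = [0.5, 0.5], 0.5, [0.5, 1.5]   # strictly feasible starting point
mu = (x[0] * s[0] + x[1] * s[1]) / 2    # current average complementarity, 0.5
# unknowns ordered as dx1, dx2, dy, ds1, ds2
K = [[1, 1, 0, 0, 0],                   # A dx = 0
     [0, 0, 1, 1, 0],                   # A^T dy + ds = 0 (component 1)
     [0, 0, 1, 0, 1],                   # A^T dy + ds = 0 (component 2)
     [s[0], 0, 0, x[0], 0],             # S dx + X ds = mu*e - XSe
     [0, s[1], 0, 0, x[1]]]
rhs = [0, 0, 0, mu - x[0] * s[0], mu - x[1] * s[1]]
dx1, dx2, dy, ds1, ds2 = solve(K, rhs)
x = [x[0] + dx1, x[1] + dx2]
s = [s[0] + ds1, s[1] + ds2]
print("x =", x, " s =", s)
print("centering errors:", [abs(x[i] * s[i] - mu) for i in range(2)])
```

On this example the componentwise centering error |x_i s_i - mu| drops from 0.25 to 0.0625 in one full step, the quadratic local behavior that the proximity-measure analysis quantifies.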
Self-regular proximities and new search directions for linear and semidefinite optimization
 Mathematical Programming
, 2000
Abstract

Cited by 9 (4 self)
In this paper, we first introduce the notion of self-regular functions. Various appealing properties of self-regular functions are explored, and we also discuss the relation between self-regular functions and the well-known self-concordant functions. Then we use such functions to define self-regular proximity measures for path-following interior point methods for solving linear optimization (LO) problems. Any self-regular proximity measure naturally defines a primal-dual search direction. In this way a new class of primal-dual search directions for solving LO problems is obtained. Using the appealing properties of self-regular functions, we prove that these new large-update path-following methods for LO enjoy a polynomial $O\left(n^{\frac{q+1}{2q}} \log \frac{n}{\varepsilon}\right)$ iteration bound, where $q \ge 1$ is the so-called barrier degree of the self-regular proximity measure underlying the algorithm. When $q$ increases, this bound approaches the best known complexity bound for interior point methods, namely $O\left(\sqrt{n} \log \frac{n}{\varepsilon}\right)$. Our unified analysis also provides the best known $O\left(\sqrt{n} \log \frac{n}{\varepsilon}\right)$ iteration bound of small-update IPMs. At each iteration, we need only to solve one linear system. As a byproduct of our results, we remove some limitations of the algorithms presented in [24] and improve their complexity as well. An extension of these results to semidefinite optimization (SDO) is also discussed.
Inverse Barrier Methods for Linear Programming
 Revue RAIRO - Operations Research
, 1991
Abstract

Cited by 6 (1 self)
In the recent interior point methods for linear programming much attention has been given to the logarithmic barrier method. In this paper we analyse the class of inverse barrier methods for linear programming, in which the barrier is $\sum_i x_i^{-r}$, where $r > 0$ is the rank of the barrier. There are many similarities with the logarithmic barrier method. The minima of an inverse barrier function for different values of the barrier parameter define a 'central path' dependent on $r$, called the $r$-path of the problem. For $r \downarrow 0$ this path coincides with the central path determined by the logarithmic barrier function. We introduce a metric to measure the distance of a feasible point to a point on the path. We prove that in a certain region around a point on the path the Newton process converges quadratically. Moreover, outside this region, taking a step in the Newton direction decreases the barrier function value by at least a constant. We derive upper bounds for the total ...
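The convergence of the r-path to the logarithmic central path as r goes to 0 can be seen numerically on a one-dimensional toy problem (the problem, the scaling nu = mu*r, and all numbers are illustrative assumptions, not the paper's setting):

```python
# Toy problem: min x on [0, 1] with inverse barrier mu * (x**(-r) + (1-x)**(-r)).
# With the scaling nu = mu*r, the stationarity condition reads
#   g(x) = 1 - nu*x**(-(r+1)) + nu*(1-x)**(-(r+1)) = 0,
# and for r -> 0 this becomes the logarithmic-barrier condition
#   1 - nu/x + nu/(1-x) = 0  (the case r = 0 below).

def path_point(nu, r, tol=1e-12):
    """Bisection for the unique root of g on (0, 1); g is increasing there."""
    lo, hi = tol, 1.0 - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1.0 - nu * mid ** -(r + 1) + nu * (1.0 - mid) ** -(r + 1) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu = 0.1
log_pt = path_point(nu, 0.0)      # r = 0 reproduces the logarithmic path
for r in (1.0, 0.1, 0.001):
    print(f"r={r:g}  x(r)={path_point(nu, r):.6f}  log-path x={log_pt:.6f}")
```

As r shrinks, the r-path point approaches the logarithmic-barrier point, matching the r -> 0 statement in the abstract.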
A Unifying Investigation of Interior-Point Methods for Convex Programming
 Faculty of Mathematics and Informatics, TU Delft, NL-2628 BL
, 1992
Abstract

Cited by 5 (4 self)
In the recent past a number of papers were written that present low-complexity interior-point methods for different classes of convex programs. The goal of this article is to show that the logarithmic barrier function associated with these programs is self-concordant, and that the analyses of interior-point methods for these programs can thus be reduced to the analysis of interior-point methods with self-concordant barrier functions.
Key words: interior-point method, barrier function, dual geometric programming, (extended) entropy programming, primal and dual $l_p$ programming, relative Lipschitz condition, scaled Lipschitz condition, self-concordance.
1 Introduction The efficiency of a barrier method for solving convex programs strongly depends on the properties of the barrier function used. A key property that is sufficient to prove fast convergence for barrier methods is self-concordance, introduced in [17]. This condition not only allows a proof of polynomial convergen...
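The self-concordance property invoked above can be illustrated on the standard example (a well-known fact, not specific to this article): f(x) = -log(x) satisfies |f'''(x)| <= 2*f''(x)^(3/2), with equality, since f'' = 1/x^2 and f''' = -2/x^3.

```python
# Numerical check that -log(x) is self-concordant: |f'''| <= 2*(f'')**1.5,
# which holds with equality for the logarithmic barrier.

def f2(x):  # second derivative of -log(x)
    return 1.0 / x ** 2

def f3(x):  # third derivative of -log(x)
    return -2.0 / x ** 3

for x in (0.1, 1.0, 7.5):
    lhs, rhs = abs(f3(x)), 2.0 * f2(x) ** 1.5
    print(f"x={x}: |f'''|={lhs:.6g}  2*(f'')^1.5={rhs:.6g}")
    assert abs(lhs - rhs) < 1e-9 * rhs
```

The point of the property is that it is affine-invariant and bounds how fast the Hessian can change, which is what makes the Newton-based complexity analysis go through.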
An Easy Way to Teach Interior Point Methods
, 1998
Abstract

Cited by 1 (0 self)
In this paper the duality theory of Linear Optimization (LO) is built up based on ideas that emerged from interior point methods. All we need is elementary calculus. We embed the LO problem and its dual in a self-dual skew-symmetric problem. Most duality results, except the existence of a strictly complementary solution, are trivial for this embedding problem. The existence of the central path and its convergence to the analytic center of the optimal face are proved. The proof is based on an elementary, careful analysis of a Newton step. We also show that if an almost optimal solution on the central path is known, then a simple strongly polynomial rounding procedure provides a strictly complementary optimal solution. The all-one vector is feasible for the embedding problem and it is an interior point on the central path. In this way an elegant solution to the initialization of IPMs is obtained as well. This approach allows one to apply any interior point method to the embedding problem ...
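A hedged sketch of the embedding idea on made-up toy data (A, b, c are assumptions; the bordering follows the standard Goldman-Tucker-style construction, which may differ in detail from the paper's): starting from the skew-symmetric matrix M built from the LP data, border it with r = e - Me and set the last entry of q to N+1; then the all-one vector z = e is feasible with slack exactly e.

```python
# Skew-symmetric Goldman-Tucker-style matrix for a toy LP (assumed data):
#   M = [[0, A, -b], [-A^T, 0, c], [b^T, -c^T, 0]]
A, b, c = [[1, 1]], [1], [1, 2]
m, n = 1, 2
N = m + n + 1
M = [[0] * N for _ in range(N)]
for i in range(m):                      # y-block couplings with x and tau
    for j in range(n):
        M[i][m + j] = A[i][j]
        M[m + j][i] = -A[i][j]
    M[i][m + n] = -b[i]
    M[m + n][i] = b[i]
for j in range(n):                      # x-block couplings with tau
    M[m + j][m + n] = c[j]
    M[m + n][m + j] = -c[j]

r = [1 - sum(row) for row in M]         # r = e - M e
Mbar = [row[:] + [r[i]] for i, row in enumerate(M)]
Mbar.append([-ri for ri in r] + [0])    # bordered matrix stays skew-symmetric
q = [0] * N + [N + 1]

e = [1] * (N + 1)
slack = [sum(Mbar[i][j] * e[j] for j in range(N + 1)) + q[i]
         for i in range(N + 1)]
print("slack at the all-one point:", slack)   # the all-one vector again
assert all(Mbar[i][j] == -Mbar[j][i]
           for i in range(N + 1) for j in range(N + 1))
```

This is why the embedding solves the initialization problem: the all-one vector is a readily available strictly interior starting point, so any interior point method can be applied directly.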