Results 1–10 of 18
Implementation of Interior Point Methods for Large Scale Linear Programming
 in Interior Point Methods in Mathematical Programming
, 1996
Abstract

Cited by 70 (22 self)
In the past 10 years the interior point methods (IPMs) for linear programming have gained extraordinary interest as an alternative to the sparse simplex-based methods. This has initiated a fruitful competition between the two types of algorithms, which has led to very efficient implementations on both sides. The significant difference between interior point and simplex-based methods is reflected not only in the theoretical background but also in the practical implementation. In this paper we give an overview of the most important characteristics of advanced implementations of interior point methods. First, we present the infeasible primal-dual algorithm, which is widely considered the most efficient general-purpose IPM. Our discussion includes various algorithmic enhancements of the basic algorithm. The only shortcoming of the "traditional" infeasible primal-dual algorithm is its inability to detect a possible primal or dual infeasibility of the linear program. We discuss how this problem can be solved...
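As background for the implementations the survey discusses, one iteration of an infeasible primal-dual method for LP in standard form (min cᵀx subject to Ax = b, x ≥ 0) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the dense Gaussian solve, the centering parameter σ = 0.1, and the 0.9995 fraction-to-boundary damping are all assumptions chosen for clarity.

```python
# Minimal infeasible primal-dual interior point sketch for
# min c^T x  s.t.  A x = b, x >= 0, using pure-Python dense linear algebra.

def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def ipm_lp(A, b, c, sigma=0.1, tol=1e-8, max_iter=100):
    m, n = len(A), len(c)
    x = [1.0] * n; s = [1.0] * n; y = [0.0] * m    # infeasible starting point
    for _ in range(max_iter):
        rp = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        rd = [c[j] - sum(A[i][j] * y[i] for i in range(m)) - s[j] for j in range(n)]
        mu = sum(x[j] * s[j] for j in range(n)) / n
        if mu < tol and max(map(abs, rp + rd)) < tol:
            break
        # Assemble the full Newton (KKT) system for the unknowns (dx, dy, ds).
        N = m + 2 * n
        M = [[0.0] * N for _ in range(N)]
        rhs = [0.0] * N
        for i in range(m):                       # A dx = rp
            for j in range(n):
                M[i][j] = A[i][j]
            rhs[i] = rp[i]
        for j in range(n):                       # A^T dy + ds = rd
            for i in range(m):
                M[m + j][n + i] = A[i][j]
            M[m + j][n + m + j] = 1.0
            rhs[m + j] = rd[j]
        for j in range(n):                       # S dx + X ds = sigma*mu*e - XSe
            M[m + n + j][j] = s[j]
            M[m + n + j][n + m + j] = x[j]
            rhs[m + n + j] = sigma * mu - x[j] * s[j]
        d = solve_dense(M, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # Damped step keeping (x, s) strictly positive.
        alpha = 1.0
        for j in range(n):
            if dx[j] < 0: alpha = min(alpha, -0.9995 * x[j] / dx[j])
            if ds[j] < 0: alpha = min(alpha, -0.9995 * s[j] / ds[j])
        x = [x[j] + alpha * dx[j] for j in range(n)]
        s = [s[j] + alpha * ds[j] for j in range(n)]
        y = [y[i] + alpha * dy[i] for i in range(m)]
    return x, y, s
```

Production codes solve the same Newton system via a Cholesky factorization of the normal equations rather than a dense elimination; the structure of one iteration is otherwise the same.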
Multiple Centrality Corrections in a Primal-Dual Method for Linear Programming
 Computational Optimization and Applications
, 1995
Abstract

Cited by 48 (11 self)
A modification of the (infeasible) primal-dual interior point method is developed. The method uses multiple corrections to improve the centrality of the current iterate. The maximum number of corrections the algorithm is encouraged to make depends on the ratio of the efforts to solve and to factorize the KKT systems. For any LP problem, this ratio is determined right after preprocessing the KKT system and prior to the optimization process. The harder the factorization, the more advantageous the higher-order corrections might prove to be. The computational performance of the method is studied on the more difficult Netlib problems as well as on tougher and larger real-life LP models arising from applications. The use of multiple centrality corrections gives on average a 25% to 40% reduction in the number of iterations compared with the widely used second-order predictor-corrector method. This translates into 20% to 30% savings in CPU time.
A superlinearly convergent predictor-corrector method for degenerate LCP in a wide neighborhood of the central path with O(√n L) iteration complexity
, 2006
On a Homogeneous Algorithm for the Monotone Complementarity Problem
 Mathematical Programming
, 1995
Abstract

Cited by 24 (3 self)
We present a generalization of a homogeneous self-dual linear programming (LP) algorithm to solving the monotone complementarity problem (MCP). The algorithm does not need to use any "big-M" parameter or two-phase method, and it generates either a solution converging towards feasibility and complementarity simultaneously or a certificate proving infeasibility. Moreover, if the MCP is polynomially solvable with an interior feasible starting point, then it can be polynomially solved without using or knowing such information at all. To our knowledge, this is the first interior-point and infeasible-starting algorithm for solving the MCP that possesses these desired features. Preliminary computational results are presented. Key words: monotone complementarity problem, homogeneous and self-dual, infeasible-starting algorithm. Running head: A homogeneous algorithm for MCP.
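For the LP special case that the abstract generalizes from, the homogeneous self-dual embedding can be sketched as follows (a standard simplified form with homogenizing variables τ and κ; the notation is assumed, not quoted from the paper):

```latex
\begin{aligned}
  Ax - b\tau &= 0, \\
  -A^{\top}y + c\tau - s &= 0, \\
  b^{\top}y - c^{\top}x - \kappa &= 0, \\
  x \ge 0, \quad s \ge 0, \quad \tau \ge 0, \quad \kappa \ge 0 .
\end{aligned}
```

At a strictly complementary solution, either τ > 0, in which case (x/τ, y/τ, s/τ) solves the original primal-dual pair, or κ > 0, which certifies primal or dual infeasibility. This dichotomy is what lets such algorithms dispense with big-M parameters and two-phase schemes.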
Interior Point Algorithms For Linear Complementarity Problems Based On Large Neighborhoods Of The Central Path
 SIAM J. on Optimization
, 1998
Abstract

Cited by 14 (3 self)
In this paper we study a first-order and a high-order algorithm for solving linear complementarity problems. These algorithms are implicitly associated with a large neighborhood whose size may depend on the dimension of the problems. The complexity of these algorithms depends on the size of the neighborhood. For the first-order algorithm, we achieve the complexity bound which the typical large-step algorithms possess. It is well known that the complexity of large-step algorithms is greater than that of short-step ones. By using high-order power series (hence the name high-order algorithm), the iteration complexity can be reduced. We show that the complexity upper bound for our high-order algorithms is equal to that for short-step algorithms. Key words: interior point algorithm, high-order power series, large neighborhood, large step, complexity, linear complementarity problem. Abbreviated title: Interior point algorithms based on large neighborhoods. AMS(MOS) subject classifications: 90...
Predictor-corrector methods for sufficient linear complementarity problems in a wide neighborhood of the central path
 Optimization Methods and Software
Abstract

Cited by 10 (6 self)
Abstract. A higher-order corrector-predictor interior-point method is proposed for solving sufficient linear complementarity problems. The algorithm produces a sequence of iterates in the N_∞^- neighborhood of the central path. The algorithm does not depend on the handicap κ of the problem. It has O((1 + κ)√n L) iteration complexity and is superlinearly convergent even for degenerate problems. Key words: linear complementarity, interior-point, path-following, corrector-predictor, wide neighborhood. AMS subject classifications: 90C51, 90C33. 1. Introduction. The MTY predictor-corrector algorithm proposed by Mizuno, Todd and Ye [9] is a typical representative of a large class of MTY-type predictor-corrector methods, which play a very important role among primal-dual interior point methods. It was the first algorithm for linear programming (LP) that had both polynomial complexity and superlinear convergence. This result was extended to monotone ...
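The wide neighborhood referred to above can be written as follows (standard notation, assumed rather than quoted from the paper): for a centering parameter γ ∈ (0, 1),

```latex
\mathcal{N}_{\infty}^{-}(\gamma)
  = \Bigl\{ (x, y, s) \in \mathcal{F}^{0} \;:\;
      x_i s_i \ge \gamma \mu \ \text{for all } i \Bigr\},
  \qquad \mu = \frac{x^{\top} s}{n},
```

where F⁰ denotes the set of strictly feasible points. Because only a one-sided bound x_i s_i ≥ γμ is imposed, this neighborhood is much larger than the 2-norm neighborhoods of short-step methods, which is why algorithms operating in it can take longer steps in practice.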
A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization
, 1999
Abstract

Cited by 8 (5 self)
We propose a new class of primal-dual methods for linear optimization (LO). By using some new analysis tools, we prove that the large-update method for LO based on the new search direction has a polynomial complexity of O(n^{4/(4+ρ)} log(n/ε)) iterations, where ρ ∈ [0, 2] is a parameter used in the system defining the search direction. If ρ = 0, our results reproduce the well-known complexity of the standard primal-dual Newton method for LO. At each iteration, our algorithm needs only to solve a linear equation system. An extension of the algorithms to semidefinite optimization is also presented. Keywords: linear optimization, semidefinite optimization, interior point method, primal-dual Newton method, polynomial complexity. AMS subject classification: 90C05. 1 Introduction. Interior point methods (IPMs) are among the most effective methods for solving wide classes of optimization problems. Since the seminal work of Karmarkar [7], many researchers have proposed and analyzed various ...
Self-regular proximities and new search directions for linear and semidefinite optimization
 Mathematical Programming
, 2000
Abstract

Cited by 8 (5 self)
In this paper, we first introduce the notion of self-regular functions. Various appealing properties of self-regular functions are explored, and we also discuss the relation between self-regular functions and the well-known self-concordant functions. Then we use such functions to define self-regular proximity measures for path-following interior point methods for solving linear optimization (LO) problems. Any self-regular proximity measure naturally defines a primal-dual search direction. In this way a new class of primal-dual search directions for solving LO problems is obtained. Using the appealing properties of self-regular functions, we prove that these new large-update path-following methods for LO enjoy a polynomial, O(n^{(q+1)/(2q)} log(n/ε)), iteration bound, where q ≥ 1 is the so-called barrier degree of the self-regular proximity measure underlying the algorithm. When q increases, this bound approaches the best known complexity bound for interior point methods, namely O(√n log(n/ε)). Our unified analysis also provides the best known O(√n log(n/ε)) iteration bound of small-update IPMs. At each iteration, we need only to solve one linear system. As a by-product of our results, we remove some limitations of the algorithms presented in [24] and improve their complexity as well. An extension of these results to semidefinite optimization (SDO) is also discussed.
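To make the idea of a kernel-based proximity measure concrete, the sketch below evaluates the classical log-barrier kernel ψ(t) = (t² − 1)/2 − log t, which the self-regular kernels of the paper generalize via the barrier degree q. This is an illustrative assumption for exposition, not the paper's own kernel family.

```python
# Sketch: evaluating a kernel-based primal-dual proximity measure.
import math

def psi(t):
    """Classical log-barrier kernel: psi(1) = 0, psi(t) > 0 for t != 1."""
    return (t * t - 1.0) / 2.0 - math.log(t)

def proximity(x, s, mu):
    """Psi(v) = sum_i psi(v_i) with v_i = sqrt(x_i * s_i / mu).

    The measure vanishes exactly on the central path, i.e. when
    x_i * s_i = mu for every i, and grows as the iterate drifts away."""
    return sum(psi(math.sqrt(xi * si / mu)) for xi, si in zip(x, s))
```

A path-following method drives such a measure below a threshold after each update of μ; the kernel's growth behavior (the barrier degree, in the paper's terms) governs how large the μ-updates may be and hence the iteration bound.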
A New and Efficient LargeUpdate InteriorPoint Method for Linear Optimization
, 2001
Abstract

Cited by 6 (3 self)
Recently, in [10], the authors presented a new large-update primal-dual method for Linear Optimization, whose O(n^{2/3} log(n/ε)) iteration bound substantially improved the classical bound for such methods, which is O(n log(n/ε)). In this paper we present an improved analysis of the new method. The analysis uses some new mathematical tools, partially developed in [11], where we consider a whole family of interior-point methods which contains the method considered in this paper. The new analysis yields an O(√n log n log(n/ε)) iteration bound for large-update methods. Since we concentrate on one specific member of the family considered in [11], the analysis is significantly simpler than in [11]. The new bound further improves the iteration bound for large-update methods, and is quite close to the currently best iteration bound known for interior-point methods, namely O(√n log(n/ε)). Hence, the existing gap between the iteration bounds for small-update and large-update methods ...
On Mehrotra-type predictor-corrector algorithms
, 2005
Abstract

Cited by 5 (1 self)
In this paper we discuss the polynomiality of a feasible version of Mehrotra's predictor-corrector algorithm, whose variants have been widely used in several IPM-based optimization packages. A numerical example is given that shows that the adaptive choice of centering parameter and correction terms in this algorithm may lead to small steps being taken in order to keep the iterates in a large neighborhood of the central path, which is important for proving polynomial complexity properties of this method. Motivated by this example, we introduce a safeguard in Mehrotra's algorithm that keeps the iterates in the prescribed neighborhood and allows us to obtain a positive lower bound on the step size. This safeguard strategy is also used when the affine scaling direction performs poorly. We prove that the safeguarded algorithm will terminate after at most O(n^2 log((x^0)^T s^0 / ε)) iterations. By modestly modifying the corrector direction, we reduce the iteration complexity to O(n log((x^0)^T s^0 / ε)). To ensure fast asymptotic convergence of the algorithm, we change Mehrotra's updating scheme of the centering parameter slightly while keeping the safeguard. The new algorithms have the same order of iteration complexity as the safeguarded algorithms, but enjoy superlinear convergence as well. Numerical results using the McIPM and LIPSOL software packages are reported.
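The adaptive centering choice that the safeguard above constrains can be sketched as follows: after computing the affine-scaling (predictor) direction, Mehrotra's classical heuristic sets the centering parameter from the duality gap a pure predictor step would achieve. The function below is an illustrative sketch of that heuristic, not code from the paper or the cited packages.

```python
# Sketch of Mehrotra's adaptive centering heuristic for a primal-dual IPM.

def mehrotra_sigma(x, s, dx_aff, ds_aff):
    """Return sigma = (mu_aff / mu)^3, Mehrotra's classical choice."""
    n = len(x)
    mu = sum(xi * si for xi, si in zip(x, s)) / n
    # Largest steps keeping x and s nonnegative along the affine direction.
    a_p = min([1.0] + [-xi / d for xi, d in zip(x, dx_aff) if d < 0.0])
    a_d = min([1.0] + [-si / d for si, d in zip(s, ds_aff) if d < 0.0])
    # Duality gap the pure predictor step would reach.
    mu_aff = sum((xi + a_p * dxi) * (si + a_d * dsi)
                 for xi, dxi, si, dsi in zip(x, dx_aff, s, ds_aff)) / n
    return (mu_aff / mu) ** 3
```

When the predictor makes good progress (mu_aff ≪ mu) this yields a small σ, i.e. little centering; when it stalls, σ approaches 1. The paper's observation is that this purely adaptive rule can still force tiny steps, which motivates bounding σ with a safeguard.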