Results 1–10 of 13
Interior-Point Methods
, 2000
"... The modern era of interiorpoint methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadrati ..."
Abstract

Cited by 505 (17 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
LOQO: An interior point code for quadratic programming
, 1994
"... ABSTRACT. This paper describes a software package, called LOQO, which implements a primaldual interiorpoint method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming with only brief mention of the extensions to convex ..."
Abstract

Cited by 166 (9 self)
ABSTRACT. This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. In this paper we focus mainly on the algorithm as it applies to linear and quadratic programming, with only brief mention of the extensions to convex and general nonlinear programming, since a detailed paper describing these extensions was published recently elsewhere. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that problems in the industry-standard MPS format can be formulated in such a way as to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.
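The quasidefiniteness property stressed in this abstract has a concrete computational payoff that a small sketch can illustrate. The code below is a hypothetical example, not LOQO's code: a symmetric matrix with a negative definite (1,1) block and a positive definite (2,2) block is quasidefinite, and such a matrix admits an LDL^T factorization with nonsingular diagonal D under any symmetric permutation, so no numerical pivoting is needed.

```python
import numpy as np

# Hypothetical example (not LOQO's code) of quasidefiniteness: a symmetric
# matrix
#     K = [ -E  A^T ]
#         [  A   F  ],   E and F symmetric positive definite,
# is quasidefinite and admits K = L D L^T with nonsingular diagonal D for
# any symmetric permutation, so the factorization needs no numerical pivoting.

def ldl_unpivoted(K):
    """Plain LDL^T elimination in natural order, with no pivoting at all."""
    n = K.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    K = K.astype(float).copy()
    for k in range(n):
        d[k] = K[k, k]                       # nonzero for quasidefinite K
        L[k + 1:, k] = K[k + 1:, k] / d[k]
        K[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k]) * d[k]
    return L, d

rng = np.random.default_rng(2)
m, n = 3, 4
A = rng.standard_normal((m, n))
E = np.diag(rng.uniform(0.5, 2.0, n))        # positive definite diagonal blocks
F = np.diag(rng.uniform(0.5, 2.0, m))
K = np.block([[-E, A.T], [A, F]])

L, d = ldl_unpivoted(K)
err = np.linalg.norm(L @ np.diag(d) @ L.T - K)
print("reconstruction error:", err)          # tiny: factorization succeeded
print("signs of pivots:", np.sign(d))        # n negatives, then m positives
```

The pivot signs match the inertia of a quasidefinite matrix (n negative, m positive); a general symmetric indefinite matrix could instead produce a zero pivot here and force 2×2 pivoting strategies.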
A Path-Following Interior-Point Algorithm for Linear and Quadratic Problems
 Preprint MCS-P401-1293, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439
, 1995
"... We describe an algorithm for the monotone linear complementarity problem (LCP) that converges from any positive, not necessarily feasible, starting point and exhibits polynomial complexity if some additional assumptions are made on the starting point. If the problem has a strictly complementary solu ..."
Abstract

Cited by 21 (5 self)
We describe an algorithm for the monotone linear complementarity problem (LCP) that converges from any positive, not necessarily feasible, starting point and exhibits polynomial complexity if some additional assumptions are made on the starting point. If the problem has a strictly complementary solution, the method converges subquadratically. We show that the algorithm and its convergence properties extend readily to the mixed monotone linear complementarity problem and, hence, to all the usual formulations of the linear programming and convex quadratic programming problems.

1 Introduction

The monotone linear complementarity problem (LCP) is to find a vector pair $(x, y) \in \mathbb{R}^n \times \mathbb{R}^n$ such that
$$y = Mx + q, \quad (x, y) \ge 0, \quad x^T y = 0, \qquad (1)$$
where $q \in \mathbb{R}^n$ and $M$ is an $n \times n$ positive semidefinite (p.s.d.) matrix. The mixed monotone linear complementarity problem (MLCP) is to find a vector triple $(x, y, z) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^m$ such that $\begin{bmatrix} y \\ 0 \end{bmatrix} = \begin{bmatrix} M_{11} & M_{12} \\ \end{bmatrix} ...$
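The LCP (1) quoted in this abstract can be demonstrated with a small numerical sketch. The code below is illustrative only, not the paper's algorithm: it applies damped Newton steps to the perturbed central-path conditions $y = Mx + q$, $XYe = \mu e$ from a positive but infeasible starting point; the centering factor 0.1 and the 0.99 step damping are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch only (not the paper's algorithm): damped Newton steps
# for the monotone LCP
#     y = M x + q,  (x, y) >= 0,  x^T y = 0,
# applied to the perturbed central-path equations
#     y - M x - q = 0,   X Y e = mu e,
# from a strictly positive but infeasible starting point.

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
M = B @ B.T                          # positive semidefinite by construction
q = rng.standard_normal(n)

x = np.ones(n)                       # positive, generally infeasible start
y = np.ones(n)

for _ in range(100):
    r = y - M @ x - q                # infeasibility residual
    if np.linalg.norm(r) < 1e-9 and x @ y < 1e-9:
        break
    mu = 0.1 * (x @ y) / n           # centering target (factor 0.1 assumed)
    # Newton system:  [ -M  I ] [dx]   [     -r      ]
    #                 [  Y  X ] [dy] = [ mu e - X Y e ]
    J = np.block([[-M, np.eye(n)],
                  [np.diag(y), np.diag(x)]])
    d = np.linalg.solve(J, np.concatenate([-r, mu - x * y]))
    dx, dy = d[:n], d[n:]
    alpha = 1.0                      # backtrack to keep (x, y) strictly positive
    while np.any(x + alpha * dx <= 0) or np.any(y + alpha * dy <= 0):
        alpha *= 0.5
    x += 0.99 * alpha * dx
    y += 0.99 * alpha * dy

print(f"residual = {np.linalg.norm(y - M @ x - q):.1e}, gap = {x @ y:.1e}")
```

In exact arithmetic a step of length $0.99\alpha$ shrinks the infeasibility residual by the factor $1 - 0.99\alpha$, which is why such methods tolerate infeasible starting points.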
Stability Of Linear Equations Solvers In Interior-Point Methods
 SIAM J. Matrix Anal. Appl
, 1994
"... . Primaldual interiorpoint methods for linear complementarity and linear programming problems solve a linear system of equations to obtain a modified Newton step at each iteration. These linear systems become increasingly illconditioned in the later stages of the algorithm, but the computed steps ..."
Abstract

Cited by 17 (2 self)
Primal-dual interior-point methods for linear complementarity and linear programming problems solve a linear system of equations to obtain a modified Newton step at each iteration. These linear systems become increasingly ill-conditioned in the later stages of the algorithm, but the computed steps are often sufficiently accurate to be useful. We use error analysis techniques tailored to the special structure of these linear systems to explain this observation and examine how the theoretically superlinear convergence of a path-following algorithm is affected by roundoff errors.

Key words. primal-dual interior-point methods, error analysis, stability

AMS(MOS) subject classifications. 65G05, 65F05, 90C33

1. Introduction. The monotone linear complementarity problem (LCP) is the problem of finding a vector pair $(x, y) \in \mathbb{R}^n \times \mathbb{R}^n$ such that
$$y = Mx + q, \quad (x, y) \ge 0, \quad x^T y = 0, \qquad (1)$$
where $M$ (a real, $n \times n$ positive semidefinite matrix) and $q$ (a real vector with $n$ elements...
Stability of Augmented System Factorizations in Interior-Point Methods
 SIAM J. Matrix Anal. Appl
, 1997
"... . Some implementations of interiorpoint algorithms obtain their search directions by solving symmetric indefinite systems of linear equations. The conditioning of the coefficient matrices in these socalled augmentedsystems deteriorates on later iterations, as some of the diagonal elements grow wit ..."
Abstract

Cited by 16 (2 self)
Some implementations of interior-point algorithms obtain their search directions by solving symmetric indefinite systems of linear equations. The conditioning of the coefficient matrices in these so-called augmented systems deteriorates on later iterations, as some of the diagonal elements grow without bound. Despite this apparent difficulty, the steps produced by standard factorization procedures are often accurate enough to allow the interior-point method to converge to high accuracy. When the underlying linear program is nondegenerate, we show that convergence to arbitrarily high accuracy occurs, at a rate that closely approximates the theory. We also explain and demonstrate what happens when the linear program is degenerate, where convergence to acceptable accuracy (but not arbitrarily high accuracy) is usually obtained.

1. Introduction. We focus on the core linear algebra operation in primal-dual interior-point methods for linear programming: solution of a system of linear equations...
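The conditioning behavior described in this abstract is easy to observe numerically. The sketch below uses the standard augmented-system form with diagonal $D^2 = XS^{-1}$ (assumed standard notation, not taken from the paper) and evaluates the condition number for iterates approaching a strictly complementary solution:

```python
import numpy as np

# Sketch of the conditioning behavior described in the abstract.  In
# primal-dual LP solvers the augmented system has the symmetric indefinite
# form
#     [ -D^{-2}  A^T ]
#     [    A      0  ],   D^2 = X S^{-1}  (assumed standard notation),
# and near a strictly complementary solution some entries of D^{-2} = S X^{-1}
# tend to 0 while others grow without bound.

rng = np.random.default_rng(1)
m, n = 2, 5
A = rng.standard_normal((m, n))

conds = []
for mu in (1e-1, 1e-4, 1e-8):
    # iterate near a strictly complementary solution: products x_i s_i ~ mu,
    # with basic components x_i ~ 1 and nonbasic components s_i ~ 1
    x = np.array([1.0, 1.0, mu, mu, mu])
    s = np.array([mu, mu, 1.0, 1.0, 1.0])
    K = np.block([[-np.diag(s / x), A.T],
                  [A, np.zeros((m, m))]])
    conds.append(np.linalg.cond(K))
    print(f"mu = {mu:.0e}:  cond(K) = {conds[-1]:.1e}")
```

The condition number grows roughly like $1/\mu^2$ in this construction, yet, as the paper shows, steps computed from the factored matrix remain usable in the nondegenerate case.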
A Superquadratic Infeasible-Interior-Point Method for Linear Complementarity Problems
 Preprint MCS-P418-0294, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439
, 1996
"... We consider a modification of a pathfollowing infeasibleinteriorpoint algorithm described by Wright. In the new algorithm, we attempt to improve each major iterate by reusing the coefficient matrix factors from the latest step. We show that the modified algorithm has similar theoretical global co ..."
Abstract

Cited by 15 (0 self)
We consider a modification of a path-following infeasible-interior-point algorithm described by Wright. In the new algorithm, we attempt to improve each major iterate by reusing the coefficient matrix factors from the latest step. We show that the modified algorithm has theoretical global convergence properties similar to those of the earlier algorithm, while its asymptotic convergence rate can be made superquadratic by an appropriate parameter choice.

1 Introduction

We describe an algorithm for solving the monotone linear complementarity problem (LCP), in which we aim to find a vector pair $(x, y)$ with
$$y = Mx + q, \quad (x, y) \ge 0, \quad x^T y = 0, \qquad (1)$$
where $q \in \mathbb{R}^n$ and $M$ is an $n \times n$ positive semidefinite matrix. The solution set of (1) is denoted by $S$, while the set $S^c$ of strictly complementary solutions is defined as
$$S^c = \{(x^*, y^*) \in S \mid x^* + y^* > 0\}.$$
Our algorithm can be viewed as a modified form of Newton's method applied to the $2n \times 2n$ system $y = Mx + q$, $x_i y_i$...
A full-Newton step O(n) infeasible interior-point algorithm for linear optimization
, 2005
"... We present a primaldual infeasible interiorpoint algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists it is shown that at most O(n) iterations suffice to reduce the duality gap and the residuals by the ..."
Abstract

Cited by 9 (6 self)
We present a primal-dual infeasible interior-point algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists, it is shown that at most O(n) iterations suffice to reduce the duality gap and the residuals by the factor 1/e. This implies an O(n log(n/ε)) iteration bound for getting an ε-solution of the problem at hand, which coincides with the best known bound for infeasible interior-point algorithms. The algorithm constructs strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem. A special feature of the algorithm is that it uses only full-Newton steps. Two types of full-Newton steps are used: so-called feasibility steps and usual (centering) steps. Starting at strictly feasible iterates of a perturbed pair, (very) close to its central path, feasibility steps serve to generate strictly feasible iterates for the next perturbed pair. By carrying out a few centering steps for the new perturbed pair, we obtain strictly feasible iterates close enough to the central path of the new perturbed pair. The algorithm finds an optimal solution or detects infeasibility or unboundedness of the given problem.
A Superlinear Infeasible-Interior-Point Affine Scaling Algorithm For LCP
 Preprint MCS-P361-0693, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne
, 1993
"... . We present an infeasibleinteriorpoint algorithm for monotone linear complementarity problems in which the search directions are affine scaling directions and the step lengths are obtained from simple formulae that ensure both global and superlinear convergence. By choosing the value of a paramet ..."
Abstract

Cited by 5 (1 self)
We present an infeasible-interior-point algorithm for monotone linear complementarity problems in which the search directions are affine scaling directions and the step lengths are obtained from simple formulae that ensure both global and superlinear convergence. By choosing the value of a parameter in appropriate ways, polynomial complexity and convergence with Q-order up to (but not including) two can be achieved. The only assumption made to obtain the superlinear convergence is the existence of a solution satisfying strict complementarity.

Key words. infeasible-interior-point methods, monotone linear complementarity problems, superlinear convergence

1. Introduction. The monotone linear complementarity problem (LCP) is to find a vector pair $(x, y) \in \mathbb{R}^n \times \mathbb{R}^n$ that satisfies the following conditions:
$$y = Mx + q, \qquad (1.1a)$$
$$x \ge 0, \quad y \ge 0, \qquad (1.1b)$$
$$x^T y = 0, \qquad (1.1c)$$
where $M$ is a positive semidefinite matrix. We use $S$ to denote the solution set of (1.1) and $S^c$ to denote the set of...
Sensitivity Analysis And The Analytic Central Path
, 1998
"... The analytic central path for linear programming has been studied because of its desirable convergence properties. This dissertation presents a detailed study of the analytic central path under perturbation of both the righthand side and cost vectors for a linear program. The analysis is divided int ..."
Abstract

Cited by 2 (1 self)
The analytic central path for linear programming has been studied because of its desirable convergence properties. This dissertation presents a detailed study of the analytic central path under perturbation of both the right-hand side and cost vectors for a linear program. The analysis is divided into three parts: extensions of the convergence results for unperturbed data to the case of data perturbation, marginal analysis of the analytic center solution with respect to linear changes in the right-hand side, and parametric analysis of the analytic central path under simultaneous changes in both the right-hand side and cost vectors. To extend the established convergence results for fixed data, it is first shown that the union of the elements comprising a portion of the perturbed analytic central paths is bounded. This guarantees the existence of subsequences that converge, but these subsequences are not guaranteed to have the same limit without further restrictions on the data movement. Sufficient conditions are provided to ensure that the limit is the analytic center of the limiting polytope. Furthermore, as long as the data converges and the parameter of the path approaches zero, certain components of the analytic central path are forced to zero. Since the introduction of the analytic center to the mathematical programming community, the analytic central path has been known to be analytic in both the right-hand side and cost vectors. However, since the objective function is a continuous, piecewise linear function of the right-hand side, the analytic center solution is not differentiable. We show that this solution is continuous and is infinitely, continuously, one-sided differentiable. Furthermore, the analytic center sol...
An Infeasible Interior-Point Algorithm with full-Newton Step for Linear Optimization
"... In this paper we present an infeasible interiorpoint algorithm for solving linear optimization problems. This algorithm is obtained by modifying the search direction in the algorithm [8]. The analysis of our algorithm is much simpler than that of the algorithm [8] at some places. The iteration boun ..."
Abstract
In this paper we present an infeasible interior-point algorithm for solving linear optimization problems. This algorithm is obtained by modifying the search direction in the algorithm of [8]. The analysis of our algorithm is simpler in places than that of the algorithm of [8]. The iteration bound of the algorithm is as good as the best known iteration bound, $O(n \log \frac{1}{\varepsilon})$, for IIPMs.