Results 1–10 of 10
LOQO: An interior point code for quadratic programming
, 1994
"... ABSTRACT. This paper describes a software package, called LOQO, which implements a primaldual interiorpoint method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming with only brief mention of the extensions to convex ..."
Abstract

Cited by 156 (9 self)
ABSTRACT. This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming, with only brief mention of the extensions to convex and general nonlinear programming, since a detailed paper describing these extensions was published recently elsewhere. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that problems expressed in the industry-standard MPS format can be formulated in such a way as to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.
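The reduced KKT structure the abstract refers to can be sketched numerically. This is a minimal illustration in numpy, not LOQO's actual code: for a standard-form convex QP, a matrix of the form [[-(Q + D1), A^T], [A, D2]] with positive diagonal D1, D2 (the kind of terms barrier and slack variables contribute) is symmetric quasidefinite because its upper-left block is negative definite and its lower-right block is positive definite. The specific data below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3  # variables, equality constraints

# Random convex QP data: Q positive semidefinite, A generic.
M = rng.standard_normal((n, n))
Q = M @ M.T
A = rng.standard_normal((m, n))

# Positive diagonal terms of the kind contributed by barrier/slack
# variables (illustrative values; in an IPM they change every iteration).
D1 = np.diag(rng.uniform(0.1, 1.0, n))
D2 = np.diag(rng.uniform(0.1, 1.0, m))

# Reduced KKT matrix: symmetric quasidefinite by construction.
K = np.block([[-(Q + D1), A.T],
              [A,         D2]])

assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(Q + D1) > 0)  # so -(Q + D1) is negative definite
assert np.all(np.linalg.eigvalsh(D2) > 0)      # lower-right block positive definite
print("K is symmetric quasidefinite")
```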
Implementation of Interior Point Methods for Large Scale Linear Programming
 in Interior Point Methods in Mathematical Programming
, 1996
"... In the past 10 years the interior point methods (IPM) for linear programming have gained extraordinary interest as an alternative to the sparse simplex based methods. This has initiated a fruitful competition between the two types of algorithms which has lead to very efficient implementations on bot ..."
Abstract

Cited by 70 (22 self)
In the past 10 years the interior point methods (IPM) for linear programming have gained extraordinary interest as an alternative to the sparse simplex-based methods. This has initiated a fruitful competition between the two types of algorithms, which has led to very efficient implementations on both sides. The significant difference between interior-point and simplex-based methods is reflected not only in the theoretical background but also in the practical implementation. In this paper we give an overview of the most important characteristics of advanced implementations of interior point methods. First, we present the infeasible primal-dual algorithm, which is widely considered the most efficient general-purpose IPM. Our discussion includes various algorithmic enhancements of the basic algorithm. The only shortcoming of the "traditional" infeasible primal-dual algorithm is the detection of a possible primal or dual infeasibility of the linear program. We discuss how this problem can be solve...
Symmetric quasidefinite matrices
 SIAM Journal on Optimization
, 1995
"... We say that a symmetric matrix K is quasidefinite if it has the form ..."
Abstract

Cited by 54 (3 self)
We say that a symmetric matrix K is quasidefinite if it has the form
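The quoted definition is cut off; the form in question is K = [[-E, A^T], [A, F]] with E and F symmetric positive definite. The paper's central property is that a quasidefinite matrix admits an LDL^T factorization with 1x1 pivots under every symmetric permutation. The sketch below checks this empirically with a naive no-pivoting factorization (my own helper, written for the demo):

```python
import numpy as np

def ldl_nopivot(K):
    """LDL^T factorization with 1x1 pivots and no pivoting at all."""
    n = K.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    S = K.astype(float).copy()
    for j in range(n):
        d[j] = S[j, j]
        if d[j] == 0.0:
            raise ZeroDivisionError("zero pivot: factorization does not exist")
        L[j + 1:, j] = S[j + 1:, j] / d[j]
        # Schur-complement update of the trailing submatrix.
        S[j + 1:, j + 1:] -= d[j] * np.outer(L[j + 1:, j], L[j + 1:, j])
    return L, d

rng = np.random.default_rng(1)
E = np.diag(rng.uniform(1.0, 2.0, 3))   # E positive definite
F = np.diag(rng.uniform(1.0, 2.0, 2))   # F positive definite
A = rng.standard_normal((2, 3))
K = np.block([[-E, A.T], [A, F]])       # quasidefinite by construction

# The factorization succeeds for *every* symmetric permutation tried.
for _ in range(10):
    p = rng.permutation(5)
    L, d = ldl_nopivot(K[np.ix_(p, p)])
    assert np.allclose(L @ np.diag(d) @ L.T, K[np.ix_(p, p)])
print("LDL^T exists under all tested symmetric permutations")
```

Note that D is indefinite here (mixed-sign pivots); what quasidefiniteness guarantees is that no pivot is ever zero, regardless of ordering.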
Presolve Analysis of Linear Programs Prior to Applying an Interior Point Method
 INFORMS Journal on Computing
, 1994
"... Several issues concerning an analysis of large and sparse linear programming problems prior to solving them with an interior point based optimizer are addressed in this paper. Three types of presolve procedures are distinguished. Routines from the first class repeatedly analyze an LP problem formula ..."
Abstract

Cited by 34 (6 self)
Several issues concerning the analysis of large and sparse linear programming problems prior to solving them with an interior-point-based optimizer are addressed in this paper. Three types of presolve procedures are distinguished. Routines from the first class repeatedly analyze an LP problem formulation: they eliminate empty or singleton rows and columns, look for primal and dual forcing or dominated constraints, and tighten bounds for variables and shadow prices or, just the opposite, relax them to find implied free variables. The second type of analysis aims at reducing the fill-in of the Cholesky factor of the normal equations matrix used to compute orthogonal projections; it includes a heuristic for increasing the sparsity of the LP constraint matrix and a technique for splitting dense columns in it. Finally, routines from the third class detect, and remove, different linear dependencies of rows and columns in the constraint matrix. Computational results on problems from the Netlib collection, inc...
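Two of the first-class reductions mentioned above (empty rows and singleton rows) are simple enough to sketch. The toy system and variable names below are mine, not from the paper; a real presolver would iterate these passes to a fixed point and also handle bounds and inequalities.

```python
import numpy as np

# Tiny system A x = b with presolve opportunities:
#   row 0 is empty (redundant since b[0] == 0, infeasible otherwise),
#   row 1 is a singleton (2*x1 = 4 fixes x1 = 2).
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 3.0, 1.0]])
b = np.array([0.0, 4.0, 9.0])

rows = list(range(A.shape[0]))
fixed = {}

# Pass 1: drop empty rows (an empty row with b != 0 proves infeasibility).
rows = [i for i in rows if np.any(A[i] != 0) or b[i] != 0]

# Pass 2: eliminate singleton rows by fixing their variable
# and substituting it out of the remaining rows.
for i in list(rows):
    nz = np.flatnonzero(A[i])
    if len(nz) == 1:
        j = int(nz[0])
        fixed[j] = float(b[i] / A[i, j])
        b -= A[:, j] * fixed[j]     # substitute the fixed value out
        A[:, j] = 0.0
        rows.remove(i)

print(fixed)   # {1: 2.0}
print(rows)    # [2]  -- only the last row survives: x0 + x2 = 3
```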
On a Homogeneous Algorithm for the Monotone Complementarity Problem
 Mathematical Programming
, 1995
"... We present a generalization of a homogeneous selfdual linear programming (LP) algorithm to solving the monotone complementarity problem (MCP). The algorithm does not need to use any "bigM" parameter or twophase method, and it generates either a solution converging towards feasibility and compleme ..."
Abstract

Cited by 24 (3 self)
We present a generalization of a homogeneous self-dual linear programming (LP) algorithm to solving the monotone complementarity problem (MCP). The algorithm does not need to use any "big-M" parameter or two-phase method, and it generates either a solution converging towards feasibility and complementarity simultaneously or a certificate proving infeasibility. Moreover, if the MCP is polynomially solvable with an interior feasible starting point, then it can be polynomially solved without using or knowing such information at all. To our knowledge, this is the first interior-point and infeasible-starting algorithm for solving the MCP that possesses these desired features. Preliminary computational results are presented. Key words: monotone complementarity problem, homogeneous and self-dual, infeasible-starting algorithm.
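The LP special case that this work generalizes rests on a skew-symmetric embedding of the problem data. The sketch below only illustrates that structural point (for an LP, with made-up data), not the MCP algorithm itself: the homogeneous model's coefficient matrix M satisfies M = -M^T, so z^T M z = 0 for every z, which is the identity that lets feasibility and complementarity improve together.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

# Skew-symmetric coefficient matrix of the homogeneous self-dual
# embedding for the LP  min c^T x  s.t.  Ax = b, x >= 0,
# acting on (y, x, tau).
M = np.block([
    [np.zeros((m, m)),  A,                 -b.reshape(-1, 1)],
    [-A.T,              np.zeros((n, n)),   c.reshape(-1, 1)],
    [b.reshape(1, -1), -c.reshape(1, -1),   np.zeros((1, 1))],
])

assert np.allclose(M, -M.T)  # self-duality of the model

# Consequence: z^T M z = 0 for every z.
z = rng.standard_normal(m + n + 1)
assert abs(z @ M @ z) < 1e-10
print("homogeneous model is skew-symmetric")
```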
Solving reduced KKT systems in barrier methods for linear and quadratic programming
, 1991
"... In barrier methods for constrained optimization, the main work lies in solving large linear systems Kp = r, where K is symmetric and indefinite. For linear programs, these KKT systems are usually reduced to smaller positivedefinite systems AH −1 A T q = s, where H is a large principal submatrix of ..."
Abstract

Cited by 22 (7 self)
In barrier methods for constrained optimization, the main work lies in solving large linear systems Kp = r, where K is symmetric and indefinite. For linear programs, these KKT systems are usually reduced to smaller positive-definite systems AH^-1 A^T q = s, where H is a large principal submatrix of K. These systems can be solved more efficiently, but AH^-1 A^T is typically more ill-conditioned than K. In order to improve the numerical properties of barrier implementations, we discuss the use of “reduced KKT systems”, whose dimension and condition lie somewhere in between those of K and AH^-1 A^T. The approach applies to linear programs and to positive semidefinite quadratic programs whose Hessian H is at least partially diagonal. We have implemented reduced KKT systems in a primal-dual algorithm for linear programming, based on the sparse indefinite solver MA27 from the Harwell Subroutine Library. Some features of the algorithm are presented, along with results on the Netlib LP test set.
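The conditioning claim above is easy to observe numerically. In the sketch below (illustrative data, not from any real iterate), H is diagonal with entries spanning many orders of magnitude, as happens near an optimum where some barrier terms blow up while others vanish; forming the normal equations AH^-1 A^T then concentrates that spread, while the full KKT matrix K keeps it milder.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 6
A = rng.standard_normal((m, n))

# Near an optimum the diagonal barrier terms in H spread over many
# orders of magnitude (illustrative values).
h = np.array([1e-8, 1e-8, 1e4, 1e4, 1e4, 1e4])
H = np.diag(h)

K = np.block([[H, A.T],
              [A, np.zeros((m, m))]])     # full KKT matrix
N = A @ np.diag(1.0 / h) @ A.T            # normal equations AH^-1 A^T

print(f"cond(K)         ~ {np.linalg.cond(K):.1e}")
print(f"cond(AH^-1 A^T) ~ {np.linalg.cond(N):.1e}")
```

With fewer "tiny" entries of h than constraints, the huge H^-1 terms dominate only a subspace of AH^-1 A^T, and its condition number explodes; a reduced KKT system trades between these two extremes.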
A Computational View of InteriorPoint Methods for Linear Programming
 IN: ADVANCES IN LINEAR AND INTEGER PROGRAMMING
, 1994
"... Many issues that are crucial for an efficient implementation of an interior point algorithm are addressed in this paper. To start with, a prototype primaldual algorithm is presented. Next, many tricks that make it so efficient in practice are discussed in detail. Those include: the preprocessing te ..."
Abstract

Cited by 15 (10 self)
Many issues that are crucial for an efficient implementation of an interior point algorithm are addressed in this paper. To start with, a prototype primal-dual algorithm is presented. Next, many tricks that make it so efficient in practice are discussed in detail. Those include: the preprocessing techniques, the initialization approaches, the methods of computing search directions (and the linear algebra techniques behind them), centering strategies, and methods of step-size selection. Several reasons for the manifestation of numerical difficulties, such as the primal degeneracy of optimal solutions or the lack of feasible solutions, are explained in a comprehensive way. A motivation for obtaining an optimal basis is given and a practicable algorithm to perform this task is presented. Advantages of different methods to perform post-optimal analysis (applicable to interior-point optimal solutions) are discussed. Important questions that still remain open in the implementations of i...
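The core of any such prototype primal-dual algorithm is the Newton system that yields the search direction. A minimal sketch, assuming a standard-form LP and solving the full unreduced system directly (real implementations reduce it, as discussed in the other entries here; the function and variable names are mine):

```python
import numpy as np

def pd_direction(A, b, c, x, y, s, sigma=0.1):
    """One primal-dual Newton direction for  min c^T x  s.t. Ax = b, x >= 0.
    Illustrative prototype only: solves the full unreduced 3x3 block system."""
    m, n = A.shape
    mu = x @ s / n                      # duality measure
    rp = b - A @ x                      # primal residual
    rd = c - A.T @ y - s                # dual residual
    rc = sigma * mu - x * s             # centered complementarity residual
    # Newton system in (dx, dy, ds):
    #   A dx            = rp
    #   A^T dy + ds     = rd
    #   S dx   + X ds   = rc
    J = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    d = np.linalg.solve(J, np.concatenate([rp, rd, rc]))
    return d[:n], d[n:n + m], d[n + m:]

# Tiny example:  min x0 + x1  s.t.  x0 + x1 = 1, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])
x = np.array([0.4, 0.4]); y = np.array([0.0]); s = np.array([1.0, 1.0])
dx, dy, ds = pd_direction(A, b, c, x, y, s)
print(dx)  # [0.1 0.1] -- restores primal feasibility while centering
```

A full method would add a ratio test for the step size keeping (x, s) > 0, Mehrotra-style centering, and the heuristics the paper surveys.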
A Note on the LDL^T Decomposition of Matrices from Saddle-Point Problems
 SIAM J. Matrix Anal. Appl
, 2002
"... Sparse linear systems Kx = b are considered where K is a specially structured symmetric indefinite matrix. These systems arise frequently, e.g., from mixed finite element discretizations of PDE problems. The LDL^T factorization of K with diagonal D and unit lower triangular L is known to exist for n ..."
Abstract

Cited by 7 (0 self)
Sparse linear systems Kx = b are considered where K is a specially structured symmetric indefinite matrix. These systems arise frequently, e.g., from mixed finite element discretizations of PDE problems. The LDL^T factorization of K with diagonal D and unit lower triangular L is known to exist for the natural ordering of K, but the resulting triangular factors can be rather dense. On the other hand, for a given permutation matrix P, the LDL^T factorization of P^T KP may not exist. In this paper a new way to obtain a fill-in minimizing permutation based on initial...
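The existence issue is visible already in a 2x2 example (mine, not the paper's): for an indefinite K, an LDL^T factorization with diagonal D exists in one ordering but not in another, because a zero can land on the pivot.

```python
import numpy as np

# A minimal symmetric indefinite "saddle-point-like" matrix.
K = np.array([[0.0, 1.0],
              [1.0, 1.0]])

# In this ordering the first pivot is K[0,0] = 0, so no LDL^T
# factorization with 1x1 pivots and diagonal D exists.
assert K[0, 0] == 0.0

# After the symmetric permutation P swapping the two indices,
# P^T K P = [[1, 1], [1, 0]] factors as L D L^T with
#   L = [[1, 0], [1, 1]],   D = diag(1, -1).
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
L = np.array([[1.0, 0.0],
              [1.0, 1.0]])
D = np.diag([1.0, -1.0])
assert np.allclose(L @ D @ L.T, P.T @ K @ P)
print("factorization exists in one ordering, not the other")
```

This is exactly the tension the paper addresses: a fill-reducing permutation must be chosen among those for which the factorization still exists.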
Solving Multistage Stochastic Programs With Tree Dissection
, 1991
"... One component of every multistage stochastic program is a filtration that determines the notion of which random events are observable at each stage of the evolution. Within the context of interiorpoint methods, we describe an efficient preordering technique, called filtered dissection, that takes ..."
Abstract
One component of every multistage stochastic program is a filtration that determines the notion of which random events are observable at each stage of the evolution. Within the context of interior-point methods, we describe an efficient preordering technique, called filtered dissection, that takes advantage of the filtration's structure to dramatically reduce fill-in in the factorization, as compared with the default orderings employed by the CPLEX barrier solver and LOQO. We have implemented this technique as a minor modification to LOQO, and it produces a roughly 200-fold performance improvement. In particular, we have solved a previously unsolvable, real-world, 6-stage financial investment problem having 800K equations and 1,200K variables (and 8,192 points in its sample space) using a single-processor SGI workstation. The filtered dissection algorithm applies in a natural manner to generic (linear and convex) multistage stochastic programs. The approach promises to eliminate t...
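Why ordering matters so much for fill-in can be seen on the smallest caricature of stage-linking structure: an arrowhead matrix. This sketch is not filtered dissection itself, just the textbook effect it exploits: ordering a dense linking variable first fills in the whole factor, ordering it last creates no fill-in at all.

```python
import numpy as np

def fill_in(K):
    """Nonzeros created by a dense Cholesky factor beyond those of tril(K)."""
    L = np.linalg.cholesky(K)
    return int((np.abs(L) > 1e-12).sum() - (np.tril(K) != 0).sum())

n = 8
# Arrowhead matrix: one "linking" variable coupled to everything else,
# a caricature of the stage-linking structure in multistage programs.
K = 4.0 * np.eye(n)
K[0, 1:] = 1.0
K[1:, 0] = 1.0

# Same matrix with the dense variable reordered to the end.
p = np.r_[1:n, 0]
K_last = K[np.ix_(p, p)]

print(fill_in(K))       # dense column first: the factor fills in completely
print(fill_in(K_last))  # dense column last: no fill-in at all
```

Filtered dissection generalizes this idea, using the filtration to push the variables that link scenario subtrees toward the end of the ordering.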