Results 1–10 of 42
LOQO: An interior point code for quadratic programming
, 1994
Cited by 156 (9 self)
This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. We focus mainly on the algorithm as it applies to linear and quadratic programming, with only brief mention of the extensions to convex and general nonlinear programming, since a detailed paper describing these extensions was published recently elsewhere. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that problems expressed in the industry-standard MPS format can be formulated in such a way as to ensure quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.
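As a sketch of why quasidefiniteness matters (illustrative NumPy code with made-up data, not LOQO itself): a symmetric quasidefinite matrix, i.e. one of the form [[-E, Aᵀ], [A, F]] with E and F symmetric positive definite, admits an LDLᵀ factorization with purely diagonal D and no pivoting for stability, so a sparsity-oriented ordering can be chosen once and reused at every interior-point iteration:

```python
import numpy as np

def ldl_nopivot(K):
    """Unpivoted LDL^T; the 1x1 pivots stay nonzero when K is quasidefinite."""
    n = K.shape[0]
    L, d, S = np.eye(n), np.zeros(n), K.astype(float).copy()
    for k in range(n):
        d[k] = S[k, k]
        L[k + 1:, k] = S[k + 1:, k] / d[k]
        S[k + 1:, k + 1:] -= d[k] * np.outer(L[k + 1:, k], L[k + 1:, k])
    return L, d

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
E = 2.0 * np.eye(5)                 # stands in for the Hessian plus barrier terms
F = 0.5 * np.eye(3)                 # stands in for dual regularization
K = np.block([[-E, A.T], [A, F]])   # reduced KKT matrix, quasidefinite

L, d = ldl_nopivot(K)
print(np.allclose(L @ np.diag(d) @ L.T, K))   # True: factorization succeeded
print((d < 0).sum(), (d > 0).sum())           # 5 3 -- inertia matches the block sizes
```

For a general symmetric indefinite matrix (e.g. [[0, 1], [1, 0]]) no pivot order with purely 1×1 pivots exists, which is why general-purpose indefinite solvers resort to 2×2 pivoting.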
An Interior-Point Algorithm for Nonconvex Nonlinear Programming
 Computational Optimization and Applications
, 1997
Cited by 144 (13 self)
The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient and has the promise of greatly reducing solution times on at least some classes of models.
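The merit-function idea can be sketched as follows (an ℓ1 merit on a toy equality-constrained problem with a hand-picked search direction; the paper's actual merit function and step computation differ): the step is backtracked until the merit decreases sufficiently, which globalizes the Newton-like step.

```python
import numpy as np

# Merit function phi(x) = f(x) + nu*|c(x)| for min f s.t. c(x) = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2           # objective
c = lambda x: x[0] + x[1] - 1.0               # equality constraint c(x) = 0
nu = 10.0                                     # penalty weight on infeasibility
phi = lambda x: f(x) + nu * abs(c(x))

x = np.array([2.0, -1.0])                     # feasible but non-optimal point
d = np.array([-1.5, 1.5])                     # direction toward the optimum (0.5, 0.5)

# Armijo backtracking on the merit function. Since d sums to zero, c is
# unchanged along d, so the directional derivative of phi is that of f.
alpha, eta = 1.0, 1e-4
deriv = 2 * x @ d
while phi(x + alpha * d) > phi(x) + eta * alpha * deriv:
    alpha *= 0.5
x = x + alpha * d
print(x, phi(x))   # full step accepted: x -> (0.5, 0.5), merit drops from 5.0 to 0.5
```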
Interior-point methods for nonconvex nonlinear programming: Filter methods and merit functions
 Computational Optimization and Applications
, 2002
Cited by 84 (7 self)
In this paper, we present global and local convergence results for an interior-point method for nonlinear programming and analyze the computational performance of its implementation. The algorithm uses an ℓ1 penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb’s strictly feasible interior-point method [12]. The global convergence of the algorithm is proved under mild assumptions, and local analysis shows that it converges Q-quadratically for a large class of problems. The proposed approach is the first to simultaneously have all of the following properties while solving a general nonconvex nonlinear programming problem: (1) the convergence analysis does not assume boundedness of dual iterates, (2) local convergence does not require the Linear Independence Constraint Qualification, (3) the solution of the penalty problem is shown to locally converge to optima that may not satisfy the Karush-Kuhn-Tucker conditions, and (4) the algorithm is applicable to mathematical programs with equilibrium constraints. Numerical testing on a set of general nonlinear programming problems, including degenerate and infeasible problems, confirms the theoretical results. We also provide comparisons to a highly efficient nonlinear solver and thoroughly analyze the effects of enforcing theoretical convergence guarantees on the computational performance of the algorithm.
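The ℓ1 relaxation can be sketched on a one-variable toy problem (illustrative only; the paper embeds this inside an interior-point method, whereas the sketch calls a generic SQP solver): each constraint g(x) ≤ 0 is relaxed to g(x) ≤ s with an elastic variable s ≥ 0, and ρ·s is added to the objective, so the penalty problem is always feasible and, once ρ exceeds the optimal multiplier, exact.

```python
from scipy.optimize import minimize

# Toy problem: minimize (x - 2)^2 subject to x <= 1. The multiplier at the
# solution x* = 1 is 2, so any penalty rho > 2 makes the l1 penalty exact.
rho = 10.0

# Elastic reformulation over z = (x, s): minimize (x - 2)^2 + rho*s
# subject to s >= x - 1 and s >= 0; s absorbs any violation of x <= 1.
obj = lambda z: (z[0] - 2.0) ** 2 + rho * z[1]
cons = [{"type": "ineq", "fun": lambda z: z[1] - (z[0] - 1.0)},  # s >= x - 1
        {"type": "ineq", "fun": lambda z: z[1]}]                 # s >= 0
res = minimize(obj, [0.0, 1.0], method="SLSQP", constraints=cons)
x, s = res.x
print(round(x, 4), round(s, 4))   # x ~ 1.0, s ~ 0.0: the penalty is exact here
```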
Interior Methods for Constrained Optimization
 Acta Numerica
, 1992
Cited by 83 (3 self)
Interior methods for optimization were widely used in the 1960s, primarily in the form of barrier methods. However, they were not seriously applied to linear programming because of the dominance of the simplex method. Barrier methods fell from favour during the 1970s for a variety of reasons, including their apparent inefficiency compared with the best available alternatives. In 1984, Karmarkar's announcement of a fast polynomial-time interior method for linear programming caused tremendous excitement in the field of optimization. A formal connection can be shown between his method and classical barrier methods, which have consequently undergone a renaissance in interest and popularity. Most papers published since 1984 have concentrated on issues of computational complexity in interior methods for linear programming. During the same period, implementations of interior methods have displayed great efficiency in solving many large linear programs of ever-increasing size. Interior methods...
Interior methods for nonlinear optimization
 SIAM Review
, 2002
Cited by 76 (4 self)
Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
Preconditioning indefinite systems in interior point methods for optimization
 Computational Optimization and Applications
, 2004
Cited by 44 (13 self)
Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today’s codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior-point methods causes unavoidable ill-conditioning of the linear systems and, hence, iterative methods fail to provide sufficient accuracy unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve significantly sparser factorizations than those used in direct approaches, they still capture most of the numerical properties of the preconditioned system. A spectral analysis of the preconditioned matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public-domain large linearly constrained convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used. Keywords: interior-point methods, iterative solvers, preconditioners
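The "unavoidable ill-conditioning" is easy to reproduce (a synthetic sketch with random data, not the paper's test problems): as the barrier parameter μ shrinks, the scaling matrix Θ = X⁻¹S develops entries of order μ and 1/μ, and the condition number of the augmented system blows up.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 10
A = rng.standard_normal((m, n))
Q = np.eye(n)                      # Hessian of a convex QP

conds = []
for mu in [1e-1, 1e-4, 1e-8]:
    # Near a solution the complementary pairs split: some x_j -> 0 with
    # s_j = O(1) and vice versa, so Theta = X^{-1}S has entries ~1/mu and ~mu.
    theta = np.concatenate([np.full(n // 2, mu), np.full(n - n // 2, 1.0 / mu)])
    K = np.block([[-(Q + np.diag(theta)), A.T], [A, np.zeros((m, m))]])
    conds.append(np.linalg.cond(K))
    print(f"mu = {mu:.0e}:  cond(K) = {conds[-1]:.1e}")
```

Direct LDLᵀ factorizations cope with this structured ill-conditioning surprisingly well, while unpreconditioned Krylov methods stall; that gap is what the paper's incomplete-Cholesky-type preconditioners address.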
Parallel Interior-Point Solver for Structured Quadratic Programs: Application to Financial Planning Problems
, 2003
Cited by 41 (20 self)
Many practical large-scale optimization problems are not only sparse, but also display some form of block structure, such as primal or dual block-angular structure. Often these structures are nested: each block of the coarse top-level structure is block-structured itself. Problems with these characteristics appear frequently in stochastic programming but also in other areas such as telecommunication network modelling. We present a linear algebra library tailored for problems with such structure that is used inside an interior-point solver for convex quadratic programming problems. Due to its object-oriented design it can be used to exploit virtually any nested block structure arising in practical problems, eliminating the need for highly specialised linear algebra modules to be written separately for every type of problem. Through a careful implementation we achieve almost automatic parallelisation of the linear algebra. The efficiency of the approach is illustrated on several problems arising in financial planning, namely in asset and liability management. The problems are modelled as...
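The block-angular idea can be illustrated with a dense toy system (hypothetical structure and sizes; the paper's object-oriented library handles nested sparse blocks): the diagonal blocks B_i are solved independently, in parallel if desired, and only a small Schur complement in the linking variables is treated sequentially.

```python
import numpy as np

rng = np.random.default_rng(3)
k, nb, nl = 3, 4, 2                     # 3 diagonal blocks, 2 linking columns
Bs = [np.eye(nb) + 0.1 * rng.standard_normal((nb, nb)) for _ in range(k)]
Cs = [0.3 * rng.standard_normal((nb, nl)) for _ in range(k)]
D = 10.0 * np.eye(nl)
bs = [rng.standard_normal(nb) for _ in range(k)]
d = rng.standard_normal(nl)

# System [B_i on the diagonal, linking columns C_i, corner D]: each B_i
# solve is independent; the linking variables y come from the Schur
# complement S = D - sum_i C_i^T B_i^{-1} C_i.
S = D - sum(C.T @ np.linalg.solve(B, C) for B, C in zip(Bs, Cs))
rhs = d - sum(C.T @ np.linalg.solve(B, b) for B, C, b in zip(Bs, Cs, bs))
y = np.linalg.solve(S, rhs)
xs = [np.linalg.solve(B, b - C @ y) for B, C, b in zip(Bs, Cs, bs)]

# Cross-check against the assembled monolithic system.
Z = np.zeros((nb, nb))
K = np.block([[Bs[0], Z, Z, Cs[0]],
              [Z, Bs[1], Z, Cs[1]],
              [Z, Z, Bs[2], Cs[2]],
              [Cs[0].T, Cs[1].T, Cs[2].T, D]])
z = np.linalg.solve(K, np.concatenate(bs + [d]))
print(np.allclose(np.concatenate(xs + [y]), z))   # True
```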
On a Homogeneous Algorithm for the Monotone Complementarity Problem
 Mathematical Programming
, 1995
Cited by 24 (3 self)
We present a generalization of a homogeneous self-dual linear programming (LP) algorithm to solving the monotone complementarity problem (MCP). The algorithm does not need to use any "big-M" parameter or two-phase method, and it generates either a solution converging towards feasibility and complementarity simultaneously or a certificate proving infeasibility. Moreover, if the MCP is polynomially solvable with an interior feasible starting point, then it can be polynomially solved without using or knowing such information at all. To our knowledge, this is the first interior-point and infeasible-starting algorithm for solving the MCP that possesses these desired features. Preliminary computational results are presented. Key words: monotone complementarity problem, homogeneous and self-dual, infeasible-starting algorithm.
Solving reduced KKT systems in barrier methods for linear and quadratic programming
, 1991
Cited by 22 (7 self)
In barrier methods for constrained optimization, the main work lies in solving large linear systems Kp = r, where K is symmetric and indefinite. For linear programs, these KKT systems are usually reduced to smaller positive-definite systems AH⁻¹Aᵀq = s, where H is a large principal submatrix of K. These systems can be solved more efficiently, but AH⁻¹Aᵀ is typically more ill-conditioned than K. In order to improve the numerical properties of barrier implementations, we discuss the use of “reduced KKT systems”, whose dimension and condition lie somewhere in between those of K and AH⁻¹Aᵀ. The approach applies to linear programs and to positive semidefinite quadratic programs whose Hessian H is at least partially diagonal. We have implemented reduced KKT systems in a primal-dual algorithm for linear programming, based on the sparse indefinite solver MA27 from the Harwell Subroutine Library. Some features of the algorithm are presented, along with results on the netlib LP test set.
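The trade-off the paper targets can be seen numerically (synthetic data; this contrasts only the two extremes between which reduced KKT systems interpolate): eliminating all of H yields the small normal-equations matrix AH⁻¹Aᵀ, but with barrier diagonals spread over many orders of magnitude its conditioning often degrades well beyond that of the full K.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 12
A = rng.standard_normal((m, n))
# Barrier diagonal H with widely spread entries, as arises late in a
# barrier method when some variables approach their bounds.
H = np.diag(10.0 ** rng.uniform(-6, 6, n))

K = np.block([[-H, A.T], [A, np.zeros((m, m))]])   # full KKT system (17x17)
N = A @ np.linalg.solve(H, A.T)                    # normal equations AH^{-1}A^T (5x5)

print(f"cond(K)         = {np.linalg.cond(K):.1e}")
print(f"cond(AH^-1A^T)  = {np.linalg.cond(N):.1e}")
```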
Scalable Parallel Benders Decomposition for Stochastic Linear Programming
 Parallel Computing
, 1997
Cited by 18 (6 self)
We develop a scalable parallel implementation of the classical Benders decomposition algorithm for two-stage stochastic linear programs. Using a primal-dual path-following algorithm for solving the scenario subproblems, we develop a parallel implementation that alleviates the difficulties of load balancing. Furthermore, the dual and primal step calculations can be implemented using a data-parallel programming paradigm. With this approach the code effectively utilizes both the multiple independent processors and the vector units of the target architecture, the Connection Machine CM-5. The usually limiting master program is solved very efficiently using the interior-point code LOQO on the front-end workstation. The implementation scales almost perfectly with problem and machine size. Extensive computational testing is reported with several large problems with up to 2 million constraints and 13.8 million variables.
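The Benders loop can be sketched on a single-scenario toy (made-up numbers, far simpler than the paper's CM-5 implementation): the master proposes a first-stage x, the recourse subproblem's dual solution prices the shortfall, and each dual solution becomes an optimality cut on the epigraph variable θ.

```python
from scipy.optimize import linprog

# Toy two-stage problem: min x + Q(x), x >= 0, where the recourse is
#   Q(x) = min { 2y : y >= 3 - x, y >= 0 }  (pay 2 per unit of unmet demand 3).
# Benders: the master carries x and an epigraph variable theta >= Q(x),
# tightened by cuts theta >= pi*(3 - x) built from subproblem duals pi.
cuts = []                                     # dual prices defining optimality cuts
for _ in range(10):
    # Master: min x + theta  s.t.  theta + pi*x >= 3*pi for each cut.
    A_ub = [[-pi, -1.0] for pi in cuts] or None
    b_ub = [-3.0 * pi for pi in cuts] or None
    m = linprog([1.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                bounds=[(0, None), (0, None)])
    x, theta = m.x
    # Subproblem dual: max pi*(3 - x) over 0 <= pi <= 2.
    d = linprog([-(3.0 - x)], bounds=[(0.0, 2.0)])
    pi, sub_val = d.x[0], -d.fun
    if sub_val <= theta + 1e-9:               # cut adds nothing: optimal
        break
    cuts.append(pi)
print(round(x + sub_val, 6))                  # optimal total cost
```

The loop terminates when the subproblem value no longer exceeds θ, i.e. when the master's lower bound meets the upper bound x + Q(x); on this instance that happens after one cut, at x = 3.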