Results 1–10 of 18
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION, 1978
Abstract

Cited by 108 (21 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques, as in the revised simplex method, with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
Solving Real-World Linear Programs: A Decade and More of Progress
Operations Research, 2002
Abstract

Cited by 81 (3 self)
This paper is an invited contribution to the 50th anniversary issue of the journal Operations Research, published by the Institute for Operations Research and the Management Sciences (INFORMS). It describes one person's perspective on the development of computational tools for linear programming. The paper begins with a short personal history, followed by historical remarks covering the roughly 40 years of linear-programming developments that predate my own involvement in this subject. It concludes with a more detailed look at the evolution of computational linear programming since 1987.
SPARSE MATRIX METHODS IN OPTIMIZATION, 1984
Abstract

Cited by 18 (4 self)
Optimization algorithms typically require the solution of many systems of linear equations B_k y_k = b_k. When large numbers of variables or constraints are present, these linear systems can account for much of the total computation time. Both direct and iterative equation solvers are needed in practice. Unfortunately, most off-the-shelf solvers are designed for single systems, whereas optimization problems give rise to hundreds or thousands of systems. To avoid refactorization, or to speed the convergence of an iterative method, it is essential to note that B_k is related to B_{k-1}. We review various sparse matrices that arise in optimization and discuss compromises that are currently being made in dealing with them. Since significant advances continue to be made with single-system solvers, we give special attention to methods that allow such solvers to be used repeatedly on a sequence of modified systems (e.g., the product-form update; use of the Schur complement). The speed of factorizing a matrix then becomes relatively less important than the efficiency of subsequent solves with very many right-hand sides. At the same time, we hope that future improvements to linear-equation software will be oriented more specifically to the case of related matrices B_k.
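The product-form and Schur-complement ideas above both amount to reusing one factorization of B across a sequence of modified systems. As an illustrative sketch (not code from the paper), the rank-one case can be handled with the Sherman-Morrison formula, using only solves with the original factors:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_rank1_modified(solve_B, u, v, b):
    """Solve (B + u v^T) x = b via Sherman-Morrison, reusing solves
    with the original B instead of refactorizing the modified matrix."""
    y = solve_B(b)                      # B^{-1} b
    w = solve_B(u)                      # B^{-1} u
    return y - (v @ y) / (1.0 + v @ w) * w

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned test matrix
u, v, b = (rng.standard_normal(n) for _ in range(3))

factors = lu_factor(B)                             # factor B once
x = solve_rank1_modified(lambda r: lu_solve(factors, r), u, v, b)
assert np.allclose((B + np.outer(u, v)) @ x, b)    # agrees with a direct solve
```

Replacing one column of a simplex basis is exactly such a rank-one change, which is why updates of this kind let B_k be handled without a fresh factorization at every iteration.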
An Approximate Minimum Degree Column Ordering Algorithm, 1998
Abstract

Cited by 11 (2 self)
An approximate minimum degree column ordering algorithm (COLAMD) for preordering an unsymmetric sparse matrix A prior to ...
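The abstract is truncated here, but the idea, computing a fill-reducing column permutation before an unsymmetric LU factorization, can be tried through SciPy's SuperLU wrapper, which exposes a COLAMD option (a usage sketch, not the authors' code):

```python
import numpy as np
from scipy.sparse import eye, random as sprandom
from scipy.sparse.linalg import splu

# Random unsymmetric sparse matrix, shifted by the identity to keep it nonsingular.
A = (sprandom(200, 200, density=0.02, random_state=0) + eye(200)).tocsc()

lu = splu(A, permc_spec="COLAMD")       # LU with a COLAMD column preordering
fill = (lu.L.nnz + lu.U.nnz) / A.nnz    # fill-in relative to nnz(A)

b = np.ones(200)
x = lu.solve(b)
assert np.allclose(A @ x, b)
```

Compared with the natural ordering (`permc_spec="NATURAL"`), a COLAMD preordering typically yields noticeably sparser L and U factors on unsymmetric problems.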
Improving the Numerical Stability and the Performance of a Parallel Sparse Solver
Computers Math. Applic.
Abstract

Cited by 3 (1 self)
Coarse-grain parallel codes for solving sparse systems of linear algebraic equations can be developed in several different ways. The following procedure is suitable for some parallel computers. A preliminary reordering of the matrix is first applied to move as many zero elements as possible to the lower left corner. After that, the matrix is partitioned into large blocks, and the blocks in the lower left corner contain only zero elements. An attempt to obtain a good load balance is made by allowing the diagonal blocks to be rectangular. While the algorithm based on the above ideas has good parallel properties, some stability problems may arise during the factorization because the pivotal search is restricted to the diagonal blocks. A simple a priori procedure was used in a previous version in an attempt to stabilize the algorithm. In this paper it is shown that three enhanced stability devices can successfully be incorporated in the algorithm so that it is further stabilized ...
Alternative methods for representing the inverse of linear programming basis matrices
to appear in Progress in Mathematical Programming 1975–1989, Special Publication, Australian Society of Operational Research, 1990
USE OF THE P4 AND P5 ALGORITHMS FOR IN-CORE FACTORIZATION OF SPARSE MATRICES
Abstract
Variants of the P4 algorithm of Hellerman and Rarick and the P5 algorithm of Erisman, Grimes, Lewis, and Poole, used for generating a bordered block triangular form for the in-core solution of sparse sets of linear equations, are considered. A particular concern is maintaining numerical stability. Methods for ensuring stability, and the extra cost they entail, are discussed. Different factorization schemes are also examined. The uses of matrix modification and iterative refinement are considered, and the best variant is compared with an established code for the solution of unsymmetric sparse sets of linear equations. The established code is usually found to be the most effective method.
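Of the stabilizing devices mentioned, iterative refinement is easy to sketch in isolation: keep reusing the (possibly inaccurate) factors for cheap corrections, while computing residuals against the original matrix. A minimal illustration, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine(A, factors, b, iters=3):
    """Iterative refinement: improve an LU-based solution by repeatedly
    solving for a correction with the same, cheap-to-reuse factors."""
    x = lu_solve(factors, b)
    for _ in range(iters):
        r = b - A @ x                 # residual against the original A
        x = x + lu_solve(factors, r)  # correction from the existing factors
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)) + 8 * np.eye(8)   # well-conditioned test matrix
b = rng.standard_normal(8)
x = refine(A, lu_factor(A), b)
assert np.linalg.norm(b - A @ x) <= 1e-10 * np.linalg.norm(b)
```

When the factors come from a restricted pivoting strategy, as in the bordered block triangular schemes above, a few refinement steps can recover accuracy that the factorization alone would lose.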
Reordering of Sparse Matrices for Parallel Processing, 1994
Abstract
this report is based on ideas from graph theory. Graph theory has often been used in sparse matrix studies, especially in connection with symmetric positive definite systems [31]. However, the use of graph theory in connection with general sparse matrices is not as widespread, although some applications exist, based on bipartite graphs; see, e.g., [32]. The application used in this work is different from the above-mentioned applications.
network flows, 1994
Abstract
This work presents a new code for solving the multicommodity network flow problem with a linear or nonlinear objective function, considering additional linear side constraints that link arcs of the same or different commodities. For the multicommodity network flow problem through primal partitioning, the code implements a specialization of Murtagh and Saunders' strategy of dividing the set of variables into basic, nonbasic and superbasic. Several tests are reported, using random problems obtained from different network generators and real problems arising from the fields of long- and short-term hydrothermal scheduling of electricity generation and traffic assignment, with sizes of up to 150,000 variables and 45,000 constraints. The performance of the code developed is compared with that of alternative methodologies for solving the same problems: a general-purpose linear and nonlinear constrained optimization code, a specialised linear multicommodity network flow code and a primal-dual interior point code.