Results 1–6 of 6
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
, 1978
Abstract

Cited by 75 (11 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
Solving Real-World Linear Programs: A Decade and More of Progress
 Operations Research
, 2002
Abstract

Cited by 59 (1 self)
This paper is an invited contribution to the 50th anniversary issue of the journal Operations Research, published by the Institute for Operations Research and the Management Sciences (INFORMS). It describes one person's perspective on the development of computational tools for linear programming. The paper begins with a short, personal history, followed by historical remarks covering some 40 years of linear-programming developments that predate my own involvement in this subject. It concludes with a more detailed look at the evolution of computational linear programming since 1987.
An Approximate Minimum Degree Column Ordering Algorithm
, 1998
Abstract

Cited by 10 (2 self)
An approximate minimum degree column ordering algorithm (COLAMD) for preordering an unsymmetric sparse matrix A prior to...
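As a small illustration of where such a column preordering is applied in practice (this example is not from the paper itself): SciPy's sparse LU factorization exposes COLAMD as one of its column-permutation options, so the ordering can be requested directly when factorizing an unsymmetric sparse matrix.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Small unsymmetric sparse system A x = b.
A = csc_matrix(np.array([[4.0, 0.0, 1.0],
                         [0.0, 3.0, 2.0],
                         [1.0, 0.0, 5.0]]))
b = np.array([1.0, 2.0, 3.0])

# permc_spec="COLAMD" asks SuperLU to preorder the columns of A
# with the approximate minimum degree column ordering before
# factorizing, to limit fill-in.
lu = splu(A, permc_spec="COLAMD")
x = lu.solve(b)
print(np.allclose(A @ x, b))  # True: the factorization solves the system
```

The ordering only affects sparsity of the LU factors, not the computed solution, so any of SuperLU's `permc_spec` choices would return the same `x` up to rounding.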
Improving The Numerical Stability And The Performance Of A Parallel Sparse Solver
 Computers Math. Applic.
Abstract

Cited by 2 (0 self)
Coarse-grain parallel codes for solving sparse systems of linear algebraic equations can be developed in several different ways. The following procedure is suitable for some parallel computers. A preliminary reordering of the matrix is first applied to move as many zero elements as possible to the lower left corner. After that the matrix is partitioned into large blocks, and the blocks in the lower left corner contain only zero elements. An attempt to obtain a good load balance is carried out by allowing the diagonal blocks to be rectangular. While the algorithm based on the above ideas has good parallel properties, some stability problems may arise during the factorization because the pivotal search is restricted to the diagonal blocks. A simple a priori procedure has been used in a previous version in an attempt to stabilize the algorithm. In this paper it is shown that three enhanced stability devices can successfully be incorporated in the algorithm so that it is further stabilized ...
Reordering of Sparse Matrices for Parallel Processing
, 1994
Abstract
this report is based on ideas from graph theory. Graph theory has often been used in sparse matrix studies, especially in connection with symmetric positive definite systems [31]. However, the use of graph theory in connection with general sparse matrices is not as widespread, although some applications exist, based on bipartite graphs; see, e.g., [32]. The application used in this work is different from the above-mentioned applications.
SPARSE MATRIX METHODS IN OPTIMIZATION*
Abstract
Abstract. Optimization algorithms typically require the solution of many systems of linear equations B_k y_k = b_k. When large numbers of variables or constraints are present, these linear systems could account for much of the total computation time. Both direct and iterative equation solvers are needed in practice. Unfortunately, most of the off-the-shelf solvers are designed for single systems, whereas optimization problems give rise to hundreds or thousands of systems. To avoid refactorization, or to speed the convergence of an iterative method, it is essential to note that B_k is related to B_{k-1}. We review various sparse matrices that arise in optimization, and discuss compromises that are currently being made in dealing with them. Since significant advances continue to be made with single-system solvers, we give special attention to methods that allow such solvers to be used repeatedly on a sequence of modified systems (e.g., the product-form update; use of the Schur complement). The speed of factorizing a matrix then becomes relatively less important than the efficiency of subsequent solves with very many right-hand sides. At the same time, we hope that future improvements to linear-equation software will be oriented more specifically to the case of related matrices B_k.
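The idea of reusing one factorization across a sequence of related systems can be sketched as follows (this is an illustration with SciPy, not the paper's own implementation, and it uses the Sherman-Morrison identity as a simple stand-in for the product-form and Schur-complement updates the abstract names): factor B once, then handle both many right-hand sides and a rank-one modified matrix without refactorizing.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Factor B once; every later solve reuses the LU factors.
B = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
lu = splu(B)

# Many right-hand sides: each triangular solve is cheap
# compared with computing a fresh factorization.
for rhs in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])):
    assert np.allclose(B @ lu.solve(rhs), rhs)

# A rank-one modified system (B + u v^T) y = b can also reuse the
# factors via the Sherman-Morrison identity:
#   y = x - (v^T x) / (1 + v^T z) * z,  where B x = b and B z = u.
u = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 3.0])
x = lu.solve(b)
z = lu.solve(u)
y = x - (v @ x) / (1.0 + v @ z) * z
print(np.allclose((B.toarray() + np.outer(u, v)) @ y, b))  # True
```

The update requires 1 + v^T z to be nonzero (i.e., the modified matrix must remain nonsingular); production codes such as those the abstract surveys maintain these updates in more numerically careful forms.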