LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
, 1978
Abstract

Cited by 108 (21 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques, as in the revised simplex method, with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
Border-Block Triangular Form and Conjunction Schedule in Image Computation
 in Formal Methods in Computer-Aided Design
, 2000
Abstract

Cited by 40 (6 self)
Conjunction scheduling in image computation consists of clustering the parts of a transition relation and ordering the clusters, so that the sizes of the BDDs for the intermediate results of image computation stay small. We present an approach based on the analysis and permutation of the dependence matrix of the transition relation. Our algorithm computes a bordered-block lower triangular form of the matrix that heuristically minimizes the active lifetime of variables, that is, the number of conjunctions in which the variables participate. The ordering procedure guides a clustering algorithm based on the affinity of the transition relation parts. The ordering procedure is then applied again to define the cluster conjunction schedule. Our experimental results show the effectiveness of the new algorithm. 1 Introduction: Symbolic algorithms for model checking [11] spend most of the time computing the predecessors or successors of sets of states. The algorithms for these image ...
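The active-lifetime objective this abstract describes can be illustrated with a toy scoring function (a hypothetical sketch, not the paper's algorithm): each cluster is reduced to its support set of variables, and a variable is counted as live from the first conjunction that uses it to the last.

```python
# Hypothetical sketch of an active-lifetime score for a conjunction
# schedule (not the paper's algorithm). Each cluster of the transition
# relation is modeled only by its support set; a variable is live from
# the first conjunction that uses it until the last one that does.

def total_lifetime(schedule):
    """Sum over variables of (last use - first use + 1) in the schedule."""
    first, last = {}, {}
    for i, support in enumerate(schedule):
        for v in support:
            first.setdefault(v, i)
            last[v] = i
    return sum(last[v] - first[v] + 1 for v in first)

# Reordering the same clusters so both uses of 'a' are adjacent
# shortens its lifetime and lowers the score.
scattered = [{"a", "b"}, {"c"}, {"a", "c"}]
grouped = [{"a", "b"}, {"a", "c"}, {"c"}]
print(total_lifetime(scattered), total_lifetime(grouped))  # 6 5
```

A schedule-search heuristic of this kind would then pick the ordering with the lower total.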
Solving Large Nonsymmetric Sparse Linear Systems Using MCSPARSE
 PARALLEL COMPUTING
, 1996
A New Row Ordering Strategy for Frontal Solvers.
 Numerical Linear Algebra with Applications
, 1998
Abstract

Cited by 12 (10 self)
The frontal method is a variant of Gaussian elimination that has been widely used since the mid-1970s. In the innermost loop of the computation the method exploits dense linear algebra kernels, which are straightforward to vectorize and parallelize. This makes the method attractive for modern computer architectures. However, unless the matrix can be ordered so that the front is never very large, frontal methods can require many more floating-point operations for factorization than other approaches. We use the idea of a row graph of an unsymmetric matrix, combined with a variant of Sloan's profile reduction algorithm, to reorder the rows. We also look at using the spectral method applied to the row graph. Numerical experiments are performed on a range of practical problems. Our new row ordering algorithm is shown to produce orderings that are a significant improvement on those obtained with existing algorithms. Numerical results also compare the performance of the frontal solver MA42 on t...
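The row graph this abstract relies on has a simple definition: rows are vertices, and two rows are adjacent when they have a nonzero in a common column. A minimal pattern-only sketch (function name and input representation are hypothetical):

```python
from collections import defaultdict

# Hedged sketch of the row graph of an unsymmetric sparse matrix: rows
# are vertices; two rows are adjacent when they share a nonzero column.
# The matrix is given pattern-only, as one set of column indices per row.

def row_graph(row_patterns):
    by_col = defaultdict(list)              # column -> rows nonzero there
    for r, cols in enumerate(row_patterns):
        for c in cols:
            by_col[c].append(r)
    adj = {r: set() for r in range(len(row_patterns))}
    for rows in by_col.values():
        for i in rows:
            for j in rows:
                if i != j:
                    adj[i].add(j)
    return adj

# Rows 0 and 1 share column 1; row 2 touches only column 3.
adj = row_graph([{0, 1}, {1, 2}, {3}])
```

Ordering heuristics such as Sloan's profile reduction can then be run on this graph instead of on the (possibly unsymmetric) matrix itself.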
Frontal solvers for process engineering: local row ordering strategies
 Computers in Chemical Engineering
, 1998
Abstract

Cited by 6 (0 self)
The solution of chemical process simulation and optimization problems on today's high-performance supercomputers requires algorithms that can take advantage of vector and parallel processing when solving the large, sparse matrices that arise. The frontal method can be highly efficient in this context due to its ability to make use of vectorizable dense matrix kernels on a relatively small frontal matrix in the innermost loop of the computation. However, the ordering of the rows in the coefficient matrix strongly affects the size of the frontal matrix and thus the solution time. If a poor row ordering is used, it may make the frontal method uncompetitive with other methods. We describe here a graph-theoretical framework for identifying suitable row orderings that specifically addresses the issue of frontal matrix size. This leads to local, heuristic methods which aim to limit frontal matrix growth in the row and/or column dimensions. Results on a wide range of test problems indicate that improvements in frontal solver performance can often be obtained by the use of a restricted minimum column degree heuristic, which can be viewed as a variation of the minimum degree heuristic used in other contexts. Results also indicate that the natural unit-block structure of process simulation problems provides a quite reasonable ordering.
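As a rough illustration of a local heuristic that limits frontal growth in the column dimension (a plausible sketch in the spirit described, not the authors' restricted minimum column degree method):

```python
# Hypothetical greedy row ordering: repeatedly append the remaining row
# that introduces the fewest columns not already present in the front,
# so the front grows slowly in the column dimension. A real frontal
# solver also retires fully summed columns from the front; that is
# omitted here for brevity.

def greedy_row_order(row_patterns):
    remaining = set(range(len(row_patterns)))
    front, order = set(), []
    while remaining:
        # Ties broken by row index for determinism.
        r = min(remaining, key=lambda r: (len(row_patterns[r] - front), r))
        order.append(r)
        front |= row_patterns[r]
        remaining.remove(r)
    return order

# Row 1 is chosen first (2 new columns), then row 0 (1 new), then row 2.
order = greedy_row_order([{0, 1, 2}, {0, 1}, {2, 3}])
```

Even this naive greedy shows the flavor of a local ordering: each decision looks only at how the next row enlarges the current front.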
The implementation of a Lagrangian-based algorithm for sparse nonlinear constraints
, 1980
Abstract

Cited by 2 (1 self)
Reproduction in whole or in part is permitted for any purpose of the United States Government. This document has been approved for public release and sale; its distribution is unlimited.
Alternative methods for representing the inverse of linear programming basis matrices, to appear
 in Progress in Mathematical Programming 1975-1989, Special publication, Australian Society of Operational Research, Editor
, 1990
Block Triangular Orderings and Factors for Sparse Matrices in LP
, 1997
Abstract

Cited by 1 (1 self)
Sparse matrix methods for factorizing a nonsingular matrix are considered. The possibility of using the little-known technique of implicit LU factors is explored, particularly in the context of simplex-like methods for Linear Programming. The concept of a spike-preserving ordering is introduced, and a new method for calculating such an ordering is described, based on the recursive use of Tarjan's algorithm for block triangularization. Experiments are described in which the new method is compared with that based on the use of Markowitz orderings.
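Tarjan's algorithm, which the abstract's ordering method applies recursively, finds the strongly connected components of the matrix's directed graph; concatenating the components in the order Tarjan emits them (sinks first) gives a symmetric permutation to block lower triangular form, assuming a zero-free diagonal. A compact sketch (the paper's spike-preserving recursive scheme is more elaborate):

```python
from itertools import count

def tarjan_scc(adj):
    """Strongly connected components of a digraph, emitted sinks-first."""
    index, low = {}, {}
    stack, on_stack, sccs = [], set(), []
    counter = count()

    def visit(v):
        index[v] = low[v] = next(counter)
        stack.append(v)
        on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in adj:
        if v not in index:
            visit(v)
    return sccs

# Edge i -> j models a nonzero a[i][j]. Vertices 0 and 1 form a cycle,
# so they share a diagonal block; concatenating the components yields
# a permutation that places each block together.
components = tarjan_scc({0: [1], 1: [0], 2: [0]})
perm = [v for comp in components for v in comp]
```

The recursive `visit` is fine for small examples; for large graphs an iterative variant avoids Python's recursion limit.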
Reordering of Sparse Matrices for Parallel Processing
, 1994
Abstract
This report is based on ideas from graph theory. Graph theory has often been used in sparse matrix studies, especially in connection with symmetric positive definite systems [31]. However, the use of graph theory in connection with general sparse matrices is not as widespread, although some applications exist, based on bipartite graphs; see, e.g., [32]. The application used in this work is different from the above-mentioned applications.
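The bipartite-graph model mentioned here is straightforward to state: one vertex per row, one per column, and an edge for every nonzero entry, so no symmetry assumption is needed. A minimal sketch (names and input format are hypothetical):

```python
# Hedged sketch of the bipartite-graph view of a general sparse matrix:
# one vertex per row, one per column, and an edge (r, c) for every
# nonzero entry. Unlike the adjacency graph of a symmetric matrix,
# this model applies to any sparsity pattern.

def bipartite_graph(entries):
    """entries: iterable of (row, col) positions of nonzero entries."""
    row_adj, col_adj = {}, {}
    for r, c in entries:
        row_adj.setdefault(r, set()).add(c)
        col_adj.setdefault(c, set()).add(r)
    return row_adj, col_adj

# A 2x3 pattern with nonzeros at (0,0), (0,2), and (1,1).
rows, cols = bipartite_graph([(0, 0), (0, 2), (1, 1)])
```

Matchings and orderings for unsymmetric matrices are typically phrased on exactly this structure.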