Results 1–10 of 16
Robust Ordering of Sparse Matrices using Multisection
 Department of Computer Science, York University
, 1996
"... In this paper we provide a robust reordering scheme for sparse matrices. The scheme relies on the notion of multisection, a generalization of bisection. The reordering strategy is demonstrated to have consistently good performance in terms of fill reduction when compared with multiple minimum degree ..."
Abstract

Cited by 48 (2 self)
 Add to MetaCart
In this paper we provide a robust reordering scheme for sparse matrices. The scheme relies on the notion of multisection, a generalization of bisection. The reordering strategy is demonstrated to have consistently good performance in terms of fill reduction when compared with multiple minimum degree and generalized nested dissection. Experimental results show that by using multisection, we obtain an ordering which is consistently as good as or better than both for a wide spectrum of sparse problems. 1 Introduction It is well recognized that finding a fill-reducing ordering is crucial to the success of the numerical solution of sparse linear systems. For symmetric positive-definite systems, the minimum degree [38] and the nested dissection [11] orderings are perhaps the most popular ordering schemes. They represent two opposite approaches to the ordering problem. However, they share a common undesirable characteristic. Both schemes produce generally good orderings, but the ordering qua...
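The fill an ordering produces can be measured by symbolic elimination: eliminating a vertex makes its remaining neighbours a clique, and every edge so created is fill. A minimal pure-Python sketch of that evaluation (toy graph and function names are ours, not the paper's multisection code):

```python
# Sketch: count the fill produced by an elimination order via symbolic
# elimination. Pure-Python toy; illustrative only, not the paper's code.

def fill_count(adj, order):
    """Symbolically eliminate vertices in `order`; return the number of
    fill edges (new adjacencies created during elimination)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # working copy
    fill = 0
    for v in order:
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        # eliminating v makes its remaining neighbours a clique
        nbrs = list(nbrs)
        for i, a in enumerate(nbrs):
            for b in nbrs[i + 1:]:
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill += 1
    return fill

# Star graph: the order decides everything.
graph = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(fill_count(graph, [0, 1, 2, 3, 4]))  # 6: hub first makes the leaves a clique
print(fill_count(graph, [1, 2, 3, 4, 0]))  # 0: leaves first produce no fill
```

The same routine, run with a natural order versus a heuristic order, is how fill-reduction comparisons like those in the abstract are scored.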
Towards a tighter coupling of bottom-up and top-down sparse matrix ordering methods
 BIT
, 2001
"... Most stateoftheart ordering schemes for sparse matrices are a hybrid of a bottomup method such as minimum degree and a top down scheme such as George's nested dissection. In this paper we present an ordering algorithm that achieves a tighter coupling of bottomup and topdown methods. In our ..."
Abstract

Cited by 32 (0 self)
 Add to MetaCart
Most state-of-the-art ordering schemes for sparse matrices are a hybrid of a bottom-up method such as minimum degree and a top-down scheme such as George's nested dissection. In this paper we present an ordering algorithm that achieves a tighter coupling of bottom-up and top-down methods. In our methodology, vertex separators are interpreted as the boundaries of the remaining elements in an unfinished bottom-up ordering. As a consequence, we use bottom-up techniques such as quotient graphs and special node selection strategies for the construction of vertex separators. Once all separators have been found, we use them as a skeleton for the computation of several bottom-up orderings. Experimental results show that the orderings obtained by our scheme are in general better than those obtained by other popular ordering codes.
Nested-Dissection Orderings for Sparse LU with Partial Pivoting
 SIAM J. Matrix Anal. Appl
, 2000
"... . We describe the implementation and performance of a novel fillminimization ordering technique for sparse LU factorization with partial pivoting. The technique was proposed by Gilbert and Schreiber in 1980 but never implemented and tested. Like other techniques for ordering sparse matrices for ..."
Abstract

Cited by 18 (5 self)
 Add to MetaCart
We describe the implementation and performance of a novel fill-minimization ordering technique for sparse LU factorization with partial pivoting. The technique was proposed by Gilbert and Schreiber in 1980 but never implemented and tested. Like other techniques for ordering sparse matrices for LU with partial pivoting, our new method preorders the columns of the matrix (the row permutation is chosen by the pivoting sequence during the numerical factorization). Also like other methods, the column permutation Q that we select is a permutation that minimizes the fill in the Cholesky factor of QᵀAᵀAQ. Unlike existing column-ordering techniques, which all rely on minimum-degree heuristics, our new method is based on a nested-dissection ordering of AᵀA. Our algorithm, however, never computes a representation of AᵀA, which can be expensive. We only work with a representation of A itself. Our experiments demonstrate that the method is efficient and that it can reduce fill significantly relative to the best existing methods. The method reduces the LU running time on some very large matrices (tens of millions of nonzeros in the factors) by more than a factor of 2.
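The idea of ordering AᵀA without forming it can be sketched directly: the nonzero pattern of AᵀA is the column intersection graph of A, in which each row's nonzero columns form a clique. A small illustration (hypothetical toy pattern and names, not the paper's code):

```python
# Sketch: read the pattern of A^T A off A's rows -- each row's nonzero
# columns form a clique -- so the product A^T A is never multiplied out.

from itertools import combinations

def column_intersection_graph(row_patterns, ncols):
    """row_patterns: one set of nonzero column indices per row of A."""
    adj = {j: set() for j in range(ncols)}
    for cols in row_patterns:
        for a, b in combinations(sorted(cols), 2):
            adj[a].add(b)
            adj[b].add(a)
    return adj

# Toy 3x4 pattern: rows have nonzeros in columns {0,2}, {1,2,3}, {0,3}.
A_rows = [{0, 2}, {1, 2, 3}, {0, 3}]
for j, nbrs in column_intersection_graph(A_rows, 4).items():
    print(j, sorted(nbrs))
# 0 [2, 3]
# 1 [2, 3]
# 2 [0, 1, 3]
# 3 [0, 1, 2]
```

This graph is exactly the off-diagonal sparsity of AᵀA, and any symmetric ordering scheme (nested dissection included) can then be run on it.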
Sparse Numerical Linear Algebra: Direct Methods and Preconditioning
, 1996
"... Most of the current techniques for the direct solution of linear equations are based on supernodal or multifrontal approaches. An important feature of these methods is that arithmetic is performed on dense submatrices and Level 2 and Level 3 BLAS (matrixvector and matrixmatrix kernels) can be us ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
Most of the current techniques for the direct solution of linear equations are based on supernodal or multifrontal approaches. An important feature of these methods is that arithmetic is performed on dense submatrices, and Level 2 and Level 3 BLAS (matrix-vector and matrix-matrix kernels) can be used. Both sparse LU and QR factorizations can be implemented within this framework. Partitioning and ordering techniques have seen major activity in recent years. We discuss bisection and multisection techniques, extensions of orderings to block triangular form, and recent improvements and modifications to standard orderings such as minimum degree. We also study advances in the solution of indefinite systems and sparse least-squares problems. The desire to exploit parallelism has been responsible for many of the developments in direct methods for sparse matrices over the last ten years. We examine this aspect in some detail, illustrating how current techniques have been developed or ...
Performance Of Greedy Ordering Heuristics For Sparse Cholesky Factorization
, 1997
"... . Greedy algorithms for ordering sparse matrices for Cholesky factorization can be based on different metrics. Minimum degree, a popular and effective greedy ordering scheme, minimizes the number of nonzero entries in the rank1 update (degree) at each step of the factorization. Alternatively, minim ..."
Abstract

Cited by 15 (3 self)
 Add to MetaCart
Greedy algorithms for ordering sparse matrices for Cholesky factorization can be based on different metrics. Minimum degree, a popular and effective greedy ordering scheme, minimizes the number of nonzero entries in the rank-1 update (degree) at each step of the factorization. Alternatively, minimum deficiency minimizes the number of nonzero entries introduced (deficiency) at each step of the factorization. In this paper we develop two new heuristics: "modified minimum deficiency" (MMDF) and "modified multiple minimum degree" (MMMD). The former uses a metric similar to deficiency while the latter uses a degree-like metric. Our experiments reveal that, on average, MMDF orderings require 21% fewer factorization operations than minimum degree orderings, and MMMD orderings 15% fewer. MMMD is no more expensive to compute than minimum degree, while MMDF requires on average 30% more time than minimum degree. Key words. sparse matrix ordering, minimum degree, minimum deficiency, gre...
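The two metrics can be made concrete on a toy graph: degree counts a vertex's current neighbours, while deficiency counts the fill edges its elimination would create. A pure-Python sketch (names and example ours, not the MMDF/MMMD implementations):

```python
# Sketch contrasting the two greedy metrics: degree (size of the rank-1
# update) vs deficiency (fill created by eliminating the vertex).

def degree(adj, v):
    return len(adj[v])

def deficiency(adj, v):
    nbrs = list(adj[v])
    # count neighbour pairs that are NOT already adjacent
    return sum(1 for i, a in enumerate(nbrs)
                 for b in nbrs[i + 1:] if b not in adj[a])

# Star graph: the hub scores badly on both metrics, a leaf on neither.
g = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(degree(g, 0), deficiency(g, 0))  # 3 3
print(degree(g, 1), deficiency(g, 1))  # 1 0
```

On this example both heuristics agree; the abstract's point is that on realistic matrices they diverge, and deficiency-like metrics can buy fewer factorization operations at extra ordering cost.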
Graph partitioning into isolated, high conductance clusters: theory, computation and . . .
, 2008
"... ..."
Multifrontal Computation with the Orthogonal Factors of Sparse Matrices
 SIAM Journal on Matrix Analysis and Applications
, 1994
"... . This paper studies the solution of the linear least squares problem for a large and sparse m by n matrix A with m n by QR factorization of A and transformation of the righthand side vector b to Q T b. A multifrontalbased method for computing Q T b using Householder factorization is presented ..."
Abstract

Cited by 10 (0 self)
 Add to MetaCart
This paper studies the solution of the linear least squares problem for a large and sparse m-by-n matrix A with m ≥ n by QR factorization of A and transformation of the right-hand side vector b to Qᵀb. A multifrontal-based method for computing Qᵀb using Householder factorization is presented. A theoretical operation count for the K-by-K unbordered grid model problem and for problems defined on graphs with √n-separators shows that the proposed method requires O(N_R) storage and multiplications to compute Qᵀb, where N_R = O(n log n) is the number of nonzeros of the upper triangular factor R of A. In order to introduce BLAS-2 operations, Schreiber and Van Loan's Storage-Efficient WY Representation [SIAM J. Sci. Stat. Comput., 10 (1989), pp. 53–57] is applied to the orthogonal factor Q_i of each frontal matrix F_i. If this technique is used, the bound on storage increases to O(n(log n)²). Some numerical results for the grid model problems as well as Harwell-Boeing problems...
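The compact WY idea referenced above can be sketched with NumPy: a product of Householder reflectors H_i = I − τ_i v_i v_iᵀ is accumulated as Q = I − Y T Yᵀ with T triangular, after which Qᵀb costs three skinny matrix products instead of k separate reflections. A minimal sketch assuming dense, randomly generated reflectors (not the paper's multifrontal code):

```python
# Sketch: accumulate Householder reflectors into the compact WY form
# Q = I - Y T Y^T, then apply Q^T to a vector with BLAS-2-style products.
import numpy as np

rng = np.random.default_rng(0)
m, k = 6, 3

vs = [rng.standard_normal(m) for _ in range(k)]
taus = [2.0 / (v @ v) for v in vs]   # tau = 2/(v.v) makes H_i orthogonal

# Append one reflector at a time: T grows by a column z and a row [0 tau].
Y = np.zeros((m, 0))
T = np.zeros((0, 0))
for v, tau in zip(vs, taus):
    z = -tau * (T @ (Y.T @ v))
    T = np.block([[T, z[:, None]],
                  [np.zeros((1, T.shape[1])), np.array([[tau]])]])
    Y = np.column_stack([Y, v])

# Q^T b = b - Y (T^T (Y^T b)): three skinny products.
b = rng.standard_normal(m)
qtb_wy = b - Y @ (T.T @ (Y.T @ b))

# Reference: apply the reflectors one by one (Q = H_1 H_2 H_3, H_i symmetric).
qtb_ref = b.copy()
for v, tau in zip(vs, taus):
    qtb_ref -= tau * v * (v @ qtb_ref)
print(np.allclose(qtb_wy, qtb_ref))  # True
```

The T-update used here is the standard recurrence behind compact WY accumulation; in a multifrontal code it would be applied per frontal matrix, as the abstract describes.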
Coarsening, Sampling, And Smoothing: Elements Of The Multilevel Method
 Parallel Processing, IMA Volumes in Mathematics and its Applications 105, Springer-Verlag, pp. 247–276
, 1999
"... . The multilevel method has emerged as one of the most effective methods for solving numerical and combinatorial problems. It has been used in multigrid, domain decomposition, geometric search structures, as well as optimization algorithms for problems such as partitioning and sparsematrix ordering ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
The multilevel method has emerged as one of the most effective methods for solving numerical and combinatorial problems. It has been used in multigrid, domain decomposition, geometric search structures, as well as optimization algorithms for problems such as partitioning and sparse-matrix ordering. This paper presents a systematic treatment of the fundamental elements of the multilevel method. We illustrate, using examples from several fields, the importance and effectiveness of coarsening, sampling, and smoothing (local optimization) in the application of the multilevel method. Key words. Algorithm-design paradigm, coarsening, combinatorial optimization, Delaunay triangulation, domain decomposition, eigenvalue problems, Gaussian elimination, geometric methods, graph partitioning, hierarchical methods, multigrid, multilevel methods, nested dissection, sampling, smoothing, spectral methods. AMS(MOS) subject classifications. Primary 1234, 5678, 9101112. 1. Introduction. The multilev...
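The coarsen / solve-coarse / smooth cycle can be sketched for graph bisection. The matching-based coarsening and greedy-flip smoothing below are generic illustrations of the three elements, not any specific code from the paper; balance constraints are ignored for brevity:

```python
# Skeleton of one multilevel cycle for graph bisection:
# coarsen (matching), solve on the coarse graph, project back, smooth.

def coarsen(adj):
    """Greedy matching: contract matched pairs; return coarse graph + map."""
    matched, cmap, cid = set(), {}, 0
    for v in adj:
        if v in matched:
            continue
        partner = next((u for u in adj[v] if u not in matched), None)
        matched.add(v)
        cmap[v] = cid
        if partner is not None:
            matched.add(partner)
            cmap[partner] = cid
        cid += 1
    coarse = {c: set() for c in range(cid)}
    for v, nbrs in adj.items():
        for u in nbrs:
            if cmap[v] != cmap[u]:
                coarse[cmap[v]].add(cmap[u])
    return coarse, cmap

def cut(adj, part):
    return sum(part[u] != part[v] for u in adj for v in adj[u]) // 2

def smooth(adj, part):
    """Local optimization: flip any vertex whose move reduces the cut."""
    improved = True
    while improved:
        improved = False
        for v in adj:
            gain = sum(1 if part[u] != part[v] else -1 for u in adj[v])
            if gain > 0:          # flipping v strictly reduces the cut
                part[v] ^= 1
                improved = True
    return part

# 6-cycle: coarsen to 3 vertices, cut the coarse graph crudely, refine.
n = 6
g = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
coarse, cmap = coarsen(g)
cpart = {c: int(c >= len(coarse) // 2) for c in coarse}   # crude coarse cut
part = smooth(g, {v: cpart[cmap[v]] for v in g})          # project + smooth
print(cut(g, part))  # 2: a cycle cannot be split with fewer cut edges
```

Real multilevel partitioners recurse through many coarsening levels and smooth at every uncoarsening step; this one-level sketch only shows how the three elements fit together.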
Computing the Rank of Large Sparse Matrices over Finite Fields
"... We want to achieve efficient exact computations, such as the rank, of sparse matrices over finite fields. We therefore compare the practical behaviors, on a wide range of sparse matrices of the deterministic Gaussian elimination technique, using reordering heuristics, with the probabilistic, blackbo ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
We want to achieve efficient exact computations, such as the rank, of sparse matrices over finite fields. We therefore compare the practical behaviors, on a wide range of sparse matrices, of the deterministic Gaussian elimination technique using reordering heuristics with the probabilistic, black-box Wiedemann algorithm. Indeed, we prove here that the latter is the fastest iterative variant of the Krylov methods for computing the minimal polynomial or the rank of a sparse matrix.
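The deterministic side of the comparison reduces, in its simplest dense form, to Gaussian elimination over GF(p). A toy sketch (dense rows, no reordering heuristics; the codes compared in the paper are sparse and far more elaborate):

```python
# Sketch: rank of a matrix over GF(p) by Gaussian elimination.
# Requires Python 3.8+ for the 3-argument pow (modular inverse).

def rank_mod_p(rows, p):
    rows = [[x % p for x in r] for r in rows]
    rank, col, ncols = 0, 0, len(rows[0])
    while rank < len(rows) and col < ncols:
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)        # modular inverse, p prime
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

A = [[1, 2, 0], [2, 4, 0], [0, 0, 3]]
print(rank_mod_p(A, 5))  # 2: row 2 is twice row 1
print(rank_mod_p(A, 3))  # 1: mod 3 the last row vanishes as well
```

The rank depends on the field, which is why exact computation over GF(p) is a different problem from numerical rank; the Wiedemann alternative avoids elimination fill entirely by using only black-box matrix-vector products.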
Tradeoffs Between Parallelism and Fill in Nested Dissection
, 1999
"... In this paper we demonstrate that tradeoffs can be made between parallelism and fill in nested dissection algorithms for Gaussian elimination, both in theory and in practice. We present a new "less parallel nested dissection" algorithm (LPND), and prove that, unlike the standard nested dis ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
In this paper we demonstrate that tradeoffs can be made between parallelism and fill in nested dissection algorithms for Gaussian elimination, both in theory and in practice. We present a new "less parallel nested dissection" algorithm (LPND), and prove that, unlike the standard nested dissection algorithm, when applied to a chordal graph LPND finds a zero-fill elimination order. We have also implemented the LPND algorithm. On a variety of benchmarks it generates less fill than state-of-the-art implementations of the nested dissection (METIS), minimum-degree (AMD), and hybrid (BEND) algorithms on a large body of test matrices. We have also implemented another nested dissection algorithm that is different from METIS and that uses the same separator algorithm used by our implementation of LPND. This algorithm, as well as LPND, generates less fill than METIS, and on large graphs significantly outperforms AMD. The latter comparison is notable, because although it is known that, for certain...
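The chordal-graph property claimed for LPND can be checked mechanically: an elimination order is zero-fill exactly when, at each step, the eliminated vertex's not-yet-eliminated neighbours already form a clique (a perfect elimination ordering). A pure-Python checker on a toy chordal graph (example ours, not LPND itself):

```python
# Sketch: test whether an elimination order is zero-fill by symbolic
# elimination -- any missing neighbour-pair edge means fill would occur.

def is_zero_fill(adj, order):
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # working copy
    for v in order:
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        nbrs = list(nbrs)
        for i, a in enumerate(nbrs):
            for b in nbrs[i + 1:]:
                if b not in adj[a]:
                    return False  # eliminating v would create fill edge a-b
    return True

# Chordal graph: triangle 0-1-2 with a pendant vertex 3 attached to 2.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(is_zero_fill(g, [3, 0, 1, 2]))  # True: simplicial vertices first
print(is_zero_fill(g, [2, 0, 1, 3]))  # False: removing 2 first forces fill
```

Every chordal graph admits such an order; the abstract's result is that LPND finds one, whereas standard nested dissection need not.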