Results 1–10 of 19
Towards a Cost-Effective ILU Preconditioner with Higher Level Fills
, 1992
Abstract

Cited by 29 (10 self)
A recently proposed Minimum Discarded Fill (MDF) ordering (or pivoting) technique is effective in finding high-quality ILU(ℓ) preconditioners, especially for problems arising from unstructured finite element grids. This algorithm can identify anisotropy in complicated physical structures and orders the unknowns in a "preferred" direction. However, the MDF ordering is costly when ℓ increases. In this paper, several less expensive variants of the MDF technique are explored to produce cost-effective ILU preconditioners. The Incomplete MDF and Threshold MDF orderings combine MDF ideas with drop-tolerance techniques to identify the sparsity pattern in the ILU preconditioners. These techniques produce orderings that encourage fast decay of the entries in the ILU factorization. The Minimum Update Matrix (MUM) ordering technique is a simplification of the MDF ordering and is an analogue of the minimum degree algorithm. The MUM ordering is especially effective for large matrices arising from Navier-Stokes problems. Key words: minimum discarded fill (MDF), incomplete MDF, threshold MDF, minimum updating matrix (MUM), incomplete factorization, matrix ordering, preconditioned conjugate gradient, high-order ILU factorization. AMS(MOS) subject classification: 65F10, 76S05.
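The greedy selection at the heart of MDF can be sketched as follows: at each step, eliminate the node whose elimination would discard the least fill, measured here as the sum of squared fill entries falling outside the current sparsity pattern (i.e. an MDF(0)-style criterion). This is an illustrative dense-matrix sketch under assumed conventions (nonzero diagonal, first-minimum tie-breaking), not the authors' implementation:

```python
import numpy as np

def mdf_order(A):
    """Greedy Minimum Discarded Fill (level 0) ordering, dense sketch.

    At each step, pick the active node whose elimination discards the
    least fill (squared sum of fill entries outside the pattern), then
    eliminate it ILU(0)-style, updating only existing nonzero entries.
    Assumes pivots stay nonzero (e.g. a diagonally dominant matrix).
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    active = list(range(n))
    order = []
    for _ in range(n):
        best, best_score = None, None
        for k in active:
            nbrs = [i for i in active
                    if i != k and (A[i, k] != 0 or A[k, i] != 0)]
            score = 0.0
            for i in nbrs:            # fill between neighbour pairs that
                for j in nbrs:        # are not already connected is discarded
                    if i != j and A[i, j] == 0:
                        score += (A[i, k] * A[k, j] / A[k, k]) ** 2
            if best_score is None or score < best_score:
                best, best_score = k, score
        k = best
        order.append(k)
        for i in active:              # eliminate k, keeping only the pattern
            if i != k and A[i, k] != 0:
                f = A[i, k] / A[k, k]
                for j in active:
                    if j != k and A[i, j] != 0:
                        A[i, j] -= f * A[k, j]
        active.remove(k)
    return order
```

On a tridiagonal matrix the endpoints create no discarded fill, so one of them is picked first, reproducing the "preferred direction" behaviour in miniature.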
Weighted graph based ordering techniques for preconditioned conjugate gradient methods
 BIT Numerical Mathematics
, 1995
Nested dissection: A survey and comparison of various nested dissection algorithms
, 1992
Abstract

Cited by 8 (1 self)
Methods for solving sparse linear systems of equations can be categorized under two broad classes: direct and iterative. Direct methods are based on Gaussian elimination. This report discusses one such direct method, namely nested dissection. Nested dissection, originally proposed by Alan George, is a technique for solving sparse linear systems efficiently. This report is a survey of some of the work in the area of nested dissection and attempts to put it together using a common framework.
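The idea behind nested dissection is easiest to see on the simplest possible case, a path graph: pick a middle vertex as a one-vertex separator, recursively order the two halves, and number the separator last. The sketch below is a toy illustration of that recursion, not George's grid algorithm:

```python
def nested_dissection_order(lo, hi):
    """Elimination order for the path graph on vertices lo..hi-1:
    recursively order both halves, then the separator vertex last."""
    if hi <= lo:
        return []
    mid = (lo + hi) // 2   # middle vertex separates the two halves
    return (nested_dissection_order(lo, mid)
            + nested_dissection_order(mid + 1, hi)
            + [mid])

# For a path on 7 vertices, vertex 3 splits the graph and is eliminated last.
order = nested_dissection_order(0, 7)
```

Because the two halves share no edges once the separator is removed, they can be eliminated independently, which is what gives nested dissection both its low fill and its parallelism.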
Diagonal Markowitz scheme with local symmetrization
 SIAM J. Matrix Anal. Appl
, 2003
Abstract

Cited by 8 (2 self)
The work of this author was performed while he was on a sabbatical visit to NERSC.
Combinatorial scientific computing: The enabling power of discrete algorithms in computational science
 In 7th Intl. Mtg. High Perf. Comput. for Computational Sci. (VECPAR’06), Lecture Notes in Computer Science
, 2006
Abstract

Cited by 6 (1 self)
Combinatorial algorithms have long played a crucial, albeit under-recognized, role in scientific computing. This impact ranges well beyond the familiar applications of graph algorithms in sparse matrices to include mesh generation, optimization, computational biology and chemistry, data analysis, and parallelization. Trends in science and in computing suggest strongly that the importance of discrete algorithms in computational science will continue to grow. This paper reviews some of these many past successes and highlights emerging areas of promise and opportunity.
Beware Of Unperturbed Modified Incomplete Factorizations
 in Iterative Methods in Linear Algebra
, 1991
Abstract

Cited by 6 (0 self)
A short note describes the possible dangers of combining (unperturbed) modified incomplete factorizations with certain ordering strategies of the matrix. Sufficient conditions are given for orderings that lead to zero or vanishing pivots, and numerical tests are given illustrating the latter phenomenon. In that case, introducing perturbations of O(h²) seems to alleviate the problem. 1. Introduction. It is part of numerical folklore that the combination of red-black ordering and modified incomplete factorizations leads to zero pivots. This fact has been mentioned in, for instance, [16] and [10], and in a larger context it is part of the existence analysis of incomplete factorizations. Conditions for the existence of incomplete factorizations have been derived in [18] (essentially, strict diagonal dominance), [8] (lower semistrict diagonal dominance), and [19]. In this last reference a necessary and sufficient condition is given for the case of an irreducible, symmetric M-matrix....
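The red-black folklore result is easy to reproduce numerically. The sketch below is an assumed dense implementation of unperturbed MILU(0), where fill outside the pattern of A is discarded but lumped onto the diagonal to preserve row sums; applied to the 5-point Laplacian on a small grid with red nodes ordered first, the pivots at black nodes away from the boundary come out exactly zero:

```python
import numpy as np

def milu0_pivots(A):
    """Unperturbed modified incomplete factorization MILU(0): fill outside
    the sparsity pattern of A is lumped onto the diagonal (preserving row
    sums). Returns the diagonal of pivots, stopping at a zero pivot."""
    A = A.astype(float).copy()
    n = A.shape[0]
    S = A != 0                        # level-0 pattern
    for k in range(n):
        if abs(A[k, k]) < 1e-12:      # zero pivot: breakdown
            break
        for i in range(k + 1, n):
            if not S[i, k]:
                continue
            f = A[i, k] / A[k, k]
            for j in range(k + 1, n):
                upd = f * A[k, j]
                if upd == 0.0:
                    continue
                if S[i, j]:
                    A[i, j] -= upd    # update kept inside the pattern
                else:
                    A[i, i] -= upd    # discarded fill lumped onto diagonal
    return np.diag(A)

def laplacian_redblack(m):
    """5-point Laplacian on an m-by-m grid, red (i+j even) nodes first."""
    nodes = sorted(((i, j) for i in range(m) for j in range(m)),
                   key=lambda p: (p[0] + p[1]) % 2)
    idx = {p: t for t, p in enumerate(nodes)}
    A = np.zeros((m * m, m * m))
    for (i, j), t in idx.items():
        A[t, t] = 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) in idx:
                A[t, idx[i + di, j + dj]] = -1.0
    return A

pivots = milu0_pivots(laplacian_redblack(7))
# A black node whose four red neighbours are all interior receives
# four lumped updates of -1 each: pivot = 4 - 4 = 0 exactly.
```

This is only a demonstration of the phenomenon; the note's sufficient conditions cover a broader class of orderings than red-black.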
Tradeoffs Between Parallelism and Fill in Nested Dissection
, 1999
Abstract

Cited by 5 (0 self)
In this paper we demonstrate that tradeoffs can be made between parallelism and fill in nested dissection algorithms for Gaussian elimination, both in theory and in practice. We present a new "less parallel nested dissection" algorithm (LPND), and prove that, unlike the standard nested dissection algorithm, when applied to a chordal graph LPND finds a zero-fill elimination order. We have also implemented the LPND algorithm. On a variety of benchmarks it generates less fill than state-of-the-art implementations of the nested dissection (METIS), minimum-degree (AMD), and hybrid (BEND) algorithms on a large body of test matrices. We have also implemented another nested dissection algorithm that is different from METIS and that uses the same separator algorithm used by our implementation of LPND. This algorithm, as well as LPND, generates less fill than METIS, and on large graphs significantly outperforms AMD. The latter comparison is notable, because although it is known that, for certain...
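Fill, the currency in which these orderings are compared, can be counted directly by simulating symbolic elimination on the graph: eliminating a vertex pairwise-connects its not-yet-eliminated neighbours, and every edge created this way is fill. A small sketch (`fill_count` is a hypothetical helper; adjacency is assumed to be a dict of neighbour sets):

```python
def fill_count(adj, order):
    """Count fill edges created by eliminating vertices in `order`.
    `adj` maps each vertex to the set of its neighbours."""
    pos = {v: t for t, v in enumerate(order)}
    nbrs = {v: set(adj[v]) for v in adj}
    fill = 0
    for v in order:
        later = [u for u in nbrs[v] if pos[u] > pos[v]]
        for t, a in enumerate(later):     # clique the later neighbours
            for b in later[t + 1:]:
                if b not in nbrs[a]:
                    nbrs[a].add(b)
                    nbrs[b].add(a)
                    fill += 1             # a newly created (fill) edge
    return fill

# A star graph: eliminating the centre first cliques the three leaves
# (3 fill edges); eliminating the leaves first produces no fill at all.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
```

A zero-fill order, as LPND guarantees on chordal graphs, is one for which this count is 0.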
Matrix Methods
, 1998
Abstract

Cited by 4 (0 self)
We consider techniques for the solution of linear systems and eigenvalue problems. We are concerned with large-scale applications where the matrix will be large and sparse. We discuss both direct and iterative techniques for the solution of sparse equations, contrasting their strengths and weaknesses and emphasizing that combinations of both are necessary in the arsenal of the applications scientist. We briefly review matrix diagonalization techniques for large-scale problems.
Parallel Gaussian Elimination with Linear Work and Fill
 In Proceedings of the Thirty-Eighth Annual Symposium on Foundations of Computer Science
, 1997
Abstract

Cited by 3 (3 self)
This paper presents an algorithm for finding parallel elimination orderings for Gaussian elimination. Viewing a system of equations as a graph, the algorithm can be applied directly to interval graphs and chordal graphs. For general graphs, the algorithm can be used to parallelize the ordering produced by some other heuristic such as minimum degree. In this case, the algorithm is applied to the chordal completion that the heuristic generates from the input graph. In general, the input to the algorithm is a chordal graph G with n nodes and m edges. The algorithm produces an ordering with height at most O(log³ n) times optimal, fill at most O(m), and work at most O(W(G)), where W(G) is the minimum possible work over all elimination orderings for G. Experimental results show that when applied after some other heuristic, the increase in work and fill is usually small. In some instances the algorithm obtains an ordering that is actually better, in terms of work and fill, than t...
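The "height" of an ordering — the number of parallel elimination steps — can be measured by the same symbolic simulation: a vertex can be eliminated once all of its earlier neighbours in the filled graph are done, so its height is one more than the deepest of those. A sketch (`elimination_height` is a hypothetical helper; adjacency is assumed to be a dict of neighbour sets):

```python
def elimination_height(adj, order):
    """Parallel height of an elimination order: the longest dependency
    chain in the filled graph, where each vertex waits for its earlier
    neighbours (including fill edges) to be eliminated first."""
    pos = {v: t for t, v in enumerate(order)}
    nbrs = {v: set(adj[v]) for v in adj}
    height = {}
    for v in order:
        earlier = [u for u in nbrs[v] if pos[u] < pos[v]]
        height[v] = 1 + max((height[u] for u in earlier), default=0)
        later = [u for u in nbrs[v] if pos[u] > pos[v]]
        for t, a in enumerate(later):      # add fill edges as we eliminate
            for b in later[t + 1:]:
                nbrs[a].add(b)
                nbrs[b].add(a)
    return max(height.values())

# On a path of 8 vertices the natural order is fully sequential (height 8),
# while a nested-dissection-style order needs only O(log n) steps.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 8} for i in range(8)}
```

This is the quantity the paper trades against fill: lowering the height of an ordering tends to increase the fill it produces, and vice versa.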