Results 1–10 of 21
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Cited by 105 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
hypre: a Library of High Performance Preconditioners
Lecture Notes in Computer Science, 2002
Cited by 44 (1 self)
hypre is a software library for the solution of large, sparse linear systems on massively parallel computers. Its emphasis is on modern, powerful, and scalable preconditioners. hypre provides various conceptual interfaces to enable application users to access the library in the way they naturally think about their problems. This paper presents the conceptual interfaces in hypre. An overview of the preconditioners that are available in hypre is given, including some numerical results that show the efficiency of the library.
What color is your Jacobian? Graph coloring for computing derivatives
SIAM Rev., 2005
Cited by 41 (7 self)
Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex-coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix-estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.
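The central correspondence in this abstract (structurally orthogonal column groups = color classes of a distance-2 coloring) can be sketched compactly. The snippet below is a minimal illustration, not one of the paper's algorithms; the greedy grouping heuristic, the helper names, and the tridiagonal test function are all assumptions chosen for brevity.

```python
# Minimal sketch (illustrative names): estimate a sparse Jacobian from few
# function evaluations by grouping structurally orthogonal columns, i.e. the
# greedy counterpart of a distance-2 coloring of the column graph.

def structurally_orthogonal_groups(pattern):
    """Greedily partition column indices so no two columns in a group share
    a nonzero row. `pattern[j]` is the set of rows where column j is nonzero."""
    groups, rows_used = [], []
    for j, rows in enumerate(pattern):
        for g, used in zip(groups, rows_used):
            if not rows & used:        # structurally orthogonal to the group
                g.append(j)
                used |= rows
                break
        else:
            groups.append([j])
            rows_used.append(set(rows))
    return groups

def estimate_jacobian(f, x, pattern, h=1e-6):
    """One extra f-evaluation per column group instead of per column."""
    fx = f(x)
    n, m = len(x), len(fx)
    J = [[0.0] * n for _ in range(m)]
    for group in structurally_orthogonal_groups(pattern):
        xp = list(x)
        for j in group:                # perturb the whole group at once
            xp[j] += h
        fp = f(xp)
        for j in group:
            for i in pattern[j]:       # row i is touched by only this column
                J[i][j] = (fp[i] - fx[i]) / h
    return J

# Tridiagonal test function f(x) = A x with A = tridiag(-1, 2, -1), n = 5.
n = 5
def f(x):
    return [2 * x[i] - (x[i - 1] if i > 0 else 0.0)
                     - (x[i + 1] if i < n - 1 else 0.0) for i in range(n)]

pattern = [set(range(max(0, j - 1), min(n, j + 2))) for j in range(n)]
groups = structurally_orthogonal_groups(pattern)   # 3 groups suffice here
J = estimate_jacobian(f, [0.0] * n, pattern)
```

For the tridiagonal pattern, three perturbed evaluations (plus the base point) recover all 13 nonzeros, instead of one evaluation per column.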
pARMS: a Parallel Version of the Algebraic Recursive Multilevel Solver
, 2001
Cited by 26 (12 self)
A parallel version of the Algebraic Recursive Multilevel Solver (ARMS) is developed for distributed computing environments. The method adopts the general framework of distributed sparse matrices and relies on solving the resulting distributed Schur complement system. Numerical experiments are presented which compare these approaches on regularly and irregularly structured problems.
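The Schur complement approach the method relies on can be illustrated with a single two-by-two block partition; the sketch below is a dense, sequential toy with assumed block names, not pARMS's distributed formulation. For A = [[B, F], [E, C]] and right-hand side [f; g], one forms S = C − E B⁻¹F, solves S y = g − E B⁻¹f, then back-substitutes B x = f − F y.

```python
# Toy 2x2-block Schur complement solve (dense and sequential; pARMS instead
# works with distributed interface unknowns -- this only shows the algebra).

def solve(A, b):
    """Dense Gaussian elimination with partial pivoting on a copy of [A | b]."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def schur_solve(B, F, E, C, f, g):
    """Solve [[B, F], [E, C]] [x; y] = [f; g] via the Schur complement."""
    nB, nC = len(B), len(C)
    BinvF = [solve(B, [F[i][j] for i in range(nB)]) for j in range(nC)]  # cols of B^-1 F
    Binvf = solve(B, f)
    S = [[C[i][j] - sum(E[i][k] * BinvF[j][k] for k in range(nB))
          for j in range(nC)] for i in range(nC)]                        # S = C - E B^-1 F
    y = solve(S, [g[i] - sum(E[i][k] * Binvf[k] for k in range(nB)) for i in range(nC)])
    x = solve(B, [f[i] - sum(F[i][j] * y[j] for j in range(nC)) for i in range(nB)])
    return x + y

B = [[4.0, 1.0], [1.0, 3.0]]
F = [[0.0, 1.0], [1.0, 0.0]]
E = [[1.0, 0.0], [0.0, 1.0]]
C = [[5.0, 0.0], [0.0, 5.0]]
xy = schur_solve(B, F, E, C, [1.0, 2.0], [3.0, 4.0])
```

In the distributed setting the same elimination happens per subdomain, leaving a reduced system in the interface unknowns only.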
Computational experience with sequential and parallel, preconditioned Jacobi–Davidson for large, sparse symmetric matrices
, 2003
MSP: a class of parallel multistep successive sparse approximate inverse preconditioning strategies
SIAM J. Sci. Comput., 2002
Cited by 7 (4 self)
We develop a class of parallel multistep successive preconditioning strategies to enhance efficiency and robustness of standard sparse approximate inverse preconditioning techniques. The key idea is to compute a series of simple sparse matrices to approximate the inverse of the original matrix. Studies are conducted to show the advantages of such an approach in terms of both improving preconditioning accuracy and reducing computational cost, compared to the standard sparse approximate inverse preconditioners. Numerical experiments using one prototype implementation to solve a few sparse matrices on a distributed memory parallel computer are reported.
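The simplest possible instance of "a series of simple matrices approximating the inverse" is the classical Newton–Schulz iteration M_{k+1} = M_k(2I − A M_k), where each step folds a cheap correction factor into the current approximate inverse. The sketch below illustrates that successive-improvement idea only; it is not the MSP algorithm, which computes sparse factors rather than dense iterates.

```python
# Successive improvement of an approximate inverse via Newton-Schulz:
# M_{k+1} = M_k (2I - A M_k), so the residual obeys I - A M_{k+1} = (I - A M_k)^2.
# This is NOT the MSP algorithm -- only the simplest multistep illustration.

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def step(A, M):
    AM = mm(A, M)
    n = len(A)
    T = [[(2.0 if i == j else 0.0) - AM[i][j] for j in range(n)] for i in range(n)]
    return mm(M, T)

def residual_norm(A, M):
    AM = mm(A, M)
    n = len(A)
    return max(abs((1.0 if i == j else 0.0) - AM[i][j])
               for i in range(n) for j in range(n))

A = [[4.0, 1.0], [1.0, 3.0]]
M = [[0.25, 0.0], [0.0, 1 / 3]]        # step 0: inverse of diag(A)
norms = [residual_norm(A, M)]
for _ in range(4):                     # four successive improvement steps
    M = step(A, M)
    norms.append(residual_norm(A, M))
```

Because the residual squares at every step, the approximation improves monotonically whenever the initial residual norm is below one.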
Distributed-memory parallel algorithms for distance-2 coloring and their application to derivative computation
, 2010
Cited by 5 (4 self)
The distance-2 graph coloring problem aims at partitioning the vertex set of a graph into the fewest sets consisting of vertices pairwise at distance greater than two from each other. Its applications include derivative computation in numerical optimization and channel assignment in radio networks. We present efficient, distributed-memory, parallel heuristic algorithms for this NP-hard problem as well as for two related problems used in the computation of Jacobians and Hessians. Parallel speedup is achieved through graph partitioning, speculative (iterative) coloring, and a BSP-like organization of parallel computation. Results from experiments conducted on a PC cluster employing up to 96 processors and using large real-world as well as synthetically generated test graphs show that the algorithms are scalable. In terms of quality of solution, the algorithms perform remarkably well: the number of colors used by the parallel algorithms was observed to be very close to the number used by the sequential counterparts, which in turn are quite often near optimal. Moreover, the experimental results show that the parallel distance-2 coloring algorithm compares favorably with the alternative approach of solving the distance-2 coloring problem on a graph G by first constructing the square graph G² and then applying a parallel distance-1 coloring algorithm on G². Implementations of the algorithms are made available via the Zoltan load-balancing library.
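For reference, the sequential baseline that such parallel heuristics accelerate is a simple greedy sweep: each vertex receives the smallest color not already used within distance two. A minimal sketch on an illustrative path graph:

```python
# Sequential greedy distance-2 coloring (the baseline the parallel algorithms
# accelerate); the path graph below is an illustrative example.

def greedy_distance2_coloring(adj):
    """adj: vertex -> set of neighbors. Give each vertex the smallest color
    not used by any already-colored vertex within distance two."""
    color = {}
    for v in adj:
        forbidden = set()
        for u in adj[v]:                       # distance-1 neighbors
            if u in color:
                forbidden.add(color[u])
            for w in adj[u]:                   # distance-2 neighbors
                if w != v and w in color:
                    forbidden.add(color[w])
        c = 0
        while c in forbidden:
            c += 1
        color[v] = c
    return color

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
coloring = greedy_distance2_coloring(path)
```

On the path 0-1-2-3-4 any three consecutive vertices are pairwise within distance two, so three colors are optimal, and the greedy sweep attains that here.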
Combinatorial problems in solving linear systems
, 2009
Cited by 5 (3 self)
Numerical linear algebra and combinatorial optimization are vast subjects, as is their interaction. In virtually all cases there should be a notion of sparsity for a combinatorial problem to arise. Sparse matrices therefore form the basis of the interaction of these two seemingly disparate subjects. As the core of many of today’s numerical linear algebra computations consists of the solution of sparse linear systems by direct or iterative methods, we survey some combinatorial problems, ideas, and algorithms relating to these computations. On the direct methods side, we discuss issues such as matrix ordering; bipartite matching and matrix scaling for better pivoting; and task assignment and scheduling for parallel multifrontal solvers. On the iterative methods side, we discuss preconditioning techniques including incomplete factorization preconditioners, support graph preconditioners, and algebraic multigrid. In a separate part, we discuss the block triangular form of sparse matrices.
Using the Parallel Algebraic Recursive Multilevel Solver in Modern Physical Applications
, 2002
Cited by 4 (4 self)
The recently developed Parallel Algebraic Recursive Multilevel Solver (pARMS) is the subject of this paper. We investigate its behavior in solving large-scale sparse linear systems. In particular, we study the effect of a few parameters and different algorithms on the overall performance by conducting numerical experiments that stem from a number of realistic applications including magnetohydrodynamics, nonlinear acoustic field simulation, and tire design.
Matrix-free preconditioning using partial matrix estimation
, 2004
Cited by 3 (0 self)
We consider matrix-free solver environments where information about the underlying matrix is available only through matrix-vector computations which do not have access to a fully assembled matrix. We introduce the notion of partial matrix estimation for constructing good algebraic preconditioners used in Krylov iterative methods in such matrix-free environments, and formulate three new graph coloring problems for partial matrix estimation. Numerical experiments utilizing one of these formulations demonstrate the viability of this approach.
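The setting can be illustrated by probing an operator that is available only as a matrix-vector callback. The sketch below recovers a tridiagonal operator from three matvecs by seeding each of three column groups at once; the banded sparsity assumption, and all names, are illustrative rather than taken from the paper's formulations.

```python
# Matrix-free probing sketch: the operator is reachable only through matvec().
# Columns j with equal j % 3 never overlap within a tridiagonal band, so three
# seed vectors recover the whole band.

n = 6
def matvec(x):
    # Hidden operator, tridiag(-1, 2, -1); only this callback is "available".
    return [2 * x[i] - (x[i - 1] if i > 0 else 0.0)
                     - (x[i + 1] if i < n - 1 else 0.0) for i in range(n)]

band = [[0.0] * n for _ in range(n)]   # estimated tridiagonal part
for c in range(3):
    seed = [1.0 if j % 3 == c else 0.0 for j in range(n)]
    y = matvec(seed)
    for j in range(c, n, 3):
        for i in range(max(0, j - 1), min(n, j + 2)):
            band[i][j] = y[i]          # row i is hit by exactly one seeded column
```

Three matvecs recover all 16 band entries here; an estimated band like this can then be handed to any algebraic preconditioner.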