Results 1–10 of 15
Preconditioning techniques for large linear systems: A survey
 J. Comput. Phys.
, 2002
Abstract

Cited by 189 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
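The incomplete factorization methods surveyed above can be made concrete with a small sketch. The following is a toy illustration (not any specific method from the survey): ILU(0), i.e. Gaussian elimination restricted to the nonzero pattern of A so that no fill-in is created. On a tridiagonal matrix the pattern admits no fill-in anyway, so ILU(0) coincides with the exact LU factorization, which makes the sketch easy to check.

```python
def ilu0(A):
    """ILU(0): LU factorization restricted to the sparsity pattern of A.
    A is a list of dicts, A[i][j] = nonzero entry; the factors overwrite A
    (strict lower part holds the multipliers of L, the rest holds U)."""
    n = len(A)
    for i in range(1, n):
        for k in sorted(j for j in A[i] if j < i):
            A[i][k] /= A[k][k]                    # multiplier l_ik
            for j in A[i]:
                # update only entries already in the pattern: no fill-in
                if j > k and j in A[k]:
                    A[i][j] -= A[i][k] * A[k][j]
    return A

# 1D Laplacian (tridiagonal): here ILU(0) equals the exact LU factorization
A = [{0: 2.0, 1: -1.0},
     {0: -1.0, 1: 2.0, 2: -1.0},
     {1: -1.0, 2: 2.0, 3: -1.0},
     {2: -1.0, 3: 2.0}]
F = ilu0(A)
```

For a general sparse matrix the discarded fill-in makes F only an approximate factorization, which is precisely what makes it cheap enough to use as a preconditioner.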
A scalable parallel algorithm for incomplete factor preconditioning
 SIAM Journal on Scientific Computing
Abstract

Cited by 37 (3 self)
We describe a parallel algorithm for computing incomplete factor (ILU) preconditioners. The algorithm attains a high degree of parallelism through graph partitioning and a two-level ordering strategy. Both the subdomains and the nodes within each subdomain are ordered to preserve concurrency. We show through an algorithmic analysis and through computational results that this algorithm is scalable. Experimental results include timings on three parallel platforms for problems with up to 20 million unknowns running on up to 216 processors. The resulting preconditioned Krylov solvers have the desirable property that the number of iterations required for convergence is insensitive to the number of processors.
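The two-level ordering idea can be sketched in miniature. The function below (`two_level_order` is a hypothetical name; the subdomain assignment is assumed to come from a graph partitioner, which is not shown) numbers subdomain-interior nodes first, grouped by subdomain, and boundary/separator nodes last, so that the interior eliminations of different subdomains can proceed concurrently.

```python
def two_level_order(adj, part):
    """Sketch of a two-level ordering: interior nodes (all neighbors in the
    same subdomain) first, grouped by subdomain, then boundary nodes.
    adj[v] lists the neighbors of node v; part[v] is v's subdomain."""
    n = len(adj)
    interior = [v for v in range(n) if all(part[u] == part[v] for u in adj[v])]
    boundary = [v for v in range(n) if any(part[u] != part[v] for u in adj[v])]
    interior.sort(key=lambda v: part[v])   # group interiors by subdomain
    return interior + boundary

# path graph 0-1-2-3 split into subdomains {0,1} and {2,3}:
# nodes 0 and 3 are interior, nodes 1 and 2 sit on the boundary
order = two_level_order([[1], [0, 2], [1, 3], [2]], [0, 0, 1, 1])
```

Eliminating the interiors first means each subdomain's factorization touches only local data; only the (small) boundary block requires communication.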
Parallel AMG on Distributed Memory Computers
, 2000
Abstract

Cited by 9 (5 self)
Algebraic Multigrid (AMG) methods are well suited as preconditioners for iterative solvers of linear systems of equations which are sparse and symmetric positive definite and stem from a finite element (FE) discretization of a second-order elliptic partial differential equation (PDE) or a system of PDEs. Since preconditioners based on AMG are very efficient, additional speedup can only be achieved by parallelization. In this paper we propose a general parallel AMG algorithm which is well suited for distributed memory computers. The algorithm is based on domain decomposition ideas and allows overlapping and non-overlapping data decompositions. This paper pays special attention to the coarsening strategy, which has to be adapted in the parallel case. Moreover, a general framework of data distribution gives rise to a construction scheme for the prolongation operators. Results of numerical studies on parallel machines with distributed memory are presented which show the high efficiency of the ...
Parallel Multigrid 3D Maxwell Solvers
, 1999
Abstract

Cited by 7 (2 self)
3D magnetic field problems are challenging not only because of interesting applications in industry but also from the mathematical point of view. In the magnetostatic case, our Maxwell solver is based on a regularized mixed variational formulation of the Maxwell equations in H₀(curl) × H₀¹(Ω) and their discretization by the Nédélec and Lagrange finite elements. Eliminating the Lagrange multiplier from the mixed finite element equations, we arrive at a symmetric and positive definite (spd) problem that can be solved by some parallel multigrid preconditioned conjugate gradient (pcg) method. More precisely, this pcg solver contains a standard scaled Laplace multigrid regularizator in the regularization part and a special multigrid preconditioner for the regularized Nédélec finite element equations that we want to solve. The parallelization of the pcg algorithm, the Laplace multigrid regularizator and the multigrid preconditioner are based on a unified domain d...
A Parallel AMG for Overlapping and Non-Overlapping Domain Decomposition
 Trans. Numer. Anal
, 2000
Abstract

Cited by 6 (5 self)
There exist several approaches for the parallel solution of huge systems of linear equations resulting from the discretization of second-order elliptic PDEs. We distinguish between overlapping and non-overlapping decompositions based on the distribution of finite elements. On the other hand, there is great demand for Algebraic Multigrid (AMG) solvers which take as input only the matrix and right-hand side or, as a substitute, the appropriate information per element. In this paper we propose a parallel AMG algorithm using overlapping or non-overlapping data decompositions.

1 Introduction

Without loss of generality, we want to solve a second-order PDE with homogeneous Dirichlet boundary conditions in a domain Ω ⊂ Rᵈ, d = 2, 3, such that the weak formulation is represented by: Find u ∈ X(Ω) : a(u, v) = ⟨F, v⟩ ∀v ∈ X(Ω) (1), with bilinear form a(u, v) : X × X → R and duality product ⟨F, v⟩ : X × X → R. A discretization of the domain Ω result...
Developments and Trends in the Parallel Solution of Linear Systems
, 1999
Abstract

Cited by 6 (0 self)
In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field.
Efficient and reliable iterative methods for linear systems
, 2002
Abstract

Cited by 1 (0 self)
The approximate solutions in standard iteration methods for linear systems Ax = b, with A an n-by-n nonsingular matrix, form a subspace. In this subspace, one may try to construct better approximations for the solution x. This is the idea behind Krylov subspace methods. It has led to very powerful and efficient methods such as conjugate gradients, GMRES, and BiCGSTAB. We will give an overview of these methods and discuss some relevant properties from the user's perspective. The convergence of Krylov subspace methods depends strongly on the eigenvalue distribution of A and on the angles between eigenvectors of A. Preconditioning is a popular technique to obtain a better-behaved linear system. We will briefly discuss some modern developments in preconditioning; in particular, parallel preconditioners will be highlighted: reordering techniques for incomplete decompositions, domain decomposition approaches, and sparsified Schur complements.
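The conjugate gradient method mentioned above fits in a few lines. The sketch below is a minimal preconditioned CG iteration with a diagonal (Jacobi) preconditioner, for illustration only; the dense matrix-vector products and Python lists stand in for the sparse kernels a real solver would use.

```python
def pcg(A, b, Minv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients for an SPD matrix A (dense
    list-of-lists). Minv holds the inverse diagonal of A, i.e. a Jacobi
    preconditioner; any SPD preconditioner application could replace it."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r = b - A*x0 with x0 = 0
    z = [Minv[i] * r[i] for i in range(n)]     # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz, rz_old = sum(r[i] * z[i] for i in range(n)), rz
        p = [z[i] + (rz / rz_old) * p[i] for i in range(n)]
    return x

# 1D Laplacian test problem; the exact solution of A x = [1,0,0,1] is all ones
A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
x = pcg(A, [1.0, 0.0, 0.0, 1.0], [1.0 / A[i][i] for i in range(4)])
```

The inner products and the preconditioner application are exactly the operations whose parallelization the surveyed papers discuss.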
Spectral Analysis of Parallel Incomplete Factorizations With Implicit Pseudo-Overlap
, 2000
Abstract
Introduction

Linear systems from boundary value problems like the diffusion equation can be solved by iterative methods. The speed of convergence depends very much on global properties (a local correction affects the whole solution), whereas for parallelism one wants to split the problem into smaller (almost) independent subproblems. These two requirements are in conflict [13]. A critical question in the use of incomplete factorization based preconditionings in parallel environments is how to overcome the above-mentioned tradeoff between high-level parallelism and rate of convergence [13,14]. Answering this question requires clearly identifying why there is a tradeoff. To this end, Doi and Lichnewsky [8,9] relate this phenomenon to the number of incompatible nodes (any node i which is connected to at least two nodes j and k along the same direction (axis), such that j ...
Parallel 3D Maxwell Solvers based on Domain Decomposition Data Distribution
Abstract
The most efficient solvers for finite element (fe) equations are certainly multigrid, or multilevel methods, and domain decomposition methods using local multigrid solvers. Typically, the multigrid convergence rate is independent of the mesh size parameter, and the arithmetical complexity grows linearly with the number of unknowns. However,
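The multigrid components behind these claims (smoothing plus coarse-grid correction) can be demonstrated on a toy 1D Poisson problem. The sketch below is not any solver from the paper: it runs one two-grid cycle with weighted Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation, and a coarse solve that is itself just many Jacobi sweeps (a stand-in for recursion or a direct solve).

```python
def jacobi(u, f, h, sweeps=2, w=2.0 / 3.0):
    """Weighted Jacobi sweeps for -u'' = f with zero Dirichlet boundaries."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            new[i] = (1 - w) * u[i] + w * 0.5 * (left + right + h * h * f[i])
        u = new
    return u

def residual(u, f, h):
    n = len(u)
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2 * u[i] - left - right) / (h * h))
    return r

def two_grid(u, f, h):
    """One two-grid cycle; len(u) must be odd so coarse points line up."""
    u = jacobi(u, f, h)                                  # pre-smooth
    r = residual(u, f, h)
    nc = len(u) // 2                                     # coarse grid size
    rc = [0.25 * (r[2 * i] + 2 * r[2 * i + 1] + r[2 * i + 2])
          for i in range(nc)]                            # full weighting
    ec = jacobi([0.0] * nc, rc, 2 * h, sweeps=50)        # "exact" coarse solve
    e = [0.0] * len(u)                                   # prolongate (linear)
    for i, ei in enumerate(ec):
        e[2 * i + 1] += ei
        e[2 * i] += 0.5 * ei
        e[2 * i + 2] += 0.5 * ei
    u = [u[i] + e[i] for i in range(len(u))]             # correct
    return jacobi(u, f, h)                               # post-smooth

f = [1.0] * 7
u = two_grid([0.0] * 7, f, 1.0 / 8.0)                    # one cycle from zero
```

One cycle already shrinks the residual substantially, and (the mesh-independence claim above) the contraction factor stays bounded as the grid is refined.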
Implementation Aspects
Abstract
The inner products, vector updates and matrix-vector product are easily parallelized and vectorized. The more successful preconditionings, i.e., those based upon incomplete LU decomposition, are not easily parallelizable. For that reason one is often satisfied with the use of only diagonal scaling as a preconditioner on highly parallel computers, such as the CM-2 [24]. On distributed memory computers we need large-grained parallelism in order to reduce synchronization overhead. This can be achieved by combining the work required for a successive number of iteration steps. The idea is first to construct in parallel a straightforward Krylov basis for the search subspace in which an update for the current solution will be determined. Once this basis has been computed, the vectors are orthogonalized, as is done in Krylov subspace methods. The construction as well as the orthogonalization can be done with large-grained parallelism, and has a sufficient degree of parallelism in it. This approach has be...
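The "basis first, orthogonalize afterwards" idea described above can be sketched as follows. This is an illustrative toy (`sstep_basis` is a hypothetical name): the plain Krylov basis [r, Ar, ..., A^{s-1}r] is built with matrix-vector products only and no intervening inner products, and modified Gram-Schmidt is deferred until the whole basis exists, which is what exposes the large-grained parallelism.

```python
def sstep_basis(A, r, s):
    """Build the plain Krylov basis [r, Ar, ..., A^{s-1} r] (matvecs only),
    then orthonormalize it afterwards with modified Gram-Schmidt."""
    n = len(r)
    V = [r[:]]
    for _ in range(s - 1):                     # s-1 matvecs, no inner products
        v = V[-1]
        V.append([sum(A[i][j] * v[j] for j in range(n)) for i in range(n)])
    Q = []
    for v in V:                                # deferred orthogonalization
        w = v[:]
        for q in Q:
            hq = sum(q[i] * w[i] for i in range(n))
            w = [w[i] - hq * q[i] for i in range(n)]
        nrm = sum(wi * wi for wi in w) ** 0.5
        Q.append([wi / nrm for wi in w])
    return Q

A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
Q = sstep_basis(A, [1.0, 0.0, 0.0, 0.0], 3)    # orthonormal 3-vector basis
```

The price of this reordering, not shown here, is that the power basis becomes ill-conditioned as s grows, which limits how many steps can be combined.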