Results 1–10 of 104
FETI and Neumann-Neumann iterative substructuring methods: connections and new results
 Comm. Pure Appl. Math
Recycling Krylov Subspaces for Sequences of Linear Systems
 SIAM J. Sci. Comput
, 2004
Abstract

Cited by 68 (6 self)
Many problems in engineering and physics require the solution of a large sequence of linear systems. We can reduce the cost of solving subsequent systems in the sequence by recycling information from previous systems. We consider two different approaches. For several model problems, we demonstrate that we can reduce the iteration count required to solve a linear system by a factor of two. We consider both Hermitian and non-Hermitian problems, and present numerical experiments to illustrate the effects of subspace recycling.
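The simplest form of the reuse this abstract alludes to can be illustrated by warm-starting CG on the next system in the sequence with the previous solution. The sketch below uses hypothetical data and plain warm-starting, not the Krylov-subspace recycling method the paper itself develops:

```python
import numpy as np

def cg(A, b, x0=None, tol=1e-8, maxit=1000):
    """Plain conjugate gradients; returns the solution and iteration count."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

# Hypothetical SPD model problem and a slowly varying sequence of systems.
rng = np.random.default_rng(2)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)
b1 = rng.standard_normal(100)
x1, its_first = cg(A, b1)
b2 = b1 + 1e-3 * rng.standard_normal(100)   # next system in the sequence
x2_warm, its_warm = cg(A, b2, x0=x1)        # reuse the previous solution
x2_cold, its_cold = cg(A, b2)               # start from scratch
```

Because the right-hand sides are close, the warm start begins with a much smaller residual and needs fewer iterations than the cold start.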
An Algebraic Theory for Primal and Dual Substructuring Methods by Constraints
, 2004
Abstract

Cited by 61 (10 self)
FETI and BDD are two widely used substructuring methods for the solution of large sparse systems of linear algebraic equations arising from discretization of elliptic boundary value problems. The two most advanced variants of these methods are the FETI-DP and the BDDC methods, whose formulation does not require any information beyond the algebraic system of equations in a substructure form. We formulate the FETI-DP and the BDDC methods in a common framework as methods based on general constraints between the substructures, and provide a simplified algebraic convergence theory. The basic implementation blocks, including transfer operators, are common to both methods. It is shown that commonly used properties of the transfer operators in fact determine the operators uniquely. Identical algebraic condition number bounds for both methods are given in terms of a single inequality, and, under natural additional assumptions, it is proved that the eigenvalues of the preconditioned problems are the same. The algebraic bounds imply the usual polylogarithmic bounds for finite elements, independent of coefficient jumps between substructures. Computational experiments confirm the theory.
A Neumann-Neumann Domain Decomposition Algorithm for Solving Plate and Shell Problems
 SIAM J. Numer. Anal
, 1997
Abstract

Cited by 51 (8 self)
We present a new Neumann-Neumann type preconditioner for large-scale linear systems arising from plate and shell problems. The advantage of the new method is a smaller coarse space than those of the authors' earlier methods; this improves parallel scalability. A new abstract framework for Neumann-Neumann preconditioners is used to prove almost optimal convergence properties of the method. The convergence estimates are independent of the number of subdomains and of coefficient jumps between subdomains, and depend only polylogarithmically on the number of elements per subdomain. We formulate and prove an approximate parametric variational principle for Reissner-Mindlin elements as the plate thickness approaches zero, which makes the results applicable to a large class of nonlocking elements in everyday engineering use. The theoretical results are confirmed by computational experiments on model problems as well as examples from real-world engineering practice.
Graph Partitioning Algorithms With Applications To Scientific Computing
 Parallel Numerical Algorithms
, 1997
Abstract

Cited by 49 (0 self)
Identifying the parallelism in a problem by partitioning its data and tasks among the processors of a parallel computer is a fundamental issue in parallel computing. This problem can be modeled as a graph partitioning problem in which the vertices of a graph are divided into a specified number of subsets such that few edges join two vertices in different subsets. Several new graph partitioning algorithms have been developed in the past few years, and we survey some of this activity. We describe the terminology associated with graph partitioning, the complexity of computing good separators, and graphs that have good separators. We then discuss early algorithms for graph partitioning, followed by three new algorithms based on geometric, algebraic, and multilevel ideas. The algebraic algorithm relies on an eigenvector of a Laplacian matrix associated with the graph to compute the partition. The algebraic algorithm is justified by formulating graph partitioning as a quadratic assignment p...
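The algebraic (spectral) algorithm mentioned above can be sketched in a few lines: form the graph Laplacian, take the eigenvector for the second-smallest eigenvalue (the Fiedler vector), and split its entries at the median. This is a minimal illustration on a hypothetical toy graph, using dense linear algebra rather than the sparse machinery a real partitioner would need:

```python
import numpy as np

def fiedler_partition(adj):
    """Bisect a graph by the sign pattern of the Fiedler vector,
    i.e. the eigenvector of the Laplacian L = D - A belonging to
    the second-smallest eigenvalue."""
    degrees = np.diag(adj.sum(axis=1))
    laplacian = degrees - adj
    _, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    fiedler = eigvecs[:, 1]
    # split at the median entry so the two parts are balanced
    return fiedler <= np.median(fiedler)

# Hypothetical toy graph: two triangles joined by a single bridge edge.
# A good partition recovers the triangles, cutting only the bridge.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
part = fiedler_partition(adj)
```

For this graph the spectral split places vertices 0–2 in one part and 3–5 in the other, separating the two triangles.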
Analysis of projection methods for solving linear systems with multiple right-hand sides
 SIAM J. Sci. Comput
, 1997
Abstract

Cited by 41 (1 self)
We analyze a class of Krylov projection methods but mainly concentrate on a specific conjugate gradient (CG) implementation by Smith, Peterson, and Mittra [IEEE Transactions on Antennas and Propagation, 37 (1989), pp. 1490–1493] to solve the linear system AX = B, where A is symmetric positive definite and B contains multiple right-hand sides. This method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method, and then projects the residuals of the other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as the seed until all the systems are solved. We observe in practice a superconvergence behavior of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this paper, we give theoretical proofs to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single-seed method.
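The seed-projection idea — solve one system by CG while recording its search directions, then project other right-hand sides onto the generated Krylov subspace to obtain good starting guesses — can be sketched as follows. This is a simplified illustration on hypothetical data, not the full restarted algorithm analyzed in the paper:

```python
import numpy as np

def cg_with_directions(A, b, tol=1e-10, maxit=200):
    """Plain CG on the seed system that also records its search directions."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    dirs, Adirs = [], []
    for _ in range(maxit):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        dirs.append(p.copy())
        Adirs.append(Ap.copy())
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, np.array(dirs).T, np.array(Adirs).T

def projected_guess(P, AP, b):
    """Galerkin projection of a new right-hand side onto the seed subspace.
    In exact arithmetic P^T A P is diagonal (CG directions are A-conjugate);
    we solve the small system anyway for numerical robustness."""
    coeffs, *_ = np.linalg.lstsq(P.T @ AP, P.T @ b, rcond=None)
    return P @ coeffs

# Hypothetical SPD system and a nearby second right-hand side.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b1 = rng.standard_normal(20)
b2 = b1 + 0.01 * rng.standard_normal(20)
x1, P, AP = cg_with_directions(A, b1)
x0 = projected_guess(P, AP, b2)   # starting guess for the second system
```

When the right-hand sides are close, the projected guess leaves only a small residual for the second system, so the subsequent CG run needs few iterations.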
A comparison of deflation and the balancing preconditioner
 SIAM J. Sci. Comput
, 2006
Abstract

Cited by 29 (13 self)
In this paper we compare various preconditioners for the numerical solution of partial differential equations. We compare the well-known balancing preconditioner used in domain decomposition methods with a so-called deflation preconditioner. We prove that the effective condition number of the deflated preconditioned system is always — i.e., for all deflation vectors and all restrictions and prolongations — below the condition number of the system preconditioned by the balancing preconditioner. Even more, we establish that both preconditioners lead to almost the same spectra: the zero eigenvalues of the deflation preconditioned system are replaced by eigenvalues equal to one if the balancing preconditioner is used. Moreover, we prove that the A-norm of the errors of the iterates built by the deflation preconditioner is always below the A-norm of the errors of the iterates built by the balancing preconditioner. Depending on the implementation of the balancing preconditioner, the amount of work of one iteration of the deflation preconditioned system is less than or equal to the amount of work of one iteration of the balancing preconditioned system. If the amount of work is equal, both preconditioners are sensitive to inexact computations. Finally, we establish that the deflation preconditioner and the balancing preconditioner produce the same iterates if one uses certain starting vectors. Numerical results for porous media flows emphasize the theoretical results.
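A minimal numerical sketch of the deflation operator discussed above, assuming the deflation vectors are exact eigenvectors of the smallest eigenvalues (hypothetical data): the deflated operator maps those eigenvalues to zero, so the effective condition number is taken over the remaining spectrum.

```python
import numpy as np

def deflate(A, Z):
    """Apply the deflation operator P = I - A Z (Z^T A Z)^{-1} Z^T to A."""
    E = Z.T @ A @ Z                           # small coarse matrix
    P = np.eye(A.shape[0]) - A @ Z @ np.linalg.solve(E, Z.T)
    return P @ A

# Hypothetical SPD matrix with two tiny eigenvalues that spoil conditioning.
n = 50
vals = np.concatenate([[1e-4, 1e-3], np.linspace(1.0, 10.0, n - 2)])
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))
A = Q @ np.diag(vals) @ Q.T
Z = Q[:, :2]          # eigenvectors of the two smallest eigenvalues
PA = deflate(A, Z)
eigs = np.sort(np.linalg.eigvalsh((PA + PA.T) / 2))
# eigs: two (numerical) zeros, then the untouched spectrum 1.0 ... 10.0,
# so the effective condition number drops from 1e5 to about 10.
```

With exact eigenvectors as deflation vectors the computation is transparent; in practice approximate deflation vectors (e.g. from a coarse space) are used and the same mechanism applies.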
Recycling Subspace Information for Diffuse Optical Tomography
 SIAM J. Sci. Comput
, 2004
Abstract

Cited by 23 (4 self)
We discuss the efficient solution of a large sequence of slowly varying linear systems arising in computations for diffuse optical tomographic imaging. In particular, we analyze a number of strategies for recycling Krylov subspace information for the most efficient solution. We reconstruct three-dimensional...
A preconditioner for the Schur complement domain decomposition method
 Fourteenth International Conference on Domain Decomposition Methods
, 2003
Analysis Of Lagrange Multiplier Based Domain Decomposition
, 1998
Abstract

Cited by 20 (5 self)
The convergence of a substructuring iterative method with Lagrange multipliers, known as the Finite Element Tearing and Interconnecting (FETI) method, is analyzed in this thesis. This method, originally proposed by Farhat and Roux, decomposes the finite element discretization of an elliptic boundary value problem into Neumann problems on the subdomains, plus a coarse problem for the subdomain null space components. For linear conforming elements and preconditioning by Dirichlet problems on the subdomains, the asymptotic bound C(1 + log(H/h))^γ on the condition number, where γ = 2 or 3, is proved for a second order problem, h denoting the characteristic element size and H the size of the subdomains. A similar method proposed by Park is shown to be equivalent to FETI with a special choice of some components, and the bound C(1 + log(H/h))^2 on the condition number is established. Next, the original FETI method is generalized to fourth order plate bending problems. The main idea there is to enfor...