Results 1–10 of 43
Numerical solution of saddle point problems
 ACTA NUMERICA
, 2005
Cited by 180 (30 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
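The block structure this survey discusses can be sketched in a few lines. The following is a minimal illustration (all matrices are synthetic test data, not taken from the survey) that assembles a saddle-point matrix and solves it with MINRES:

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD (1,1) block
B = rng.standard_normal((m, n))      # full-rank constraint block
# K = [[A, B^T], [B, 0]] is symmetric but indefinite
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
b = rng.standard_normal(n + m)
x, info = minres(K, b)               # MINRES handles symmetric indefinite K
```

Because K is indefinite, plain CG is not applicable, but minimum residual methods are; this is exactly the setting the survey addresses.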
Preconditioning techniques for large linear systems: A survey
 J. COMPUT. PHYS
, 2002
Cited by 105 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
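As a concrete instance of one family of techniques surveyed here (incomplete factorization), the sketch below uses SciPy's `spilu` as a preconditioner for GMRES on a made-up tridiagonal test matrix:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 100
# Nonsymmetric tridiagonal test matrix (illustrative only)
A = diags([-1.0, 2.5, -1.3], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)    # incomplete LU factors of A
M = LinearOperator((n, n), matvec=ilu.solve)     # preconditioner M ~ A^{-1}
x, info = gmres(A, b, M=M)
```

For this tridiagonal example the incomplete factorization is nearly exact, so preconditioned GMRES converges in very few iterations; for general sparse matrices the drop tolerance and fill factor trade setup cost against iteration count.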
Constraint Preconditioning for Indefinite Linear Systems
 SIAM J. Matrix Anal. Appl
, 2000
Cited by 73 (10 self)
The problem of finding good preconditioners for the numerical solution of indefinite linear systems is considered. Special emphasis is put on preconditioners that have a 2×2 block structure and which incorporate the (1,2) and (2,1) blocks of the original matrix. Results concerning the spectrum and form of the eigenvectors of the preconditioned matrix and its minimum polynomial are given. The consequences of these results are considered for a variety of Krylov subspace methods. Numerical experiments validate these conclusions. Key words: preconditioning, indefinite matrices, Krylov subspace methods. AMS subject classifications: 65F10, 65F15, 65F50. 1. Introduction. In this paper, we are concerned with investigating a new class of preconditioners for indefinite systems of linear equations of a sort which arise in constrained optimization as well as in least-squares, saddle-point and Stokes problems. We attempt to solve the indefinite linear system \(\underbrace{\begin{bmatrix} A & B^T \\ B & 0 \end{bmatrix}}_{\mathcal{A}} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\)...
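The spectral result this abstract alludes to (the constraint-preconditioned matrix has eigenvalue 1 with multiplicity 2m) can be checked numerically. Below is a small synthetic sketch, with G a cheap diagonal approximation to A; the data is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)           # symmetric (1,1) block
B = rng.standard_normal((m, n))   # full-rank constraint block
G = np.diag(np.diag(A))           # diagonal approximation of A
Z = np.zeros((m, m))
K = np.block([[A, B.T], [B, Z]])  # saddle-point matrix
P = np.block([[G, B.T], [B, Z]])  # constraint preconditioner (same off-diagonal blocks)
eigs = np.linalg.eigvals(np.linalg.solve(P, K))
ones = int(np.sum(np.isclose(eigs, 1.0, atol=1e-4)))
```

At least 2m = 4 eigenvalues of the preconditioned matrix land at 1 (up to rounding; the unit eigenvalue can be defective, so a loose tolerance is used), with the remaining n − m eigenvalues determined by a reduced problem.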
Recent computational developments in Krylov subspace methods for linear systems
 NUMER. LINEAR ALGEBRA APPL
, 2007
Cited by 48 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
Efficient Schemes for Nearest Neighbor Load Balancing
, 1998
Cited by 46 (13 self)
We design a general mathematical framework to analyze the properties of nearest neighbor balancing algorithms of the diffusion type. Within this framework we develop a new optimal polynomial scheme (OPS) which we show to terminate within a finite number m of steps, where m depends only on the graph and not on the initial load distribution. We show that all existing diffusion load balancing algorithms, including OPS, determine a flow of load on the edges of the graph which is uniquely defined, independent of the method, and minimal in the \(l_2\) norm. This result can be extended to edge-weighted graphs. The \(l_2\) minimality is achieved only if a diffusion algorithm is used as preprocessing and the real movement of load is performed in a second step. Thus, it is advisable to split the balancing process into the two steps of first determining a balancing flow and afterwards moving the load. We introduce the problem of scheduling a flow and present some first results on its complexity and the ...
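A minimal sketch of the basic first-order diffusion iteration analyzed in this framework (the graph, initial load, and step size α are illustrative assumptions; this is not the paper's optimal polynomial scheme):

```python
import numpy as np

# Path graph on 5 nodes; L = D - A is its Laplacian (illustrative)
n = 5
Adj = np.zeros((n, n))
for i in range(n - 1):
    Adj[i, i + 1] = Adj[i + 1, i] = 1.0
L = np.diag(Adj.sum(axis=1)) - Adj
w = np.array([10.0, 0.0, 0.0, 0.0, 0.0])  # all load starts on one node
alpha = 0.25                               # converges since 0 < alpha < 2/lambda_max(L)
for _ in range(400):
    w = w - alpha * (L @ w)                # nearest-neighbor diffusion step
```

Each step exchanges load only between neighbors and conserves the total load; the iterate converges to the uniform distribution (here, 2.0 per node), consistent with the flow-based view described in the abstract.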
On The Choice Of Subspace For Iterative Methods For Linear Discrete Ill-Posed Problems
 Int. J. Appl. Math. Comput. Sci
, 2001
Cited by 20 (14 self)
Many iterative methods for the solution of linear discrete ill-posed problems with a large matrix require the computed approximate solutions to be orthogonal to the null space of the matrix. We show that it may be possible to determine a meaningful approximate solution with less computational work when this requirement is not imposed. Key words: minimal residual method, conjugate gradient method, linear ill-posed problems. 1. Introduction. This paper is concerned with the design of iterative methods for the computation of approximate solutions of linear systems of equations \(Ax = b\), \(A \in \mathbb{R}^{m \times n}\), \(x \in \mathbb{R}^{n}\), \(b \in \mathbb{R}^{m}\) (1.1), with a large matrix A of ill-determined rank. Thus, A has many "tiny" singular values of different orders of magnitude. In particular, A is severely ill-conditioned. Some of the singular values of A may be vanishing. We allow m ≥ n or m < n. The right-hand side vector b is not required to be in the range of A. Linear systems of equations of the fo...
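The "tiny singular values of different orders of magnitude" setting can be built synthetically. The sketch below (all data invented) shows why a naive solve amplifies noise in the right-hand side while a truncated-SVD solution remains meaningful, which is the kind of regularizing behavior iterative methods for ill-posed problems aim for:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 20, 20
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -14, n)                 # rapidly decaying singular values
A = U @ np.diag(s) @ V.T                   # severely ill-conditioned matrix
x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.standard_normal(m)   # noisy right-hand side
x_naive = np.linalg.solve(A, b)            # noise divided by tiny s_i blows up
k = int(np.sum(s > 1e-6))                  # keep singular values above noise level
x_tsvd = V[:, :k] @ ((U[:, :k].T @ b) / s[:k])   # truncated-SVD solution
err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
```

The truncated solution discards the directions where the data carries no reliable information, trading a modest truncation error for immunity to the enormous noise amplification of the naive solve.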
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
Cited by 15 (0 self)
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples. 1 Introduction We will consider iterative methods for the construction of approximate solutions, starting with...
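The 3-term Lanczos process referred to here can be sketched directly (synthetic symmetric test matrix; this illustrates only the basis generation, not the paper's rounding-error analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 30, 10
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                        # symmetric test matrix
V = np.zeros((n, k + 1))                 # Lanczos basis vectors
alpha, beta = np.zeros(k), np.zeros(k + 1)
v = rng.standard_normal(n)
V[:, 0] = v / np.linalg.norm(v)
for j in range(k):                       # three-term recurrence
    w = A @ V[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0.0)
    alpha[j] = V[:, j] @ w
    w -= alpha[j] * V[:, j]
    beta[j + 1] = np.linalg.norm(w)
    V[:, j + 1] = w / beta[j + 1]
# Tridiagonal reduced matrix T_k and the Lanczos relation
# A V_k = V_k T_k + beta_k v_{k+1} e_k^T
T = np.diag(alpha) + np.diag(beta[1:k], 1) + np.diag(beta[1:k], -1)
R = A @ V[:, :k] - V[:, :k] @ T - beta[k] * np.outer(V[:, k], np.eye(k)[:, -1])
```

MINRES and SYMMLQ both build on exactly this basis and tridiagonal matrix; they differ, as the paper analyzes, in how the reduced system with T is solved.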
The Convergence Of Iterative Solution Methods For Symmetric And Indefinite Linear Systems
, 1997
Cited by 13 (4 self)
In this paper we concentrate on convergence estimates for minimum residual iteration applied to a linear system Ax = b (so that A represents the preconditioned coefficient matrix if preconditioning is employed). In particular we generalise the results of [25] to establish rigorous convergence estimates for families of matrices which depend on an asymptotically small parameter α (in applications α is typically a positive power of the mesh size parameter h). These results prove the superiority of the minimum residual approach over the solution of normal equations for all except one very special type of symmetric and indefinite matrix. More background and an easy introduction to this problem can be found in [8], pp. 310–315
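One elementary reason minimum residual methods beat the normal-equations route is that forming A^T A squares the condition number. A two-line check on an illustrative symmetric indefinite diagonal matrix (our example, not the paper's):

```python
import numpy as np

A = np.diag([1.0, -1e-4])      # symmetric indefinite, kappa(A) = 1e4
k1 = np.linalg.cond(A)         # condition number of A
k2 = np.linalg.cond(A.T @ A)   # condition number of the normal-equations matrix
```

Here k2 = k1**2 = 1e8, so any solver working with A^T A faces a far worse-conditioned problem than one, such as MINRES, that works with A directly.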
Approximate factorization constraint preconditioners for saddle-point matrices
 SIAM J. Sci. Comput
Cited by 13 (2 self)
Abstract. We consider the application of the conjugate gradient method to the solution of large, symmetric indefinite linear systems. Special emphasis is put on the use of constraint preconditioners and a new factorization that can reduce the number of flops required by the preconditioning step. Results concerning the eigenvalues of the preconditioned matrix and its minimum polynomial are given. Numerical experiments validate these conclusions.
Wavelets based on orthogonal polynomials
 MATH. COMP
, 1997
Cited by 12 (4 self)
We present a unified approach for the construction of polynomial wavelets. Our main tools are orthogonal polynomials. With the help of their properties we devise schemes for the construction of time-localized polynomial bases on bounded and unbounded subsets of the real line. Several examples illustrate the new approach.
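The orthogonal-polynomial machinery such constructions rely on rests on three-term recurrences. A small sketch for the Legendre case, checked against NumPy's reference evaluation (the choice of family here is ours, for illustration only):

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre three-term recurrence:
# (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)
x = np.linspace(-1.0, 1.0, 101)
P = [np.ones_like(x), x.copy()]          # P_0 and P_1
for k in range(1, 6):
    P.append(((2 * k + 1) * x * P[k] - k * P[k - 1]) / (k + 1))
ref = legendre.legval(x, [0, 0, 0, 0, 0, 0, 1])   # NumPy's evaluation of P_6
```

Every family of orthogonal polynomials satisfies a recurrence of this shape (with its own coefficients), which is what makes the stable, localized basis constructions described in the abstract computationally practical.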