Results 1–10 of 17
An optimal adaptive wavelet method without coarsening of the iterands
Math. Comp., 2005
Cited by 18 (8 self)
Abstract. In this paper, an adaptive wavelet method for solving linear operator equations is constructed that modifies the method of Cohen, Dahmen and DeVore [Math. Comp., 70 (2001), pp. 27–75] in that there is no recurrent coarsening of the iterands. Despite this, the method is shown to have optimal computational complexity. Numerical results for a simple model problem indicate that the new method is more efficient than an existing alternative adaptive wavelet method.
1. Preliminaries. For some boundedly invertible linear operator A: H → H′, where H is a separable Hilbert space with dual H′, and some f ∈ H′, we consider the problem of finding u ∈ H such that Au = f. As typical examples, we think of linear differential or integral equations of some order 2t in variational form. Although systems of such equations also fit into the framework, we usually think of scalar equations, so typically H is a Sobolev space H^t, possibly incorporating essential boundary conditions, on an n-dimensional underlying domain or manifold. We assume that a Riesz basis Ψ = {ψ_λ : λ ∈ ∇} for H^t is available, where ∇ is some
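The driving principle behind such methods, an inexact Richardson iteration whose operator applications are only as accurate as the current residual warrants, can be illustrated without any wavelet machinery. The sketch below is a minimal stand-in, not the paper's adaptive scheme: a dense diagonal matrix plays the role of the (well-conditioned, preconditioned) operator, and `apply_inexact` models an adaptive approximate application of A; the names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Stand-in for a well-conditioned (preconditioned) operator; the real
# method works with the wavelet representation of A, not a dense matrix.
A = np.diag(np.linspace(1.0, 2.0, n))
u_true = rng.standard_normal(n)
f = A @ u_true

def apply_inexact(A, v, tol, rng):
    """Model an adaptive approximate application of A: the exact
    product plus an error of norm tol."""
    e = rng.standard_normal(len(v))
    e *= tol / np.linalg.norm(e)
    return A @ v + e

omega = 0.6   # damping parameter; valid since the spectrum lies in [1, 2]
theta = 0.1   # per-step relative accuracy of the operator application
u = np.zeros(n)
res = [np.linalg.norm(f)]
for k in range(200):
    tol_k = theta * res[-1]                  # accuracy tied to the residual
    r = f - apply_inexact(A, u, tol_k, rng)  # inexact residual
    u = u + omega * r
    res.append(np.linalg.norm(f - A @ u))    # exact residual, for monitoring
    if res[-1] <= 1e-10 * res[0]:
        break
```

Since the spectrum lies in [1, 2], the exact iteration contracts the residual by at least a factor 0.4 per step with ω = 0.6; the extra error of relative size θ = 0.1 degrades this only to about 0.52, so linear convergence survives the inexactness.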
Relaxation strategies for nested Krylov methods
Journal of Computational and Applied Mathematics, 2003
Cited by 11 (0 self)
There are classes of linear problems for which the matrix-vector product is a time-consuming operation because an expensive approximation method is required to compute it to a given accuracy. In recent years different authors have investigated the use of so-called relaxation strategies for various Krylov subspace methods. These relaxation strategies aim to minimize the amount of work spent in the computation of the matrix-vector product without compromising the accuracy of the method or the convergence speed too much. To achieve this goal, the accuracy of the matrix-vector product is decreased as the iterative process comes closer to the solution. In this paper we show that a further significant reduction in computing time can be obtained by combining a relaxation strategy with the nesting of inexact Krylov methods. Flexible Krylov subspace methods allow variable preconditioning and can therefore be used in the outermost loop of our overall method. For several flexible Krylov methods, we analyze strategies for controlling the accuracy of both the inexact matrix-vector products and the inner iterations. The results of our analysis are illustrated with an example that models global ocean circulation.
Fault-tolerant iterative methods via selective reliability
2011
Cited by 7 (0 self)
Current iterative methods for solving linear equations assume reliability of data (no “bit flips”) and arithmetic (correct up to rounding error). If faults occur, the solver usually either aborts or computes the wrong answer without indication. System reliability guarantees consume energy or reduce performance. As processor counts continue to grow, these costs will become unbearable. Instead, we show that if the system lets applications apply reliability selectively, we can develop iterations that compute the right answer despite faults. These “fault-tolerant” methods either converge eventually, at a rate that degrades gracefully with increased fault rate, or return a clear failure indication in the rare case that they cannot converge. If faults are infrequent, these algorithms spend most of their time in unreliable mode. This can save energy, improve performance, and avoid restarting from checkpoints. We illustrate convergence for a sample algorithm, Fault-Tolerant GMRES, on representative test problems and fault rates.
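The selective-reliability pattern can be sketched with a toy iterative refinement loop (this is not the paper's FT-GMRES; the fault model, names, and parameters below are illustrative assumptions): the bulk of the work, the inner solve, runs in "unreliable mode" where we inject silent corruptions at random, while the small outer loop computes residuals reliably and simply rejects any update that fails to reduce them.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

def unreliable_solve(A, r, rng, fault_rate=0.3):
    """Inner solve done in 'unreliable mode': with probability fault_rate
    the result is silently corrupted (a stand-in for bit flips)."""
    d = np.linalg.solve(A, r)                  # placeholder for a cheap inner solver
    if rng.random() < fault_rate:
        d = d + 1e3 * rng.standard_normal(n)   # silent fault
    return d

# The outer loop runs reliably: it evaluates true residuals and rejects
# any update that does not reduce them, so faults cost retries, not
# silently wrong answers.
x = np.zeros(n)
r_norm = np.linalg.norm(b)
outer_its = 0
while r_norm > 1e-10 * np.linalg.norm(b) and outer_its < 100:
    outer_its += 1
    r = b - A @ x                                # reliable residual
    d = unreliable_solve(A, r, rng)
    new_norm = np.linalg.norm(b - A @ (x + d))   # reliable acceptance check
    if new_norm < r_norm:                        # keep only improving updates
        x = x + d
        r_norm = new_norm
```

With fault rate p, the expected number of outer iterations grows by roughly a factor 1/(1 − p), matching the "degrades gracefully" behaviour described above.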
Convergence in backward error of relaxed GMRES
 SIAM J
Cited by 6 (3 self)
Abstract. This work is the follow-up of the experimental study presented in [3]. It is based on and extends some theoretical results in [15, 18]. In a backward error framework we study the convergence of GMRES when the matrix-vector products are performed inaccurately. This inaccuracy is modeled by a perturbation of the original matrix. We prove the convergence of GMRES when the perturbation size is proportional to the inverse of the computed residual norm; this implies that the accuracy can be relaxed as the method proceeds, which gives rise to the terminology relaxed GMRES. As for exact GMRES, we show under proper assumptions that only happy breakdowns can occur. Furthermore, the convergence can be detected using a by-product of the algorithm. We explore the links between relaxed right-preconditioned GMRES and flexible GMRES; in particular, this enables us to derive a proof of convergence of FGMRES. Finally, we report results of numerical experiments to illustrate the behaviour of relaxed GMRES monitored by the proposed relaxation strategies.
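A minimal numerical sketch of this relaxation rule (an assumed toy setup, not the paper's analysis): in the GMRES/Arnoldi loop below, each matrix-vector product is polluted by a perturbation whose allowed norm grows proportionally to the inverse of the current computed residual norm, capped for safety, yet the true residual still reaches roughly the target accuracy η.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.diag(np.linspace(1.0, 2.0, n))   # well-conditioned test matrix
b = rng.standard_normal(n)

def perturbed(v, tol, rng):
    """The exact vector plus a perturbation of norm tol, modelling an
    inexact matrix-vector product."""
    e = rng.standard_normal(len(v))
    return v + tol / np.linalg.norm(e) * e

eta = 1e-8                       # target relative residual
beta = np.linalg.norm(b)
m = 45
Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
Q[:, 0] = b / beta
rel = 1.0                        # computed relative (quasi-)residual
for j in range(m):
    # Relaxation rule: allowed perturbation ~ eta / ||r_j||, capped.
    tol_j = min(eta / rel, 1e-2) * beta
    w = perturbed(A @ Q[:, j], tol_j, rng)
    for i in range(j + 1):                    # modified Gram-Schmidt
        H[i, j] = Q[:, i] @ w
        w -= H[i, j] * Q[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    Q[:, j + 1] = w / H[j + 1, j]
    # Least-squares problem min || beta*e1 - H y || of size (j+2) x (j+1).
    e1 = np.zeros(j + 2); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
    rel = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y) / beta
    if rel <= eta:
        break
x = Q[:, :j + 1] @ y
```

The perturbation committed at step j enters the final iterate multiplied by the coefficient y_j, which decays together with the residual; this is why the allowed error may grow like 1/‖r_j‖ without destroying the attainable backward error.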
A FETI-like domain decomposition method for coupling finite elements and boundary elements in large-size problems of acoustic scattering
Comput. & Structures, 2005
Cited by 5 (4 self)
Numerical simulations of acoustic scattering in the frequency domain based on hybrid methods coupling finite elements and boundary elements are best suited for dealing with problems involving wave propagation in inhomogeneous media. Furthermore, it is necessary to resort to high-performance computing to solve such large-size problems effectively. However, the direct coupling yields a linear system whose matrix is partly dense and partly sparse, and thus not well adapted to high-performance computing. To avoid this difficulty, we present a new iterative method constructed from a nonoverlapping domain decomposition technique.
On the occurrence of superlinear convergence of exact and inexact Krylov subspace methods
SIAM Review, 2003
Cited by 5 (0 self)
Abstract. Krylov subspace methods often exhibit superlinear convergence. We present a general analytic model which describes this superlinear convergence when it occurs. We take an invariant subspace approach, so that our results also apply to inexact methods and to non-diagonalizable matrices. Thus, we provide a unified treatment of the superlinear convergence of GMRES, conjugate gradients, block versions of these, and inexact subspace methods. Numerical experiments illustrate the bounds obtained.
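The effect is easy to reproduce numerically. The toy below (illustrative assumptions: a diagonal SPD matrix with a tight eigenvalue cluster plus two outliers, not the paper's model) runs plain conjugate gradients. Worst-case linear theory with κ = 1000 predicts a reduction factor of about (√κ − 1)/(√κ + 1) ≈ 0.94 per step, yet once the Krylov space captures the two outlier directions the iteration converges at the much faster rate dictated by the cluster (effective κ ≈ 2) — the superlinear effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
# SPD spectrum: a tight cluster in [1, 2] plus two well-separated outliers.
eigs = np.concatenate([np.linspace(1.0, 2.0, n - 2), [50.0, 1000.0]])
A = np.diag(eigs)
b = rng.standard_normal(n)

# Plain conjugate gradients, recording residual norms so the convergence
# history (slow start, then sharply improved rate) can be inspected.
x = np.zeros(n)
r = b.copy()
p = r.copy()
rs = r @ r
hist = [np.sqrt(rs)]
for k in range(40):
    Ap = A @ p
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    hist.append(np.sqrt(rs_new))
    if hist[-1] <= 1e-12 * hist[0]:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new
```

Convergence to a relative residual of 1e-12 takes far fewer iterations than the κ = 1000 worst-case bound suggests; plotting `hist` on a log scale shows the characteristic kink where the rate improves.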
Relaxed Krylov subspace approximation
 in L (R), in Special Functions and Differential Equations, Proceedings
Cited by 3 (1 self)
Recent computational and theoretical studies have shown that the matrix-vector product occurring at each step of a Krylov subspace method can be relaxed as the iterations proceed, i.e., it can be computed in a less exact manner, without degradation of the overall performance. In the present paper a general operator treatment of this phenomenon is provided and a new result further explaining its behavior is presented.
A note on relaxed and flexible GMRES
2004
Cited by 1 (1 self)
We consider the solution of a linear system of equations using the GMRES iterative method. In [3], a strategy to relax the accuracy of the matrix-vector product is proposed for general systems and illustrated on a large set of numerical experiments. That work is based on heuristic considerations and proposes a strategy that often enables convergence of the GMRES iterates x_k within a relative normwise backward error less than a target accuracy. A significant step toward a theoretical explanation of the observed behaviour of relaxed GMRES is made in [16, 17], where important justifications are given for the fact that a relaxation of the matrix-vector product proportional to the inverse of the norm of the residual may enable the convergence of relaxed GMRES. In this paper we extend these works: using the tools presented in [16, 17], we establish a computable relaxation strategy that attains the aims of [3]. We investigate the compliance of our strategy with the scaling invariance properties of GMRES. We extend the study to the inexact preconditioning situation and explore relationships with flexible GMRES. We report results of intensive numerical experiments to illustrate the behaviour of relaxed GMRES monitored by the proposed relaxation strategy. Finally, in the case of the Householder relaxed GMRES, we establish a backward stability result by extending the results of [5].
Fast inexact implicitly restarted Arnoldi method for generalized eigenvalue problems with spectral transformation
2010
Cited by 1 (1 self)
Abstract. We study an inexact implicitly restarted Arnoldi (IRA) method for computing a few eigenpairs of generalized non-Hermitian eigenvalue problems with spectral transformation, where in each Arnoldi step (outer iteration) the matrix-vector product involving the transformed operator is performed by iterative solution (inner iteration) of the corresponding linear system of equations. We provide new perspectives on and analysis of two major strategies that help reduce the inner iteration cost: a special type of preconditioner with “tuning”, and gradually relaxed tolerances for the solution of the linear systems. We study a new tuning strategy constructed from vectors in both previous and current IRA cycles, and we show how tuning is used in a new two-phase algorithm to greatly reduce inner iteration counts. We give an upper bound on the allowable tolerances of the linear systems and propose an alternative estimate of the tolerances. In addition, the inner iteration cost can be further reduced through the use of subspace recycling with iterative linear solvers. The effectiveness of these strategies is demonstrated by numerical experiments.