Results 11–20 of 23
Numerical Stability Of GMRES
, 1995
Abstract

Cited by 5 (1 self)
The Generalized Minimal Residual Method (GMRES) is one of the significant methods for solving linear algebraic systems with nonsymmetric matrices. It minimizes the norm of the residual on the linear variety determined by the initial residual and the nth Krylov residual subspace and is therefore optimal, with respect to the size of the residual, in the class of Krylov subspace methods. One possible way of computing the GMRES approximations is based on constructing an orthonormal basis of the Krylov subspaces (the Arnoldi basis) and then solving the transformed least squares problem. This paper studies the numerical stability of such formulations of GMRES. Our approach is based on the Arnoldi recurrence for the quantities actually computed, i.e. computed in finite precision arithmetic. We consider the Householder (HHA), iterated modified Gram-Schmidt (IMGSA), and iterated classical Gram-Schmidt (ICGSA) implementations. Under the obvious assumption on the numerical nonsingularity of the system m...
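The Arnoldi-plus-least-squares formulation of GMRES described in this abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's HHA/IMGSA/ICGSA implementations: it uses plain modified Gram-Schmidt, and the test matrix, breakdown tolerance, and dimensions are arbitrary choices.

```python
import numpy as np

def gmres_arnoldi(A, b, m):
    """Minimal GMRES sketch: build an orthonormal Krylov basis with
    the Arnoldi process, then solve the small transformed least
    squares problem min ||beta*e1 - H y|| for the coefficients y."""
    n = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt step
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # "happy breakdown"
            H, Q, m = H[:j + 2, :j + 1], Q[:, :j + 2], j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(H.shape[0])
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return Q[:, :m] @ y                     # approximation from K_m(A, b)

rng = np.random.default_rng(0)
A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))   # nonsymmetric test matrix
b = rng.standard_normal(50)
x = gmres_arnoldi(A, b, 50)
print(np.linalg.norm(b - A @ x))            # small residual at full dimension
```

The paper's point is that the three orthogonalization variants behave differently precisely in how well the computed Q columns stay orthonormal in finite precision.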
Numerical Stability Of The GMRES Method
Abstract

Cited by 2 (1 self)
The Generalized Minimal Residual method (GMRES) is known as an efficient iterative method for solving large nonsymmetric systems of linear equations. In this thesis, we study the numerical stability of the GMRES method. For the construction of the Arnoldi basis, we consider the Householder orthogonalization and the frequently used modified Gram-Schmidt process. While for the more expensive Householder implementation the orthogonality of the computed basis is preserved close to the machine precision level, for the modified Gram-Schmidt Arnoldi process the computed vectors gradually lose their orthogonality. Using the bound on the loss of orthogonality, it is proved that, under certain assumptions on the numerical nonsingularity of the system matrix, the GMRES implementation based on the Householder orthogonalization is backward stable. It produces an approximate solution whose residual is of the same order as the one obtained from a direct solution of the system Ax = b by the Ho...
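The contrast this thesis draws between Householder and modified Gram-Schmidt orthogonalization can be observed numerically. A minimal sketch, assuming NumPy (`np.linalg.qr` uses LAPACK's Householder QR; the test matrix and its condition number are arbitrary choices):

```python
import numpy as np

def mgs(A):
    """Modified Gram-Schmidt QR; the orthogonality of Q degrades
    roughly in proportion to the condition number of A."""
    A = A.astype(float)
    n, m = A.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for j in range(m):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ v
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((60, 60)))
V, _ = np.linalg.qr(rng.standard_normal((60, 60)))
s = np.logspace(0, -10, 60)                    # condition number ~1e10
A = U @ np.diag(s) @ V.T

Q_mgs, _ = mgs(A)
Q_hh, _ = np.linalg.qr(A)                      # LAPACK Householder QR
I = np.eye(60)
loss_mgs = np.linalg.norm(I - Q_mgs.T @ Q_mgs)
loss_hh = np.linalg.norm(I - Q_hh.T @ Q_hh)
print(loss_mgs)   # large: orthogonality gradually lost
print(loss_hh)    # near machine precision
```

The measured loss of orthogonality for MGS grows roughly like machine epsilon times the condition number, which is exactly the quantity the thesis's bounds control.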
Towards a fixed point QP solver for predictive control
 In Proc. IEEE Conf. on Decision and Control (submitted)
, 2012
Abstract

Cited by 2 (2 self)
There is a need for high-speed, low-cost and low-energy solutions for convex quadratic programming to enable model predictive control (MPC) to be implemented in a wider set of applications than is currently possible. For most quadratic programming (QP) solvers the computational bottleneck is the solution of systems of linear equations, which we propose to solve using a fixed-point implementation of an iterative linear solver to allow for fast and efficient computation in parallel hardware. However, fixed-point arithmetic presents additional challenges, such as having to bound peak values of variables and constrain their dynamic ranges; for these types of algorithms, such analysis cannot be automated by current tools. We employ a preconditioner in a novel manner that allows us to establish tight analytical bounds on all the variables of the Lanczos process, the heart of modern iterative linear solvers. The proposed approach is evaluated through the implementation of a mixed-precision interior-point controller for a Boeing 747 aircraft. The numerical results show that there does not have to be a loss of control quality when moving from floating-point to fixed-point arithmetic.
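For readers unfamiliar with fixed-point arithmetic, the dynamic-range constraint mentioned in the abstract can be illustrated with a toy Q2.14 format. This sketch is purely illustrative and unrelated to the authors' hardware implementation; the word length and fraction bits are arbitrary choices:

```python
FRAC_BITS = 14                       # Q2.14: representable values in [-2, 2)
WORD_BITS = 16
SCALE = 1 << FRAC_BITS
Q_MIN = -(1 << (WORD_BITS - 1))
Q_MAX = (1 << (WORD_BITS - 1)) - 1

def to_fixed(x):
    """Quantize a real number to Q2.14; raises if the value exceeds
    the representable range (the 'peak value' problem above)."""
    q = round(x * SCALE)
    if not Q_MIN <= q <= Q_MAX:
        raise OverflowError(f"{x} is outside the Q2.14 range")
    return q

def fixed_mul(a, b):
    """Multiply two fixed-point values, rescaling the double-width product."""
    return (a * b) >> FRAC_BITS

x = to_fixed(1.5)
y = to_fixed(0.25)
print(fixed_mul(x, y) / SCALE)       # 0.375
```

Because `to_fixed` simply fails on out-of-range values, any algorithm run in such a format needs a priori bounds on all intermediate quantities, which is what the paper's preconditioner provides for the Lanczos process.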
The Main Effects of Rounding Errors in Krylov Solvers for Symmetric Linear Systems
, 1997
Abstract

Cited by 1 (0 self)
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of linear systems by solving the reduced system in one way or another. This leads to well-known methods: MINRES (GMRES), CG, CR, and SYMMLQ. We will discuss in what way and to what extent the various approaches are sensitive to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods (except CR), and we will not consider the errors in the Lanczos process itself. These errors may lead to large perturbations with respect to the exact process, but convergence still takes place. Our attention is focused on what happens in the solution phase. We will show that the way of solution may, under certain circumstances, lead to large additional errors that are not corrected by continuing the iteration process. Our findings are...
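The 3-term Lanczos recurrence underlying all of these methods can be sketched as follows. This is a minimal NumPy illustration of the exact-arithmetic relation V^T A V = T; the paper's analysis concerns how the solution phase built on top of this recurrence behaves in finite precision. The test matrix and number of steps are arbitrary choices.

```python
import numpy as np

def lanczos(A, v0, m):
    """Symmetric Lanczos: a 3-term recurrence producing an orthonormal
    basis V of the Krylov subspace and the entries (alpha, beta) of a
    tridiagonal matrix T with V^T A V = T in exact arithmetic."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    b_prev = 0.0
    for j in range(m):
        V[:, j] = v
        w = A @ v - b_prev * v_prev      # 3-term recurrence
        alpha[j] = v @ w
        w -= alpha[j] * v
        beta[j] = np.linalg.norm(w)
        v_prev, v = v, w / beta[j]
        b_prev = beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return V, T

rng = np.random.default_rng(2)
B = rng.standard_normal((80, 80))
A = B + B.T                              # symmetric test matrix
V, T = lanczos(A, rng.standard_normal(80), 10)
print(np.linalg.norm(V.T @ A @ V - T))   # small for a few steps
```

MINRES, CG, CR, and SYMMLQ all solve the reduced tridiagonal system T y = beta e1 in different ways, and it is in that step that the rounding-error sensitivities discussed here differ.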
Maintaining convergence properties of BiCGstab methods in finite precision arithmetic
, 1994
Abstract

Cited by 1 (1 self)
It is well-known that BiCG can be adapted so that hybrid methods with computational complexity similar to that of BiCG can be constructed, in which it is attempted to further improve the convergence behavior. In this paper we will study the class of BiCGstab methods. In many applications, the speed of convergence of these methods appears to be determined mainly by the incorporated BiCG process, and the problem is that the BiCG iteration coefficients have to be determined from the BiCGstab process. We will focus our attention on the accuracy of these BiCG coefficients and on how rounding errors may affect the speed of convergence of the BiCGstab methods. We will propose a strategy for a more stable determination of the BiCG iteration coefficients, and we will show by experiments that this may indeed lead to faster convergence.
Efficient Simulation of Coupled CircuitField Problems: Generalized Falk Method
Abstract

Cited by 1 (1 self)
In this paper, we present an efficient method to solve the coupled circuit-field problem by first transforming the partial differential equations (PDEs) governing the field problem into a simple one-dimensional (1D) equivalent circuit system, which is then combined with the circuit part of the overall coupled problem. This transformation relies on the generalized Falk algorithm, which transforms the coordinates in any complex system of linear first-order ordinary differential equations (ODEs) or second-order undamped ODEs, resulting from the discretization of field PDEs, into a guaranteed stable and passive 1D equivalent circuit system. The generalized Falk algorithm, having a faster transformation time than the traditional Lanczos-type methods, transforms a general finite-element system represented by possibly full matrices (capacitance and conductance matrices in heat problems, or mass and stiffness matrices in structural dynamics and electromagnetics) into an identity capacitance (mass) matrix and a tridiagonal conductance (stiffness) matrix. We also discuss issues related to the stability and the loss of orthogonality of the proposed algorithm. In circuit simulation, the generalized Falk algorithm does not produce unstable positive poles, and is thus more stable than the widely used Lanczos-type methods. The stability and passivity of the resulting 1D equivalent circuit network are guaranteed since all transformed matrices remain positive definite. The resulting 1D equivalent circuit system contains only resistors, capacitors, inductors, and current sources. The generalized Falk algorithm offers an extremely simple and convenient way to incorporate field problems into circuit simulators to efficiently solve coupled circuit-field problems. Numerical examples show a significant reduction of simulation time compared to the solution without the proposed transformation.
Krylov Subspace Methods for Large Linear Systems of Equations
, 1993
Abstract

Cited by 1 (0 self)
When solving PDEs by means of numerical methods one often has to deal with large systems of linear equations, specifically if the PDE is time-independent or if the time-integrator is implicit. For real-life problems, these large systems can often only be solved by means of some iterative method. Even if the systems are preconditioned, the basic iterative method often converges slowly or even diverges. We discuss and classify algebraic techniques to accelerate the basic iterative method. Our discussion includes methods like CG, GCR, ORTHODIR, GMRES, CGNR, BiCG and their modifications like GMRESR, CGS, BiCGSTAB. We place them in a common framework and discuss their convergence behavior and their advantages and drawbacks. Our aim is to compute acceptable approximations to the solution x of the equation Ax = b, where A is a given nonsingular, sparse n × n matrix, n is large, and b is a given n-vector. We will assume A and b to be real, but our methods are easily ...
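The acceleration this survey classifies can be illustrated by comparing a basic iteration with a Krylov method on the same symmetric positive definite system. A rough sketch assuming NumPy; the test matrix, damping parameter omega, and tolerance are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((100, 100))
A = M @ M.T / 100 + np.eye(100)      # SPD; eigenvalues roughly in [1, 5]
b = rng.standard_normal(100)
nb = np.linalg.norm(b)

def richardson(A, b, omega, tol=1e-8, maxiter=10000):
    """Basic (unaccelerated) iteration x_{k+1} = x_k + omega (b - A x_k)."""
    x = np.zeros_like(b)
    for k in range(maxiter):
        r = b - A @ x
        if np.linalg.norm(r) < tol * nb:
            return k
        x += omega * r
    return maxiter

def cg_iters(A, b, tol=1e-8, maxiter=10000):
    """Krylov (CG) acceleration of the same basic iteration."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for k in range(maxiter):
        if np.sqrt(rs) < tol * nb:
            return k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return maxiter

rich = richardson(A, b, omega=2 / 6)  # omega ~ 2/(lmin+lmax)
krylov = cg_iters(A, b)
print(rich, krylov)                   # CG needs far fewer iterations
```

Both methods apply A once per step, so the iteration counts compare like for like; the Krylov method wins because it chooses optimal coefficients over the whole Krylov subspace rather than a fixed damping factor.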
Reliable updated residuals in hybrid BiCG methods
, 1994
Abstract
Many iterative methods for solving linear equations Ax = b aim for accurate approximations to x, and they do so by updating residuals iteratively. In finite precision arithmetic, these computed residuals may be inaccurate, that is, they may differ significantly from the (true) residuals that correspond to the computed approximations. In this paper we will propose variants on Neumaier's strategy, originally proposed for CGS, and explain its success. In particular, we will propose a more restrictive strategy for accumulating groups of updates for updating the residual and the approximation, and we will show that this may improve the accuracy significantly while maintaining the speed of convergence. This approach avoids restarts and allows for more reliable stopping criteria. We will discuss updating conditions and strategies that are efficient, lead to accurate residuals, and are easy to implement. For CGS and BiCG these strategies are particularly attractive, but they may also be used t...
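The gap between the recursively updated residual and the true residual b - Ax that motivates this paper can be reproduced with plain CG on an ill-conditioned SPD matrix. A sketch assuming NumPy; CG stands in for the hybrid BiCG methods the paper treats, and the spectrum and tolerance are arbitrary choices.

```python
import numpy as np

def cg_with_gap(A, b, tol=1e-13, maxiter=5000):
    """CG that returns both the recursively updated residual norm and
    the true residual norm ||b - A x|| for the final iterate."""
    x = np.zeros_like(b)
    r = b.copy()                     # updated residual (recurrence)
    p = r.copy()
    rs = r @ r
    nb = np.linalg.norm(b)
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap              # cheap update; never recomputed from x
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * nb:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return np.sqrt(rs_new), np.linalg.norm(b - A @ x)

rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.standard_normal((100, 100)))
A = U @ np.diag(np.logspace(0, -6, 100)) @ U.T   # SPD, condition ~1e6
b = rng.standard_normal(100)
upd_res, true_res = cg_with_gap(A, b)
print(upd_res, true_res)    # updated residual falls well below the true one
```

The updated residual keeps shrinking with the recurrence while the true residual stagnates at a rounding-error-dependent level; the residual replacement strategies discussed in the paper are designed to close exactly this gap.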
Tracking a Few Extreme Singular Values and Vectors in Signal Processing
Abstract
In various applications, it is necessary to keep track of a low-rank approximation of a covariance matrix, R(t), slowly varying with time. It is convenient to track the left singular vectors associated with the largest singular values of the triangular factor, L(t), of its Cholesky factorization. These algorithms are referred to as "square-root." The drawback of the Eigenvalue Decomposition (EVD) or the Singular Value Decomposition (SVD) is usually the volume of the computations. Various numerical methods carrying out this task are surveyed in this paper, and we show why this admittedly heavy computational burden is questionable in numerous situations and should be revised. Indeed, the complexity per eigenpair is generally a quadratic function of the problem size, but there exist faster algorithms whose complexity is linear. Finally, in order to make a choice among the large and fuzzy set of available techniques, comparisons are made based on computer simulations in a relevant signal processing context.