Results 1–10 of 53
Reduced-Order Modeling Techniques Based on Krylov Subspaces and Their Use in Circuit Simulation
Applied and Computational Control, Signals, and Circuits, 1998
"... In recent years, reducedorder modeling techniques based on Krylovsubspace iterations, especially the Lanczos algorithm and the Arnoldi process, have become popular tools to tackle the largescale timeinvariant linear dynamical systems that arise in the simulation of electronic circuits. This pape ..."
Abstract

Cited by 53 (10 self)
 Add to MetaCart
In recent years, reduced-order modeling techniques based on Krylov-subspace iterations, especially the Lanczos algorithm and the Arnoldi process, have become popular tools to tackle the large-scale time-invariant linear dynamical systems that arise in the simulation of electronic circuits. This paper reviews the main ideas of reduced-order modeling techniques based on Krylov subspaces and describes the use of reduced-order modeling in circuit simulation.

1 Introduction

Krylov-subspace methods, most notably the Lanczos algorithm [81, 82] and the Arnoldi process [5], have long been recognized as powerful tools for large-scale matrix computations. Matrices that occur in large-scale computations usually have some special structure that allows matrix-vector products with such a matrix (or its transpose) to be computed much more efficiently than for a dense, unstructured matrix. The most common structure is sparsity, i.e., only a few of the matrix entries are nonzero. Computing a matrix-vector pr...
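A minimal illustration (our sketch, not code from the paper) of why sparsity makes matrix-vector products cheap: with compressed sparse row (CSR) storage, y = Ax touches only the nonzero entries, costing O(nnz) operations instead of the O(n²) of a dense product.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A x for A stored in CSR form: data holds the nonzeros row by row,
    indices their column positions, and indptr the start of each row."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):                             # loop over rows
        for k in range(indptr[i], indptr[i + 1]):  # nonzeros of row i only
            y[i] += data[k] * x[indices[k]]
    return y

# 3x3 example: A = [[2, 0, 1], [0, 3, 0], [4, 0, 5]] has 5 nonzeros.
data    = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(csr_matvec(data, indices, indptr, x))  # [3. 3. 9.]
```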
Krylov Subspace Techniques for Reduced-Order Modeling of Nonlinear Dynamical Systems
Appl. Numer. Math., 2002
"... Means of applying Krylov subspace techniques for adaptively extracting accurate reducedorder models of largescale nonlinear dynamical systems is a relatively open problem. There has been much current interest in developing such techniques. We focus on a bilinearization method, which extends Kry ..."
Abstract

Cited by 50 (3 self)
 Add to MetaCart
How to apply Krylov subspace techniques to adaptively extract accurate reduced-order models of large-scale nonlinear dynamical systems is a relatively open problem. There has been much recent interest in developing such techniques. We focus on a bilinearization method, which extends Krylov subspace techniques for linear systems. In this approach, the nonlinear system is first approximated by a bilinear system through Carleman bilinearization. Then a reduced-order bilinear system is constructed in such a way that it matches a certain number of multimoments corresponding to the first few kernels of the Volterra-Wiener representation of the bilinear system. It is shown that the two-sided Krylov subspace technique matches significantly more multimoments than the corresponding one-sided technique.
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
"... Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are metho ..."
Abstract

Cited by 48 (12 self)
 Add to MetaCart
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
Solution of Shifted Linear Systems by Quasi-Minimal Residual Iterations
in Numerical Linear Algebra, 1993
"... Highorder implicit methods for solving timedependent partial differential equations and frequency response computations in control theory give rise to shifted systems of linear equations. Such systems have identical righthand sides, and their coefficient matrices differ from each other only by sc ..."
Abstract

Cited by 41 (4 self)
 Add to MetaCart
High-order implicit methods for solving time-dependent partial differential equations and frequency response computations in control theory give rise to shifted systems of linear equations. Such systems have identical right-hand sides, and their coefficient matrices differ from each other only by scalar multiples of the identity matrix. This paper explores the use of two quasi-minimal residual iterations, the QMR and the TFQMR algorithm, for the solution of such shifted linear systems. It is shown that both algorithms can exploit the special structure and that, for any family of shifted linear systems, the number of matrix-vector products and the number of inner products is the same as for a single linear system. Convergence results for the QMR and TFQMR algorithms are presented. This research was performed at the Research Institute for Advanced Computer Science (RIACS), NASA Ames Research Center, Moffett Field, California 94035, and it was supported by Cooperative Agreement NCC 238...
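The special structure mentioned in this abstract rests on the shift invariance of Krylov subspaces, K_m(A + σI, b) = K_m(A, b), which is why one set of matrix-vector products can serve the whole shifted family. A small numerical check of that identity (our sketch; the matrix, shift, and sizes are arbitrary):

```python
import numpy as np

# Shift invariance: the Krylov subspace generated by A + sigma*I from b is the
# same as the one generated by A, since (A + sigma*I)^j b expands into a linear
# combination of b, A b, ..., A^j b.

rng = np.random.default_rng(0)
n, m, sigma = 50, 5, 2.7
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

def krylov_matrix(M, v, m):
    """Columns v, M v, ..., M^{m-1} v."""
    cols = [v]
    for _ in range(m - 1):
        cols.append(M @ cols[-1])
    return np.column_stack(cols)

K_A     = krylov_matrix(A, b, m)
K_shift = krylov_matrix(A + sigma * np.eye(n), b, m)

Q, _ = np.linalg.qr(K_A)                  # orthonormal basis of K_m(A, b)
resid = K_shift - Q @ (Q.T @ K_shift)     # component outside that subspace
print(np.linalg.norm(resid) / np.linalg.norm(K_shift) < 1e-8)  # True
```

The residual is at rounding level, confirming that the shifted Krylov matrix lies in the span of the unshifted one.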
A QMR-based interior-point algorithm for solving linear programs
Math. Programming, 1994
"... A new approach for the implementation of interiorpoint methods for solving linear programs is proposed. Its main feature is the iterative solution of the symmetric, but highly indefinite 2\Theta2block systems of linear equations that arise within the interiorpoint algorithm. These linear systems ..."
Abstract

Cited by 37 (4 self)
 Add to MetaCart
A new approach for the implementation of interior-point methods for solving linear programs is proposed. Its main feature is the iterative solution of the symmetric, but highly indefinite, 2×2-block systems of linear equations that arise within the interior-point algorithm. These linear systems are solved by a symmetric variant of the quasi-minimal residual (QMR) algorithm, which is an iterative solver for general linear systems. The symmetric QMR algorithm can be combined with indefinite preconditioners, which is crucial for the efficient solution of highly indefinite linear systems, yet it still fully exploits the symmetry of the linear systems to be solved. To support the use of the symmetric QMR iteration, a novel stable reduction of the original unsymmetric 3×3-block systems to symmetric 2×2-block systems is introduced, and a measure for a low relative accuracy for the solution of these linear systems within the interior-point algorithm is proposed. Some indefini...
Software for simplified Lanczos and QMR algorithms
Appl. Numer. Math., 1995
"... The nonsymmetric Lanczos process simplifies when applied to Jsymmetric and JHermitian matrices, and work and storage requirements are roughly halved compared to the general case. In this paper, we describe FORTRAN77 implementations of simplified versions of the lookahead Lanczos algorithm and o ..."
Abstract

Cited by 35 (6 self)
 Add to MetaCart
The nonsymmetric Lanczos process simplifies when applied to J-symmetric and J-Hermitian matrices, and work and storage requirements are roughly halved compared to the general case. In this paper, we describe FORTRAN 77 implementations of simplified versions of the look-ahead Lanczos algorithm and of the quasi-minimal residual (QMR) method, which is a Lanczos-based iterative procedure for the solution of linear systems. These implementations of the simplified algorithms complete our software package QMRPACK, which so far contained only codes for Lanczos and QMR algorithms for general matrices. We describe in some detail the use of two routines, one for the solution of linear systems and the other for eigenvalue computations. We present examples that lead to J-symmetric and J-Hermitian matrices. Results of numerical experiments are reported. Keywords: Lanczos process; quasi-minimal residual iteration; linear system; eigenvalue computation; J-symmetric matrix; J-Hermitian matrix; look...
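For readers unfamiliar with the terminology: a matrix A is J-symmetric when JA is symmetric for a fixed symmetric nonsingular J, i.e. AᵀJ = JA. A small check (the helper name is ours, not a QMRPACK routine; the signature matrix J = diag(1, −1) is a typical example):

```python
import numpy as np

def is_j_symmetric(A, J, tol=1e-12):
    """A is J-symmetric iff A^T J = J A (equivalently, J A is symmetric)."""
    return np.allclose(A.T @ J, J @ A, atol=tol)

J = np.diag([1.0, -1.0])                 # symmetric, nonsingular
S = np.array([[2.0, 1.0], [1.0, 3.0]])   # any symmetric matrix
A = J @ S                                # then J A = S is symmetric
print(is_j_symmetric(A, J))              # True

B = np.array([[1.0, 2.0], [3.0, 4.0]])   # a generic matrix is not J-symmetric
print(is_j_symmetric(B, J))              # False
```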
QMRPACK: a Package of QMR Algorithms
1996
"... this paper, we discuss some of the features of the algorithms in the package, with emphasis on the issues related to using the codes. We describe in some detail two routines from the package, one for the solution of linear systems, and the other for the computation of eigenvalue approximations. We p ..."
Abstract

Cited by 35 (4 self)
 Add to MetaCart
In this paper, we discuss some of the features of the algorithms in the package, with emphasis on the issues related to using the codes. We describe in some detail two routines from the package, one for the solution of linear systems and the other for the computation of eigenvalue approximations. We present some numerical examples from applications where QMRPACK was used. Categories and Subject Descriptors: F.2.1 [Analysis of Algorithms and Problem Complexity]: Numerical Algorithms and Problems - computations on matrices; G.1.3 [Numerical Analysis]: Numerical Linear Algebra
A New Krylov-Subspace Method for Symmetric Indefinite Linear Systems
1994
"... Many important applications involve the solution of large linear systems with symmetric, but indefinite coefficient matrices. For example, such systems arise in incompressible flow computations and as subproblems in optimization algorithms for linear and nonlinear programs. Existing Krylovsubspace ..."
Abstract

Cited by 34 (0 self)
 Add to MetaCart
Many important applications involve the solution of large linear systems with symmetric, but indefinite, coefficient matrices. For example, such systems arise in incompressible flow computations and as subproblems in optimization algorithms for linear and nonlinear programs. Existing Krylov-subspace iterations for symmetric indefinite systems, such as SYMMLQ and MINRES, require the use of symmetric positive definite preconditioners, which is a rather unnatural restriction when the matrix itself is highly indefinite, with both many positive and many negative eigenvalues. In this note, we describe a new Krylov-subspace iteration for solving symmetric indefinite linear systems that can be combined with arbitrary symmetric preconditioners. The algorithm can be interpreted as a special case of the quasi-minimal residual method for general non-Hermitian linear systems, and like the latter, it produces iterates defined by a quasi-minimal residual property. The proposed method has the same work ...
QMR-Based Projection Techniques for the Solution of Non-Hermitian Systems with Multiple Right-Hand Sides
2001
"... . In this work we consider the simultaneous solution of large linear systems of the form Ax (j) = b (j) ; j = 1; : : : ; K where A is sparse and nonHermitian. We describe singleseed and blockseed projection approaches to these multiple righthand side problems that are based on the QMR and bl ..."
Abstract

Cited by 17 (1 self)
 Add to MetaCart
In this work we consider the simultaneous solution of large linear systems of the form A x^(j) = b^(j), j = 1, ..., K, where A is sparse and non-Hermitian. We describe single-seed and block-seed projection approaches to these multiple right-hand side problems that are based on the QMR and block QMR algorithms, respectively. We use (block) QMR to solve the (block) seed system and generate the relevant biorthogonal subspaces. Approximate solutions to the non-seed systems are simultaneously generated by minimizing their appropriately projected (block) residuals. After the initial (block) seed has converged, the process is repeated by choosing a new (block) seed from among the remaining non-converged systems and using the previously generated approximate solutions as initial guesses for the new seed and non-seed systems. We give theory for the single-seed case that helps explain the convergence behavior under certain conditions. Implementation details for both the single-seed and b...
A class of spectral two-level preconditioners
in SISC, 2003
"... When solving the linear system Ax = b with a Krylov method, the smallest eigenvalues of the matrix A often slow down the convergence. In the SPD case, this is clearly highlighted by the bound on the rate of convergence of the Conjugate Gradient method (CG) given by e (k) √ κ(A) − 1 A ≤ ( √) κ ..."
Abstract

Cited by 17 (8 self)
 Add to MetaCart
When solving the linear system Ax = b with a Krylov method, the smallest eigenvalues of the matrix A often slow down the convergence. In the SPD case, this is clearly highlighted by the bound on the rate of convergence of the Conjugate Gradient method (CG) given by

  ‖e^(k)‖_A ≤ ( (√κ(A) − 1) / (√κ(A) + 1) )^k ‖e^(0)‖_A,   (1)

where e^(k) = x* − x^(k) denotes the forward error associated with the iterate at step k and κ(A) = λmax/λmin denotes the condition number. From this bound it can be seen that enlarging the smallest eigenvalues would improve the convergence rate of CG. Consequently, if the smallest eigenvalues of A could somehow be "removed", the convergence of CG would be improved. Similarly, for unsymmetric systems, arguments exist to explain the bad effect of the smallest eigenvalues on the rate of convergence of unsymmetric Krylov solvers [1, 3, 5]. To cure this, several techniques have been proposed in the last few years, mainly to improve the convergence of GMRES. In [5], it is proposed to add a basis of the invariant subspace associated with the smallest eigenvalues to the Krylov basis generated by GMRES. Another approach, based on a low-rank update of the preconditioner for GMRES, was proposed in [1, 3]: the orthogonal complement of the invariant subspace associated with the smallest eigenvalues is used to build a low-rank update of the preconditioned system. Finally, in [4] a preconditioner for GMRES based on a sequence of rank-one updates is proposed that involves the left and right smallest eigenvectors. In our work, we consider an explicit eigencomputation, which makes the preconditioner independent of the Krylov solver used in the actual solution of the linear system. We first present our techniques for unsymmetric linear systems and then derive a variant for symmetric and SPD matrices. We consider the solution of the linear system

  Ax = b,   (2)

where A is an n × n unsymmetric nonsingular matrix, and x and b are vectors of size n.
The linear system is solved using a preconditioned Krylov solver, and we denote by M1 the left preconditioner, meaning that we solve
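The CG bound (1) quoted in the abstract above can be evaluated directly. A small sketch (our illustration, with a made-up SPD spectrum) showing how removing the smallest eigenvalues shrinks κ(A) and hence the contraction factor:

```python
import numpy as np

def cg_bound_factor(kappa):
    """Per-iteration contraction factor (sqrt(kappa)-1)/(sqrt(kappa)+1)
    from the CG error bound."""
    s = np.sqrt(kappa)
    return (s - 1.0) / (s + 1.0)

eigs = np.array([1e-3, 1e-2, 1.0, 2.0, 10.0])  # spectrum of a model SPD matrix
kappa = eigs.max() / eigs.min()                 # 10 / 1e-3 = 1e4
kappa_deflated = eigs.max() / eigs[2]           # smallest two eigenvalues "removed"

print(cg_bound_factor(kappa))           # ~0.980: slow convergence
print(cg_bound_factor(kappa_deflated))  # ~0.519: much faster after deflation
```

This is exactly the motivation for the spectral two-level preconditioners of the paper: shifting or deflating the smallest eigenvalues drives the bound's contraction factor well below 1.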