Results 1 - 7 of 7
Reduced-Order Modeling Techniques Based on Krylov Subspaces and Their Use in Circuit Simulation
Applied and Computational Control, Signals, and Circuits, 1998
Abstract

Cited by 53 (10 self)
In recent years, reduced-order modeling techniques based on Krylov-subspace iterations, especially the Lanczos algorithm and the Arnoldi process, have become popular tools to tackle the large-scale time-invariant linear dynamical systems that arise in the simulation of electronic circuits. This paper reviews the main ideas of reduced-order modeling techniques based on Krylov subspaces and describes the use of reduced-order modeling in circuit simulation. 1 Introduction. Krylov-subspace methods, most notably the Lanczos algorithm [81, 82] and the Arnoldi process [5], have long been recognized as powerful tools for large-scale matrix computations. Matrices that occur in large-scale computations usually have some special structure that allows matrix-vector products with such a matrix (or its transpose) to be computed much more efficiently than for a dense, unstructured matrix. The most common structure is sparsity, i.e., only a few of the matrix entries are nonzero. Computing a matrix-vector pr...
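The Arnoldi process mentioned in this abstract builds an orthonormal basis of the Krylov subspace K_m(A, b) using only matrix-vector products with A, which is exactly what makes it attractive for sparse matrices. A minimal NumPy sketch (the tridiagonal test matrix and the subspace dimension are illustrative choices, not taken from the paper):

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal Krylov basis V and (m+1) x m Hessenberg H with A V[:, :m] = V H."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]                      # one matrix-vector product per step
        for i in range(j + 1):               # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)      # zero here would signal breakdown
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Example: a tridiagonal (sparse-structured) matrix; only matvecs with A are needed.
n = 50
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
b = np.random.default_rng(0).standard_normal(n)
V, H = arnoldi(A, b, 10)
print(np.allclose(V.T @ V, np.eye(11)))   # basis is orthonormal
print(np.allclose(A @ V[:, :10], V @ H))  # Arnoldi relation holds
```

For a sparse matrix, the matvec `A @ V[:, j]` costs only O(nnz) operations, which is the efficiency gain the abstract refers to.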
QMR-Based Projection Techniques for the Solution of Non-Hermitian Systems with Multiple Right-Hand Sides
2001
Abstract

Cited by 17 (1 self)
In this work we consider the simultaneous solution of large linear systems of the form Ax^(j) = b^(j), j = 1, ..., K, where A is sparse and non-Hermitian. We describe single-seed and block-seed projection approaches to these multiple right-hand side problems that are based on the QMR and block QMR algorithms, respectively. We use (block) QMR to solve the (block) seed system and generate the relevant biorthogonal subspaces. Approximate solutions to the non-seed systems are simultaneously generated by minimizing their appropriately projected (block) residuals. After the initial (block) seed has converged, the process is repeated by choosing a new (block) seed from among the remaining non-converged systems and using the previously generated approximate solutions as initial guesses for the new seed and non-seed systems. We give theory for the single-seed case that helps explain the convergence behavior under certain conditions. Implementation details for both the single-seed and b...
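The seed idea can be illustrated with a deliberately simplified, GMRES-flavored sketch (the paper itself uses QMR and biorthogonal bases; the matrix, sizes, and projection choice below are illustrative assumptions): the Krylov space built while solving the seed system is reused to reduce a second right-hand side at no extra matrix-vector products.

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal Krylov basis V and Hessenberg H with A V[:, :m] = V H."""
    n = b.shape[0]
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
n, m = 40, 25
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # mildly non-Hermitian test matrix
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

# Seed solve: build the Krylov space of A x = b1 and minimize ||b1 - A V y||.
V, H = arnoldi(A, b1, m)
e1 = np.zeros(m + 1); e1[0] = np.linalg.norm(b1)
x1 = V[:, :m] @ np.linalg.lstsq(H, e1, rcond=None)[0]

# Non-seed system reuses the SAME basis, costing no further matvecs:
# minimize the projected residual ||V^T b2 - H y||.
x2 = V[:, :m] @ np.linalg.lstsq(H, V.T @ b2, rcond=None)[0]

r1 = np.linalg.norm(b1 - A @ x1) / np.linalg.norm(b1)  # seed: (nearly) converged
r2 = np.linalg.norm(b2 - A @ x2) / np.linalg.norm(b2)  # non-seed: reduced, not converged
print(r1, r2)
```

In the method described above, `x2` would then serve as the initial guess when the second system is chosen as the next seed.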
Iterative methods for solving Ax = b: GMRES/FOM versus QMR/BiCG
Advances in Computational Mathematics, 1996
Abstract

Cited by 3 (0 self)
We study the convergence of GMRES/FOM and QMR/BiCG methods for solving nonsymmetric Ax = b. We prove that given the results of a BiCG computation on Ax = b, we can obtain a matrix B with the same eigenvalues as A and a vector c such that the residual norms generated by a FOM computation on Bx = c are identical to those generated by the BiCG computations. Using a unitary equivalence for each of these methods, we obtain test problems where we can easily vary certain spectral properties of the matrices. We use these test problems to study the effects of nonnormality on the convergence of GMRES and QMR, to study the effects of eigenvalue outliers on the convergence of QMR, and to compare the convergence of restarted GMRES and QMR across a family of normal and nonnormal problems. Our GMRES tests on nonnormal test matrices indicate that nonnormality can have unexpected effects upon the residual norm convergence, giving misleading indications of superior convergence when the error norms for G...
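The kind of experiment described here, same eigenvalues but very different convergence, can be reproduced in a small sketch: take a normal matrix D and a highly non-normal B = S D S^(-1) with an identical spectrum, and compare full GMRES residual norms, computed below from the Arnoldi relation (the matrices and sizes are illustrative assumptions, not the paper's test problems):

```python
import numpy as np

def gmres_residuals(A, b, m):
    """Residual norms of full (unrestarted) GMRES, via the Arnoldi relation."""
    n = b.shape[0]
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b); V[:, 0] = b / beta
    res = []
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(j + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res.append(np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y))
    return np.array(res)

rng = np.random.default_rng(2)
n = 30
D = np.diag(np.linspace(1.0, 2.0, n))                 # normal, eigenvalues in [1, 2]
S = np.eye(n) + np.triu(2.0 * rng.standard_normal((n, n)), 1)
B = S @ D @ np.linalg.inv(S)                          # same eigenvalues, non-normal
b = rng.standard_normal(n)
rD, rB = gmres_residuals(D, b, 15), gmres_residuals(B, b, 15)
print(rD[-1], rB[-1])   # identical spectra, very different GMRES behaviour
```

On the normal matrix the residual drops rapidly, as eigenvalue-based bounds predict; on the similarity-transformed matrix convergence is delayed, illustrating that eigenvalues alone do not govern GMRES convergence for non-normal problems.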
Analytic models of the quantum harmonic oscillator
Contemp. Math, 1997
Abstract

Cited by 3 (1 self)
Abstract. There are many examples where non-orthogonality of a basis for Krylov subspace methods arises naturally. These methods usually require less storage or computational effort per iteration than methods using an orthonormal basis (optimal methods), but convergence may be delayed. Truncated Krylov subspace methods and other examples of non-optimal methods have been shown to converge in many situations, often with only a small delay, but not in others. We explore the question of what the effect of a non-optimal basis is. We prove certain identities for the relative residual gap, i.e., the relative difference between the residuals of the optimal and non-optimal methods. These identities and related bounds provide insight into when the delay is small and convergence is achieved. Further understanding is gained by using a recently developed general theory of superlinear convergence. Our analysis confirms the observed fact that in exact arithmetic the orthogonality of the basis is not important; only linear independence needs to be maintained. Numerical examples illustrate our theoretical results.
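That last observation, that only linear independence of the basis matters in exact arithmetic, can be checked directly: a basis built with truncated orthogonalization spans the same Krylov subspace, so minimizing the residual over it gives (numerically almost) the same value as over the orthonormal basis, while its conditioning degrades. A sketch on assumed test data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 60, 8
A = np.diag(np.linspace(1.0, 4.0, n)) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

def krylov_basis(A, b, m, k=None):
    """Krylov basis with full (k=None) or truncated (last k vectors only)
    orthogonalization; both span K_m(A, b) in exact arithmetic."""
    V = [b / np.linalg.norm(b)]
    for _ in range(m - 1):
        w = A @ V[-1]
        for v in (V[-k:] if k else V):     # sliding window when truncated
            w = w - (v @ w) * v
        V.append(w / np.linalg.norm(w))
    return np.column_stack(V)

res, cond = {}, {}
for k in (None, 3):
    W = krylov_basis(A, b, m, k)
    y, *_ = np.linalg.lstsq(A @ W, b, rcond=None)  # optimal residual over span(W)
    res[k] = np.linalg.norm(b - A @ W @ y)
    cond[k] = np.linalg.cond(W)
print(res, cond)  # same subspace, (almost) same minimal residual; conditioning differs
```

The residual gap between the two bases stays tiny here; the delay the abstract discusses appears when the truncated basis drifts toward linear dependence.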
Loss of Biorthogonality and Linear System Solvers
1998
Abstract

Cited by 1 (1 self)
This paper is devoted to the analysis of the behaviour, in finite precision arithmetic, of the quasi-minimal residual method without look-ahead for the solution of linear systems Ax = b, where A is a real or complex matrix of order n. In exact arithmetic, the behaviour of the method as applied to symmetric definite matrices is well understood; but results about convergence usually suppose that the basis generated during the iterative process satisfies the property required in exact arithmetic. This is not the case when we run the computation in finite precision arithmetic, where the orthogonality or biorthogonality condition may no longer be satisfied. We show how some distributions of eigenvalues may slow down the convergence of the iterative method, and we also consider a technique that may improve the convergence. Key words: linear system of equations, sparse matrix, Lanczos algorithm, biorthogonalization algorithm, quasi-minimal residual method. AMS(MOS) subject classifications. ...
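The loss of biorthogonality discussed here is easy to observe: running the nonsymmetric (two-sided) Lanczos process without look-ahead in floating point and monitoring the deviation of W_j^T V_j from the identity shows it drifting away from machine precision as the iteration proceeds. A sketch on an assumed random test matrix:

```python
import numpy as np

def biorthogonality_loss(A, v, w, m):
    """Two-sided (nonsymmetric) Lanczos without look-ahead; returns
    max|W_j^T V_j - I| after each step, which would be 0 in exact arithmetic."""
    n = A.shape[0]
    v = v / np.linalg.norm(v)
    w = w / (w @ v)                       # enforce w_1^T v_1 = 1
    V, W = [v], [w]
    beta = gamma = 0.0
    v_prev = w_prev = np.zeros(n)
    loss = []
    for _ in range(m - 1):
        alpha = W[-1] @ (A @ V[-1])
        vh = A @ V[-1] - alpha * V[-1] - gamma * v_prev
        wh = A.T @ W[-1] - alpha * W[-1] - beta * w_prev
        delta = wh @ vh                   # delta == 0 would be a serious breakdown
        beta = np.sqrt(abs(delta))
        gamma = delta / beta              # so that w_{j+1}^T v_{j+1} = 1
        v_prev, w_prev = V[-1], W[-1]
        V.append(vh / beta); W.append(wh / gamma)
        Vj, Wj = np.column_stack(V), np.column_stack(W)
        loss.append(np.max(np.abs(Wj.T @ Vj - np.eye(len(V)))))
    return np.array(loss)

rng = np.random.default_rng(4)
n = 200
A = np.diag(np.linspace(1.0, 10.0, n)) + rng.standard_normal((n, n)) / np.sqrt(n)
loss = biorthogonality_loss(A, rng.standard_normal(n), rng.standard_normal(n), 60)
print(loss[0], loss[-1])   # starts near machine precision, then degrades
```

Convergence analyses that assume exact biorthogonality therefore need revisiting in finite precision, which is precisely the paper's subject.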
Chapter XI
Abstract
Introduction. In theory and in practical applications one may encounter eigenproblems that are more complicated than the standard and generalized eigenproblems discussed in previous chapters. In this chapter we will discuss a particular class of eigenproblems: polynomial eigenproblems, with a focus on the quadratic case. Furthermore, we will briefly discuss a so-called constrained eigenproblem. In Section 56 we will pay attention to the important class of quadratic eigenproblems, with a small side-step to higher-order polynomial eigenproblems. These quadratic eigenproblems are of the form

(λ²M + λC + K)x = 0, (55.1)

where M, C, and K are given square matrices of order n. Solutions λ, x, with λ a scalar and x ≠ 0 an n-vector, are the eigenvalues and eigenvectors of the given problem. There are three basic approaches for the solution of a quadratic eigenproblem:
• Rewrite the problem as a generalized eigenvalue problem of order 2n, see Section 56. A drawback of this approach is that the di...
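The first approach, linearization, can be sketched concretely: with z = [x; λx], the quadratic problem (55.1) is equivalent to the generalized eigenproblem A z = λ B z of order 2n with A = [[0, I], [-K, -C]] and B = [[I, 0], [0, M]], a first companion form. In the sketch below the random test matrices are illustrative, and M is assumed nonsingular so the problem can be reduced to a standard eigenproblem:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # assumed nonsingular
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# First companion linearization of (lambda^2 M + lambda C + K) x = 0:
# A z = lambda B z with z = [x; lambda x], of order 2n.
I, Z = np.eye(n), np.zeros((n, n))
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])
lam, z = np.linalg.eig(np.linalg.solve(B, A))       # 2n eigenvalues, B nonsingular

# Check one eigenpair against the original quadratic problem.
l, x = lam[0], z[:n, 0]
res = np.linalg.norm((l**2 * M + l * C + K) @ x) / np.linalg.norm(x)
print(lam.size, res)
```

The doubling of the problem size from n to 2n visible here is exactly the drawback the text begins to mention for this approach.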