Results 1 – 7 of 7
Krylov Projection Methods For Model Reduction
, 1997
Abstract

Cited by 119 (3 self)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools are also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding the need for exact factors of large matrix pencils are all examined to various degrees.
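The core idea of this abstract, projecting a large system onto a Krylov subspace so that the reduced model interpolates the transfer function, can be sketched in a few lines of NumPy. This is not the dissertation's dual rational Arnoldi algorithm; it is a plain one-sided Arnoldi projection onto K_m(A⁻¹, A⁻¹b), which matches m moments of the transfer function at s = 0, and the test system (names `A`, `b`, `c`) is invented for the example.

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis V for the Krylov subspace K_m(A, b) via Arnoldi/MGS."""
    n = len(b)
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

# Hypothetical stable test system x' = A x + b u, y = c^T x.
rng = np.random.default_rng(0)
n, m = 50, 8
A = -4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Basis for K_m(A^-1, A^-1 b): one-sided projection matches m moments at s = 0.
V = arnoldi(np.linalg.inv(A), np.linalg.solve(A, b), m)
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c     # reduced model of order m

s = 0.1                                        # evaluate near the expansion point
H_full = c  @ np.linalg.solve(s * np.eye(n) - A,  b)
H_red  = cr @ np.linalg.solve(s * np.eye(m) - Ar, br)
print(abs(H_full - H_red))                     # small: moments at s = 0 agree
```

Because the first m moments at s = 0 are matched, the order-8 model reproduces the order-50 transfer function to high accuracy near the expansion point.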
Reduced-Order Modeling Techniques Based on Krylov Subspaces and Their Use in Circuit Simulation
 Applied and Computational Control, Signals, and Circuits
, 1998
Abstract

Cited by 53 (10 self)
In recent years, reduced-order modeling techniques based on Krylov-subspace iterations, especially the Lanczos algorithm and the Arnoldi process, have become popular tools to tackle the large-scale time-invariant linear dynamical systems that arise in the simulation of electronic circuits. This paper reviews the main ideas of reduced-order modeling techniques based on Krylov subspaces and describes the use of reduced-order modeling in circuit simulation. 1 Introduction. Krylov-subspace methods, most notably the Lanczos algorithm [81, 82] and the Arnoldi process [5], have long been recognized as powerful tools for large-scale matrix computations. Matrices that occur in large-scale computations usually have some special structure that allows matrix-vector products with such a matrix (or its transpose) to be computed much more efficiently than for a dense, unstructured matrix. The most common structure is sparsity, i.e., only a few of the matrix entries are nonzero. Computing a matrix-vector pr...
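The efficiency argument in this abstract, that sparsity makes matrix-vector products cheap, can be made concrete with a hand-rolled compressed-sparse-row (CSR) product. This is an illustrative sketch, not code from the paper: only the nonzeros are stored and touched, so the product costs O(nnz) instead of O(n²).

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in compressed sparse row (CSR) form:
    data    -- the nonzero values, row by row
    indices -- the column index of each nonzero
    indptr  -- indptr[i]:indptr[i+1] slices out row i's nonzeros"""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]]  stored as 5 nonzeros instead of 9 entries
data    = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(csr_matvec(data, indices, indptr, x))   # [3. 3. 9.]
```

This cheap matvec is exactly the operation that Lanczos and Arnoldi iterations repeat, which is why Krylov methods scale to the large circuit matrices discussed here.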
A Fast Algorithm for Joint Diagonalization with Nonorthogonal Transformations and its Application to Blind Source Separation
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2004
Abstract

Cited by 25 (3 self)
A new, efficient algorithm is presented for the joint diagonalization of several matrices. The algorithm is based on the Frobenius-norm formulation of the joint diagonalization problem, and addresses diagonalization with a general, non-orthogonal transformation. The iterative scheme of the algorithm is based on a multiplicative update which ensures the invertibility of the diagonalizer. The algorithm's efficiency stems from a special approximation of the cost function resulting in a sparse, block-diagonal Hessian to be used in the computation of the quasi-Newton update step. Extensive numerical simulations illustrate the performance of the algorithm and provide a comparison to other leading diagonalization methods. The results of this comparison demonstrate that the proposed algorithm is a viable alternative to existing state-of-the-art joint diagonalization algorithms.
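The Frobenius-norm formulation mentioned here minimizes the squared off-diagonal entries of the transformed matrices B·Cₖ·Bᵀ over diagonalizers B. The sketch below is not the paper's algorithm (its names and construction are invented for illustration); it only evaluates that cost and checks that an exact non-orthogonal diagonalizer drives it to zero. The paper's multiplicative update, of the form B ← (I + W)B with W small, would decrease this cost while keeping B invertible.

```python
import numpy as np

def offdiag_cost(B, matrices):
    """Frobenius-norm joint-diagonalization cost: the sum of squared
    off-diagonal entries of B C_k B^T over all target matrices C_k."""
    cost = 0.0
    for C in matrices:
        D = B @ C @ B.T
        cost += np.sum(D**2) - np.sum(np.diag(D)**2)
    return cost

# Build test matrices that share a known non-orthogonal diagonalizer A:
# C_k = A D_k A^T with D_k diagonal, so B = A^-1 diagonalizes them all.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
mats = [A @ np.diag(rng.standard_normal(4)) @ A.T for _ in range(5)]

B = np.linalg.inv(A)                 # the exact joint diagonalizer
print(offdiag_cost(B, mats))         # ~0 up to round-off
print(offdiag_cost(np.eye(4), mats)) # doing nothing leaves a large cost
```

In blind source separation the Cₖ would be, e.g., time-lagged covariance matrices of the observed mixtures, and the recovered B plays the role of the unmixing matrix.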
Direct Methods
, 1998
Abstract

Cited by 4 (0 self)
We review current methods for the direct solution of sparse linear equations. We discuss basic concepts such as fill-in, sparsity orderings, and indirect addressing, and compare general sparse codes with codes for dense systems. We examine methods for greatly increasing the efficiency when the matrix is symmetric positive definite. We consider frontal and multifrontal methods, emphasizing how they can take advantage of vectorization, RISC architectures, and parallelism. Some comparisons are made with other techniques, and the availability of software for the direct solution of sparse equations is discussed.
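Fill-in, the first concept this abstract lists, is easy to demonstrate with the classic "arrow" matrix: factoring with the dense row and column first fills the factors in completely, while ordering them last preserves sparsity. A minimal sketch (invented example, using LU without pivoting, which is safe here because every pivot stays well away from zero):

```python
import numpy as np

def lu_nnz(A, tol=1e-12):
    """In-place LU without pivoting; return the nonzero count of the
    combined L\\U factors (fill-in shows up as extra nonzeros)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k + 1:, k] /= A[k, k]                          # column of L (multipliers)
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return int(np.count_nonzero(np.abs(A) > tol))

n = 10
arrow_bad = 4.0 * np.eye(n)          # diagonal matrix ...
arrow_bad[0, :] = 1.0                # ... plus a dense FIRST row and column:
arrow_bad[:, 0] = 1.0                # eliminating row 0 fills every entry
arrow_bad[0, 0] = 4.0

perm = np.r_[1:n, 0]                 # reorder so the dense row/column come LAST
arrow_good = arrow_bad[np.ix_(perm, perm)]

print(lu_nnz(arrow_bad), lu_nnz(arrow_good))   # 100 vs 28 nonzeros
```

Both matrices have 28 nonzeros, yet the bad ordering produces completely dense factors; choosing a good sparsity ordering before factorization is exactly what the methods reviewed in this paper automate.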
A Continuous Method for Extreme Eigenvalue Problems
Abstract

Cited by 1 (1 self)
In this paper, a continuous method is introduced to compute both the extreme eigenvalues and their corresponding eigenvectors of a real symmetric matrix. The main idea is to convert the extreme eigenvalue problem into an optimization problem. Then a continuous method, which includes both a merit function and an ordinary differential equation (ODE), is introduced for the resulting optimization problem. The convergence of the ODE solution is proved, and its limit fully characterized, for any starting point. Both the extreme eigenvalues and their corresponding eigenvectors can be easily obtained under a very mild condition. Promising numerical results are also presented.
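A classical instance of such a continuous method (not necessarily this paper's specific merit function and ODE) is the Rayleigh-quotient flow x' = Ax − r(x)x, whose equilibria are eigenvectors and whose stable equilibrium on the unit sphere corresponds to the largest eigenvalue. A forward-Euler sketch with invented names:

```python
import numpy as np

def rayleigh(A, x):
    """Rayleigh quotient r(x) = x^T A x / x^T x."""
    return (x @ A @ x) / (x @ x)

def extreme_eig_flow(A, x0, h=0.01, steps=5000):
    """Forward-Euler integration of x' = A x - r(x) x; trajectories from
    (almost) any starting point converge to a dominant eigenvector."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        x = x + h * (A @ x - rayleigh(A, x) * x)
        x /= np.linalg.norm(x)        # keep the iterate on the unit sphere
    return rayleigh(A, x), x

# Symmetric test matrix with a known spectrum {1, ..., 6}.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) @ Q.T

lam, x = extreme_eig_flow(A, rng.standard_normal(6))
print(lam)                            # converges to 6.0, the largest eigenvalue
```

Running the same flow on −A recovers the smallest eigenvalue, so both extremes are accessible, matching the "both extreme eigenvalues" claim of the abstract.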
Parallel Computational Magneto-Fluid Dynamics
, 1998
Abstract

Cited by 1 (1 self)
... this report will be on the computationally challenging applications that we claimed to tackle at the start of our activities. Various hydrodynamic and magnetohydrodynamic physics issues can now be studied systematically.