Results 1 – 9 of 9
Krylov Projection Methods For Model Reduction
, 1997
Abstract

Cited by 124 (3 self)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools is also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding the need for exact factors of large matrix pencils are all examined to various degrees.
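As a concrete illustration of the Krylov-projection mechanism behind rational interpolants, here is a minimal numpy sketch (hypothetical test data, not code from the dissertation): a one-sided orthogonal projection onto K_m(A^{-1}, A^{-1}b) yields a reduced model whose first m moments of H(s) = c^T (sI - A)^{-1} b about s = 0 match those of the full system.

```python
import numpy as np

# Hypothetical example system (not from the dissertation).
rng = np.random.default_rng(0)
n, m = 30, 4
A = np.diag(np.linspace(1.0, 4.0, n)) + 0.02 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Orthonormal basis V of the Krylov subspace K_m(A^{-1}, A^{-1} b).
K = np.empty((n, m))
v = b
for j in range(m):
    v = np.linalg.solve(A, v)          # v = A^{-(j+1)} b
    K[:, j] = v
V, _ = np.linalg.qr(K)

# One-sided orthogonal projection gives the reduced-order model.
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c

def moments(A, b, c, k):
    """Moments c^T A^{-j} b, j = 1..k, of H(s) = c^T (sI - A)^{-1} b at s = 0."""
    out, v = [], b
    for _ in range(k):
        v = np.linalg.solve(A, v)
        out.append(c @ v)
    return np.array(out)

# The first m moments of the full and reduced transfer functions agree
# (up to roundoff), which is the interpolation property at s = 0.
print(np.max(np.abs(moments(A, b, c, m) - moments(Ar, br, cr, m))))
```

Matching at frequencies other than s = 0 works the same way with shifted solves, which is what the rational (multi-point) variants exploit.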
Reduced-order modeling techniques based on Krylov subspaces and their use in circuit simulation
, 1998
A Fast Algorithm for Joint Diagonalization with Non-orthogonal Transformations and its Application to Blind Source Separation
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2004
Abstract

Cited by 22 (3 self)
A new efficient algorithm is presented for the joint diagonalization of several matrices. The algorithm is based on the Frobenius-norm formulation of the joint diagonalization problem, and addresses diagonalization with a general, non-orthogonal transformation. The iterative scheme of the algorithm is based on a multiplicative update which ensures the invertibility of the diagonalizer. The algorithm's efficiency stems from a special approximation of the cost function resulting in a sparse, block-diagonal Hessian to be used in computing the quasi-Newton update step. Extensive numerical simulations illustrate the performance of the algorithm and provide a comparison to other leading diagonalization methods. The results of this comparison demonstrate that the proposed algorithm is a viable alternative to existing state-of-the-art joint diagonalization algorithms.
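The Frobenius-norm criterion the abstract refers to can be illustrated in a few lines (a sketch with synthetic matrices, not the paper's multiplicative quasi-Newton algorithm): for matrices C_k = A D_k A^T sharing a diagonalizer, the sum of squared off-diagonal entries of W C_k W^T vanishes at W = A^{-1} and is large for a generic W.

```python
import numpy as np

# Synthetic jointly diagonalizable set: C_k = A D_k A^T with diagonal D_k.
rng = np.random.default_rng(1)
d, K = 4, 6
A = rng.standard_normal((d, d))        # hypothetical mixing matrix
Cs = [A @ np.diag(rng.standard_normal(d)) @ A.T for _ in range(K)]

def off_cost(W, Cs):
    """Frobenius-norm criterion: sum_k of squared off-diagonal entries of W C_k W^T."""
    total = 0.0
    for C in Cs:
        M = W @ C @ W.T
        total += np.sum(M**2) - np.sum(np.diag(M)**2)
    return total

W_true = np.linalg.inv(A)              # the (non-orthogonal) joint diagonalizer
W_rand = rng.standard_normal((d, d))
print(off_cost(W_true, Cs), off_cost(W_rand, Cs))
```

The algorithm described in the abstract minimizes exactly this kind of cost over invertible, generally non-orthogonal W; the multiplicative update and block-diagonal Hessian approximation are what make that minimization fast.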
Matrix Methods
, 1998
Abstract

Cited by 4 (0 self)
We consider techniques for the solution of linear systems and eigenvalue problems. We are concerned with large-scale applications where the matrix will be large and sparse. We discuss both direct and iterative techniques for the solution of sparse equations, contrasting their strengths and weaknesses and emphasizing that combinations of both are necessary in the arsenal of the applications scientist. We briefly review matrix diagonalization techniques for large-scale problems.
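The direct-versus-iterative contrast can be made concrete with a small numpy sketch (illustrative only; the survey targets far larger sparse systems): the same symmetric positive definite system solved by a direct factorization (Cholesky) and by a Krylov iteration (conjugate gradients).

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=1000):
    """Plain conjugate gradients for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian (SPD)
b = np.ones(n)

L = np.linalg.cholesky(A)                               # direct: factor once, solve exactly
x_direct = np.linalg.solve(L.T, np.linalg.solve(L, b))
x_iter = cg(A, b)                                       # iterative: matrix-vector products only

print(np.linalg.norm(x_direct - x_iter, np.inf))
```

The trade-off the survey weighs is visible even here: the direct solve pays for a factorization but answers exactly, while CG needs only matrix-vector products (and, at scale, a good preconditioner).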
A Continuous Method for Extreme Eigenvalue Problems
Abstract

Cited by 1 (1 self)
In this paper, a continuous method is introduced to compute both the extreme eigenvalues and their corresponding eigenvectors of a real symmetric matrix. The main idea is to convert the extreme eigenvalue problem into an optimization problem. A continuous method, comprising both a merit function and an ordinary differential equation (ODE), is then introduced for the resulting optimization problem. The convergence of the ODE solution is proved for any starting point, and its limit is fully characterized. Both the extreme eigenvalues and their corresponding eigenvectors can be easily obtained under a very mild condition. Promising numerical results are also presented.
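A generic instance of the continuous-method idea can be sketched as follows (a plain Rayleigh-quotient gradient flow integrated by forward Euler; the paper's particular merit function and ODE differ): for symmetric A, the flow dx/dt = -(Ax - rho(x)x) with rho(x) = (x^T A x)/(x^T x) decreases rho, so from a generic starting point x(t) approaches an eigenvector of the smallest eigenvalue.

```python
import numpy as np

# Hypothetical symmetric test matrix with known eigenvalues 0, 1, ..., 19.
rng = np.random.default_rng(2)
n = 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.arange(n, dtype=float)) @ Q.T

x = rng.standard_normal(n)              # arbitrary starting point
h = 0.05                                # forward-Euler step size
for _ in range(2000):
    rho = x @ A @ x / (x @ x)           # Rayleigh quotient
    x = x - h * (A @ x - rho * x)       # Euler step of the gradient flow
    x /= np.linalg.norm(x)              # renormalize; the flow preserves ||x|| only to first order

rho = x @ A @ x / (x @ x)
print(rho)                              # converges to the smallest eigenvalue, 0
```

The opposite extreme eigenvalue is obtained by reversing the sign of the flow (or applying it to -A); the paper's contribution is a formulation whose ODE limit is proved for every starting point.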
Parallel Computational MagnetoFluid Dynamics
, 1998
Abstract

Cited by 1 (1 self)
... this report will be on the computationally challenging applications that we claimed to tackle at the start of our activities. Various hydrodynamic and magnetohydrodynamic physics issues can now be studied systematically.
Blind Source Separation based on Joint Diagonalization of Matrices with Applications in Biomedical Signal Processing
, 2005
Abstract
For the award of the academic degree doctor rerum naturalium (Dr. rer. nat.), submitted to the ...
A Nonlinear Multigrid Eigenproblem Solver for the Complex Helmholtz Equation
, 1997
Abstract
The paper is motivated by the need for a fast, robust, adaptive multigrid method to solve the complex Helmholtz...
A Continuous Method for Interior Eigenvalue Problems
Abstract
In this paper, a continuous method is introduced to compute any interior eigenvalue and its corresponding eigenvector of a real symmetric matrix. The main idea is to convert the interior eigenvalue problem into a constrained optimization problem. A continuous method, comprising both a merit function and an ordinary differential equation (ODE), is then introduced for the resulting optimization problem. The convergence of the ODE solution is proved for any starting point, and its limit provides the desired interior eigenvalue and its corresponding eigenvector. Promising numerical results are also presented.
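One generic way to cast an interior eigenpair as an optimization problem (a "folded spectrum" gradient flow; illustrative only, not the paper's constrained formulation): for symmetric A and a target shift sigma, minimizing ||(A - sigma I)x||^2 over unit vectors x yields the eigenvector whose eigenvalue lies closest to sigma.

```python
import numpy as np

# Hypothetical symmetric test matrix with known eigenvalues 0, 1, ..., 19.
rng = np.random.default_rng(3)
n = 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.arange(n, dtype=float)) @ Q.T
sigma = 7.3                             # target: the eigenvalue nearest 7.3 is 7

# "Folded" operator: its smallest eigenvalue corresponds to the
# eigenvalue of A closest to sigma.
M = (A - sigma * np.eye(n)) @ (A - sigma * np.eye(n))

x = rng.standard_normal(n)
h = 0.01                                # forward-Euler step size
for _ in range(10000):
    mu = x @ M @ x / (x @ x)            # Rayleigh quotient of the folded operator
    x = x - h * (M @ x - mu * x)        # gradient-flow step toward its minimizer
    x /= np.linalg.norm(x)

lam = x @ A @ x                         # Rayleigh quotient of A at the unit vector x
print(lam)                              # converges to 7, the eigenvalue nearest sigma
```

Squaring the shifted operator roughly squares the condition number, which is one reason dedicated interior-eigenvalue formulations like the paper's constrained one are of interest.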