Results 1–9 of 9
Design of a Parallel Nonsymmetric Eigenroutine Toolbox, Part I
, 1993
Cited by 63 (14 self)
The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to be used in different combinations on different problems and architectures. In this paper, we describe these tools, which include basic block matrix computations, the matrix sign function, two-dimensional bisection, and spectral divide and conquer using the matrix sign function to find selected eigenvalues. We also outline how we deal with ill-conditioning and potential instability. Numerical examples are included. A future paper will discuss error analysis in detail and extensions to the generalized eigenproblem.
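The spectral divide-and-conquer idea can be sketched in a few lines: compute sign(A) by a Newton iteration, form the projector (I + sign(A))/2 onto the invariant subspace for the eigenvalues with positive real part, and extract an orthonormal basis from it. The NumPy/SciPy sketch below is illustrative, not the paper's toolbox code; the function names are hypothetical.

```python
import numpy as np
from scipy.linalg import qr  # pivoted QR as a simple rank-revealing factorization

def matrix_sign(A, iters=60):
    # Newton iteration X <- (X + X^{-1})/2; converges quadratically
    # when A has no eigenvalues on the imaginary axis.
    X = A.astype(float).copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

def spectral_divide(A):
    # P = (I + sign(A))/2 is the spectral projector onto the invariant
    # subspace for eigenvalues with positive real part; a pivoted QR of P
    # yields an orthonormal basis for that subspace.
    n = A.shape[0]
    P = 0.5 * (np.eye(n) + matrix_sign(A))
    k = int(round(np.trace(P)))       # rank of P = subspace dimension
    Q, _, _ = qr(P, pivoting=True)
    return Q[:, :k], k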
A Parallelizable Eigensolver for Real Diagonalizable Matrices with Real Eigenvalues
, 1991
Cited by 25 (6 self)
In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed and is followed by a discussion of the numerical considerations of the actual implementation. The numerical algorithm has been tested on thousands of matrices on both a Cray-2 and an IBM RS/6000 Model 580 workstation. The results of these tests are presented. Finally, issues concerning the parallel implementation of the algorithm are discussed. The algorithm's heavy reliance on matrix-matrix multiplication, coupled with its divide-and-conquer nature, should yield a highly parallelizable algorithm. 1. Introduction. Computation of all the eigenvalues and eigenvectors of a dense matrix is essential for solving problems in many fields. The ever-increasing computational power available from modern supercomputers offers the potenti...
The Matrix Sign Function Method And The Computation Of Invariant Subspaces
 SIAM J. Matrix Anal. Applicat
, 1994
Cited by 24 (5 self)
A perturbation analysis shows that if a numerically stable procedure is used to compute the matrix sign function, then it is competitive with conventional methods for computing invariant subspaces. Stability analysis of the Newton iteration improves an earlier result of Byers and confirms that ill-conditioned iterates may cause numerical instability. Numerical examples demonstrate the theoretical results. 1. Introduction. If A ∈ ℝ^(n×n) has no eigenvalue on the imaginary axis, then the matrix sign function sign(A) may be defined as sign(A) = (1/(πi)) ∫_γ (zI − A)^(−1) dz − I, (1) where γ is any simple closed curve in the complex plane enclosing all eigenvalues of A with positive real part. The sign function is used to compute eigenvalues and invariant subspaces [2, 4, 6, 13, 14] and to solve Riccati and Sylvester equations [9, 15, 16, 28]. The matrix sign function is attractive for machine computation, because it can be efficiently evaluated by relatively simp...
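The contour-integral definition of sign(A) quoted above can be checked numerically: for a circular γ enclosing only the eigenvalues with positive real part, the trapezoidal rule applied to the integral converges very fast. The following sketch (illustrative, not from the paper) verifies it on a 2×2 triangular matrix whose exact sign is known from the divided-difference formula.

```python
import numpy as np

# A has eigenvalues 2 (positive real part) and -1; for this triangular
# matrix the exact sign is [[1, 2/3], [0, -1]].
A = np.array([[2.0, 1.0],
              [0.0, -1.0]])

# gamma: circle of radius 1.5 centered at 2, enclosing only the eigenvalue 2.
c, r, N = 2.0, 1.5, 256
S = np.zeros((2, 2), dtype=complex)
for t in 2 * np.pi * np.arange(N) / N:
    z = c + r * np.exp(1j * t)
    dz = 1j * r * np.exp(1j * t) * (2 * np.pi / N)   # trapezoidal weight
    S += np.linalg.inv(z * np.eye(2) - A) * dz        # resolvent (zI - A)^{-1}
S = S / (np.pi * 1j) - np.eye(2)                      # the contour-integral sign
```

Because the integrand is analytic and periodic on γ, the trapezoidal rule converges geometrically, so a few hundred quadrature points already give the sign to machine precision.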
Numerical Methods for Algebraic Riccati Equations
 Proc. Workshop on the Riccati Equation in Control, Systems, and Signals
, 1989
Cited by 19 (17 self)
Linear quadratic optimal control problems and the computation of Kalman filters require numerical solutions of discrete and continuous algebraic Riccati equations.
Parallel Performance of a Symmetric Eigensolver based on the Invariant Subspace Decomposition Approach
, 1994
Cited by 15 (0 self)
In this paper, we discuss work in progress on a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). We describe a recently developed acceleration technique that substantially reduces the overall work required by this algorithm and review the algorithmic highlights of a distributed-memory implementation of this approach. These include a fast matrix-matrix multiplication algorithm, a new approach to parallel band reduction and tridiagonalization, and a harness for coordinating the divide-and-conquer parallelism in the problem. We present performance results for the dominant kernel, dense matrix multiplication, as well as for the overall SYISDA implementation on the Intel Touchstone Delta and the Intel Paragon. 1. Introduction. Computation of eigenvalues and eigenvectors is an essential kernel in many applications, and several promising parallel algorithms have been investigated [26, 3, 28, 22, 25, 6]. The work presented in t...
Numerical stability and instability in matrix sign function based algorithms
 Computational and Combinatorial Methods in Systems Theory
, 1986
Cited by 13 (5 self)
This paper uses a forward and backward error analysis to try to identify some classes of matrices for which the matrix sign function is a numerically stable algorithm for extracting invariant subspaces. Proper scaling is essential to numerical stability as well as to rapid convergence. Roberts [21] and Beavers and Denman [7] introduced the matrix sign function as a means of solving algebraic Riccati equations and Lyapunov equations. The matrix sign function has since attracted the attention of control engineers and some applied mathematicians ([1] to [21]). Balzer [3], Barraud [5] and Byers [9] have suggested strategies for accelerating convergence. Denman and Beavers [11] extended matrix sign function algorithms to a list of invariant-subspace-related calculations. Howland [16] used the matrix sign function to count eigenvalues in boxes in the complex plane. Some of the algorithms have been refined and extended by Attarzadeh [2], Bierman [8], and Byers [9]. Gardiner and Laub [14] have extended the use of the matrix sign function to generalized Riccati equations and discrete Riccati equations. Higham [15] has used matrix sign function techniques to calculate polar decompositions.
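The effect of scaling on convergence can be illustrated with determinantal scaling, c = |det X|^(−1/n), one of the acceleration strategies suggested in this literature. The sketch below (illustrative, not the paper's code) compares iteration counts with and without scaling on a diagonal matrix whose eigenvalues differ widely in magnitude; the function name is hypothetical.

```python
import numpy as np

def newton_sign(A, scaled, tol=1e-13, max_iter=200):
    # Newton iteration X <- ((cX) + (cX)^{-1}) / 2.
    # With scaled=True, c = |det X|^{-1/n} rebalances the spectrum
    # around the unit circle before each step.
    n = A.shape[0]
    X = A.astype(float).copy()
    for k in range(1, max_iter + 1):
        if scaled:
            _, logabsdet = np.linalg.slogdet(X)
            c = np.exp(-logabsdet / n)
        else:
            c = 1.0
        Xnew = 0.5 * (c * X + np.linalg.inv(c * X))
        if np.linalg.norm(Xnew - X, 1) <= tol * np.linalg.norm(Xnew, 1):
            return Xnew, k
        X = Xnew
    return X, max_iter

A = np.diag([1e-3, 1e-2, -1.0])     # eigenvalues of widely varying magnitude
S_plain, it_plain = newton_sign(A, scaled=False)
S_scaled, it_scaled = newton_sign(A, scaled=True)
```

On this example both variants reach the same limit, diag(1, 1, −1), but the scaled iteration gets there in roughly half the steps, since the unscaled iteration first spends many steps merely inflating the tiny eigenvalues toward magnitude one.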
The PRISM Project: Infrastructure and Algorithms for Parallel Eigensolvers
, 1994
Cited by 12 (6 self)
The goal of the PRISM project is the development of infrastructure and algorithms for the parallel solution of eigenvalue problems. We are currently investigating a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). After briefly reviewing SYISDA, we discuss the algorithmic highlights of a distributed-memory implementation of this approach. These include a fast matrix-matrix multiplication algorithm, a new approach to parallel band reduction and tridiagonalization, and a harness for coordinating the divide-and-conquer parallelism in the problem. We also present performance results of these kernels as well as the overall SYISDA implementation on the Intel Touchstone Delta prototype. 1. Introduction. Computation of eigenvalues and eigenvectors is an essential kernel in many applications, and several promising parallel algorithms have been investigated [29, 24, 3, 27, 21]. The work presented in this paper is part of the PRI...
Disk Functions And Their Relationship To The Matrix Sign Function
, 1997
Cited by 9 (7 self)
This short paper investigates a generalization of the matrix sign function to matrix pencils. 1. Introduction. The problem of extracting an invariant subspace of a matrix or a deflating subspace of a matrix pencil arises in many control computations, including solving Lyapunov, Sylvester, and Riccati equations [16, 18, 19, 32, 38] and computing H∞ norms [7, 6]. Numerical methods related to the matrix sign function are particularly attractive for machines with advanced architectures [2, 16, 27]. The matrix sign function [37, 38] has many equivalent definitions [21, 26]. One of the more convenient (but less common) definitions is the following. The sign of a matrix A ∈ ℝ^(n×n) is the antistabilizing solution S = sign(A) to the (nonsymmetric) algebraic Riccati equation A − SAS = 0, (1) i.e., the solution for which the eigenvalues of AS lie in the open right half plane. (Equation (1) is related to work in [23, 27].) The "quadratic formula" form of the solution is sign(A) = A ...
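The Riccati characterization of the sign function quoted above is easy to verify numerically: with S computed by the standard Newton iteration, A − SAS should vanish and the eigenvalues of AS should lie in the open right half plane. A small illustrative check (not from the paper):

```python
import numpy as np

def matrix_sign(A, iters=60):
    # Newton iteration for sign(A); assumes A has no eigenvalues
    # on the imaginary axis.
    X = A.astype(float).copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

A = np.array([[2.0, 1.0, 0.0],
              [0.0, -1.0, 3.0],
              [0.0, 0.0, 4.0]])   # eigenvalues 2, -1, 4
S = matrix_sign(A)

residual = np.linalg.norm(A - S @ A @ S)   # Riccati equation A - SAS = 0
halfplane = np.linalg.eigvals(A @ S).real  # antistabilizing: all Re > 0
```

Since S commutes with A and is involutory (S² = I), the residual A − SAS reduces to A − S²A = 0 in exact arithmetic, which makes this check a good sanity test of the iteration.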
Stabilization of Large Linear Systems
, 1994
Cited by 7 (2 self)
We discuss numerical methods for the stabilization of large linear multi-input control systems of the form ẋ = Ax + Bu via a feedback of the form u = Fx. The method discussed in this paper is a stabilization algorithm based on a subspace splitting. This splitting is done via the matrix sign-function method. Then a projection onto the unstable subspace is performed, followed by a stabilization technique via the solution of an appropriate algebraic Riccati equation. There are several possibilities for dealing with the freedom in the choice of the feedback as well as in the cost functional used in the Riccati equation. We discuss several optimality criteria and show that in special cases the feedback matrix F of minimal spectral norm is obtained via the Riccati equation with zero constant term. A theoretical analysis of the distance to instability of the closed-loop system is given, and numerical examples are presented that support the practical experience with this method.
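The procedure described above can be sketched end to end: split the state space into the stable and unstable A-invariant subspaces with the matrix sign function, stabilize only the projected unstable block through a Riccati equation, and lift the feedback back. The sketch below is illustrative, not the paper's algorithm; it uses SciPy's ARE solver and, for simplicity, identity weights Q = I and R = I rather than the minimal-norm zero-constant-term variant the paper analyzes. The function name is hypothetical.

```python
import numpy as np
from scipy.linalg import qr, solve_continuous_are

def stabilize_by_splitting(A, B):
    n = A.shape[0]
    # Matrix sign function via (unscaled) Newton iteration.
    X = A.astype(float).copy()
    for _ in range(60):
        X = 0.5 * (X + np.linalg.inv(X))
    # Spectral projector onto the unstable invariant subspace (Re > 0).
    Pu = 0.5 * (np.eye(n) + X)
    k = int(round(np.trace(Pu)))                          # number of unstable modes
    V = qr(Pu, pivoting=True)[0][:, :k]                   # basis, unstable subspace
    W = qr(np.eye(n) - Pu, pivoting=True)[0][:, :n - k]   # basis, stable subspace
    T = np.hstack([V, W])                # both subspaces are A-invariant,
    Tinv = np.linalg.inv(T)              # so Tinv @ A @ T is block diagonal
    A1 = (Tinv @ A @ T)[:k, :k]          # unstable block
    B1 = (Tinv @ B)[:k, :]
    # Stabilize the unstable block only (here Q = I, R = I).
    Xare = solve_continuous_are(A1, B1, np.eye(k), np.eye(B.shape[1]))
    F1 = -B1.T @ Xare
    # Lift: u = F x acts only on the unstable coordinates; the closed loop
    # is block triangular, so the stable block's spectrum is untouched.
    return np.hstack([F1, np.zeros((B.shape[1], n - k))]) @ Tinv
```

Because the splitting uses both invariant subspaces, the transformed closed loop is block lower triangular: its eigenvalues are those of the stabilized unstable block together with the original stable eigenvalues.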