Design of a Parallel Nonsymmetric Eigenroutine Toolbox, Part I
, 1993
Cited by 63 (14 self)
The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to be used in different combinations on different problems and architectures. In this paper, we will describe these tools, which include basic block matrix computations, the matrix sign function, 2-dimensional bisection, and spectral divide and conquer using the matrix sign function to find selected eigenvalues. We also outline how we deal with ill-conditioning and potential instability. Numerical examples are included. A future paper will discuss error analysis in detail and extensions to the generalized eigenproblem.
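The sign-function-based spectral divide and conquer mentioned in the abstract can be sketched in a few lines of NumPy (a simplified serial illustration, not the toolbox's actual code; `matrix_sign` and `split_spectrum` are names chosen here for exposition): Newton's iteration computes sign(A), P = (I + sign(A))/2 is a spectral projector onto the invariant subspace for the open right half-plane, and an orthonormal basis of range(P) block-triangularizes A, splitting the spectrum in two.

```python
import numpy as np

def matrix_sign(A, max_iter=100, tol=1e-12):
    # Newton iteration S <- (S + S^{-1}) / 2, which converges
    # quadratically to sign(A) when A has no imaginary-axis eigenvalues.
    S = A.copy().astype(float)
    for _ in range(max_iter):
        S_next = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_next - S, 1) < tol * np.linalg.norm(S, 1):
            return S_next
        S = S_next
    return S

def split_spectrum(A):
    # P = (I + sign(A))/2 projects onto the invariant subspace for
    # eigenvalues in the open right half-plane; an orthonormal basis
    # of range(P) (via SVD) gives Q with Q^T A Q block upper triangular.
    n = A.shape[0]
    P = 0.5 * (np.eye(n) + matrix_sign(A))
    k = int(round(np.trace(P)))   # trace of a projector = its rank
    U, _, _ = np.linalg.svd(P)    # first k columns span range(P)
    T = U.T @ A @ U
    return T, U, k                # T[:k, :k] holds the right-half-plane part
```

After the split, `T[:k, :k]` carries the eigenvalues with positive real part and `T[k:, :k]` is numerically zero, so each diagonal block can be attacked recursively.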
Inverse free parallel spectral divide and conquer algorithms for nonsymmetric eigenproblems
 Numer. Math
, 1994
Cited by 61 (12 self)
We discuss two inverse-free, highly parallel, spectral divide and conquer algorithms: one for computing an invariant subspace of a nonsymmetric matrix and another one for computing left and right deflating subspaces of a regular matrix pencil (A, B). These two closely related algorithms are based on earlier ones of Bulgakov, Godunov and Malyshev, but improve on them in several ways. These algorithms only use easily parallelizable linear algebra building blocks: matrix multiplication and QR decomposition. The existing parallel algorithms for the nonsymmetric eigenproblem use the matrix sign function, which is faster but can be less stable than the new algorithm.
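The inverse-free building block can be illustrated with a single "doubling" step (a sketch of the idea behind this line of work, not the paper's exact formulation; `inverse_free_step` is a name chosen here): one QR factorization of the stacked matrix [B; −A] yields orthogonal blocks satisfying Q12^T B = Q22^T A, and the updated pair (A1, B1) = (Q12^T A, Q22^T B) then obeys B1^{-1} A1 = (B^{-1} A)^2. Repeated squaring of B^{-1}A is thus carried out using only matrix multiplication and QR, with no explicit inverses.

```python
import numpy as np

def inverse_free_step(A, B):
    # One inverse-free doubling step. QR-factor [B; -A]; the identity
    # Q12^T B = Q22^T A (from the zero block of R) implies
    # (Q22^T B)^{-1} (Q12^T A) = (B^{-1} A)^2.
    n = A.shape[0]
    Q, _ = np.linalg.qr(np.vstack([B, -A]), mode='complete')
    Q12, Q22 = Q[:n, n:], Q[n:, n:]
    return Q12.T @ A, Q22.T @ B
```

Iterating this step drives B_j^{-1}A_j toward a projector-like limit from which the deflating subspaces are extracted; only the doubling identity is demonstrated here.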
The spectral decomposition of nonsymmetric matrices on distributed memory parallel computers
 SIAM J. Sci. Comput
, 1997
Cited by 31 (11 self)
The implementation and performance of a class of divide-and-conquer algorithms for computing the spectral decomposition of nonsymmetric matrices on distributed memory parallel computers are studied in this paper. After presenting a general framework, we focus on a spectral divide-and-conquer (SDC) algorithm with Newton iteration. Although the algorithm requires several times as many floating point operations as the best serial QR algorithm, it can be simply constructed from a small set of highly parallelizable matrix building blocks within Level 3 basic linear algebra subroutines (BLAS). Efficient implementations of these building blocks are available on a wide range of machines. In some ill-conditioned cases, the algorithm may lose numerical stability, but this can easily be detected and compensated for. The algorithm reached 31% efficiency with respect to the underlying PUMMA matrix multiplication and 82% efficiency with respect to the underlying ScaLAPACK matrix inversion on a 256-processor Intel Touchstone Delta system, and 41% efficiency with respect to the matrix multiplication in CMSSL on a 32-node Thinking Machines CM-5 with vector units. Our performance model predicts the performance reasonably accurately. To take advantage of the geometric nature of SDC algorithms, we have designed a graphical user interface to let the user choose the spectral decomposition according to specified regions in the complex plane.
Fast linear algebra is stable
 In preparation
, 2006
Cited by 25 (15 self)
In [23] we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of n-by-n matrices can be done by any algorithm in O(n^{ω+η}) operations for any η > 0, then it can be done stably in O(n^{ω+η}) operations for any η > 0. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems and the singular value decomposition, can also be done stably (in a normwise sense) in O(n^{ω+η}) operations.
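As a concrete member of the class of "fast recursive matrix multiplication algorithms" the abstract refers to, here is a minimal Strassen recursion for n-by-n matrices with n a power of two (an illustration of the kind of algorithm the stability result covers, not code from the paper):

```python
import numpy as np

def strassen(A, B, cutoff=32):
    # Recursive Strassen multiplication: 7 half-size products per level
    # instead of 8, giving O(n^{log2 7}) ~ O(n^{2.81}) operations.
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                      # classical multiply at the base
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The normwise error of such a recursion grows only modestly with the recursion depth, which is the sense of stability established in [23] and built upon here.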
Evaluating Products of Matrix Pencils and Collapsing Matrix Products
, 2000
Cited by 17 (4 self)
This paper describes three numerical methods to collapse a formal product of p pairs of matrices P = ∏_{k=0}^{p-1} E_k^{-1} A_k down to the product of a single pair E^{-1} A. In the setting of linear relations, the product formally extends to the case in which some of the E_k's are singular and it is impossible to explicitly form P as a single matrix. The methods differ in flop count, work space, and inherent parallelism. They have in common that they are immune to overflows and use no matrix inversions. A rounding error analysis shows that the special case of collapsing two pairs is numerically backward stable.
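One inversion-free way to collapse two pairs (a sketch in the spirit of the abstract, not a reproduction of the paper's three methods; `collapse_two_pairs` is a name chosen here) uses a single QR factorization to "swap" A1 past E0^{-1}: QR-factoring the stacked matrix [E0; −A1] gives orthogonal blocks with Q12^T E0 = Q22^T A1, hence E1^{-1} A1 E0^{-1} A0 = (Q22^T E1)^{-1} (Q12^T A0).

```python
import numpy as np

def collapse_two_pairs(E1, A1, E0, A0):
    # Collapse E1^{-1} A1 E0^{-1} A0 down to a single pair (E, A) with
    # E^{-1} A equal to the product, using only QR and matmul.
    n = A0.shape[0]
    Q, _ = np.linalg.qr(np.vstack([E0, -A1]), mode='complete')
    Q12, Q22 = Q[:n, n:], Q[n:, n:]
    # From Q12^T E0 = Q22^T A1:  A1 E0^{-1} = Q22^{-T} Q12^T
    return Q22.T @ E1, Q12.T @ A0
```

Because no matrix is ever inverted, the construction remains well defined even when some E_k is singular, which is exactly the regime the abstract emphasizes.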
Using The Matrix Sign Function To Compute Invariant Subspaces
 SIAM J. Matrix Anal. Appl
, 1998
Cited by 14 (1 self)
The matrix sign function has several applications in system theory and matrix computations. However, the numerical behavior of the matrix sign function, and its associated divide-and-conquer algorithm for computing invariant subspaces, are still not completely understood. In this paper, we present a new perturbation theory for the matrix sign function, the conditioning of its computation, the numerical stability of the divide-and-conquer algorithm, and iterative refinement schemes. Numerical examples are also presented. An extension of the matrix sign function based algorithm to compute left and right deflating subspaces for a regular pair of matrices is also described. Key words. matrix sign function, Newton's method, eigenvalue problem, invariant subspace, deflating subspaces. AMS subject classifications. 65F15, 65F35, 65F30, 15A18. 1. Introduction. Since the matrix sign function was introduced in the early 1970s, it has been the subject of numerous studies and used in many applications...
Solving Linear-Quadratic Optimal Control Problems on Parallel Computers
, 2007
Cited by 11 (10 self)
We discuss a parallel library of efficient algorithms for the solution of linear-quadratic optimal control problems involving large-scale systems with state-space dimension up to O(10^4). We survey the numerical algorithms underlying the implementation of the chosen optimal control methods. The approaches considered here are based on invariant and deflating subspace techniques, and avoid the explicit solution of the associated algebraic Riccati equations in case of possible ill-conditioning. Still, our algorithms can also optionally compute the Riccati solution. The major computational task of finding spectral projectors onto the required invariant or deflating subspaces is implemented using iterative schemes for the sign and disk functions. Experimental results report the numerical accuracy and the parallel performance of our approach on a cluster of Intel Itanium 2 processors.
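The invariant-subspace approach described in the abstract can be sketched for the continuous-time LQR case (a simplified serial illustration of the standard sign-function route to the Riccati solution, not the parallel library's code; `care_via_sign` is a name chosen here): the stable invariant subspace of the Hamiltonian matrix is obtained from its matrix sign via Newton's iteration, and the Riccati solution X = U2 U1^{-1} is recovered only at the end, as an option.

```python
import numpy as np

def care_via_sign(A, B, Q, R, iters=100):
    # Solve A^T X + X A - X B R^{-1} B^T X + Q = 0 through the stable
    # invariant subspace of the Hamiltonian H, computed from sign(H).
    n = A.shape[0]
    G = B @ np.linalg.solve(R, B.T)
    H = np.block([[A, -G], [-Q, -A.T]])
    S = H.copy()
    for _ in range(iters):                 # Newton iteration for sign(H)
        S = 0.5 * (S + np.linalg.inv(S))
    P = 0.5 * (np.eye(2 * n) - S)          # projector onto the stable subspace
    U, _, _ = np.linalg.svd(P)             # orthonormal basis of range(P)
    U1, U2 = U[:n, :n], U[n:, :n]
    return U2 @ np.linalg.inv(U1)          # X = U2 U1^{-1}
```

The subspace (the columns of U) is all that is needed for the feedback computations; forming X explicitly is the optional final step, mirroring the library design the abstract describes.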
Spectral division methods for block generalized Schur decompositions
, 1996
Cited by 9 (4 self)
We provide a different perspective on the spectral division methods for block generalized Schur decompositions of matrix pairs. The new approach exposes more algebraic structures of the successive matrix pairs in the spectral division iterations and reveals some potential computational difficulties. We present modified algorithms to reduce the arithmetic cost by nearly 50%, remove inconsistency in spectral subspace extraction from different sides (left and right), and improve the accuracy of subspaces. In application problems that only require a single-sided deflating subspace, our algorithms can be used to obtain a posteriori estimates on the backward accuracy of the computed subspaces with little extra cost.
Disk Functions And Their Relationship To The Matrix Sign Function
, 1997
Cited by 9 (7 self)
This short paper investigates a generalization of the matrix sign function to matrix pencils.
1 Introduction
The problem of extracting an invariant subspace of a matrix or a deflating subspace of a matrix pencil arises in many control computations, including solving Lyapunov, Sylvester, and Riccati equations [16, 18, 19, 32, 38] and computing H∞ norms [7, 6]. Numerical methods related to the matrix sign function are particularly attractive for machines with advanced architectures [2, 16, 27]. The matrix sign function [37, 38] has many equivalent definitions [21, 26]. One of the more convenient (but less common) definitions is the following. The sign of a matrix A ∈ R^{n×n} is the antistabilizing solution S = sign(A) to the (nonsymmetric) algebraic Riccati equation A − SAS = 0, (1) i.e., the solution for which the eigenvalues of AS lie in the open right half plane. (Equation (1) is related to work in [23, 27].) The "quadratic formula" form of the solution is sign(A) = A ...
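The Riccati characterization in equation (1) is easy to check numerically (a small sketch; `matrix_sign` here is an illustrative Newton implementation, not code from the paper): S = sign(A) commutes with A and satisfies S^2 = I, so A − SAS = 0, and the eigenvalues of AS lie in the open right half plane.

```python
import numpy as np

def matrix_sign(A, iters=100):
    # Newton iteration for sign(A); assumes A has no eigenvalues
    # on the imaginary axis.
    S = A.copy().astype(float)
    for _ in range(iters):
        S = 0.5 * (S + np.linalg.inv(S))
    return S

# A test matrix with eigenvalues on both sides of the imaginary axis
rng = np.random.default_rng(0)
X = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
A = X @ np.diag([1.0, 2.0, -3.0, -4.0]) @ np.linalg.inv(X)
S = matrix_sign(A)
# Now S @ S = I, A - S @ A @ S = 0 (equation (1)), and the
# eigenvalues of A @ S all have positive real part.
```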
On a Criterion for Asymptotic Stability of Differential-Algebraic Equations
, 1999
Cited by 5 (1 self)
This paper discusses Lyapunov stability of the trivial solution of linear differential-algebraic equations. As a criterion for asymptotic stability we propose a numerical parameter ρ(A, B) characterizing the property of a regular matrix pencil λA − B to have all finite eigenvalues in the open left half-plane. Numerical aspects of computing this parameter are discussed.