Results 1–10 of 36
Design of a Parallel Nonsymmetric Eigenroutine Toolbox, Part I
1993
Cited by 65 (13 self)
The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to be used in different combinations on different problems and architectures. In this paper, we describe these tools, which include basic block matrix computations, the matrix sign function, two-dimensional bisection, and spectral divide and conquer using the matrix sign function to find selected eigenvalues. We also outline how we deal with ill-conditioning and potential instability. Numerical examples are included. A future paper will discuss error analysis in detail and extensions to the generalized eigenproblem.
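The spectral divide-and-conquer idea described in this abstract can be illustrated in a few lines of NumPy. This is a generic sketch, not the paper's toolbox: the sign function is computed by the plain (unscaled) Newton iteration, and the helper name `matrix_sign` is ours. The trace of S = sign(A) counts eigenvalues on either side of the imaginary axis, and (I + S)/2 is the spectral projector onto the right-half-plane invariant subspace.

```python
import numpy as np

def matrix_sign(A, tol=1e-12, maxit=100):
    """Unscaled Newton iteration S <- (S + S^-1)/2 for sign(A).
    Assumes A has no eigenvalues on the imaginary axis."""
    S = A.copy()
    for _ in range(maxit):
        S_new = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_new - S, 1) <= tol * np.linalg.norm(S, 1):
            return S_new
        S = S_new
    return S

# A test matrix with eigenvalues 1, 2, -3, -4 (two in each half-plane).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([1.0, 2.0, -3.0, -4.0]) @ Q.T

S = matrix_sign(A)
n_right = round((A.shape[0] + np.trace(S)) / 2)  # eigenvalues with Re > 0
P = 0.5 * (np.eye(4) + S)                        # spectral projector
```

In a divide-and-conquer scheme one would then split the problem using an orthonormal basis for the range of P and recurse on the two diagonal blocks.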
On the Mechanics of Forming and Estimating Dynamic Linear Economies
Cited by 64 (14 self)
This paper catalogues formulas that are useful for estimating dynamic linear economic models. We describe algorithms for computing equilibria of an economic model and for recursively computing a Gaussian likelihood function and its gradient with respect to parameters. We apply these methods to several example economies.
Computing real square roots of a real matrix
 Linear Algebra Appl
1987
Cited by 40 (21 self)
Björck and Hammarling [1] describe a fast, stable Schur method for computing a square root X of a matrix A (X^2 = A). We present an extension of their method which enables real arithmetic to be used throughout when computing a real square root of a real matrix. For a nonsingular real matrix A, conditions are given for the existence of a real square root, and for the existence of a real square root which is a polynomial in A; the number of square roots of the latter type is determined. The conditioning of matrix square roots is investigated, and an algorithm is given for the computation of a well-conditioned square root.
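As a quick illustration of the Schur-based approach: for an upper triangular A the principal square root is again triangular and can be written down directly, which is also what a library routine returns. The example below uses SciPy's `sqrtm` (itself Schur-based, though not necessarily the real-arithmetic variant of this paper) and checks the result against the triangular recurrence.

```python
import numpy as np
from scipy.linalg import sqrtm

# Upper triangular A with positive diagonal: the principal square
# root X is real and upper triangular.
A = np.array([[4.0, 1.0],
              [0.0, 9.0]])
X = sqrtm(A)
# Triangular recurrence: X[0,0] = 2, X[1,1] = 3, and
# X[0,1] = A[0,1] / (X[0,0] + X[1,1]) = 1 / 5 = 0.2
```

For a general real matrix, the paper's contribution is doing this on the real Schur form, so that 2 × 2 blocks (complex conjugate eigenvalue pairs) are handled without leaving real arithmetic.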
Approximating the logarithm of a matrix to specified accuracy
 SIAM J. Matrix Anal. Appl
2001
Cited by 37 (18 self)
The standard inverse scaling and squaring algorithm for computing the matrix logarithm begins by transforming the matrix to Schur triangular form in order to facilitate subsequent matrix square root and Padé approximation computations. A transformation-free form of this method that exploits incomplete Denman–Beavers square root iterations and aims for a specified accuracy (ignoring roundoff) is presented. The error introduced by using approximate square roots is accounted for by a novel splitting lemma for logarithms of matrix products. The number of square root stages and the degree of the final Padé approximation are chosen to minimize the computational work. This new method is attractive for high-performance computation since it uses only the basic building blocks of matrix multiplication, LU factorization and matrix inversion.
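The inverse scaling and squaring idea can be sketched as follows: take repeated square roots until A^(1/2^k) is close to the identity, approximate the logarithm of the near-identity matrix, then multiply back by 2^k. The sketch below uses Schur-based `sqrtm` calls and a truncated Mercator series as a stand-in for the Padé step, so it illustrates the scaling structure but not the transformation-free method of the paper; the function name and tolerances are our choices.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def logm_iss(A, tol=0.25, m=8):
    """Inverse scaling and squaring, sketched: square-root A until it
    is near I, use the truncated series log(I + X) ~ sum over j of
    (-1)^(j+1) X^j / j (standing in for the Pade step), then scale
    back by 2^k.  Illustration only."""
    n = A.shape[0]
    k = 0
    while np.linalg.norm(A - np.eye(n), 1) > tol:
        A = sqrtm(A)
        k += 1
    X = A - np.eye(n)
    L = np.zeros_like(X)
    P = np.eye(n)
    for j in range(1, m + 1):
        P = P @ X
        L = L + ((-1) ** (j + 1) / j) * P
    return 2.0 ** k * L

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
L = logm_iss(A)
```

The point of the splitting lemma in the paper is precisely to control what happens when the square roots in the loop above are computed only approximately.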
The Matrix Sign Decomposition and its Relation to the Polar Decomposition
1994
Cited by 35 (12 self)
The sign function of a square matrix was introduced by Roberts in 1971. We show that it is useful to regard S = sign(A) as being part of a matrix sign decomposition A = SN, where N = (A^2)^{1/2}. This decomposition leads to the new representation sign(A) = A(A^2)^{−1/2}. Most results for the matrix sign decomposition have a counterpart for the polar decomposition A = UH, and vice versa. To illustrate this, we derive best approximation properties of the factors U, H and S, determine bounds for ||A − S|| and ||A − U||, and describe integral formulas for S and U. We also derive explicit expressions for the condition numbers of the factors S and N. An important equation expresses the sign of a block 2 × 2 matrix involving A in terms of the polar factor U of A. We apply this equation to a family of iterations for computing S by Pandey, Kenney and Laub, to obtain a new family of iterations for computing U. The iterations have some attractive properties, including ...
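The block 2 × 2 identity alluded to here is, for real A, sign([[0, A], [A^T, 0]]) = [[0, U], [U^T, 0]], where U is the orthogonal polar factor of A. A sketch that checks this numerically, assuming a plain Newton iteration for the sign (the helper name is ours):

```python
import numpy as np

def matrix_sign(B, tol=1e-13, maxit=100):
    """Unscaled Newton iteration for sign(B); assumes no eigenvalues
    of B lie on the imaginary axis."""
    S = B.copy()
    for _ in range(maxit):
        S_new = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_new - S, 1) <= tol * np.linalg.norm(S, 1):
            return S_new
        S = S_new
    return S

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
n = A.shape[0]

# sign of the block matrix [[0, A], [A^T, 0]] ...
B = np.block([[np.zeros((n, n)), A],
              [A.T, np.zeros((n, n))]])
S = matrix_sign(B)
U = S[:n, n:]          # ... carries the polar factor in its (1,2) block

# Reference polar factor from the SVD: A = W Sigma V^T  =>  U = W V^T.
W, _, Vt = np.linalg.svd(A)
```

Running a sign iteration on the block matrix in this way is exactly how the paper turns iterations for S into iterations for U.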
A Parallelizable Eigensolver for Real Diagonalizable Matrices with Real Eigenvalues
1991
Cited by 27 (6 self)
In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed and is followed by a discussion of the numerical considerations of the actual implementation. The numerical algorithm has been tested on thousands of matrices on both a Cray-2 and an IBM RS/6000 Model 580 workstation. The results of these tests are presented. Finally, issues concerning the parallel implementation of the algorithm are discussed. The algorithm's heavy reliance on matrix-matrix multiplication, coupled with the divide-and-conquer nature of this algorithm, should yield a highly parallelizable algorithm. 1. Introduction. Computation of all the eigenvalues and eigenvectors of a dense matrix is essential for solving problems in many fields. The ever-increasing computational power available from modern supercomputers offers the potential ...
The Matrix Sign Function Method And The Computation Of Invariant Subspaces
 SIAM J. Matrix Anal. Appl
1994
Cited by 25 (5 self)
A perturbation analysis shows that if a numerically stable procedure is used to compute the matrix sign function, then it is competitive with conventional methods for computing invariant subspaces. Stability analysis of the Newton iteration improves an earlier result of Byers and confirms that ill-conditioned iterates may cause numerical instability. Numerical examples demonstrate the theoretical results. 1. Introduction. If A ∈ R^{n×n} has no eigenvalue on the imaginary axis, then the matrix sign function sign(A) may be defined as sign(A) = (1/(πi)) ∫_γ (zI − A)^{−1} dz − I, (1) where γ is any simple closed curve in the complex plane enclosing all eigenvalues of A with positive real part. The sign function is used to compute eigenvalues and invariant subspaces [2, 4, 6, 13, 14] and to solve Riccati and Sylvester equations [9, 15, 16, 28]. The matrix sign function is attractive for machine computation, because it can be efficiently evaluated by relatively simple ...
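The route from sign(A) to an invariant subspace can be sketched as follows: (I − sign(A))/2 is the spectral projector onto the stable (negative-real-part) invariant subspace, and a rank-revealing QR factorization of the projector yields an orthonormal basis. This is a generic sketch assuming the plain Newton iteration, not the stabilized procedure the paper analyzes.

```python
import numpy as np
from scipy.linalg import qr

def matrix_sign(A, tol=1e-13, maxit=100):
    """Unscaled Newton iteration for sign(A)."""
    S = A.copy()
    for _ in range(maxit):
        S_new = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_new - S, 1) <= tol * np.linalg.norm(S, 1):
            return S_new
        S = S_new
    return S

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4))
A = T @ np.diag([-2.0, -1.0, 1.0, 3.0]) @ np.linalg.inv(T)

S = matrix_sign(A)
P = 0.5 * (np.eye(4) - S)        # projector onto the stable subspace
k = round(np.trace(P))           # its dimension (here 2)
Q, _, _ = qr(P, pivoting=True)   # pivoted QR reveals the range of P
Q1 = Q[:, :k]                    # orthonormal basis for the subspace

# Invariance check: A Q1 should equal Q1 (Q1^T A Q1).
resid = np.linalg.norm(A @ Q1 - Q1 @ (Q1.T @ A @ Q1))
```

The perturbation analysis in the paper is about when this pipeline, with a stable sign computation, matches Schur-based subspace extraction in accuracy.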
Stable Iterations For The Matrix Square Root
 Numerical Algorithms
1997
Cited by 22 (9 self)
this paper is the trade-off between speed and stability. The single-variable Newton iteration (1.2) is unstable, and the Padé iteration (2.8) becomes unstable when we attempt to reduce the cost of its implementation. Iterations for the matrix square root appear to be particularly delicate with respect to numerical stability.
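A standard example of a stable coupled iteration in this family is the Denman–Beavers iteration, in which Y converges to A^{1/2} and Z to A^{−1/2}. This is a generic sketch of that iteration in NumPy, not the specific variants the paper analyzes; the function name and tolerance are our choices.

```python
import numpy as np

def db_sqrt(A, tol=1e-13, maxit=60):
    """Denman-Beavers coupled iteration:
        Y <- (Y + Z^-1)/2,  Z <- (Z + Y^-1)/2,
    with Y -> A^(1/2) and Z -> A^(-1/2).  Coupling the two variables
    is what restores the numerical stability that the single-variable
    Newton iteration lacks."""
    Y, Z = A.copy(), np.eye(A.shape[0])
    for _ in range(maxit):
        Y_new = 0.5 * (Y + np.linalg.inv(Z))
        Z_new = 0.5 * (Z + np.linalg.inv(Y))
        Y, Z = Y_new, Z_new
        if np.linalg.norm(Y @ Y - A, 1) <= tol * np.linalg.norm(A, 1):
            break
    return Y, Z

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])     # eigenvalues 6 and 1: principal root exists
Y, Z = db_sqrt(A)
```

The speed/stability trade-off discussed in the abstract shows up here too: each step costs two matrix inversions, and cheaper reformulations of this iteration are exactly where instability tends to creep back in.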
Numerical methods for algebraic Riccati equations
 In Proc. Workshop on the Riccati Equation in Control, Systems, and Signals
1989