Results 1–10 of 89
Applied Numerical Linear Algebra
 Society for Industrial and Applied Mathematics
, 1997
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We rst discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing e cient algorithms. We illustrate ..."
Abstract

Cited by 531 (26 self)
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band, and sparse matrices.
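The matrix multiplication example in this survey revolves around blocking for data reuse; a minimal serial sketch of that idea (my own illustration, not the paper's code; numpy is assumed available) is:

```python
import numpy as np

def blocked_matmul(A, B, bs=2):
    """Multiply A @ B by bs-by-bs blocks; each block product is a unit of
    work that a parallel schedule could assign to a processor, and blocking
    is what makes the data reuse explicit."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for p in range(0, k, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4)
assert np.allclose(blocked_matmul(A, B), A @ B)
```

Each block product touches O(bs²) data while doing O(bs³) arithmetic, which is the basic principle behind efficient dense kernels on machines with memory hierarchies.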
Computing the Singular Value Decomposition with High Relative Accuracy
 Linear Algebra Appl
, 1997
"... We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the a ..."
Abstract

Cited by 55 (12 self)
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning, and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a co...
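To make the absolute-versus-relative distinction concrete, here is a small numerical illustration (my own, using numpy's conventional backward stable SVD, not an algorithm from the paper):

```python
import numpy as np

# Build a symmetric matrix with exactly known singular values spanning
# twelve orders of magnitude.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
exact = np.array([1.0, 1e-6, 1e-12])
A = Q @ np.diag(exact) @ Q.T

computed = np.linalg.svd(A, compute_uv=False)

# Backward stability guarantees absolute errors of order eps * ||A||:
abs_err = np.abs(computed - exact)
assert np.all(abs_err < 1e-12)
# For sigma_3 = 1e-12, that absolute bound allows a relative error of
# order one, i.e. no guaranteed correct digits; a high-relative-accuracy
# algorithm would guarantee correct digits in sigma_3 as well.
```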
Numerical Computation of an Analytic Singular Value Decomposition of a Matrix Valued Function
 Numer. Math
, 1991
"... This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y (t) T where X(t) and Y (t) are orthogonal and S(t) is diagonal. To maintain differentiability ..."
Abstract

Cited by 44 (6 self)
This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability, the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.

1 Introduction

A singular value decomposition (SVD) of a constant matrix E ∈ R^{m×n}, m ≥ n, is a factorization E = U...
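An SVD path can jump discontinuously if each E(t) is factored independently, since library routines force nonnegative, sorted singular values. A toy sign-matching continuation (my own sketch in numpy, much simpler than the paper's Euler-like methods) shows the idea of using the free signs to keep the factors continuous:

```python
import numpy as np

def continuous_svd_step(E, U_prev):
    """Factor E = U @ diag(s) @ Vt, flipping paired signs in U and Vt so
    that U stays close to the previous step's left factor U_prev."""
    U, s, Vt = np.linalg.svd(E)
    for j in range(U.shape[1]):
        if U[:, j] @ U_prev[:, j] < 0:
            U[:, j] *= -1.0
            Vt[j, :] *= -1.0   # compensating flip keeps the product equal to E
    return U, s, Vt

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

D = np.diag([2.0, 1.0])
U = np.eye(2)
for t in np.linspace(0.0, 0.5, 6):
    E = rot(t) @ D
    U, s, Vt = continuous_svd_step(E, U)
    assert np.allclose(U @ np.diag(s) @ Vt, E)  # still a valid factorization
    assert np.allclose(U, rot(t), atol=1e-8)    # left factor follows the path
```

This only handles the sign ambiguity; the paper's analytic SVD additionally allows signed diagonal entries in S(t) and arbitrary ordering, which is what makes differentiability possible through singular value crossings.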
Numerical Methods for Simultaneous Diagonalization
 SIAM J. Matrix Anal. Applicat
, 1993
"... We present a Jacobilike algorithm for simultaneous diagonalization of commuting pairs of complex normal matrices by unitary similarity transformations. The algorithm uses a sequence of similarity transformations by elementary complex rotations to drive the offdiagonal entries to zero. We show th ..."
Abstract

Cited by 37 (0 self)
We present a Jacobi-like algorithm for simultaneous diagonalization of commuting pairs of complex normal matrices by unitary similarity transformations. The algorithm uses a sequence of similarity transformations by elementary complex rotations to drive the off-diagonal entries to zero. We show that its asymptotic convergence rate is quadratic and that it is numerically stable. It preserves the special structure of real matrices, quaternion matrices, and real symmetric matrices.
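The elementary step such algorithms repeat is a rotation chosen to annihilate one off-diagonal entry. A minimal cyclic Jacobi sweep for a single real symmetric matrix (my own simplification; the paper treats commuting pairs of complex normal matrices) looks like:

```python
import numpy as np

def jacobi_eigen(A, sweeps=10):
    """Eigenvalues of symmetric A by cyclic Jacobi: repeatedly apply an
    elementary rotation that zeros one off-diagonal entry."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # angle that makes the rotated (p, q) entry exactly zero
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J          # similarity: spectrum preserved
    return np.sort(np.diag(A))

A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 1.0]])
assert np.allclose(jacobi_eigen(A), np.sort(np.linalg.eigvalsh(A)))
```

The simultaneous-diagonalization algorithm applies rotations of this kind to a commuting pair at once, choosing each rotation to reduce the joint off-diagonal weight.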
Large Dense Numerical Linear Algebra in 1993: The Parallel Computing Influence
 International Journal Supercomputer Applications
, 1994
"... This paper surveys the current state of applications of large dense numerical linear algebra, and the influence of parallel computing. Furthermore, we attempt to crystalize many important ideas that we feel have been sometimes been misunderstood in the rush to write fast programs. 1 Introduction Th ..."
Abstract

Cited by 35 (2 self)
This paper surveys the current state of applications of large dense numerical linear algebra, and the influence of parallel computing. Furthermore, we attempt to crystallize many important ideas that we feel have sometimes been misunderstood in the rush to write fast programs.

1 Introduction

This paper represents my continuing efforts to track the status of large dense linear algebra problems. The goal is to shatter the barriers that separate the various interested communities while commenting on the influence of parallel computing. A secondary goal is to crystallize the most important ideas that have all too often been obscured by the details of machines and algorithms. Parallel supercomputing is in the spotlight. In the race towards the proliferation of papers on person X's experiences with machine Y (and why his algorithm runs faster than person Z's), sometimes we have lost sight of the applications for which these algorithms are meant to be useful. This paper concentrates on la...
NEW FAST AND ACCURATE JACOBI SVD ALGORITHM: II
, 2002
"... This paper presents new implementation of one–sided Jacobi SVD for triangular matrices and its use as the core routine in a new preconditioned Jacobi SVD algorithm, recently proposed by the authors. New pivot strategy exploits the triangular form and uses the fact that the input triangular matrix i ..."
Abstract

Cited by 32 (3 self)
This paper presents a new implementation of one-sided Jacobi SVD for triangular matrices and its use as the core routine in a new preconditioned Jacobi SVD algorithm, recently proposed by the authors. The new pivot strategy exploits the triangular form and uses the fact that the input triangular matrix is the result of a rank-revealing QR factorization. If used in the preconditioned Jacobi SVD algorithm, it delivers superior performance, leading to the currently fastest method for computing the SVD with high relative accuracy. Furthermore, the efficiency of the new algorithm is comparable to the less accurate bidiagonalization-based methods. The paper also discusses underflow issues in floating point implementation, and shows how to use perturbation theory to work around the imperfections of machine arithmetic on some systems.
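The core one-sided Jacobi idea can be sketched as follows (a generic, unoptimized version of my own; the paper's contribution is the pivot strategy and preconditioning for triangular input, which this sketch omits). Pairs of columns are rotated until they are mutually orthogonal; the column norms are then the singular values:

```python
import numpy as np

def one_sided_jacobi_sv(A, sweeps=30, tol=1e-14):
    """Singular values of A via one-sided Jacobi: orthogonalize column
    pairs with plane rotations applied on the right."""
    A = A.astype(float)
    n = A.shape[1]
    for _ in range(sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                apq = A[:, p] @ A[:, q]           # Gram matrix entries
                app = A[:, p] @ A[:, p]
                aqq = A[:, q] @ A[:, q]
                off = max(off, abs(apq) / np.sqrt(app * aqq))
                if abs(apq) < tol * np.sqrt(app * aqq):
                    continue
                # rotation that zeros the (p, q) entry of A^T A
                theta = 0.5 * np.arctan2(2 * apq, aqq - app)
                c, s = np.cos(theta), np.sin(theta)
                Ap = c * A[:, p] - s * A[:, q]
                Aq = s * A[:, p] + c * A[:, q]
                A[:, p], A[:, q] = Ap, Aq
        if off < tol:
            break
    return np.sort(np.linalg.norm(A, axis=0))[::-1]

M = np.array([[2.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
assert np.allclose(one_sided_jacobi_sv(M), np.linalg.svd(M, compute_uv=False))
```

One-sided Jacobi is a natural candidate for high relative accuracy because it works on the columns of A directly rather than on an explicitly formed A^T A.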
Relative perturbation theory: (ii) eigenspace and singular subspace variations
 SIAM J. Matrix Anal. Appl
, 1998
"... The classical perturbation theory for Hermitian matrix eigenvalue and singular value problems provides bounds on invariant subspace variations that are proportional to the reciprocals of absolute gaps between subsets of spectra or subsets of singular values. These bounds may be bad news for invarian ..."
Abstract

Cited by 26 (3 self)
The classical perturbation theory for Hermitian matrix eigenvalue and singular value problems provides bounds on invariant subspace variations that are proportional to the reciprocals of absolute gaps between subsets of spectra or subsets of singular values. These bounds may be bad news for invariant subspaces corresponding to clustered eigenvalues or clustered singular values of much smaller magnitudes than the norms of the matrices under consideration, when some of these clustered eigenvalues or clustered singular values are perfectly relatively distinguishable from the rest. In this paper, we consider how eigenspaces of a Hermitian matrix A change when it is perturbed to Ã = D*AD, and how singular values of a (nonsquare) matrix B change when it is perturbed to B̃ = D1*BD2, where D, D1, and D2 are assumed to be close to identity matrices of suitable dimensions, or either D1 or D2 close to some unitary matrix. It is proved that under these kinds of perturbations, the changes of invariant subspaces are proportional to the reciprocals of relative gaps between subsets of spectra or subsets of singular values. We have been able to extend the well-known Davis–Kahan
A Parallelizable Eigensolver for Real Diagonalizable Matrices with Real Eigenvalues
, 1991
"... . In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed and is followed by a discussion of the numerical consi ..."
Abstract

Cited by 25 (6 self)
In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed and is followed by a discussion of the numerical considerations of the actual implementation. The numerical algorithm has been tested on thousands of matrices on both a Cray-2 and an IBM RS/6000 Model 580 workstation. The results of these tests are presented. Finally, issues concerning the parallel implementation of the algorithm are discussed. The algorithm's heavy reliance on matrix-matrix multiplication, coupled with the divide-and-conquer nature of this algorithm, should yield a highly parallelizable algorithm.

1. Introduction.

Computation of all the eigenvalues and eigenvectors of a dense matrix is essential for solving problems in many fields. The ever-increasing computational power available from modern supercomputers offers the potenti...
The bidiagonal singular value decomposition and Hamiltonian mechanics
 SIAM J. Num. Anal
, 1991
"... We consider computing the singular value decomposition of a bidiagonal matrixB. This problem arises in the singular value decomposition of a general matrix, and in the eigenproblem for a symmetric positive de nite tridiagonal matrix. We show that if the entries of B are known with high relative accu ..."
Abstract

Cited by 25 (6 self)
We consider computing the singular value decomposition of a bidiagonal matrix B. This problem arises in the singular value decomposition of a general matrix, and in the eigenproblem for a symmetric positive definite tridiagonal matrix. We show that if the entries of B are known with high relative accuracy, the singular values and singular vectors of B will be determined to much higher accuracy than the standard perturbation theory suggests. We also show that the algorithm in [Demmel and Kahan] computes the singular vectors as well as the singular values to this accuracy. We also give a Hamiltonian interpretation of the algorithm and use differential equation methods to prove many of the basic facts. The Hamiltonian approach suggests a way to use flows to predict the accumulation of error in other eigenvalue algorithms as well.
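A quick numerical check of this phenomenon (my own experiment with numpy; the particular matrix and constants are illustrative, not from the paper):

```python
import numpy as np

def bidiag(d, e):
    """Upper bidiagonal matrix with diagonal d and superdiagonal e."""
    B = np.diag(d)
    B[np.arange(len(e)), np.arange(1, len(d))] = e
    return B

d = np.array([1.0, 1e-3, 1e-6])   # graded diagonal
e = np.array([1e-2, 1e-4])
B = bidiag(d, e)

# Perturb every entry *relatively* by about 1e-10.
rng = np.random.default_rng(1)
eps = 1e-10
Bp = bidiag(d * (1 + eps * rng.uniform(-1, 1, 3)),
            e * (1 + eps * rng.uniform(-1, 1, 2)))

s = np.linalg.svd(B, compute_uv=False)
sp = np.linalg.svd(Bp, compute_uv=False)

# Norm-based theory only bounds |delta sigma| by ~1e-10 * ||B||, which would
# permit a relative change of about 1e-4 in the smallest singular value
# (roughly 1e-6 here). In fact every singular value, tiny ones included,
# moves by a far smaller relative amount:
rel_change = np.abs(sp - s) / s
assert np.all(rel_change < 1e-6)
```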
Numerically Stable Generation of Correlation Matrices and Their Factors
 BIT
, 2000
"... . Correlation matricessymmetric positive semidefinite matrices with unit diagonal are important in statistics and in numerical linear algebra. For simulation and testing it is desirable to be able to generate random correlation matrices with specified eigenvalues (which must be nonnegative an ..."
Abstract

Cited by 21 (3 self)
Correlation matrices (symmetric positive semidefinite matrices with unit diagonal) are important in statistics and in numerical linear algebra. For simulation and testing it is desirable to be able to generate random correlation matrices with specified eigenvalues (which must be nonnegative and sum to the dimension of the matrix). A popular algorithm of Bendel and Mickey takes a matrix having the specified eigenvalues and uses a finite sequence of Givens rotations to introduce 1s on the diagonal. We give improved formulae for computing the rotations and prove that the resulting algorithm is numerically stable. We show by example that the formulae originally proposed, which are used in certain existing Fortran implementations, can lead to serious instability. We also show how to modify the algorithm to generate a rectangular matrix with columns of unit 2-norm. Such a matrix represents a correlation matrix in factored form, which can be preferable to representing the matrix itself, ...
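The Bendel–Mickey construction can be sketched as follows (my own naive version using the textbook quadratic formula for the rotation angle; it is exactly this kind of formula that the paper shows can be unstable and replaces with better ones):

```python
import numpy as np

def random_correlation(lam, rng):
    """Random symmetric matrix with eigenvalues lam and unit diagonal, via
    a naive Bendel-Mickey sweep of Givens rotations (illustrative only)."""
    n = len(lam)
    assert abs(np.sum(lam) - n) < 1e-10     # eigenvalues must sum to n
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(lam) @ Q.T              # right spectrum, wrong diagonal
    for _ in range(n - 1):
        i = int(np.argmin(np.diag(A)))      # a_ii < 1 ...
        j = int(np.argmax(np.diag(A)))      # ... and a_jj > 1 (trace is n)
        aii, ajj, aij = A[i, i], A[j, j], A[i, j]
        if abs(aii - 1.0) < 1e-12:          # diagonal already all ones
            break
        # tan(theta) solving c^2*aii + 2*c*s*aij + s^2*ajj = 1, taken from
        # the textbook quadratic formula (the numerically risky route)
        t = (-aij + np.sqrt(aij**2 - (ajj - 1.0) * (aii - 1.0))) / (ajj - 1.0)
        c = 1.0 / np.sqrt(1.0 + t * t)
        s = c * t
        G = np.eye(n)
        G[i, i] = G[j, j] = c
        G[j, i], G[i, j] = s, -s
        A = G.T @ A @ G                     # similarity: spectrum preserved
        A[i, i] = 1.0                       # clean up roundoff on fixed entry
    return A

lam = np.array([1.6, 1.0, 0.4])
C = random_correlation(lam, np.random.default_rng(2))
assert np.allclose(np.diag(C), 1.0)
assert np.allclose(np.sort(np.linalg.eigvalsh(C)), np.sort(lam))
```

Each rotation is an orthogonal similarity, so the eigenvalues are untouched while one diagonal entry at a time is driven to 1; since the trace equals n, the last entry lands on 1 automatically.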