Results 1–10 of 71
Accurate Singular Values of Bidiagonal Matrices
 SIAM J. Sci. Stat. Comput
, 1990
Abstract

Cited by 100 (17 self)
Computing the singular values of a bidiagonal matrix is the final phase of the standard algorithm for the singular value decomposition of a general matrix. We present a new algorithm which computes all the singular values of a bidiagonal matrix to high relative accuracy, independent of their magnitudes. In contrast, the standard algorithm for bidiagonal matrices may compute small singular values with no relative accuracy at all. Numerical experiments show that the new algorithm is comparable in speed to the standard algorithm, and frequently faster.
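The relative-accuracy property claimed in this abstract is easy to probe numerically: perturbing every entry of a bidiagonal matrix by a small relative amount should move every singular value, including the tiniest, by a comparably small relative amount. A minimal numpy sketch (the graded test matrix and the 1e-8 perturbation size are hypothetical choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical graded upper bidiagonal matrix, entries spanning 4 orders.
d = 10.0 ** np.arange(0.0, -5.0, -1.0)   # diagonal: 1, 1e-1, ..., 1e-4
e = 10.0 ** np.arange(-1.0, -5.0, -1.0)  # superdiagonal: 1e-1, ..., 1e-4
B = np.diag(d) + np.diag(e, 1)
s = np.linalg.svd(B, compute_uv=False)

# Perturb every nonzero entry by a relative amount of about 1e-8 ...
eps = 1e-8
Bp = np.diag(d * (1 + eps * rng.uniform(-1, 1, d.size))) \
     + np.diag(e * (1 + eps * rng.uniform(-1, 1, e.size)), 1)
sp = np.linalg.svd(Bp, compute_uv=False)

# ... and even the smallest singular value moves only ~1e-8 relatively.
rel = np.max(np.abs(sp - s) / s)
print(s.min(), rel)
```

The maximum relative change stays near the 1e-8 entry perturbation even though the singular values span four orders of magnitude; an absolute-accuracy bound would only promise changes of order 1e-8 times the largest singular value.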
Computing the Singular Value Decomposition with High Relative Accuracy
 Linear Algebra Appl
, 1997
Abstract

Cited by 55 (12 self)
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a co...
Orthogonal Eigenvectors and Relative Gaps
, 2002
Abstract

Cited by 38 (16 self)
Let LDL^t be the triangular factorization of a real symmetric n × n tridiagonal matrix, so that L is a unit lower bidiagonal matrix and D is diagonal. Let (λ, v) be an eigenpair, λ ≠ 0, with the property that both λ and v are determined to high relative accuracy by the parameters in L and D. Suppose also that the relative gap between λ and its nearest neighbor μ in the spectrum exceeds 1/n: n|λ − μ| > |λ|. This paper presents a new O(n) algorithm and a proof that, in the presence of roundoff error, the algorithm computes an approximate eigenvector v̂ that is accurate to working precision: |sin ∠(v, v̂)| = O(nε), where ε is the roundoff unit. It follows that v̂ is numerically orthogonal to all the other eigenvectors. This result forms part of a program to compute numerically orthogonal eigenvectors without resorting to the Gram-Schmidt process. The contents of this paper provide a high-level description and theoretical justification for the LAPACK (version 3.0) subroutine DLAR1V.
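As background, the LDL^t factorization named in this abstract is computable by a short two-term recurrence: for a symmetric tridiagonal T with diagonal a and off-diagonal b, T = LDL^t with L unit lower bidiagonal. A sketch with hypothetical names and data (the factorization only; the paper's O(n) eigenvector algorithm is not reproduced here):

```python
import numpy as np

def ldlt_tridiag(a, b):
    """LDL^t factorization of the symmetric tridiagonal matrix with
    diagonal a and off-diagonal b.  Returns the diagonal of D and the
    subdiagonal l of the unit lower bidiagonal factor L."""
    n = len(a)
    D = np.empty(n)
    l = np.empty(n - 1)
    D[0] = a[0]
    for i in range(n - 1):
        l[i] = b[i] / D[i]                      # eliminates T[i+1, i]
        D[i + 1] = a[i + 1] - l[i] ** 2 * D[i]  # Schur complement update
    return D, l

# Hypothetical test data: a positive definite tridiagonal matrix.
a = np.array([4.0, 5.0, 6.0, 7.0])
b = np.array([1.0, 2.0, 3.0])
D, l = ldlt_tridiag(a, b)

# Verify T = L D L^t against the dense matrix.
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
L = np.eye(len(a)) + np.diag(l, -1)
err = np.max(np.abs(L @ np.diag(D) @ L.T - T))
print(err)
```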
Numerical Methods for Simultaneous Diagonalization
 SIAM J. Matrix Anal. Appl
, 1993
Abstract

Cited by 37 (0 self)
We present a Jacobi-like algorithm for simultaneous diagonalization of commuting pairs of complex normal matrices by unitary similarity transformations. The algorithm uses a sequence of similarity transformations by elementary complex rotations to drive the off-diagonal entries to zero. We show that its asymptotic convergence rate is quadratic and that it is numerically stable. It preserves the special structure of real matrices, quaternion matrices, and real symmetric matrices.
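The mechanics of such a rotation sweep are easiest to see in the single-matrix special case: cyclic Jacobi for one real symmetric matrix applies a plane rotation per off-diagonal pair to annihilate that entry. The sketch below (hypothetical names and data) covers only this special case, not the paper's simultaneous diagonalization of a commuting pair:

```python
import numpy as np

def jacobi_eigenvalues(A, sweeps=10):
    """Cyclic Jacobi for a real symmetric matrix: one plane rotation per
    off-diagonal pair and sweep, chosen to zero the (p, q) entry."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Angle that annihilates A[p, q] under J.T @ A @ J.
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
    return np.sort(np.diag(A))

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
S = (M + M.T) / 2                      # hypothetical symmetric test matrix
gap = np.max(np.abs(jacobi_eigenvalues(S) - np.sort(np.linalg.eigvalsh(S))))
print(gap)
```

A few sweeps suffice here because, as the abstract notes for the commuting-pair version, the asymptotic convergence rate of Jacobi iterations is quadratic.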
New Fast and Accurate Jacobi SVD Algorithm: II
, 2002
Abstract

Cited by 32 (3 self)
This paper presents a new implementation of one-sided Jacobi SVD for triangular matrices and its use as the core routine in a new preconditioned Jacobi SVD algorithm, recently proposed by the authors. The new pivot strategy exploits the triangular form and uses the fact that the input triangular matrix is the result of a rank-revealing QR factorization. If used in the preconditioned Jacobi SVD algorithm, it delivers superior performance, leading to the currently fastest method for computing the SVD with high relative accuracy. Furthermore, the efficiency of the new algorithm is comparable to that of the less accurate bidiagonalization-based methods. The paper also discusses underflow issues in floating point implementation, and shows how to use perturbation theory to compensate for the imperfections of machine arithmetic on some systems.
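For orientation, the core one-sided Jacobi idea, stripped of the triangular-form pivot strategy and preconditioning that the paper actually contributes, can be sketched as follows (hypothetical names and data): rotations applied on the right make the columns mutually orthogonal, after which the singular values are the column norms.

```python
import numpy as np

def one_sided_jacobi_sv(A, sweeps=15, tol=1e-14):
    """Plain cyclic one-sided Jacobi: rotate column pairs of A until they
    are mutually orthogonal; the singular values are the column norms.
    (No triangular-form pivoting or preconditioning.)"""
    A = A.astype(float).copy()
    n = A.shape[1]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                app = A[:, p] @ A[:, p]
                aqq = A[:, q] @ A[:, q]
                apq = A[:, p] @ A[:, q]
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue
                # Rotation that zeroes entry (p, q) of the Gram matrix A^T A.
                theta = 0.5 * np.arctan2(2 * apq, aqq - app)
                c, s = np.cos(theta), np.sin(theta)
                ap, aq = A[:, p].copy(), A[:, q].copy()
                A[:, p] = c * ap - s * aq
                A[:, q] = s * ap + c * aq
    return np.sort(np.linalg.norm(A, axis=0))[::-1]

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 4))        # hypothetical test matrix
diff = np.max(np.abs(one_sided_jacobi_sv(B) - np.linalg.svd(B, compute_uv=False)))
print(diff)
```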
Relatively Robust Representations of Symmetric Tridiagonals
 Linear Algebra Appl
, 1999
Abstract

Cited by 30 (14 self)
Let LDL^t be the triangular factorization of a symmetric tridiagonal matrix T − τI. Small relative uncertainties in the nontrivial entries of L and D may be represented by diagonal scaling matrices Δ_1 and Δ_2: LDL^t → Δ_2 L Δ_1 D Δ_1 L^t Δ_2. The effect of Δ_2 on the eigenvalues λ_i − τ is benign. In this paper we study the inner perturbations induced by Δ_1. Suitable condition numbers are introduced and, with the help of orthogonal polynomial theory, illuminating bounds on these condition numbers are obtained. If τ is close to, and on the `wrong' side of, a Ritz value, then there will be large element growth (‖L|D|L^t‖ ≫ ‖T − τI‖) and some of the condition numbers will be large. It is shown that element growth is the only cause of large condition numbers. In particular there exist many values τ on either side of interior clusters of close eigenvalues such that T − τI = LDL^t with modest element growth, and the entries of L and D determine the small eigenvalues to high relative a...
A New O(n²) Algorithm for the Symmetric Tridiagonal Eigenvalue/Eigenvector Problem
 In progress
, 1997
Relative perturbation theory: (i) eigenvalue and singular value variations
 SIAM J. Matrix Anal. Appl
, 1998
Abstract

Cited by 26 (3 self)
The classical perturbation theory for matrix eigenvalue and singular value problems provides bounds on the absolute differences between approximate eigenvalues (singular values) and the true eigenvalues (singular values) of a matrix. These bounds may be bad news for small eigenvalues (singular values), which thereby suffer worse relative uncertainty than large ones. However, there are situations where even small eigenvalues are determined to high relative accuracy by the data, much more accurately than the classical perturbation theory would indicate. In this paper, we study how eigenvalues of a matrix A change when it is perturbed to Ã = D_1 A D_2 and how singular values of a (nonsquare) matrix B change when it is perturbed to B̃ = D_1 B D_2, where D_1 and D_2 are assumed to be close to unitary matrices of suitable dimensions. It is proved that under these kinds of perturbations, small eigenvalues (singular values) suffer relative changes no worse than large eigenvalues (singular values). We have been able to extend many well-known perturbation theorems, including the Hoffman-Wielandt theorem and the Weyl-Lidskii theorem. As applications, we obtained bounds for perturbations
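The central claim, that multiplicative perturbations move even the tiniest eigenvalues only by a small relative amount, is easy to observe numerically. A hypothetical numpy sketch using a symmetric congruence D A D with D near the identity, a special case of the D_1 A D_2 perturbations studied in the abstract (the graded spectrum and perturbation size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = 10.0 ** np.arange(0, -n, -1)     # eigenvalues 1 down to 1e-5
A = (Q * lam) @ Q.T                    # symmetric matrix with graded spectrum
A = (A + A.T) / 2

# Multiplicative (congruence) perturbation with D close to the identity.
delta = 1e-6
D = np.diag(1 + delta * rng.uniform(-1, 1, n))
At = D @ A @ D

w = np.sort(np.linalg.eigvalsh(A))
wt = np.sort(np.linalg.eigvalsh(At))
rel = np.max(np.abs(wt - w) / np.abs(w))
print(rel)    # bounded by a small multiple of delta, even for the 1e-5 eigenvalue
```

An additive perturbation of the same norm, by contrast, could wipe out the smallest eigenvalues entirely, which is exactly the gap between classical and relative perturbation bounds.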
Relative perturbation theory: (ii) eigenspace and singular subspace variations
 SIAM J. Matrix Anal. Appl
, 1998
Abstract

Cited by 26 (3 self)
The classical perturbation theory for Hermitian matrix eigenvalue and singular value problems provides bounds on invariant subspace variations that are proportional to the reciprocals of absolute gaps between subsets of spectra or subsets of singular values. These bounds may be bad news for invariant subspaces corresponding to clustered eigenvalues or clustered singular values of much smaller magnitudes than the norms of the matrices under consideration, when some of these clustered eigenvalues or clustered singular values are perfectly relatively distinguishable from the rest. In this paper, we consider how eigenspaces of a Hermitian matrix A change when it is perturbed to Ã = D*AD and how singular values of a (nonsquare) matrix B change when it is perturbed to B̃ = D_1 B D_2, where D, D_1, and D_2 are assumed to be close to identity matrices of suitable dimensions, or either D_1 or D_2 close to some unitary matrix. It is proved that under these kinds of perturbations, the changes of invariant subspaces are proportional to the reciprocals of relative gaps between subsets of spectra or subsets of singular values. We have been able to extend the well-known Davis-Kahan