Results 1–10 of 17
On the Early History of the Singular Value Decomposition
, 1992
Abstract
Cited by 82 (1 self)
This paper surveys the contributions of five mathematicians – Eugenio Beltrami (1835–1899), Camille Jordan (1838–1921), James Joseph Sylvester (1814–1897), Erhard Schmidt (1876–1959), and Hermann Weyl (1885–1955) – who were responsible for establishing the existence of the singular value decomposition and developing its theory.
Computing Accurate Eigensystems of Scaled Diagonally Dominant Matrices
, 1980
Abstract
Cited by 80 (14 self)
When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices, and all symmetric positive definite matrices which can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately than for general matrices, and may be computed more accurately as well. This work extends results of Kahan and Demmel for bidiagonal and tridiagonal matrices.
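The gap between absolute and relative accuracy described above can be seen in a small experiment: forming the normal equations squares the condition number and destroys a tiny singular value that a direct SVD of the (bidiagonal) matrix recovers. A minimal NumPy sketch; the matrix and tolerances are illustrative choices, not taken from the paper:

```python
import numpy as np

# Upper-bidiagonal matrix with singular values of widely different size:
# sigma_max ~ sqrt(2) and sigma_min = |det B| / sigma_max ~ 7.1e-11.
d = 1e-10
B = np.array([[1.0, 1.0],
              [0.0, d]])

# Direct SVD of the bidiagonal matrix: the tiny singular value is
# recovered with high relative accuracy.
smin_svd = np.linalg.svd(B, compute_uv=False)[-1]

# Normal equations: 1 + d**2 rounds to 1 in double precision, so B^T B
# is exactly singular in floating point and the tiny value is lost.
evals = np.linalg.eigvalsh(B.T @ B)          # ascending order
smin_gram = np.sqrt(max(evals[0], 0.0))
```

Here `smin_svd` carries several correct digits while `smin_gram` has lost all relative accuracy, illustrating why working directly with the bidiagonal form matters.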
Computing the Singular Value Decomposition with High Relative Accuracy
 Linear Algebra Appl.
, 1997
Abstract
Cited by 55 (12 self)
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning, and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a co...
A New O(n²) Algorithm for the Symmetric Tridiagonal Eigenvalue/Eigenvector Problem
 In progress
, 1997
A Survey of Componentwise Perturbation Theory in Numerical Linear Algebra
 in Mathematics of Computation 1943–1993: A Half Century of Computational Mathematics
, 1994
Abstract
Cited by 12 (0 self)
Perturbation bounds in numerical linear algebra are traditionally derived and expressed using norms. Norm bounds cannot reflect the scaling or sparsity of a problem and its perturbation, and so can be unduly weak. If the problem data and its perturbation are measured componentwise, much smaller and more revealing bounds can be obtained. A survey is given of componentwise perturbation theory in numerical linear algebra, covering linear systems, the matrix inverse, matrix factorizations, the least squares problem, and the eigenvalue and singular value problems. Most of the results described have been published in the last five years. "Our hero is the intrepid, yet sensitive matrix A. Our villain is E, who keeps perturbing A. When A is perturbed he puts on a crumpled hat: Ã = A + E." G. W. Stewart and J. G. Sun, Matrix Perturbation Theory (1990). 1. Introduction. Matrix analysis would not have developed into the vast subject it is today without the concept of representing a matrix by ...
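A concrete instance of the gap between normwise and componentwise bounds is the linear-system condition number: the normwise κ∞(A) = ‖A‖∞ ‖A⁻¹‖∞ blows up under bad row scaling, while Skeel's componentwise condition number cond(A, x) = ‖ |A⁻¹| |A| |x| ‖∞ / ‖x‖∞ does not. A small sketch with a made-up badly scaled matrix (not an example from the survey):

```python
import numpy as np

# A well-conditioned 2x2 system whose second row is badly scaled down.
A = np.array([[1.0,   1.0],
              [1e-8, -1e-8]])
x = np.array([1.0, 1.0])

Ainv = np.linalg.inv(A)

# Normwise condition number: ~1e8, driven purely by the row scaling.
kappa_inf = np.linalg.norm(A, np.inf) * np.linalg.norm(Ainv, np.inf)

# Skeel's componentwise condition number: O(1), immune to row scaling.
cond_skeel = (np.linalg.norm(np.abs(Ainv) @ np.abs(A) @ np.abs(x), np.inf)
              / np.linalg.norm(x, np.inf))
```

For this matrix `kappa_inf` is about 10⁸ while `cond_skeel` is about 2, showing how much smaller a componentwise bound can be.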
Relative Perturbation Results for Matrix Eigenvalues and Singular Values
, 1998
Abstract
Cited by 11 (1 self)
this paper is for you! We present error bounds for eigenvalues and singular values that can be much tighter than the traditional bounds, especially when these values have ...
Relative Perturbation Results For Eigenvalues And Eigenvectors Of Diagonalisable Matrices
, 1996
Abstract
Cited by 7 (2 self)
Let λ and x be a perturbed eigenpair of a diagonalisable matrix A. The problem is to bound the error in λ and x. We present one absolute perturbation bound and two relative perturbation bounds. The absolute perturbation bound implies that the condition number for x is the norm of an orthogonal projection of the reduced resolvent at λ. This condition number can be a lot less pessimistic than the traditional one, which is derived from a first-order analysis. A further upper bound leads to an extension of Davis and Kahan's sin θ theorem from Hermitian to diagonalisable matrices. The two relative perturbation bounds assume that λ and x are an exact eigenpair of a perturbed matrix D1AD2, where D1 and D2 are nonsingular, but D1AD2 is not necessarily diagonalisable. We derive a bound on the relative error in λ and a sin θ theorem based on a relative eigenvalue separation. The perturbation bounds contain both the deviation of D1 and D2 from similarity and the deviation of D2 from iden...
Componentwise Perturbation Theory for Linear Systems with Multiple Right-Hand Sides
, 1992
Abstract
Cited by 5 (3 self)
Existing definitions of componentwise backward error and componentwise condition number for linear systems are extended to systems with multiple right-hand sides and to a general class of componentwise measures of perturbations involving Hölder p-norms. It is shown that for a system of order n with r right-hand sides, the componentwise backward error can be computed by finding the minimum p-norm solutions to n underdetermined linear systems, and an explicit expression is obtained in the case r = 1. A perturbation bound is derived, and from this the componentwise condition number is obtained to within a multiplicative constant. Applications of the results are discussed to invariant subspace computations, quasi-Newton methods based on multiple secant equations, and an inverse ODE problem.
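For a single right-hand side (r = 1) with perturbations measured relative to |A| and |b|, the explicit expression for the componentwise backward error is the classical Oettli–Prager formula, ω = maxᵢ |b − Ay|ᵢ / (|A||y| + |b|)ᵢ. A short sketch (the matrix and right-hand side are illustrative; a robust implementation would also treat zero denominators):

```python
import numpy as np

def oettli_prager(A, b, y):
    """Componentwise backward error of the approximate solution y to
    A x = b, with perturbations measured relative to |A| and |b|.
    Assumes the denominator is positive in every component."""
    r = b - A @ y
    denom = np.abs(A) @ np.abs(y) + np.abs(b)
    return np.max(np.abs(r) / denom)

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 3.0])

y = np.linalg.solve(A, b)   # backward-stable solve: omega ~ machine epsilon
y_bad = y + 1e-4            # perturbed solution: omega ~ 1e-4 / norm scale
```

A tiny ω certifies that y exactly solves a nearby system (A + ΔA)y = b + Δb with |ΔA| ≤ ω|A| and |Δb| ≤ ω|b|.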
On the error analysis and implementation of some eigenvalue decomposition and singular value decomposition algorithms
, 1996
Abstract
Cited by 2 (0 self)
Many algorithms exist for computing the symmetric eigendecomposition, the singular value decomposition and the generalized singular value decomposition. In this thesis, we present several new algorithms and improvements on old algorithms, analyzing them with respect to their speed, accuracy, and storage requirements. We first discuss the variations on the bisection algorithm for finding eigenvalues of symmetric tridiagonal matrices. We show the challenges in implementing a correct algorithm with floating point arithmetic. We show how reasonable looking but incorrect implementations can fail. We carefully define correctness, and present several implementations that we rigorously prove correct. We then discuss a fast implementation of bisection using parallel prefix. We show many numerical examples of the instability of this algorithm, and then discuss its forward error and backward error analysis. We also discuss possible ways to stabilize it by using iterative refinement. Finally, we discuss how to use a divide-and-conquer algorithm to compute the singular value decomposition and solve the linear least squares problem, and how to implement ...
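The bisection algorithm discussed above rests on the Sturm-count property of a symmetric tridiagonal matrix T: by Sylvester's law of inertia, the number of negative pivots in the LDLᵀ factorization of T − xI equals the number of eigenvalues below x. A minimal pure-Python sketch that deliberately ignores the floating-point subtleties (overflow, zero pivots) whose careful treatment is the subject of the thesis:

```python
def count_below(diag, off, x):
    """Number of eigenvalues of tridiag(diag, off) strictly less than x,
    counted as negative pivots of the LDL^T factorization of T - x*I."""
    count, d = 0, 1.0
    for i in range(len(diag)):
        d = (diag[i] - x) - (off[i - 1] ** 2 / d if i > 0 else 0.0)
        if d == 0.0:          # naive guard; correct codes perturb with care
            d = -1e-300
        if d < 0.0:
            count += 1
    return count

def kth_eigenvalue(diag, off, k, tol=1e-12):
    """k-th smallest eigenvalue (k is 0-based) by bisection on the count."""
    n = len(diag)
    # Gershgorin interval guaranteed to contain all eigenvalues.
    radius = [abs(off[i - 1]) if i > 0 else 0.0 for i in range(n)]
    for i in range(n - 1):
        radius[i] += abs(off[i])
    lo = min(diag[i] - radius[i] for i in range(n))
    hi = max(diag[i] + radius[i] for i in range(n))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(diag, off, mid) >= k + 1:
            hi = mid          # at least k+1 eigenvalues below mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

As a check, tridiag(2, 1) of order 5 has eigenvalues 2 + 2 cos(kπ/6), so `kth_eigenvalue([2.0]*5, [1.0]*4, 0)` is close to 2 − √3. The monotonicity of the count in x is what makes each bisection step safe, which is also why naive floating-point implementations that break monotonicity can fail, as the thesis demonstrates.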