Results 1–10 of 82
Applied Numerical Linear Algebra
Society for Industrial and Applied Mathematics, 1997
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We rst discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing e cient algorithms. We illustrate ..."
Cited by 526 (26 self)
Abstract
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band and sparse matrices.
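The survey's running example is matrix multiplication. As a rough, hypothetical sketch (not code from the paper), the serial blocked scheme below captures the idea behind the communication-efficient parallel algorithms it discusses: each b×b tile update performs O(b³) arithmetic while touching only O(b²) data, which is what keeps communication cost per flop small. The function name and block size are invented for this note.

```python
import numpy as np

def blocked_matmul(A, B, b=64):
    """Blocked (tiled) matrix multiply C = A @ B.

    Serial analogue of the distributed algorithms the survey
    describes: each tile update C[i,j] += A[i,p] @ B[p,j] does
    O(b^3) arithmetic on O(b^2) data.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must agree"
    C = np.zeros((m, n))
    for i in range(0, m, b):
        for j in range(0, n, b):
            for p in range(0, k, b):
                # Slices past the end are clipped, so ragged edges are handled.
                C[i:i+b, j:j+b] += A[i:i+b, p:p+b] @ B[p:p+b, j:j+b]
    return C
```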
On the Early History of the Singular Value Decomposition
1992
"... This paper surveys the contributions of five mathematicians  Eugenio Beltrami (18351899), Camille Jordan (18381921), James Joseph Sylvester (18141897), Erhard Schmidt (18761959), and Hermann Weyl (18851955)  who were responsible for establishing the existence of the singular value de ..."
Cited by 82 (1 self)
Abstract
This paper surveys the contributions of five mathematicians – Eugenio Beltrami (1835–1899), Camille Jordan (1838–1921), James Joseph Sylvester (1814–1897), Erhard Schmidt (1876–1959), and Hermann Weyl (1885–1955) – who were responsible for establishing the existence of the singular value decomposition and developing its theory.
Computing Accurate Eigensystems of Scaled Diagonally Dominant Matrices
1980
"... When computing eigenvalues of sym metric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to comp ..."
Cited by 80 (14 self)
Abstract
When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices, and all symmetric positive definite matrices which can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately than for general matrices, and may be computed more accurately as well. This work extends results of Kahan and Demmel for bidiagonal and tridiagonal matrices.
Computing the Singular Value Decomposition with High Relative Accuracy
Linear Algebra Appl., 1997
"... We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the a ..."
Cited by 55 (12 self)
Abstract
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning, and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a co...
Numerical Computation of an Analytic Singular Value Decomposition of a Matrix Valued Function
Numer. Math., 1991
"... This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y (t) T where X(t) and Y (t) are orthogonal and S(t) is diagonal. To maintain differentiability ..."
Cited by 44 (6 self)
Abstract
This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges. 1 Introduction. A singular value decomposition (SVD) of a constant matrix E ∈ R^(m×n), m ≥ n, is a factorization E = U...
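The factorization E(t) = X(t)S(t)Y(t)^T can be illustrated numerically. The sketch below is hypothetical code, not the paper's Euler-like method: it computes SVDs on a grid of t values and fixes only the column-sign ambiguity from step to step (flipping a left and right singular vector pair together leaves E unchanged); crossings and reorderings of singular values are not handled. The function name and interface are invented here.

```python
import numpy as np

def aligned_svd_path(path, ts):
    """SVDs E(t) = X(t) S(t) Y(t)^T along a path, with column signs
    flipped at each step so successive left factors stay aligned.

    path : callable t -> (m, n) array, m >= n
    ts   : iterable of parameter values
    """
    Xs, Ss, Ys = [], [], []
    X_prev = None
    for t in ts:
        U, s, Vt = np.linalg.svd(path(t), full_matrices=False)
        V = Vt.T
        if X_prev is not None:
            # Flip u_i and v_i together when u_i points away from the
            # previous step's column; sigma_i is unaffected.
            signs = np.sign(np.sum(U * X_prev, axis=0))
            signs[signs == 0] = 1.0
            U = U * signs
            V = V * signs
        Xs.append(U)
        Ss.append(s)
        Ys.append(V)
        X_prev = U
    return Xs, Ss, Ys
```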
Orthogonal Eigenvectors and Relative Gaps
2002
"... Let LDLt be the triangular factorization of a real symmetric n\Theta n tridiagonal matrix so that L is a unit lower bidiagonal matrix, D is diagonal. Let (*; v) be an eigenpair, * 6 = 0, with the property that both * and v are determined to high relative accuracy by the parameters in L and D. Suppo ..."
Cited by 38 (16 self)
Abstract
Let LDL^t be the triangular factorization of a real symmetric n×n tridiagonal matrix, so that L is a unit lower bidiagonal matrix and D is diagonal. Let (λ, v) be an eigenpair, λ ≠ 0, with the property that both λ and v are determined to high relative accuracy by the parameters in L and D. Suppose also that the relative gap between λ and its nearest neighbor μ in the spectrum exceeds 1/n: n|λ − μ| > |λ|. This paper presents a new O(n) algorithm and a proof that, in the presence of roundoff error, the algorithm computes an approximate eigenvector v̂ that is accurate to working precision: |sin ∠(v, v̂)| = O(nε), where ε is the roundoff unit. It follows that v̂ is numerically orthogonal to all the other eigenvectors. This result forms part of a program to compute numerically orthogonal eigenvectors without resorting to the Gram–Schmidt process. The contents of this paper provide a high-level description and theoretical justification for LAPACK (version 3.0) subroutine DLAR1V.
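A minimal sketch, assuming only standard NumPy, of the LDL^t representation this line of work starts from: factoring a (possibly shifted) real symmetric tridiagonal matrix into a unit lower bidiagonal L and a diagonal D. The function name and interface are invented, and this is only the factorization, not the O(n) eigenvector algorithm itself.

```python
import numpy as np

def ldlt_tridiagonal(d, e, shift=0.0):
    """LDL^t factorization of T - shift*I, where T is symmetric
    tridiagonal with diagonal d (length n) and off-diagonal e
    (length n-1).

    Returns the diagonal of D and the subdiagonal l of the unit
    lower bidiagonal L. No pivoting: breaks down if a pivot D[i]
    is exactly zero.
    """
    n = len(d)
    D = np.empty(n)
    l = np.empty(n - 1)
    D[0] = d[0] - shift
    for i in range(1, n):
        l[i - 1] = e[i - 1] / D[i - 1]          # matches off-diagonal e[i-1]
        D[i] = d[i] - shift - l[i - 1] * e[i - 1]  # Schur complement pivot
    return D, l
```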
Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices
Linear Algebra and Appl., 2004
"... Abstract In this paper we present an O(nk) procedure, Algorithm MR 3, for computing k eigenvectors of an n \Theta n symmetric tridiagonal matrix T. A salient feature of the algorithm is that a number of different LDL t products (L unit lower triangular, D diagonal) are computed. In exact arithmetic ..."
Cited by 35 (14 self)
Abstract
In this paper we present an O(nk) procedure, Algorithm MR³, for computing k eigenvectors of an n×n symmetric tridiagonal matrix T. A salient feature of the algorithm is that a number of different LDL^t products (L unit lower triangular, D diagonal) are computed. In exact arithmetic each LDL^t is a factorization of a translate of T. We call the various LDL^t ...
Relatively Robust Representations of Symmetric Tridiagonals
Linear Algebra and Appl., 1999
"... Let LDL t be the triangular factorization of a symmetric tridiagonal matrix T I . Small relative uncertainties in the nontrivial entries of L and D may be represented by diagonal scaling matrices 1 and 2 ; LDL t ! 2 L 1 D 1 L t 2 . The effect of 2 on the eigenvalues i is benign. In this paper ..."
Cited by 30 (14 self)
Abstract
Let LDL^t be the triangular factorization of a shifted symmetric tridiagonal matrix T − σI. Small relative uncertainties in the nontrivial entries of L and D may be represented by diagonal scaling matrices Δ₁ and Δ₂: LDL^t → Δ₂ L Δ₁ D Δ₁ L^t Δ₂. The effect of Δ₂ on the eigenvalues λᵢ is benign. In this paper we study the inner perturbations induced by Δ₁. Suitable condition numbers are introduced and, with the help of orthogonal polynomial theory, illuminating bounds on these condition numbers are obtained. If σ is close to, and on the `wrong' side of, a Ritz value then there will be large element growth (‖L|D|L^t‖ ≫ ‖T − σI‖) and some of the condition numbers will be large. It is shown that element growth is the only cause of large condition numbers. In particular there exist many values σ on either side of interior clusters of close eigenvalues such that T − σI = LDL^t, with modest element growth, and the entries of L and D determine the small eigenvalues to high relative a...
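Element growth can be measured directly. The hypothetical helper below factors T − σI and returns ‖L|D|L^t‖ / ‖T − σI‖ in the infinity norm: when σ lies outside the spectrum, T − σI is definite, |D| = ±D, and the ratio is exactly 1, while a shift on the wrong side of a Ritz value can make it large. The name and the choice of norm are assumptions made for this sketch.

```python
import numpy as np

def element_growth(d, e, shift):
    """Element growth ||L |D| L^t|| / ||T - shift*I|| (infinity norm)
    for the LDL^t factorization of a shifted symmetric tridiagonal T
    with diagonal d and off-diagonal e."""
    n = len(d)
    D = np.empty(n)
    l = np.empty(n - 1)
    D[0] = d[0] - shift
    for i in range(1, n):
        l[i - 1] = e[i - 1] / D[i - 1]
        D[i] = d[i] - shift - l[i - 1] * e[i - 1]
    L = np.eye(n) + np.diag(l, -1)
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    num = np.linalg.norm(L @ np.diag(np.abs(D)) @ L.T, np.inf)
    den = np.linalg.norm(T - shift * np.eye(n), np.inf)
    return num / den
```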
MAPC: A Library for Efficient and Exact Manipulation of Algebraic Points and Curves
"... We present MAPC, a library for exact representation of geometric objects  specifically points and algebraic curves in the plane. Our library makes use of several new algorithms, which we present here, including methods for nding the sign of a determinant, finding intersections between two curves, ..."
Cited by 28 (8 self)
Abstract
We present MAPC, a library for exact representation of geometric objects, specifically points and algebraic curves in the plane. Our library makes use of several new algorithms, which we present here, including methods for finding the sign of a determinant, finding intersections between two curves, and breaking a curve into monotonic segments. These algorithms are used to speed up the underlying computations. The library provides C++ classes that can be used to easily instantiate, manipulate, and perform queries on points and curves in the plane. The point classes can be used to represent points known in a variety of ways (e.g. as exact rational coordinates or algebraic numbers) in a unified manner. The curve class can be used to represent a portion of an algebraic curve. We have used MAPC for applications dealing with algebraic points and curves, including sorting points along a curve, computing arrangements of curves, medial axis computations, and boundary evaluation on curved primitives. As compared to earlier algorithms and implementations utilizing exact arithmetic, our library is able to achieve more than an order of magnitude improvement in performance.
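The determinant-sign primitive that such libraries rely on can be sketched in exact rational arithmetic. The version below is hypothetical (Python rather than the library's C++, and far simpler than MAPC's own methods): it returns the exact sign of det(M) via Gaussian elimination over fractions, so no floating-point rounding can flip the answer.

```python
from fractions import Fraction

def det_sign_exact(M):
    """Exact sign of det(M) for a square matrix with rational entries:
    +1, -1, or 0 (singular). Gaussian elimination over Fraction, with
    the sign tracked across row swaps and read off the diagonal."""
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    sign = 1
    for col in range(n):
        # Find a nonzero pivot in or below row `col`.
        piv = next((r for r in range(col, n) if A[r][col] != 0), None)
        if piv is None:
            return 0  # no pivot: the matrix is singular
        if piv != col:
            A[col], A[piv] = A[piv], A[col]
            sign = -sign  # a row swap negates the determinant
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    # det = product of diagonal entries; only its sign is needed.
    for i in range(n):
        if A[i][i] < 0:
            sign = -sign
    return sign
```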