Results 1 – 8 of 8
Computing Accurate Eigensystems of Scaled Diagonally Dominant Matrices
, 1980
Abstract

Cited by 92 (14 self)
When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices, and all symmetric positive definite matrices which can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately than for general matrices, and may be computed more accurately as well. This work extends results of Kahan and Demmel for bidiagonal and tridiagonal matrices.
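A quick numerical check of that claim (an illustration only, not one of the paper's algorithms; it uses numpy's LAPACK-backed SVD, and the graded test matrix and the 1e-6 tolerance below are our own choices): relatively perturbing the entries of a bidiagonal matrix should move even its tiniest singular values by only a comparably small relative amount.

```python
import numpy as np

# A graded bidiagonal matrix whose singular values span many orders of magnitude.
n = 6
d = np.array([10.0 ** (-k) for k in range(n)])        # diagonal: 1 down to 1e-5
e = np.array([10.0 ** (-k - 1) for k in range(n - 1)])  # superdiagonal
B = np.diag(d) + np.diag(e, 1)
s = np.linalg.svd(B, compute_uv=False)

# Perturb every nonzero entry by a *relative* amount of about 1e-8.
eps = 1e-8
Bp = np.diag(d * (1 + eps)) + np.diag(e * (1 - eps), 1)
sp = np.linalg.svd(Bp, compute_uv=False)

# Even the tiniest singular value moves only by a small *relative* amount,
# far below the loose 1e-6 tolerance assumed here.
rel_change = np.max(np.abs(sp - s) / s)
print(rel_change < 1e-6)
```

A general (non-bidiagonal) matrix of the same norm would offer no such guarantee for its small singular values.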
Constructing a Unitary Hessenberg Matrix from Spectral Data
, 1993
Abstract

Cited by 29 (4 self)
We consider the numerical construction of a unitary Hessenberg matrix from spectral data using an inverse QR algorithm. Any unitary upper Hessenberg matrix H with nonnegative subdiagonal elements can be represented by 2n − 1 real parameters. This representation, which we refer to as the Schur parameterization of H, facilitates the development of efficient algorithms for this class of matrices. We show that a unitary upper Hessenberg matrix H with positive subdiagonal elements is determined by its eigenvalues and the eigenvalues of a rank-one unitary perturbation of H. The eigenvalues of the perturbation strictly interlace the eigenvalues of H on the unit circle.
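The Schur parameterization is concrete enough to sketch: H is a product of n − 1 Givens-like factors, each carrying one complex reflection coefficient γ_k with |γ_k| < 1, times a final unimodular parameter, for 2(n − 1) + 1 = 2n − 1 real parameters. Sign conventions for the factors vary across the literature; the construction below uses one common choice and is only an illustrative sketch:

```python
import numpy as np

def hessenberg_from_schur_params(gammas, gamma_n):
    """Build a unitary upper Hessenberg matrix from Schur parameters.

    gammas: n-1 complex coefficients with |gamma_k| < 1 (2(n-1) real parameters).
    gamma_n: one unimodular complex number (1 more parameter): 2n-1 in total.
    Sign conventions differ between papers; this is one common choice.
    """
    n = len(gammas) + 1
    H = np.eye(n, dtype=complex)
    for k, g in enumerate(gammas):
        s = np.sqrt(1.0 - abs(g) ** 2)        # complementary parameter, s >= 0
        G = np.eye(n, dtype=complex)
        G[k:k + 2, k:k + 2] = [[-g, s], [s, np.conj(g)]]
        H = H @ G                              # accumulate the Givens-like factors
    D = np.eye(n, dtype=complex)
    D[-1, -1] = -gamma_n
    return H @ D

gammas = [0.3 + 0.4j, -0.1 + 0.2j, 0.5]
H = hessenberg_from_schur_params(gammas, np.exp(1j * 0.7))

print(np.allclose(H @ H.conj().T, np.eye(4)))   # unitary
print(np.allclose(np.tril(H, -2), 0))           # upper Hessenberg
print(np.all(np.diag(H, -1).real >= 0))         # nonnegative subdiagonal
```

The subdiagonal entries come out as the nonnegative reals sqrt(1 − |γ_k|²), which is exactly the normalization the abstract assumes.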
A Parallel Symmetric Block-Tridiagonal Divide-and-Conquer Algorithm
 University of Tennessee
, 2007
Abstract

Cited by 8 (1 self)
We present a parallel implementation of the block-tridiagonal divide-and-conquer algorithm that computes eigensolutions of symmetric block-tridiagonal matrices to reduced accuracy. In our implementation, we use mixed data/task parallelism to achieve data distribution and workload balance. Numerical tests show that our implementation is efficient, scalable and computes eigenpairs to prescribed accuracy. We compare the performance of our parallel eigensolver with that of the ScaLAPACK divide-and-conquer eigensolver on block-tridiagonal matrices.
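The divide step of such algorithms can be sketched on the scalar tridiagonal case (Cuppen's rank-one tearing); in the block-tridiagonal version an entire off-diagonal block is torn out instead and the correction has low rank proportional to the block size. A minimal sketch with made-up data:

```python
import numpy as np

np.random.seed(1)
n, k = 8, 4                        # tear between rows k-1 and k (0-based)
d = np.random.rand(n) + 2.0        # diagonal entries
e = np.random.rand(n - 1)          # off-diagonal entries
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

beta = e[k - 1]                    # the coupling entry being torn out
T1 = T[:k, :k].copy(); T1[-1, -1] -= beta
T2 = T[k:, k:].copy(); T2[0, 0] -= beta
v = np.zeros(n); v[k - 1] = v[k] = 1.0

# The original matrix is exactly the two decoupled halves plus a
# rank-one correction beta * v v^T.
T_rebuilt = np.block([[T1, np.zeros((k, n - k))],
                      [np.zeros((n - k, k)), T2]]) + beta * np.outer(v, v)
print(np.allclose(T, T_rebuilt))
```

The conquer step then merges the eigensystems of the two halves, classically by solving a secular equation for the rank-one update; that part is where the parallel implementation spends its effort.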
High-performance solvers for dense Hermitian eigenproblems
 SIAM J. Sci. Comput
An O(n log³ n) Algorithm for the Real Root and Symmetric Tridiagonal Eigenvalue Problems
, 1994
Abstract
Given a univariate complex polynomial f(x) of degree n with rational coefficients expressed as ratios of two integers < 2^m, the root problem is to find all the roots of f(x) up to specified precision 2^(−μ). In this paper we assume the arithmetic model for computation. We give an algorithm for the real root problem, where all the roots of the polynomial are real. Our real root algorithm has time cost O(n log² n (log n + log b)), where b = m + μ. Our arithmetic time cost is thus O(n log³ n) even in the case of high precision b ≤ n^O(1). This is within a small polylog factor of optimality, thus (perhaps surprisingly) upper bounding the arithmetic complexity of the real root problem to nearly the same as basic arithmetic operations on polynomials. The symmetric tridiagonal problem is: given an n × n symmetric tridiagonal matrix, with 3n nonzero rational entries each expressed as a ratio of two integers < 2^m, find all the eigenvalues up to specified pr...
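For contrast with the near-optimal algorithm of the paper, the classical way to compute symmetric tridiagonal eigenvalues to a specified precision is bisection on Sturm-sequence counts, which costs O(n) arithmetic per count. A minimal sketch (not the paper's method; the test matrix and tolerances are our own choices):

```python
import numpy as np

def sturm_count(d, e, x):
    """Count eigenvalues < x of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e, via the LDL^T / Sturm-sequence inertia."""
    count, q = 0, 1.0
    for i in range(len(d)):
        off = e[i - 1] ** 2 if i > 0 else 0.0
        if q == 0.0:
            q = 1e-300                     # guard against exact breakdown
        q = d[i] - x - off / q
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, tol=1e-12):
    """Bisection for the k-th smallest eigenvalue (0-based), to precision tol."""
    r = np.max(np.abs(d)) + 2.0 * np.max(np.abs(e))   # Gershgorin bound
    lo, hi = -r, r
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) > k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

d = np.array([2.0, 3.0, 1.0, 4.0, 2.5])
e = np.array([1.0, 0.5, 2.0, 1.5])
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
exact = np.linalg.eigvalsh(T)                          # sorted ascending
approx = np.array([kth_eigenvalue(d, e, k) for k in range(len(d))])
print(np.allclose(approx, exact, atol=1e-9))
```

Each bisection step halves the enclosing interval, so reaching precision 2^(−b) takes O(b) counts per eigenvalue; the paper's contribution is to beat this straightforward bound by polylog factors.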
An Efficient Algorithm for the Real Root and Symmetric Tridiagonal Eigenvalue Problems
, 1999
Abstract
Given a univariate complex polynomial f(x) of degree n with rational coefficients expressed as ratios of two integers < 2^m, the root problem is to find all the roots of f(x) up to specified precision 2^(−μ). In this paper we assume the arithmetic model for computation. We give an improved algorithm for finding a well-isolated splitting interval and for fast root proximity verification. Using these results, we give an algorithm for the real root problem, where all the roots of the polynomial are real. Our real root algorithm has time cost O(n log² n (log n + log b)), where b = m + μ. Our arithmetic time cost is thus O(n log³ n) even in the case of high precision b ≤ n^O(1). This is within a small polylog factor of optimality, thus (perhaps surprisingly) upper bounding the arithmetic complexity of the real root problem to nearly the same as basic arithmetic operations on polynomials. The symmetric tridiagonal problem is: given an n × n symmetric tridiago...
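The real root problem itself has a classical, much slower solution via Sturm chains: the difference in sign changes of the chain at a and b counts the distinct real roots in (a, b], and bisection then isolates and refines them. The sketch below assumes simple, well-separated roots and is nowhere near the paper's complexity bounds:

```python
import numpy as np

def sturm_chain(coeffs):
    """Sturm chain p0, p1, ... of a polynomial (highest degree first)."""
    p0 = np.array(coeffs, float)
    chain = [p0, np.polyder(p0)]
    while len(chain[-1]) > 1:
        _, r = np.polydiv(chain[-2], chain[-1])
        r = np.trim_zeros(-r, 'f')          # next term is minus the remainder
        if r.size == 0:
            break                            # repeated roots: gcd reached
        chain.append(r)
    return chain

def sign_changes(chain, x):
    vals = [np.polyval(p, x) for p in chain]
    signs = [v for v in vals if v != 0.0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def real_roots(coeffs, tol=1e-9):
    """All real roots via Sturm counts plus bisection (assumes simple roots)."""
    chain = sturm_chain(coeffs)
    bound = 1.0 + max(abs(c) for c in coeffs[1:]) / abs(coeffs[0])  # Cauchy bound
    def rec(a, b):
        k = sign_changes(chain, a) - sign_changes(chain, b)  # roots in (a, b]
        if k == 0:
            return []
        if b - a < tol:
            return [0.5 * (a + b)]
        m = 0.5 * (a + b)
        return rec(a, m) + rec(m, b)
    return rec(-bound, bound)

# (x - 1)(x + 2)(x - 3) = x^3 - 2x^2 - 5x + 6: all roots real.
roots = real_roots([1.0, -2.0, -5.0, 6.0])
print(np.round(roots, 6))
```

The well-isolated splitting intervals the abstract mentions play the role that naive bisection plays here, but are found fast enough to keep the overall cost near-linear.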
Efficient Parallel Computation of the Characteristic Polynomial of a Sparse, Separable Matrix
, 1999
Abstract
This paper is concerned with the problem of computing the characteristic polynomial of a matrix. In a large number of applications, the matrices are symmetric and sparse, with O(n) nonzero entries. The problem has an efficient sequential solution in this case, requiring O(n²) work by use of the sparse Lanczos method. A major remaining open question is to find a polylog time parallel algorithm with matching work bounds. Unfortunately, the sparse Lanczos method cannot be parallelized to faster than time Ω(n) using n processors. Let M(n) be the processor bound to multiply two n × n matrices in O(log n) parallel time. Giesbrecht [G 95] gave the best previous polylog time
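The sequential route the abstract refers to can be sketched: Lanczos reduces the symmetric matrix to a similar tridiagonal T, and the characteristic polynomial of T follows from the three-term determinant recurrence p_k = (x − α_k) p_{k−1} − β_{k−1}² p_{k−2}. The code below is an illustrative dense, fully reorthogonalized stand-in for the sparse method, with our own variable names:

```python
import numpy as np

def lanczos_tridiag(A, v0):
    """Full Lanczos with reorthogonalization: returns the diagonal (alpha)
    and off-diagonal (beta) of a tridiagonal T orthogonally similar to A."""
    n = A.shape[0]
    Q = np.zeros((n, n))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    for j in range(n):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization,
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # applied twice for safety
        if j < n - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return alpha, beta

def charpoly_tridiag(alpha, beta):
    """Characteristic polynomial coefficients (highest degree first) of the
    tridiagonal matrix via p_k = (x - a_k) p_{k-1} - b_{k-1}^2 p_{k-2}."""
    p_prev, p = np.array([1.0]), np.array([1.0, -alpha[0]])
    for k in range(1, len(alpha)):
        term1 = np.convolve([1.0, -alpha[k]], p)
        term2 = beta[k - 1] ** 2 * p_prev
        pad = np.zeros(len(term1) - len(term2))
        p_prev, p = p, term1 - np.concatenate([pad, term2])
    return p

np.random.seed(2)
n = 6
B = np.random.rand(n, n)
A = (B + B.T) / 2                      # dense stand-in for a sparse symmetric A
alpha, beta = lanczos_tridiag(A, np.ones(n))
p = charpoly_tridiag(alpha, beta)
print(np.allclose(p, np.poly(A), atol=1e-8))
```

The O(n²) count comes from n Lanczos steps at O(n) each for an O(n)-sparse matrix; the recurrence itself is inherently sequential in k, which is one face of the parallelization obstacle the paper discusses.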