Results 1–10 of 18
Orthogonal Eigenvectors and Relative Gaps
, 2002
Abstract

Cited by 53 (15 self)
Let LDLt be the triangular factorization of a real symmetric n × n tridiagonal matrix, so that L is a unit lower bidiagonal matrix and D is diagonal. Let (λ, v) be an eigenpair, λ ≠ 0, with the property that both λ and v are determined to high relative accuracy by the parameters in L and D. Suppose also that the relative gap between λ and its nearest neighbor μ in the spectrum exceeds 1/n: n|λ − μ| > |λ|. This paper presents a new O(n) algorithm and a proof that, in the presence of roundoff error, the algorithm computes an approximate eigenvector v̂ that is accurate to working precision: |sin ∠(v, v̂)| = O(nε), where ε is the roundoff unit. It follows that v̂ is numerically orthogonal to all the other eigenvectors. This result forms part of a program to compute numerically orthogonal eigenvectors without resorting to the Gram–Schmidt process. The contents of this paper provide a high-level description and theoretical justification for LAPACK (version 3.0) subroutine DLAR1V.
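The LDLt factorization of a symmetric tridiagonal matrix can be computed in O(n) time by a single sweep. A minimal Python sketch, under the assumption that no pivot vanishes (the function name and list-based representation are illustrative, not taken from the paper or from DLAR1V):

```python
def ldlt_tridiag(a, b):
    """LDL^T factorization of a symmetric tridiagonal matrix.

    a: diagonal entries a[0..n-1]; b: off-diagonal entries b[0..n-2].
    Returns (d, l): D = diag(d), L unit lower bidiagonal with
    subdiagonal l. Illustrative only: assumes no pivot d[i] vanishes
    (a robust routine such as DLAR1V handles breakdowns carefully).
    """
    n = len(a)
    d = [0.0] * n
    l = [0.0] * (n - 1)
    d[0] = a[0]
    for i in range(n - 1):
        l[i] = b[i] / d[i]                 # multiplier L[i+1, i]
        d[i + 1] = a[i + 1] - l[i] * b[i]  # Schur complement pivot
    return d, l
```

Multiplying L·diag(d)·Lᵀ back together reproduces the input tridiagonal, which is an easy sanity check on the sweep.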
Fernando's Solution to Wilkinson's Problem: an Application of Double Factorization
 Linear Algebra and Appl
, 1996
Abstract

Cited by 50 (19 self)
Suppose that one knows a very accurate approximation σ to an eigenvalue of a symmetric tridiagonal matrix T. A good way to approximate the eigenvector x is to discard an appropriate equation, say the rth, from the system (T − σI)x = 0 and then to solve the resulting underdetermined system in any of several stable ways. However, the output x can be completely inaccurate if r is chosen poorly, and in the absence of a quick and reliable way to choose r this method has lain neglected for over 35 years. Experts in boundary value problems have known about the special structure of the inverse of a tridiagonal matrix since the 1960s, and their double triangular factorization technique (down and up) gives directly the redundancy of each equation and so reveals the set of good choices for r. The relation of double factorization to the eigenvector algorithm of Godunov and his collaborators is described in Section 4. The results extend to band matrices (Section 7) and to zero entries in eigen...
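The double factorization can be sketched directly: run the LDU pivots δ⁺ from the top and the UDL pivots δ⁻ from the bottom of T − σI; the quantity γ_k = δ⁺_k + δ⁻_k − (a_k − σ) then measures the redundancy of equation k, and a tiny |γ_k| marks a good choice of r. A Python sketch under the assumption that no pivot vanishes (the names below are illustrative, not from the paper):

```python
def redundancy_pivots(a, b, sigma):
    """Forward (LDU) and backward (UDL) pivots of T - sigma*I for a
    symmetric tridiagonal T (diagonal a, off-diagonal b), and the
    gamma_k values whose minimizer is a good equation to discard."""
    n = len(a)
    dp = [0.0] * n                    # delta^+ : pivots, top down
    dm = [0.0] * n                    # delta^- : pivots, bottom up
    dp[0] = a[0] - sigma
    for i in range(1, n):
        dp[i] = (a[i] - sigma) - b[i - 1] ** 2 / dp[i - 1]
    dm[n - 1] = a[n - 1] - sigma
    for i in range(n - 2, -1, -1):
        dm[i] = (a[i] - sigma) - b[i] ** 2 / dm[i + 1]
    gamma = [dp[k] + dm[k] - (a[k] - sigma) for k in range(n)]
    r = min(range(n), key=lambda k: abs(gamma[k]))
    return gamma, r
```

For example, with diagonal (2, 2, 2), off-diagonal (1, 1) and the shift σ = 2 + √2 (an eigenvalue), every |γ_k| drops to roundoff size, while a shift away from the spectrum leaves all |γ_k| of order one.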
The Ehrlich–Aberth Method For The Nonsymmetric Tridiagonal Eigenvalue Problem
 MANCHESTER CENTRE FOR COMPUTATIONAL MATHEMATICS
, 2003
Abstract

Cited by 19 (4 self)
An algorithm based on the Ehrlich–Aberth iteration is presented for the computation of the zeros of p(λ) = det(T − λI), where T is an irreducible tridiagonal matrix. The algorithm requires the evaluation of p(λ)/p′(λ) = −1/trace((T − λI)⁻¹), which is done here by exploiting the QR factorization of T − λI and the semiseparable structure of (T − λI)⁻¹. Two choices of the initial approximations are considered; the most effective relies on a divide-and-conquer strategy, and some results motivating this strategy are given. A Fortran 95 module implementing the algorithm is provided and numerical experiments that confirm the effectiveness and the robustness of the approach are presented. In particular, comparisons with the LAPACK subroutine dhseqr show that our algorithm is faster for large dimensions.
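A generic Ehrlich–Aberth sweep can be sketched as follows. Note the simplifications relative to the paper: p and p′ are evaluated by the standard three-term recurrence rather than by the QR/semiseparable scheme, the symmetric case is used for brevity (the general case replaces b² by the product of the two off-diagonal entries), and the starting points are supplied by the caller rather than by the paper's divide-and-conquer strategy. Function names are illustrative.

```python
def char_poly_and_deriv(a, b, x):
    """p(x) = det(T - x*I) and p'(x) for symmetric tridiagonal T
    (diagonal a, off-diagonal b), via the three-term recurrence."""
    p_prev, p = 1.0, a[0] - x
    dp_prev, dp = 0.0, -1.0
    for i in range(1, len(a)):
        p_new = (a[i] - x) * p - b[i - 1] ** 2 * p_prev
        dp_new = -p + (a[i] - x) * dp - b[i - 1] ** 2 * dp_prev
        p_prev, p, dp_prev, dp = p, p_new, dp, dp_new
    return p, dp

def ehrlich_aberth(a, b, z, sweeps=50):
    """Simultaneously refine approximations z[0..n-1] to all
    eigenvalues of T: each update divides the Newton correction
    p/p' by a deflation term built from the other approximations."""
    n = len(z)
    for _ in range(sweeps):
        for j in range(n):
            p, dp = char_poly_and_deriv(a, b, z[j])
            if p == 0.0:
                continue                  # already converged exactly
            newton = p / dp
            s = sum(1.0 / (z[j] - z[k]) for k in range(n) if k != j)
            z[j] -= newton / (1.0 - newton * s)
    return z
```

For diagonal (2, 2, 2) and off-diagonal (1, 1), starting from 0.5, 2.1, 3.7, the sweep converges to the three eigenvalues 2 − √2, 2, 2 + √2.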
Current Inverse Iteration Software Can Fail
 BIT
, 1998
Abstract

Cited by 15 (2 self)
Inverse iteration is widely used to compute the eigenvectors of a matrix once accurate eigenvalues are known. We discuss various issues involved in any implementation of inverse iteration for real, symmetric matrices. Current implementations resort to reorthogonalization when eigenvalues agree to more than three digits relative to the norm. Such reorthogonalization can have unexpected consequences. Indeed, as we show in this paper, the implementations in EISPACK [18] and LAPACK [1] may fail. We illustrate with both theoretical and empirical failures.
Keywords: inverse iteration, symmetric, tridiagonal matrix, eigenvalues, eigenvectors.
AMS subject classification: 15A18, 65F15, 65F25.
1 Introduction. Given an eigenvalue λ of the matrix A, a corresponding eigenvector is defined as a nonzero solution of the homogeneous system (A − λI)v = 0. However, in a computer implementation we can only expect, in general, to have an approximation σ to λ. In such a case, we may attempt to compu...
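The basic inverse iteration step under discussion can be sketched for a symmetric tridiagonal matrix. The solver below is a plain Thomas elimination without pivoting, chosen for brevity; real implementations such as those the paper examines use partial pivoting and careful scaling, and the names here are illustrative.

```python
def thomas_solve(a, b, sigma, rhs):
    """Solve (T - sigma*I) x = rhs for symmetric tridiagonal T
    (diagonal a, off-diagonal b) by elimination without pivoting.
    Illustrative only: assumes no pivot vanishes."""
    n = len(a)
    d = [ai - sigma for ai in a]
    r = list(rhs)
    for i in range(1, n):                  # forward elimination
        m = b[i - 1] / d[i - 1]
        d[i] -= m * b[i - 1]
        r[i] -= m * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (r[i] - b[i] * x[i + 1]) / d[i]
    return x

def inverse_iteration(a, b, sigma, v0, steps=2):
    """Amplify the eigencomponent nearest sigma, normalizing in the
    max norm after each solve."""
    v = list(v0)
    for _ in range(steps):
        v = thomas_solve(a, b, sigma, v)
        nrm = max(abs(t) for t in v)
        v = [t / nrm for t in v]
    return v
```

With diagonal (2, 2, 2), off-diagonal (1, 1), and a shift near the eigenvalue 2, a couple of steps from a starting vector not orthogonal to the target already reproduce the eigenvector direction (1, 0, −1) to high accuracy; the subtleties the paper analyzes concern clustered eigenvalues, where such iterates lose mutual orthogonality.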
Application of a New Algorithm for the Symmetric Eigenproblem to Computational Quantum Chemistry
 In Proceedings of the Eighth SIAM Conference on Parallel Processing for Scientific Computing. SIAM
, 1997
Abstract

Cited by 14 (4 self)
We present performance results of a new method for computing eigenvectors of a real symmetric tridiagonal matrix. The method is a variation of inverse iteration and can in most cases substantially reduce the time required to produce orthogonal eigenvectors. Our implementation of this algorithm has been quite effective in solving "degenerate" eigenproblems in computational chemistry. On a biphenyl example, the implementation is 46 times faster than an earlier PeIGS 2.0 code using 1 processor of the IBM SP. It reduces the time for computing eigenvectors of this 966 × 966 matrix to under 0.15 seconds using 64 processors of the IBM SP. We present performance results for calculations from the SGI PowerChallenge and the IBM SP.
1 Introduction. In computational chemistry, the solution of dense real symmetric standard eigensystem problems is often required to obtain energy states of Hamiltonians. The computational chemists require all or a certain fraction of the spectrum and its associat...
Reliable Computation of the Condition Number of a Tridiagonal Matrix in O(n) Time
, 1997
Abstract

Cited by 13 (1 self)
We present one more algorithm to compute the condition number (for inversion) of an n × n tridiagonal matrix J in O(n) time. Previous O(n) algorithms for this task given by Higham in [17] are based on the tempting compact representation of the upper (lower) triangle of J⁻¹ as the upper (lower) triangle of a rank-one matrix. However, they suffer from severe overflow and underflow problems, especially on diagonally dominant matrices. Our new algorithm avoids these problems and is as efficient as the earlier algorithms.
Keywords: tridiagonal matrix, inverse, condition number, norm, overflow, underflow.
AMS subject classifications: 15A12, 15A60, 65F35.
1 Introduction. When solving a linear system Bx = r we are interested in knowing how accurate the solution is. This question is often answered by showing that the solution computed in finite precision is exact for a matrix "close" to B, and then measuring how sensitive the solution is to a small perturbation. The condition numb...
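For context on why O(n) is the natural target: ‖J‖∞ is a maximum of absolute row sums, and one tridiagonal solve Jx = e (e the vector of ones) gives ‖x‖∞ ≤ ‖J⁻¹‖∞, so a lower bound on the condition number costs O(n). The sketch below is only this cheap lower bound, not the overflow-safe algorithm the paper proposes; the function name is illustrative and the elimination is unpivoted.

```python
def cond_inf_lower_bound(a, b):
    """Lower bound on the infinity-norm condition number of a
    symmetric tridiagonal J (diagonal a, off-diagonal b), in O(n):
    ||J||_inf * ||J^{-1} e||_inf <= ||J||_inf * ||J^{-1}||_inf.
    Illustrative only: elimination without pivoting."""
    n = len(a)
    # ||J||_inf: maximum absolute row sum
    row = [abs(a[i])
           + (abs(b[i - 1]) if i > 0 else 0.0)
           + (abs(b[i]) if i < n - 1 else 0.0) for i in range(n)]
    norm_J = max(row)
    # Solve J x = e by Thomas elimination
    d = list(a)
    r = [1.0] * n
    for i in range(1, n):
        m = b[i - 1] / d[i - 1]
        d[i] -= m * b[i - 1]
        r[i] -= m * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - b[i] * x[i + 1]) / d[i]
    return norm_J * max(abs(t) for t in x)
```

For a matrix whose inverse has entries of one sign per row (e.g. a diagonally dominant M-matrix) this bound is actually exact; in general it can underestimate, which is one reason the rank-one representation of J⁻¹ is tempting despite its overflow hazards.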
A Computational Approach for Fluid Queues Driven By Truncated Birth-Death Processes
, 1999
Abstract

Cited by 5 (1 self)
In this paper, we analyze fluid queues driven by truncated birth-death processes with general birth and death rates. We compute the equilibrium distribution of the content of the fluid buffer by providing efficient numerical procedures to compute the eigenvalues and the eigenvectors of the associated real sign-symmetric tridiagonal matrix. We illustrate the effectiveness of the procedures through tables and graphs.
Accurate BABE Factorisation of Tridiagonal Matrices for Eigenproblems
Abstract

Cited by 4 (1 self)
Recently, Fernando successfully resurrected a classical method for computing eigenvectors which goes back to the times of Cauchy. This algorithm has been in the doldrums for nearly forty years because of a fundamental difficulty highlighted by Wilkinson. The algorithm is based on the solution of a nearly homogeneous system of equations (J − σI)z = γ_k(σ)e_k, z_k = 1, for the approximate eigenvector z, where σ is an eigenvalue shift, γ_k(σ) is a scalar and e_k is a unit vector. The best (minimal residual) approximation for z is obtained by choosing the k, 1 ≤ k ≤ n, for which |γ_k(σ)| is minimal and tiny. If the LDU factorisation is computed from the top of the matrix J − σI and the UDL factorisation from the bottom, then the residual γ_k(σ) appears as the pivot where these two factorisations meet. We study the properties of this BABE (burn at both ends) factorisation, which are closely related to the properties of LDU and UDL factorisations. We show that LDU, UDL and BABE factorisations possess mixed stability with tiny relative perturbations. However, it is demonstrated
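Concretely: once the twist index k with the smallest pivot |γ_k(σ)| is known, the approximate eigenvector falls out of the two factorisations by back-substitution away from position k. A Python sketch assuming no pivot breakdown along the chosen twist (names are illustrative, not from the paper):

```python
def babe_eigenvector(a, b, sigma):
    """Approximate eigenvector of symmetric tridiagonal J
    (diagonal a, off-diagonal b) for shift sigma, via the BABE
    (burn at both ends) factorisation: LDU pivots dp from the top,
    UDL pivots dm from the bottom, twisted at the minimal gamma."""
    n = len(a)
    dp = [a[0] - sigma] + [0.0] * (n - 1)
    for i in range(1, n):
        dp[i] = (a[i] - sigma) - b[i - 1] ** 2 / dp[i - 1]
    dm = [0.0] * (n - 1) + [a[n - 1] - sigma]
    for i in range(n - 2, -1, -1):
        dm[i] = (a[i] - sigma) - b[i] ** 2 / dm[i + 1]
    gamma = [dp[j] + dm[j] - (a[j] - sigma) for j in range(n)]
    k = min(range(n), key=lambda j: abs(gamma[j]))
    z = [0.0] * n
    z[k] = 1.0
    for i in range(k - 1, -1, -1):     # upward, using the LDU half
        z[i] = -b[i] * z[i + 1] / dp[i]
    for i in range(k + 1, n):          # downward, using the UDL half
        z[i] = -b[i - 1] * z[i - 1] / dm[i]
    nrm = sum(t * t for t in z) ** 0.5
    return [t / nrm for t in z]
```

For diagonal (2, 2, 2), off-diagonal (1, 1) and σ = 2 + √2, this recovers the eigenvector direction (1, √2, 1)/2; note that the dangerous near-zero pivots at the ends of the two sweeps are never divided by, because the twist at k routes the recurrences around them.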
A Parallel Eigensolver for Dense Symmetric Matrices based on Multiple Relatively Robust Representations
 SIAM J. Sci. Comput
, 2005
Abstract

Cited by 3 (1 self)
We present a new parallel algorithm for the dense symmetric eigenvalue/eigenvector problem that is based upon the tridiagonal eigensolver, Algorithm MR³, recently developed by Dhillon and Parlett. Algorithm MR³ has a complexity of O(n²) operations for computing all eigenvalues and eigenvectors of a symmetric tridiagonal problem. Moreover, the algorithm requires only O(n) extra workspace and can be adapted to compute any subset of k eigenpairs in O(nk) time. In contrast, all earlier stable parallel algorithms for the tridiagonal eigenproblem require O(n³) operations in the worst case, while some implementations, such as divide and conquer, have an extra O(n²) memory requirement. The proposed parallel algorithm balances the workload equally among the processors by traversing a matrix-dependent representation tree which captures the sequence of computations performed by Algorithm MR³. The resulting implementation allows problems of very large size to be solved efficiently: the largest dense eigenproblem solved in-core on a 256-processor machine with 2 GBytes of memory per processor is for a matrix of size 128,000 × 128,000, which required about 8 hours of CPU time. We present comparisons with other eigensolvers and results on matrices that arise in the applications of computational quantum chemistry and finite element modeling of automobile bodies.
On a Classical Method for Computing Eigenvectors
Abstract

Cited by 3 (2 self)
One of the oldest methods for computing an eigenvector of a matrix F is based on the solution of a set of homogeneous equations which can be traced back to the times of Cauchy (1829). The principal difficulty of this approach was identified by Wilkinson (1958). We remove this obstacle and analyse the viability of this classical method. The key to the analysis is provided by the reciprocals of the diagonal elements of the inverse of the matrix F − σI, where σ is a shift approximating an eigenvalue. The final missing link is a perturbation result due to Sherman and Morrison, who proved that F − (1/(F⁻¹)_{ji}) e_i e_jᵀ is singular. We extend this result to the block case. Finally, we give a new impetus for Rayleigh quotient and Laguerre iterations.
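The Sherman–Morrison fact is easy to check numerically: by the matrix determinant lemma, det(F − u vᵀ) = det(F)(1 − vᵀF⁻¹u), so subtracting 1/(F⁻¹)_{ji} at position (i, j) drives the determinant to zero. A small Python check on a hypothetical 3 × 3 example (determinant and inverse entry computed by cofactors to stay dependency-free; the matrix F below is illustrative):

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def inv_entry3(M, j, i):
    """(M^{-1})_{ji} = cofactor C_{ij} / det(M) for a 3x3 matrix."""
    rows = [r for r in range(3) if r != i]
    cols = [c for c in range(3) if c != j]
    minor = (M[rows[0]][cols[0]] * M[rows[1]][cols[1]]
           - M[rows[0]][cols[1]] * M[rows[1]][cols[0]])
    return (-1) ** (i + j) * minor / det3(M)

F = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
i, j = 0, 1
Fs = [row[:] for row in F]
Fs[i][j] -= 1.0 / inv_entry3(F, j, i)  # F - e_i e_j^T / (F^{-1})_{ji}
# det3(Fs) is now (numerically) zero, confirming singularity
```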