## On Tridiagonalizing and Diagonalizing Symmetric Matrices with Repeated Eigenvalues (1995)

Venue: | Preprint ANL/MCS-P5454-1095, Mathematics and Computer Science Division, Argonne National Laboratory |

Citations: | 2 (2 self) |

### BibTeX

@INPROCEEDINGS{Bischof95ontridiagonalizing,
  author    = {Christian H. Bischof and Xiaobai Sun},
  title     = {On Tridiagonalizing and Diagonalizing Symmetric Matrices with Repeated Eigenvalues},
  booktitle = {Preprint ANL/MCS-P5454-1095, Mathematics and Computer Science Division, Argonne National Laboratory},
  year      = {1995}
}


### Abstract

We describe a divide-and-conquer tridiagonalization approach for matrices with repeated eigenvalues. Our algorithm hinges on the fact that, under easily and constructively verifiable conditions, a symmetric matrix with bandwidth b and k distinct eigenvalues must be block diagonal with diagonal blocks of size at most bk. A slight modification of the usual orthogonal band-reduction algorithm allows us to reveal this structure, which then leads to potential parallelism in the form of independent diagonal blocks. Compared with the usual Householder reduction algorithm, the new approach exhibits improved data locality, significantly more scope for parallelism, and the potential to reduce arithmetic complexity by close to 50% for matrices that have only two numerically distinct eigenvalues. The actual improvement depends to a large extent on the number of distinct eigenvalues and a good estimate thereof. However, at worst the algorithm behaves like a successive band-reduction approach to tridia...
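The structural claim in the abstract can be illustrated numerically. The sketch below (assuming NumPy; it uses the plain Householder reduction, not the authors' modified band-reduction algorithm) tridiagonalizes a symmetric matrix with two distinct eigenvalues. For a tridiagonal matrix, b = 1 and k = 2, so the theory predicts diagonal blocks of size at most 2: every second subdiagonal entry of T should vanish, decoupling the problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Symmetric test matrix with exactly two distinct eigenvalues, 1 and 2.
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q0 @ np.diag([1.0] * 4 + [2.0] * 4) @ Q0.T

def householder_tridiagonalize(A):
    """Standard Householder reduction to tridiagonal form.

    Returns (T, Q) with Q orthogonal and A = Q @ T @ Q.T.
    """
    T = A.copy()
    n = T.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        x = T[k + 1:, k]
        nx = np.linalg.norm(x)
        if nx < 1e-12:        # column already negligible: the matrix splits here
            continue
        v = x.copy()
        v[0] += np.copysign(nx, x[0])    # sign choice avoids cancellation
        v /= np.linalg.norm(v)
        # Two-sided update with H = I - 2 v v^T (embedded in rows/cols k+1:).
        T[k + 1:, :] -= 2.0 * np.outer(v, v @ T[k + 1:, :])
        T[:, k + 1:] -= 2.0 * np.outer(T[:, k + 1:] @ v, v)
        Q[:, k + 1:] -= 2.0 * np.outer(Q[:, k + 1:] @ v, v)
    return T, Q

T, Q = householder_tridiagonalize(A)

# With k = 2 distinct eigenvalues and bandwidth b = 1, blocks have size at
# most bk = 2: the subdiagonal entries at positions 2, 4, 6 are zero in
# exact arithmetic (roundoff-level here), so T splits into 2x2 problems.
print(np.abs(np.diag(T, -1)).round(12))
```

In this run the odd-indexed subdiagonal entries come out at roundoff level, so the eigenproblem decouples into independent 2x2 blocks, which is the source of the parallelism the paper describes.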

### Citations

446 |
LAPACK Users' Guide
- Anderson, Bai, et al.
- 1995
Citation Context ...d hence is likely to be preferable on cache-based architectures. This block algorithm has been incorporated into the LAPACK library of portable linear algebra codes for high-performance architectures [1, 2]. Parallel versions for distributed-memory machines of the standard algorithm and of the block algorithm are described in [12] and in [13], respectively. A different approach to tridiagonalization is t... |

195 | The Theory of Matrices in Numerical Analysis - Householder - 1964 |

112 |
The WY representation for products of Householder matrices
- BISCHOF, LOAN
- 1987
Citation Context ...ure. Since our algorithm (at least in the early stages) reduces matrices to banded form with a relatively wide band, it is easy to block the Householder transformations by using the WY representation [11] or the compact WY representation [20], as has, for example, been described in [17]. In this fashion, one can easily capitalize on the favorable memory transfer characteristics of block algorithms. Re... |
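The WY aggregation mentioned in this context can be shown in a few lines. The sketch below (assuming NumPy; a minimal illustration of the idea, not LAPACK's implementation) accumulates a product of Householder reflectors H_j = I - 2 v_j v_j^T into the form Q = I + W Y^T and checks it against the explicit product:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 3

# p unit Householder vectors v_j, columns of V.
V = rng.standard_normal((n, p))
V /= np.linalg.norm(V, axis=0)

# Accumulate Q = H_1 H_2 ... H_p in WY form, Q = I + W Y^T.
W = np.zeros((n, 0))
Y = np.zeros((n, 0))
for j in range(p):
    v = V[:, [j]]
    if W.shape[1] == 0:
        w = -2.0 * v                       # Q_1 = I - 2 v v^T
    else:
        # Q (I - 2 v v^T) = I + W Y^T - 2 (Q v) v^T, and Q v = v + W (Y^T v)
        w = -2.0 * (v + W @ (Y.T @ v))
    W = np.hstack([W, w])
    Y = np.hstack([Y, v])

Q_wy = np.eye(n) + W @ Y.T

# Reference: explicit product of the reflectors.
Q_ref = np.eye(n)
for j in range(p):
    v = V[:, [j]]
    Q_ref = Q_ref @ (np.eye(n) - 2.0 * v @ v.T)

print(np.allclose(Q_wy, Q_ref))  # prints True
```

The payoff is that applying Q to a matrix then costs two matrix-matrix multiplies (with W and Y) instead of p rank-1 updates, which is exactly the memory-transfer advantage the context refers to.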

80 | Block reduction of matrices to condensed forms for eigenvalue computations
- DONGARRA, SORENSEN, et al.
- 1989
Citation Context ...employs mainly matrix-vector multiplications and symmetric rank-one updates, which require more memory references than the matrix-matrix operations [9, 8, 14]. The block tridiagonalization algorithm in [5, 15] combines sets of p successive symmetric rank-1 updates into one symmetric rank-p update, at the cost of O(2pn^2) extra flops. As a result, this algorithm exhibits improved data locality and hence is ... |
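The aggregation of rank-1 updates described here is purely algebraic and easy to demonstrate. The sketch below (assuming NumPy, with made-up data; not the algorithm of [5, 15]) shows that p successive symmetric rank-1 corrections equal a single blocked correction computed with two matrix-matrix products:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 8
A = rng.standard_normal((n, n))
A = A + A.T                       # symmetric test matrix
U = rng.standard_normal((n, p))
V = rng.standard_normal((n, p))

# p successive symmetric rank-1 updates (BLAS-2 style, one pass over A each).
A1 = A.copy()
for j in range(p):
    A1 -= np.outer(U[:, j], V[:, j]) + np.outer(V[:, j], U[:, j])

# The same correction as one blocked symmetric update (BLAS-3 style).
A2 = A - (U @ V.T + V @ U.T)

print(np.allclose(A1, A2))  # prints True
```

The blocked form touches A once instead of p times, trading a modest number of extra flops for far fewer memory references, which is the locality gain the context describes.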

61 | A storage efficient WY representation for products of Householder transformations - SCHREIBER, LOAN - 1989 |

39 |
LAPACK Users' Guide, Release 2.0
- Anderson, Bai
- 1995
Citation Context ...d hence is likely to be preferable on cache-based architectures. This block algorithm has been incorporated into the LAPACK library of portable linear algebra codes for high-performance architectures [1, 2]. Parallel versions for distributed-memory machines of the standard algorithm and of the block algorithm are described in [12] and in [13], respectively. A different approach to tridiagonalization is t... |

35 | Prospectus for the Development of a Linear Algebra Library for High-Performance Computers
- Demmel, Dongarra, et al.
- 1987
Citation Context ...rix is accumulated at the same time. This algorithm employs mainly matrix-vector multiplications and symmetric rank-one updates, which require more memory references than the matrix-matrix operations [9, 8, 14]. The block tridiagonalization algorithm in [5, 15] combines sets of p successive symmetric rank-1 updates into one symmetric rank-p update, at the cost of O(2pn^2) extra flops. As a result, this algo... |

31 |
Reduction to condensed form for the eigenvalue problem on distributed memory computers. Computer Science Dept
- Dongarra, van de Geijn
- 1991
Citation Context ...able linear algebra codes for high-performance architectures [1, 2]. Parallel versions for distributed-memory machines of the standard algorithm and of the block algorithm are described in [12] and in [13], respectively. A different approach to tridiagonalization is the so-called successive band reduction (SBR) method, which completes the tridiagonal reduction through a sequence of band reductions [10,... |

27 | A parallelizable eigensolver for real diagonalizable matrices with real eigenvalues
- Huss-Lederman, Tsao, et al.
- 1997
Citation Context ...point operations. In addition, the need for data movement is reduced. One particular situation where repeated eigenvalues arise is in the context of invariant-subspace methods for eigenvalue problems [3, 19, 6, 4], where a matrix with only two distinct, predetermined, eigenvalues is generated either by repeated application of incomplete beta functions [19] or the matrix sign function [4]. In exact arithmetic, ... |
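The matrix sign function mentioned in this context is a concrete source of matrices with exactly two distinct eigenvalues. The sketch below (assuming NumPy; it uses the classical Newton iteration X <- (X + X^{-1})/2, a standard method for sign(A), not necessarily the variant of [4]) produces such a matrix from a symmetric input:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6

# Symmetric matrix with eigenvalues bounded away from zero.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag([-3.0, -2.0, -1.5, 1.0, 2.5, 4.0]) @ Q.T

# Newton iteration X <- (X + X^{-1}) / 2 converges quadratically to sign(A).
X = A.copy()
for _ in range(50):
    X = 0.5 * (X + np.linalg.inv(X))

# sign(A) is symmetric with only the two eigenvalues -1 and +1 --
# precisely the repeated-eigenvalue structure this paper exploits.
print(np.sort(np.linalg.eigvalsh(X)))
```

Each eigenvalue of A is driven to its sign, so the limit has only the eigenvalues -1 and +1, with multiplicities equal to the inertia of A; its invariant subspaces split the original eigenproblem.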

22 | Parallel tridiagonalization through two-step band reduction
- Bischof, Lang, et al.
- 1984
Citation Context ...[13], respectively. A different approach to tridiagonalization is the so-called successive band reduction (SBR) method, which completes the tridiagonal reduction through a sequence of band reductions [10, 7]. This approach leads to algorithms that exhibit an even greater degree of memory locality, among other desirable features. In this paper we show that if the number k (say) of distinct eigenvalues of ... |

19 | Structured Second- and Higher-Order Derivatives through Univariate
- Bischof
- 1993
Citation Context ...employs mainly matrix-vector multiplications and symmetric rank-one updates, which require more memory references than the matrix-matrix operations [9, 8, 14]. The block tridiagonalization algorithm in [5, 15] combines sets of p successive symmetric rank-1 updates into one symmetric rank-p update, at the cost of O(2pn^2) extra flops. As a result, this algorithm exhibits improved data locality and hence is ... |

13 | The PRISM project: Infrastructure and algorithms for parallel eigensolvers
- Bischof, Huss-Lederman, et al.
- 1993
Citation Context ...point operations. In addition, the need for data movement is reduced. One particular situation where repeated eigenvalues arise is in the context of invariant-subspace methods for eigenvalue problems [3, 19, 6, 4], where a matrix with only two distinct, predetermined, eigenvalues is generated either by repeated application of incomplete beta functions [19] or the matrix sign function [4]. In exact arithmetic, ... |

10 | Solution of large, dense symmetric generalized eigenvalue problems using secondary storage - Grimes, Simon - 1988 |

9 |
Fundamental Linear Algebra Computations on High-Performance
- Bischof
- 1990
Citation Context ...rix is accumulated at the same time. This algorithm employs mainly matrix-vector multiplications and symmetric rank-one updates, which require more memory references than the matrix-matrix operations [9, 8, 14]. The block tridiagonalization algorithm in [5, 15] combines sets of p successive symmetric rank-1 updates into one symmetric rank-p update, at the cost of O(2pn^2) extra flops. As a result, this algo... |

9 |
A Parallel Householder Tridiagonalization Stratagem Using Scattered Square Decomposition
- Chang, Utku, et al.
- 1988
Citation Context ...rary of portable linear algebra codes for high-performance architectures [1, 2]. Parallel versions for distributed-memory machines of the standard algorithm and of the block algorithm are described in [12] and in [13], respectively. A different approach to tridiagonalization is the so-called successive band reduction (SBR) method, which completes the tridiagonal reduction through a sequence of band red... |

5 |
A framework for band reduction and tridiagonalization of symmetric matrices
- Bischof, Sun
- 1992
Citation Context ...[13], respectively. A different approach to tridiagonalization is the so-called successive band reduction (SBR) method, which completes the tridiagonal reduction through a sequence of band reductions [10, 7]. This approach leads to algorithms that exhibit an even greater degree of memory locality, among other desirable features. In this paper we show that if the number k (say) of distinct eigenvalues of ... |

4 |
Evolution of Numerical Software for Dense Linear Algebra
- Dongarra, Hammarling
- 1989
Citation Context ...rix is accumulated at the same time. This algorithm employs mainly matrix-vector multiplications and symmetric rank-one updates, which require more memory references than the matrix-matrix operations [9, 8, 14]. The block tridiagonalization algorithm in [5, 15] combines sets of p successive symmetric rank-1 updates into one symmetric rank-p update, at the cost of O(2pn^2) extra flops. As a result, this algo... |

1 |
A divide-and-conquer algorithm for the eigenproblem via complementary invariant subspace decomposition
- Auslander, Tsao
- 1989
Citation Context ...point operations. In addition, the need for data movement is reduced. One particular situation where repeated eigenvalues arise is in the context of invariant-subspace methods for eigenvalue problems [3, 19, 6, 4], where a matrix with only two distinct, predetermined, eigenvalues is generated either by repeated application of incomplete beta functions [19] or the matrix sign function [4]. In exact arithmetic, ... |

1 |
Design of a parallel nonsymmetric toolbox, Part I
- Bai, Demmel
- 1992