Results 1–10 of 13
Barycentric Lagrange Interpolation
, 2004
"... Barycentric interpolation is a variant of Lagrange polynomial interpolation that is fast and stable. It deserves to be known as the standard method of polynomial interpolation. ..."
Abstract

Cited by 94 (6 self)
Barycentric interpolation is a variant of Lagrange polynomial interpolation that is fast and stable. It deserves to be known as the standard method of polynomial interpolation.
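The claim is easy to try out. Below is a minimal sketch of the second (true) barycentric formula; the Chebyshev node set, the test function, and the helper names are my own illustrative choices, not code from the paper.

```python
import numpy as np

def barycentric_weights(x):
    """Weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    n = len(x)
    w = np.ones(n)
    for j in range(n):
        for k in range(n):
            if k != j:
                w[j] /= (x[j] - x[k])
    return w

def barycentric_eval(x, y, w, t):
    """Second barycentric formula; exact at the nodes by construction."""
    diff = t - x
    hit = np.isclose(diff, 0.0)
    if np.any(hit):                      # t coincides with a node
        return y[np.argmax(hit)]
    c = w / diff
    return np.sum(c * y) / np.sum(c)

x = np.cos(np.pi * np.arange(6) / 5)     # Chebyshev points on [-1, 1]
y = np.exp(x)
w = barycentric_weights(x)
print(barycentric_eval(x, y, w, 0.3))    # close to exp(0.3)
```

Once the weights are known, each evaluation costs O(n) flops, which is the speed advantage the abstract refers to.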
Orthogonal Eigenvectors and Relative Gaps
, 2002
"... Let LDLt be the triangular factorization of a real symmetric n\Theta n tridiagonal matrix so that L is a unit lower bidiagonal matrix, D is diagonal. Let (*; v) be an eigenpair, * 6 = 0, with the property that both * and v are determined to high relative accuracy by the parameters in L and D. Suppo ..."
Abstract

Cited by 50 (16 self)
Let LDLᵀ be the triangular factorization of a real symmetric n×n tridiagonal matrix, so that L is a unit lower bidiagonal matrix and D is diagonal. Let (λ, v) be an eigenpair, λ ≠ 0, with the property that both λ and v are determined to high relative accuracy by the parameters in L and D. Suppose also that the relative gap between λ and its nearest neighbor μ in the spectrum exceeds 1/n: n|λ − μ| > |λ|. This paper presents a new O(n) algorithm and a proof that, in the presence of roundoff error, the algorithm computes an approximate eigenvector v̂ that is accurate to working precision: |sin θ(v, v̂)| = O(nε), where ε is the roundoff unit. It follows that v̂ is numerically orthogonal to all the other eigenvectors. This result forms part of a program to compute numerically orthogonal eigenvectors without resorting to the Gram-Schmidt process. The contents of this paper provide a high-level description and theoretical justification for LAPACK (version 3.0) subroutine DLAR1V.
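The O(nε) orthogonality target is easy to check numerically. The sketch below uses numpy's dense `eigh` (a tridiagonalization-plus-QR/divide-and-conquer solver, not the O(n) algorithm of the paper); the matrix size and random entries are arbitrary choices for illustration.

```python
import numpy as np

n = 200
rng = np.random.default_rng(0)
d = rng.standard_normal(n)           # diagonal of T
e = rng.standard_normal(n - 1)       # off-diagonal of T
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# numpy.linalg.eigh is not the algorithm described above, but it
# illustrates the goal: columns of V numerically orthogonal.
vals, V = np.linalg.eigh(T)
ortho_err = np.max(np.abs(V.T @ V - np.eye(n)))
print(ortho_err)                     # on the order of n * eps
```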
The design and implementation of the MRRR algorithm
 ACM Trans. Math. Software
, 2004
"... In the 1990’s, Dhillon and Parlett devised the algorithm of multiple relatively robust representations (MRRR) for computing numerically orthogonal eigenvectors of a symmetric tridiagonal matrix T with O(n2) cost. While previous publications related to MRRR focused on theoretical aspects of the algor ..."
Abstract

Cited by 23 (4 self)
In the 1990s, Dhillon and Parlett devised the algorithm of multiple relatively robust representations (MRRR) for computing numerically orthogonal eigenvectors of a symmetric tridiagonal matrix T at O(n²) cost. While previous publications related to MRRR focused on theoretical aspects of the algorithm, a documentation of software issues has been missing. In this article, we discuss the design and implementation of the new MRRR version STEGR that will be included in the next LAPACK release. By giving an algorithmic description of MRRR and identifying governing parameters, we hope to make STEGR more easily accessible and suitable for future performance tuning. Furthermore, this should help users understand design choices and trade-offs when using the code.
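If I recall correctly, an MRRR-based LAPACK driver is reachable from SciPy through `eigh_tridiagonal`; the `lapack_driver='stemr'` option below requests the ?STEMR routine that this line of work evolved into (treat the exact driver name as an assumption, not something stated in the abstract).

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 300
rng = np.random.default_rng(1)
d = rng.standard_normal(n)        # diagonal of T
e = rng.standard_normal(n - 1)    # off-diagonal of T

# 'stemr' asks SciPy for LAPACK's MRRR-based tridiagonal driver.
w, V = eigh_tridiagonal(d, e, lapack_driver='stemr')

print(np.max(np.abs(V.T @ V - np.eye(n))))   # numerically orthogonal columns
```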
Product eigenvalue problems
 SIAM Review
, 2005
"... Abstract. Many eigenvalue problems are most naturally viewed as product eigenvalue problems. The eigenvalues of a matrix A are wanted, but A is not given explicitly. Instead it is presented as a product of several factors: A = AkAk−1 ···A1. Usually more accurate results are obtained by working with ..."
Abstract

Cited by 19 (1 self)
Many eigenvalue problems are most naturally viewed as product eigenvalue problems. The eigenvalues of a matrix A are wanted, but A is not given explicitly. Instead it is presented as a product of several factors: A = A_k A_{k-1} ··· A_1. Usually more accurate results are obtained by working with the factors rather than forming A explicitly. For example, if we want eigenvalues/vectors of BᵀB, it is better to work directly with B and not compute the product. The intent of this paper is to demonstrate that the product eigenvalue problem is a powerful unifying concept. Diverse examples of eigenvalue problems are discussed and formulated as product eigenvalue problems. For all but a couple of these examples it is shown that the standard algorithms for solving them are instances of a generic GR algorithm applied to a related cyclic matrix.
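A tiny experiment makes the BᵀB point concrete; the matrix below is my own illustrative example, not one from the paper. Forming the product rounds the (2,2) entry 1 + 1e-16 to exactly 1, so the information defining the tiny singular value is destroyed before the eigensolver even starts.

```python
import numpy as np

# A 2x2 bidiagonal matrix with singular values ~1.6 and ~7e-9.
B = np.array([[1.0, 1.0],
              [0.0, 1e-8]])

s_svd = np.linalg.svd(B, compute_uv=False)   # works with the factor B

# B^T B = [[1, 1], [1, 1 + 1e-16]]; the last entry rounds to exactly 1,
# so the computed product is singular and the tiny singular value is lost.
w = np.linalg.eigvalsh(B.T @ B)
s_eig = np.sqrt(np.maximum(w, 0.0))

print(min(s_svd))   # ~7e-9: correct
print(min(s_eig))   # no correct digits: accuracy destroyed by the product
```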
Accurately Counting Singular Values of Bidiagonal Matrices
 SIAM J. Matrix Anal. Appl
, 1998
"... We have developed algorithms to count singular values of a bidiagonal matrix which are greater than a specified value. This requires the transformation of the singular value problem to an equivalent symmetric eigenvalue problem. The counting of singular values is paramount in the design of bisection ..."
Abstract

Cited by 6 (0 self)
We have developed algorithms to count the singular values of a bidiagonal matrix that are greater than a specified value. This requires the transformation of the singular value problem to an equivalent symmetric eigenvalue problem. The counting of singular values is paramount in the design of bisection- and multisection-type algorithms for computing singular values on serial and parallel machines. The algorithms are based on the eigenvalues of BBᵀ, BᵀB, and the 2n×2n zero-diagonal tridiagonal matrix which is permutationally equivalent to the Jordan-Wielandt form

  [ 0   B ]
  [ Bᵀ  0 ]

where B is an n×n bidiagonal matrix. The two product matrices, which do not have to be formed explicitly, lead to the progressive and stationary qd algorithms of Rutishauser. The algorithm based on the zero-diagonal matrix, which we have named the Golub-Kahan form, may be considered as a combination of both the progressive and stationary qd algorithms. We study important properties such as the backward error analysis, the monotonicity of the inertia count, and the scaling of data which guarantee the accuracy and the integrity of these algorithms. For high relative accuracy of tiny singular values, the algorithm based on the Golub-Kahan form is the best choice. However, if such accuracy is not required or requested, the differential progressive and differential stationary qd algorithms with certain modifications are adequate and more efficient.
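As a sketch of the counting idea (my own simplified implementation, not the paper's algorithms), the Golub-Kahan form reduces counting singular values to a Sturm count on a zero-diagonal tridiagonal matrix whose eigenvalues are ±σ_i:

```python
import numpy as np

def sturm_count(diag, off, sigma):
    """Number of eigenvalues of the symmetric tridiagonal (diag, off) below
    sigma = number of negative pivots in the factorization of T - sigma*I."""
    count = 0
    d = diag[0] - sigma
    if d < 0:
        count += 1
    for i in range(1, len(diag)):
        with np.errstate(divide='ignore'):   # zero pivot -> inf, recurrence continues
            d = (diag[i] - sigma) - off[i - 1] ** 2 / d
        if d < 0:
            count += 1
    return count

def singular_values_above(a, b, tau):
    """Count singular values exceeding tau > 0 for the n x n upper-bidiagonal
    matrix with diagonal a and superdiagonal b, using the Golub-Kahan form:
    zero diagonal, off-diagonals a1, b1, a2, b2, ...  Its eigenvalues are
    +/- sigma_i, so #{sigma_i > tau} = 2n - #{eigenvalues < tau}."""
    n = len(a)
    off = np.empty(2 * n - 1)
    off[0::2] = a
    off[1::2] = b
    return 2 * n - sturm_count(np.zeros(2 * n), off, tau)

a = np.array([3.0, 2.0, 1.0, 0.5])           # illustrative bidiagonal matrix
b = np.array([0.5, 0.5, 0.5])
print(singular_values_above(a, b, 1.5))      # matches np.sum(svdvals(B) > 1.5)
```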
LAPACK Working Note 172: Benefits of IEEE-754 features in modern symmetric tridiagonal eigensolvers
, 2005
"... Bisection is one of the most common methods used to compute the eigenvalues of symmetric tridiagonal matrices. Bisection relies on the Sturm count: for a given shift σ, the number of negative pivots in the factorization T − σI = LDL T equals the number of eigenvalues of T that are smaller than σ. I ..."
Abstract

Cited by 4 (3 self)
Bisection is one of the most common methods used to compute the eigenvalues of symmetric tridiagonal matrices. Bisection relies on the Sturm count: for a given shift σ, the number of negative pivots in the factorization T − σI = LDLᵀ equals the number of eigenvalues of T that are smaller than σ. In IEEE-754 arithmetic, the value ∞ permits the computation to continue past a zero pivot, producing a correct Sturm count when T is unreduced. Demmel and Li showed in the 1990s that using ∞ rather than testing for zero pivots within the loop could improve performance significantly on certain architectures. When eigenvalues are to be computed to high relative accuracy, it is often preferable to work with LDLᵀ factorizations instead of the original tridiagonal T; see for example the MRRR algorithm. In these cases, the Sturm count has to be computed from LDLᵀ. The differential stationary and progressive qds algorithms are the methods of choice. While it seems trivial to replace T by LDLᵀ, in reality these algorithms are more complicated: in IEEE-754 arithmetic, a zero pivot produces an overflow, followed by an invalid exception (NaN), that renders the Sturm count incorrect. We present alternative, safe formulations that are guaranteed to produce the correct result. Benchmarking ...
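The differential stationary qds count described above can be sketched as follows. This is my own simplified Python with a naive shift-nudging safeguard against the zero-pivot NaN hazard, not the paper's safe formulations.

```python
import numpy as np

def sturm_count_ldlt(d, l, sigma):
    """#{eigenvalues of T < sigma} for T = L D L^T symmetric tridiagonal,
    d = diagonal of D (length n), l = subdiagonal of unit bidiagonal L
    (length n-1), via the differential stationary qds transform
        L D L^T - sigma*I = L+ D+ L+^T,
    counting the negative pivots in D+."""
    count = 0
    s = -sigma
    for i in range(len(d) - 1):
        dplus = d[i] + s
        if dplus == 0.0:
            # The IEEE-754 hazard the note describes: lplus would become inf
            # and the next s would be NaN.  Naive fallback: nudge the shift.
            return sturm_count_ldlt(d, l, np.nextafter(sigma, np.inf))
        count += dplus < 0.0
        lplus = d[i] * l[i] / dplus
        s = lplus * l[i] * s - sigma
    count += (d[-1] + s) < 0.0
    return count

# Example: build T = L D L^T explicitly and cross-check against eigvalsh.
d = np.array([2.0, -1.0, 3.0, 0.5])
l = np.array([0.5, -0.4, 0.3])
L = np.eye(4) + np.diag(l, -1)
T = L @ np.diag(d) @ L.T
print(sturm_count_ldlt(d, l, 0.1), np.sum(np.linalg.eigvalsh(T) < 0.1))
```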
Barycentric Interpolation
"... Abstract This survey focusses on the method of barycentric interpolation, which ties up to the ideas that August Ferdinand Möbius published in his seminal work “Der barycentrische Calcul ” in 1827. For univariate data, it leads to a special kind of rational interpolation which is guaranteed to have ..."
Abstract
This survey focuses on the method of barycentric interpolation, which goes back to the ideas that August Ferdinand Möbius published in his seminal work “Der barycentrische Calcul” in 1827. For univariate data, it leads to a special kind of rational interpolation which is guaranteed to have no poles and favourable approximation properties. We further discuss how to extend this idea to bivariate data, both for scattered data and for data given at the vertices of a polygon.
A Note On The Accuracy Of Symmetric Eigenreduction Algorithms
, 1996
"... . We present some experimental results illustrating the fact that on highly illconditioned Hermitian matrices the relative accuracy of computed small eigenvalues by QR eigenreduction may drastically depend on the initial permutation of the rows and columns. Mostly there was an "accurate " ..."
Abstract
We present some experimental results illustrating the fact that on highly ill-conditioned Hermitian matrices the relative accuracy of small eigenvalues computed by QR eigenreduction may drastically depend on the initial permutation of the rows and columns. Mostly there was an "accurate" permutation, but there does not seem to be an easy method to find it. For banded matrices, like those from structural mechanics, the accurate pre-permutation, if it existed, was mostly non-banded. This is particularly true of tridiagonal matrices, which shows that the tridiagonalization is not the only factor responsible for the inaccuracy of the eigenvalues.

Key words. LAPACK, QR method, Jacobi method, Hermitian matrices, eigenvalue computation.
AMS subject classification. 65F15.

1. Introduction. Classical error analysis of common symmetric eigenreduction algorithms like QR or Jacobi is based on two facts: (i) the use of orthogonal elementary transformations and (ii) the spectral norm esti...
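The phenomenon is easy to probe. The sketch below uses a graded Hermitian matrix of my own construction, not the paper's test set: in exact arithmetic the spectrum is invariant under a symmetric permutation P A Pᵀ, but in floating point the tiny eigenvalues may agree only to absolute, not relative, accuracy across orderings.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
# A graded (highly ill-conditioned) symmetric matrix D A0 D, with a
# diagonal scaling spanning eight orders of magnitude.
A0 = rng.standard_normal((n, n)); A0 = (A0 + A0.T) / 2
D = np.diag(np.logspace(0, -8, n))
A = D @ A0 @ D

p = rng.permutation(n)                       # symmetric re-ordering P A P^T
w1 = np.linalg.eigvalsh(A)
w2 = np.linalg.eigvalsh(A[np.ix_(p, p)])

rel = np.abs(w1 - w2) / np.maximum(np.abs(w1), 1e-300)
print(np.max(np.abs(w1 - w2)))   # small in absolute terms (O(n*eps*||A||))
print(np.max(rel))               # can be large for the tiniest eigenvalues
```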