Results 1–10 of 47
ABLE: an adaptive block Lanczos method for non-Hermitian eigenvalue problems
 SIAM Journal on Matrix Analysis and Applications
, 1999
Abstract

Cited by 37 (4 self)
Abstract. This work presents an adaptive block Lanczos method for large-scale non-Hermitian eigenvalue problems (henceforth the ABLE method). The ABLE method is a block version of the non-Hermitian Lanczos algorithm. There are three innovations. First, an adaptive block-size scheme cures (near) breakdown and adapts the block size to the order of multiple or clustered eigenvalues. Second, stopping criteria are developed that exploit the semiquadratic convergence property of the method. Third, a well-known technique from the Hermitian Lanczos algorithm is generalized to monitor the loss of biorthogonality and maintain semibiorthogonality among the computed Lanczos vectors. Each innovation is theoretically justified. Academic model problems and real application problems are solved to demonstrate the numerical behavior of the method. Key words. non-Hermitian matrices, eigenvalue problem, spectral transformation, Lanczos method. AMS subject classifications. 65F15, 65F10. PII. S0895479897317806
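The biorthogonal recurrence that ABLE blocks and stabilizes can be sketched in its simplest, unblocked form. This is an illustration, not the ABLE algorithm itself: there is no block structure, no adaptive block size, and the breakdown tolerance and all identifiers are choices made here.

```python
import numpy as np

def two_sided_lanczos(A, v0, w0, m, tol=1e-12):
    """Unblocked non-Hermitian Lanczos: builds bases V, W with W.T @ V = I
    (biorthogonality) and W.T @ A @ V tridiagonal.  Serious breakdown occurs
    when s.T @ r vanishes; ABLE's block version cures the near-breakdown
    case by growing the block size instead of aborting as we do here."""
    n = len(v0)
    V = np.zeros((n, m)); W = np.zeros((n, m))
    alpha = np.zeros(m); beta = np.zeros(m - 1); gamma = np.zeros(m - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    W[:, 0] = w0 / (w0 @ V[:, 0])          # enforce w1.T @ v1 = 1
    for j in range(m):
        alpha[j] = W[:, j] @ (A @ V[:, j])
        r = A @ V[:, j] - alpha[j] * V[:, j]
        s = A.T @ W[:, j] - alpha[j] * W[:, j]
        if j > 0:
            r -= gamma[j - 1] * V[:, j - 1]
            s -= beta[j - 1] * W[:, j - 1]
        if j == m - 1:
            break
        d = s @ r
        if abs(d) < tol * np.linalg.norm(r) * np.linalg.norm(s):
            raise RuntimeError("(near) breakdown: ABLE would grow the block here")
        beta[j] = np.linalg.norm(r)        # one common scaling choice
        gamma[j] = d / beta[j]
        V[:, j + 1] = r / beta[j]
        W[:, j + 1] = s / gamma[j]
    return V, W, alpha, beta, gamma

# Sanity check: with A symmetric and w0 = v0 the recurrence reduces to the
# ordinary Lanczos process, so W equals V and biorthogonality is plain
# orthogonality.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 40)); S = (M + M.T) / 2
v = rng.standard_normal(40)
V, W, alpha, beta, gamma = two_sided_lanczos(S, v, v.copy(), 8)
```

In exact arithmetic W.T @ A @ V is tridiagonal with `alpha` on the diagonal; the monitoring of biorthogonality loss that the abstract mentions is precisely about keeping W.T @ V close to the identity in floating point.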
Asymptotic Convergence of Conjugate Gradient Methods for the Partial Symmetric Eigenproblem
, 1994
Abstract

Cited by 24 (10 self)
Recently an efficient method (DACG) for the partial solution of the symmetric generalized eigenproblem Ax = λBx has been developed, based on the conjugate gradient (CG) minimization of the Rayleigh quotient over successive deflated subspaces of decreasing size. The present paper provides a numerical analysis of the asymptotic convergence rate ρ_j of DACG in the calculation of the eigenpair (λ_j, u_j), when the scheme is preconditioned with A^{-1}. It is shown that, when the search directions are A-conjugate, ρ_j is well approximated by 4/ξ_j, where ξ_j is the Hessian condition number of a Rayleigh quotient defined in appropriate oblique complements of the space spanned by the leftmost eigenvectors u_1, u_2, ..., u_{j-1} already calculated. It is also shown that 1/ξ_j is equal to the relative separation between the eigenvalue λ_j currently sought and the next higher one λ_{j+1}. A modification of DACG (MDACG) is studied, which involves a new set of CG search directions which ar...
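The deflated Rayleigh-quotient minimization behind DACG can be sketched as follows. This is a deliberately simplified illustration: B = I, steepest-descent directions instead of DACG's preconditioned CG directions, and an exact line search performed as a 2x2 Rayleigh-Ritz step. All identifiers are made up here.

```python
import numpy as np

def dacg_like(A, k, iters=300, seed=0):
    """Compute the k leftmost eigenpairs of symmetric A by minimizing the
    Rayleigh quotient over subspaces deflated against the eigenvectors
    already found (DACG-style, but with gradient directions and B = I)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    U = np.zeros((n, 0))
    evals = []
    for _ in range(k):
        x = rng.standard_normal(n)
        x -= U @ (U.T @ x)                  # start in the deflated subspace
        x /= np.linalg.norm(x)
        for _ in range(iters):
            rho = x @ A @ x                 # Rayleigh quotient (x is unit)
            g = A @ x - rho * x             # its gradient, up to a factor 2
            g -= U @ (U.T @ g)              # keep the search deflated
            gn = np.linalg.norm(g)
            if gn < 1e-10:
                break
            # Exact line search: minimize the Rayleigh quotient over
            # span{x, g} via a 2x2 Rayleigh-Ritz problem.
            Q, _ = np.linalg.qr(np.column_stack([x, g / gn]))
            _, Y = np.linalg.eigh(Q.T @ A @ Q)
            x = Q @ Y[:, 0]
            x /= np.linalg.norm(x)
        U = np.column_stack([U, x])
        evals.append(x @ A @ x)
    return np.array(evals), U

# Three leftmost eigenpairs of a diagonal test matrix with unit gaps.
evals, U = dacg_like(np.diag(np.arange(1.0, 51.0)), 3)
```

The observed linear convergence rate of each inner loop degrades as the relative gap λ_{j+1} − λ_j shrinks, which is the qualitative behavior the abstract's 4/ξ_j analysis quantifies for the CG version.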
Thick-Restart Lanczos Method for Symmetric Eigenvalue Problems
 SIAM J. MATRIX ANAL. APPL
, 1998
Abstract

Cited by 23 (3 self)
For real symmetric eigenvalue problems, there are a number of algorithms that are mathematically equivalent, for example, the Lanczos algorithm, the Arnoldi method, and the unpreconditioned Davidson method. The Lanczos algorithm is often preferred because it uses significantly fewer arithmetic operations per iteration. To limit the maximum memory usage, these algorithms are often restarted. In recent years, a number of effective restarting schemes have been developed for the Arnoldi method and the Davidson method. This paper describes a simple restarting scheme for the Lanczos algorithm. This restarted Lanczos algorithm uses as many arithmetic operations as the original algorithm. Theoretically, this restarted Lanczos method is equivalent to the implicitly restarted Arnoldi method and the thick-restart Davidson method. Because it uses fewer arithmetic operations than the others, it is an attractive alternative for solving symmetric eigenvalue problems.
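For context, the restarted schemes above all build on the basic symmetric Lanczos recurrence, which can be sketched as follows. This is an illustrative sketch with full reorthogonalization and no restarting; the thick-restart scheme additionally keeps selected Ritz vectors at each restart, which this sketch omits, and the identifiers are choices made here.

```python
import numpy as np

def lanczos(A, v0, m):
    """Symmetric Lanczos: builds an orthonormal basis V and a tridiagonal
    T = V.T @ A @ V whose eigenvalues (Ritz values) approximate the
    extreme eigenvalues of A.  Full reorthogonalization keeps V
    orthonormal in floating point."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T

# The largest Ritz value converges long before m reaches n.
A = np.diag(np.arange(1.0, 201.0))
rng = np.random.default_rng(0)
V, T = lanczos(A, rng.standard_normal(200), 60)
ritz = np.linalg.eigvalsh(T)
```

Restarting trades the growing basis V (the memory cost this abstract targets) for repeated short runs; thick restart keeps enough Ritz information across restarts to preserve convergence.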
Low Rank Matrix Approximation Using The Lanczos Bidiagonalization Process With Applications
 SIAM J. Sci. Comput
, 2000
Abstract

Cited by 23 (1 self)
Low rank approximation of large and/or sparse matrices is important in many applications. We show that good low rank matrix approximations can be obtained directly from the Lanczos bidiagonalization process without computing the singular value decomposition. We also demonstrate that a so-called one-sided reorthogonalization process can be used to maintain an adequate level of orthogonality among the Lanczos vectors and produce accurate low rank approximations. This technique reduces the computational cost of the Lanczos bidiagonalization process. We illustrate the efficiency and applicability of our algorithm using numerical examples from several application areas.
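The pipeline can be sketched in a few lines with dense NumPy arrays. This is an illustrative sketch: the one-sided reorthogonalization below follows the idea (only one of the two Lanczos vector sequences is reorthogonalized), not the paper's exact implementation, and all names are choices made here.

```python
import numpy as np

def lanczos_bidiag(A, p0, k):
    """Golub-Kahan-Lanczos bidiagonalization: after k steps A @ V = U @ B
    with B upper bidiagonal, so A ~ U @ B @ V.T is a rank-k approximation
    obtained without an SVD.  Only the V side is reorthogonalized here
    (the 'one-sided' strategy the abstract refers to)."""
    m, n = A.shape
    U = np.zeros((m, k)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k - 1)
    V[:, 0] = p0 / np.linalg.norm(p0)
    u = A @ V[:, 0]
    alpha[0] = np.linalg.norm(u); U[:, 0] = u / alpha[0]
    for j in range(1, k):
        v = A.T @ U[:, j - 1] - alpha[j - 1] * V[:, j - 1]
        v -= V[:, :j] @ (V[:, :j].T @ v)      # one-sided reorthogonalization
        beta[j - 1] = np.linalg.norm(v); V[:, j] = v / beta[j - 1]
        u = A @ V[:, j] - beta[j - 1] * U[:, j - 1]
        alpha[j] = np.linalg.norm(u); U[:, j] = u / alpha[j]
    B = np.diag(alpha) + np.diag(beta, 1)
    return U, B, V

# Demo: a matrix with geometrically decaying singular values is well
# approximated after k steps, with no SVD ever computed.
rng = np.random.default_rng(1)
Q1, _ = np.linalg.qr(rng.standard_normal((60, 50)))
Q2, _ = np.linalg.qr(rng.standard_normal((50, 50)))
svals = 10.0 ** (-np.arange(50) / 6.0)
A = Q1 @ np.diag(svals) @ Q2.T
U, B, V = lanczos_bidiag(A, rng.standard_normal(50), 20)
err = np.linalg.norm(A - U @ B @ V.T, 2)
```

Skipping reorthogonalization of U is what saves work; the paper's point is that keeping only V orthogonal already yields accurate approximations.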
BLZPACK: Description and User's Guide
, 1997
Abstract

Cited by 12 (1 self)
This report describes BLZPACK (an acronym for Block Lanczos Package), an implementation of the block Lanczos algorithm intended for the solution of eigenproblems involving real, sparse, symmetric matrices. The package works in an interactive way, so the matrices of the target problem are not passed as arguments to the interface subprogram.
Computing Smallest Singular Triplets with Implicitly Restarted Lanczos Bidiagonalization
 APPL. NUMER. MATH
, 2004
Abstract

Cited by 12 (2 self)
A matrix-free algorithm, IRLANB, for the efficient computation of the smallest singular triplets of large and possibly sparse matrices is described. Key characteristics of the approach are its use of Lanczos bidiagonalization, implicit restarting, and harmonic Ritz values. The algorithm also uses a deflation strategy that can be applied directly to the Lanczos bidiagonalization. A refinement postprocessing phase is applied to the converged singular vectors. The computational costs of the above techniques are kept small as they make direct use of the bidiagonal form obtained in the course of the Lanczos factorization. Several numerical experiments with the method are presented that illustrate its effectiveness and indicate that it performs well compared to existing codes.
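For comparison, SciPy ships a sparse SVD solver for the same task. The call below is not IRLANB, just a readily available baseline; like IRLANB it can be driven matrix-free through a LinearOperator that exposes only matrix-vector products. The test matrix and all names are choices made here.

```python
import numpy as np
from scipy.sparse.linalg import svds, aslinearoperator

# A small dense test matrix with known, well-separated singular values.
rng = np.random.default_rng(3)
Q1, _ = np.linalg.qr(rng.standard_normal((50, 30)))
Q2, _ = np.linalg.qr(rng.standard_normal((30, 30)))
true_svals = np.logspace(0, -2, 30)
A = Q1 @ np.diag(true_svals) @ Q2.T

# which='SM' requests the smallest singular triplets.  Wrapping A in a
# LinearOperator means only matvec/rmatvec are used, mirroring IRLANB's
# matrix-free interface, though the underlying algorithm is SciPy's own.
u, s, vt = svds(aslinearoperator(A), k=3, which='SM')
```

Computing the smallest triplets is notoriously harder than the largest ones, which is exactly why the deflation and harmonic Ritz machinery in the abstract exists.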
Computing charge densities with partially reorthogonalized Lanczos
 Comp. Phys. Comm
, 2005
Abstract

Cited by 9 (7 self)
This paper considers the problem of computing charge densities in a density functional theory (DFT) framework. In contrast to traditional, diagonalization-based methods, we utilize a technique which exploits a Lanczos basis, without explicit reference to individual eigenvectors. The key ingredient of this new approach is a partial reorthogonalization strategy whose goal is to ensure a good level of orthogonality of the basis vectors. The experiments reveal that the method can be a few times faster than ARPACK's implicitly restarted Lanczos method. This is achievable by exploiting more memory and BLAS3 (dense) computations while avoiding the frequent updates of eigenvectors inherent to all restarted Lanczos methods.
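The key ingredient can be illustrated with a simplified stand-in. True partial reorthogonalization estimates the loss of orthogonality cheaply with a scalar recurrence; the sketch below instead measures it directly each step (which costs as much as reorthogonalizing, so it only mimics the selective behavior, not the savings). All identifiers are illustrative.

```python
import numpy as np

EPS = np.finfo(float).eps

def lanczos_pro_like(A, v0, m):
    """Lanczos with selective reorthogonalization: a corrective pass
    against all earlier basis vectors runs only when the measured
    orthogonality loss exceeds sqrt(eps), the usual PRO threshold."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    n_reorth = 0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        overlaps = V[:, :j + 1].T @ w
        if np.max(np.abs(overlaps)) > np.sqrt(EPS) * np.linalg.norm(w):
            w -= V[:, :j + 1] @ overlaps   # reorthogonalize only when needed
            n_reorth += 1
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, alpha, beta, n_reorth

# Semi-orthogonality (errors near sqrt(eps)) is enough for accurate
# Ritz values, which is what makes partial reorthogonalization pay off.
A = np.diag(np.arange(1.0, 301.0))
rng = np.random.default_rng(0)
V, alpha, beta, n_reorth = lanczos_pro_like(A, rng.standard_normal(300), 50)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```

The abstract's charge-density application then works with V and T directly, never forming individual eigenvectors.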
DEFLATED AND RESTARTED SYMMETRIC LANCZOS METHODS FOR EIGENVALUES AND LINEAR EQUATIONS WITH MULTIPLE RIGHT-HAND SIDES
, 2008
Abstract

Cited by 7 (3 self)
A deflated restarted Lanczos algorithm is given for both solving symmetric linear equations and computing eigenvalues and eigenvectors. The restarting limits the storage so that finding eigenvectors is practical. Meanwhile, the deflation provided by the presence of the eigenvectors allows the linear equations to generally have good convergence in spite of the restarting. Some reorthogonalization is necessary to control roundoff error, and several approaches are discussed. The eigenvectors generated while solving the linear equations can be used to help solve systems with multiple right-hand sides. Experiments are given with large matrices from quantum chromodynamics that have many right-hand sides.
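The role deflation plays for the linear systems can be illustrated with a toy deflated start for plain CG. This is a simplified sketch: here the small eigenvectors are known exactly (the matrix is diagonal), whereas the paper extracts them from the Lanczos run itself; all names are choices made here.

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=1000):
    """Plain conjugate gradients; returns the solution and iteration count."""
    x = x0.copy(); r = b - A @ x; p = r.copy()
    rs = r @ r
    for it in range(maxit):
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            return x, it
        Ap = A @ p
        a = rs / (p @ Ap)
        x += a * p; r -= a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

# A spectrum whose five tiny eigenvalues dominate the condition number.
d = np.r_[np.linspace(0.001, 0.005, 5), np.linspace(1.0, 10.0, 95)]
A = np.diag(d)
b = np.ones(100)

# Plain CG from a zero guess.
_, it_plain = cg(A, b, np.zeros(100))

# Deflation: a Galerkin starting guess built from the five known
# eigenvectors (coordinate vectors, since A is diagonal) removes those
# components from the initial residual, so CG sees a far smaller
# effective condition number.
U = np.eye(100)[:, :5]
x0 = U @ ((U.T @ b) / d[:5])
_, it_defl = cg(A, b, x0)
```

With many right-hand sides, the eigenvector basis is computed once during the first solve and the cheap deflated start is reused for every subsequent system, which is the economy the abstract describes.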
Computation of large invariant subspaces using polynomial filtered Lanczos iterations with applications in density functional theory
Abstract

Cited by 6 (1 self)
Abstract. The most expensive part of all electronic structure calculations based on density functional theory lies in the computation of an invariant subspace associated with some of the smallest eigenvalues of a discretized Hamiltonian operator. The dimension of this subspace typically depends on the total number of valence electrons in the system, and can easily reach hundreds or even thousands when large systems with many atoms are considered. At the same time, the discretization of Hamiltonians associated with large systems yields very large matrices, whether with plane-wave or real-space discretizations. The combination of these two factors results in one of the most significant bottlenecks in computational materials science. In this paper we show how to efficiently compute a large invariant subspace associated with the smallest eigenvalues of a symmetric/Hermitian matrix using polynomially filtered Lanczos iterations. The proposed method does not try to extract individual eigenvalues and eigenvectors. Instead, it constructs an orthogonal basis of the invariant subspace by combining two main ingredients. The first is a filtering technique to dampen the undesirable contribution of the largest eigenvalues at each matrix-vector product in the Lanczos algorithm. This technique employs a well-selected low-pass filter polynomial, obtained via a conjugate residual-type algorithm in polynomial space. The second ingredient is the Lanczos algorithm with partial reorthogonalization. Experiments are reported to illustrate the efficiency of the proposed scheme compared to state-of-the-art implicitly restarted techniques. Key words. polynomial filtering, conjugate residual, Lanczos algorithm, density functional theory
Polynomial filtered Lanczos iterations with applications in density functional theory
 SIAM J. Matrix Anal. Appl
, 2005
Abstract

Cited by 4 (1 self)
The most expensive part of all electronic structure calculations based on density functional theory lies in the computation of an invariant subspace associated with some of the smallest eigenvalues of a discretized Hamiltonian operator. The dimension of this subspace typically depends on the total number of valence electrons in the system, and can easily reach hundreds or even thousands when large systems with many atoms are considered. At the same time, the discretization of Hamiltonians associated with large systems yields very large matrices, whether with plane-wave or real-space discretizations. The combination of these two factors results in one of the most significant bottlenecks in computational materials science. In this paper we show how to efficiently compute a large invariant subspace associated with the smallest eigenvalues of a Hermitian matrix using polynomially filtered Lanczos iterations. The proposed method does not try to extract individual eigenvalues and eigenvectors. Instead, it constructs an orthogonal basis of the invariant subspace by combining two main ingredients. The first is a filtering technique to dampen the undesirable contribution of the largest eigenvalues at each matrix-vector product in the Lanczos algorithm. This technique employs a well-selected low-pass filter polynomial, obtained via a conjugate residual-type algorithm in polynomial space. The second ingredient is the Lanczos algorithm with partial reorthogonalization. Experiments are reported to illustrate the efficiency of the proposed scheme compared to state-of-the-art implicitly restarted techniques.
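The filtering idea can be sketched with an explicit Chebyshev filter standing in for the paper's conjugate residual-type least-squares polynomial, and a filtered subspace iteration standing in for the filtered Lanczos procedure. Both substitutions, the test matrix, and all identifiers are choices made here for illustration.

```python
import numpy as np

def cheb_filter(A, x, deg, a, b):
    """Apply p(A) @ x, where p is the degree-`deg` Chebyshev polynomial
    mapped so it stays small on the unwanted interval [a, b] and grows
    rapidly below a.  Each application costs `deg` matrix-vector
    products, which is where all the work goes."""
    e = (b - a) / 2.0
    c = (b + a) / 2.0
    y_prev = x                              # T_0 of the shifted operator
    y = (A @ x - c * x) / e                 # T_1
    for _ in range(2, deg + 1):
        y_prev, y = y, 2.0 * (A @ y - c * y) / e - y_prev
    return y

# Filtered subspace iteration: the wanted eigenvalues 1..5 lie below the
# damped interval [10, 100], so the filter amplifies exactly the invariant
# subspace we are after.
rng = np.random.default_rng(2)
A = np.diag(np.arange(1.0, 101.0))
X = rng.standard_normal((100, 5))
for _ in range(20):
    X = np.column_stack([cheb_filter(A, X[:, i], 8, 10.0, 100.0)
                         for i in range(5)])
    X, _ = np.linalg.qr(X)

# Rayleigh-Ritz on the filtered, orthonormal basis; no individual
# eigenvector is ever targeted during the iteration itself.
evals = np.linalg.eigvalsh(X.T @ A @ X)
```

The damping interval [a, b] plays the role of the unwanted upper part of the Hamiltonian's spectrum; choosing it well (and the filter degree) is the practical tuning problem the paper's least-squares construction addresses.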