Results 1-10 of 14
A geometric theory for preconditioned inverse iteration III: A short and sharp convergence estimate for generalized eigenvalue problems
, 2003
Abstract

Cited by 33 (8 self)
In two previous papers by Neymeyr [Linear Algebra Appl. 322 (1-3) (2001) 61; 322 (1-3) (2001) 87], a sharp, but cumbersome, convergence rate estimate was proved for a simple preconditioned eigensolver, which computes the smallest eigenvalue together with the corresponding eigenvector of a symmetric positive definite matrix, using a preconditioned gradient minimization of the Rayleigh quotient. In the present paper, we discover and prove a much shorter and more elegant (but still sharp in decisive quantities) convergence rate estimate of the same method that also holds for a generalized symmetric definite eigenvalue problem. The new estimate is simple enough to stimulate a search for a more straightforward proof technique that could be helpful to investigate such a practically important method as the locally optimal block preconditioned conjugate gradient eigensolver.
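The preconditioned gradient iteration analyzed in this abstract can be sketched as follows; this is a minimal illustration, not the paper's implementation — the Jacobi (diagonal) preconditioner, the test pencil, and all names are illustrative stand-ins.

```python
import numpy as np

def pinvit(A, B, x, steps=200):
    """Preconditioned gradient iteration for the smallest eigenpair of A x = lam B x.

    Sketch only: uses a Jacobi preconditioner T = diag(A)^{-1} applied entrywise.
    """
    T = 1.0 / np.diag(A)                      # Jacobi preconditioner
    for _ in range(steps):
        rho = (x @ A @ x) / (x @ B @ x)       # Rayleigh quotient
        r = A @ x - rho * (B @ x)             # residual = gradient direction (up to scaling)
        x = x - T * r                         # preconditioned correction step
        x = x / np.sqrt(x @ B @ x)            # B-normalize
    return rho, x

# Small illustrative SPD test pencil
A = np.diag([1.0, 2.0, 5.0]) + 0.1 * np.ones((3, 3))
B = np.eye(3)
lam, v = pinvit(A, B, np.ones(3))
```

With a sufficiently accurate preconditioner the iteration converges at a rate independent of the problem's conditioning, which is the quantity the paper's estimate bounds.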
Preconditioned Eigensolvers - An Oxymoron?
, 1998
Abstract

Cited by 18 (3 self)
A short survey of some results on preconditioned iterative methods for symmetric eigenvalue problems is presented. The survey is by no means complete and reflects the author's personal interests and biases, with emphasis on the author's own contributions. The author surveys most of the important theoretical results and ideas that have appeared in the Soviet literature, adding references to work published in the Western literature mainly to preserve the integrity of the topic. The aim of this paper is to introduce a systematic classification of preconditioned eigensolvers, separating the choice of a preconditioner from the choice of an iterative method. A formal definition of a preconditioned eigensolver is given. Recent developments in the area, in particular on Davidson's method, are mainly ignored. Domain decomposition methods for eigenproblems are included in the framework of preconditioned eigensolvers.
Multilevel Preconditioners for Solving Eigenvalue Problems Occurring in the Design of Resonant Cavities
, 2003
Abstract

Cited by 5 (2 self)
We investigate eigensolvers for computing a few of the smallest eigenvalues of a generalized eigenvalue problem resulting from the finite element discretization of the time-independent Maxwell equation. Various multilevel preconditioners are employed to improve the convergence and memory consumption of the Jacobi-Davidson algorithm and of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. We present numerical results for very large eigenvalue problems originating from the design of resonant cavities of particle accelerators.
Computing Eigenelements of Real Symmetric Matrices Via Optimization
 Comput. Optim. Appl
, 1999
Abstract

Cited by 4 (0 self)
In certain circumstances it is more advantageous to use an optimization approach to solve the generalized eigenproblem Ax = λBx, where A and B are real symmetric matrices and B is positive definite. This is notably the case when the matrices A and B are very large and the computational cost of solving, with high accuracy, systems of equations involving these matrices is prohibitive. The optimization approach usually involves optimizing the Rayleigh quotient. We first propose alternative objective functions for solving the (generalized) eigenproblem via (unconstrained) optimization, describe the variational properties of these functions, and report computational experiments. We then introduce some optimization algorithms (based on one of these formulations) designed to compute the largest eigenpair. According to preliminary numerical experiments, this work leads the way to methods which compare favourably with the Lanczos method to compute the largest eigenpair of...
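The optimization viewpoint can be sketched with plain gradient ascent on the Rayleigh quotient; this is not one of the paper's proposed algorithms, and the fixed step size and test matrices are illustrative assumptions.

```python
import numpy as np

def largest_eigenpair(A, B, x, steps=500, step=0.1):
    """Gradient ascent on rho(x) = (x'Ax)/(x'Bx) toward the largest eigenpair of A x = lam B x."""
    for _ in range(steps):
        Bx = B @ x
        rho = (x @ A @ x) / (x @ Bx)
        g = 2.0 * (A @ x - rho * Bx) / (x @ Bx)  # gradient of the Rayleigh quotient
        x = x + step * g                          # ascent step
        x = x / np.linalg.norm(x)                 # keep the iterate normalized
    return rho, x

A = np.diag([1.0, 3.0, 10.0])
B = np.eye(3)
lam, v = largest_eigenpair(A, B, np.array([1.0, 1.0, 1.0]))
```

The point of the abstract is that each step needs only matrix-vector products with A and B, never a high-accuracy linear solve.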
Minimization Principle for Linear Response Eigenvalue Problem with Applications
, 2011
Abstract

Cited by 3 (3 self)
We present a minimization principle for the sum of the first few smallest positive eigenvalues, and Cauchy-like interlacing inequalities, for the linear response (a.k.a. random phase approximation) eigenvalue problem arising from the calculation of excitation states of many-particle systems, a hot topic among computational materials scientists today for materials design to advance energy science. Subsequently, we develop the best approximations of these smallest positive eigenvalues by a structure-preserving subspace projection. Based on these newly established theoretical results, we outline conjugate gradient-like algorithms for simultaneously computing the first few smallest positive eigenvalues and associated eigenvectors. Finally, we present numerical examples to illustrate essential convergence behaviors of the proposed conjugate gradient-like algorithms.
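The structure behind "the first few smallest positive eigenvalues" can be illustrated numerically: for symmetric positive definite blocks K and M, the linear response matrix H = [[0, K], [M, 0]] has a real spectrum that comes in ± pairs. The matrices below are random illustrative stand-ins, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
K = X @ X.T + 4 * np.eye(4)       # symmetric positive definite
Y = rng.standard_normal((4, 4))
M = Y @ Y.T + 4 * np.eye(4)       # symmetric positive definite

# Linear response eigenvalue problem: H z = lam z with H = [[0, K], [M, 0]]
H = np.block([[np.zeros((4, 4)), K], [M, np.zeros((4, 4))]])
ev = np.linalg.eigvals(H)          # H is nonsymmetric, but its spectrum is real here
ev = np.sort(ev.real)              # four negative and four positive eigenvalues
```

Since H² is block diagonal with blocks KM and MK, the eigenvalues are ±√(eig(KM)), which is why a minimization principle for the positive half of the spectrum is meaningful.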
A Comparison Of Algorithms For Modal Analysis In The Absence Of A Sparse Direct Method
, 2003
Abstract

Cited by 2 (0 self)
this report are the following:
1. replace the sparse direct method with a scalable preconditioned iterative method within the Lanczos algorithm;
2. replace the Lanczos algorithm with a scalable preconditioned eigenvalue algorithm (that perhaps better utilizes preconditioned iterative methods).
Why Preconditioning Gradient Type Eigensolvers?
, 2000
Abstract

Cited by 1 (0 self)
Given the mesh discretization of an elliptic eigenvalue problem, consider the problem of determining the smallest eigenvalue together with an eigenvector. Gradient type methods solve this problem by consecutive correction steps, each in the direction of the negative gradient of the Rayleigh quotient. It is shown that the convergence rate of gradient methods, even with optimal scaling, tends to 1 as the mesh parameter tends to 0. In contrast, premultiplying the gradient vector by a preconditioner, which defines the preconditioned gradient method, leads to grid-independent convergence estimates if the preconditioner is sufficiently accurate. Moreover, a suitable scaling strategy may lead to improved convergence. For these reasons, preconditioning of gradient type methods is decisive for constructing a reliable and efficient eigensolver for elliptic eigenvalue problems.

1. INTRODUCTION

Let A be a symmetric positive definite matrix whose smallest eigenvalue λ1 together with...
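The grid-dependence claim can be checked with a small numeric sketch (not the paper's experiment): on the 1-D Laplacian, a fixed-scaling gradient step stalls as the mesh is refined, while an exact solve — standing in here for a "sufficiently accurate" preconditioner T ≈ A⁻¹ — converges at a mesh-independent rate. All parameters are illustrative.

```python
import numpy as np

def laplacian(n):
    """1-D Dirichlet Laplacian on n interior mesh points (mesh parameter h = 1/(n+1))."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def error_after(A, apply_T, steps=15):
    """Run x <- x - T(Ax - rho(x) x) and report the final Rayleigh quotient error."""
    lam1 = np.linalg.eigvalsh(A)[0]
    x = np.ones(A.shape[0])
    x = x / np.linalg.norm(x)
    for _ in range(steps):
        rho = x @ A @ x
        x = x - apply_T(A @ x - rho * x)
        x = x / np.linalg.norm(x)
    return abs(x @ A @ x - lam1)

for n in (20, 40):
    A = laplacian(n)
    plain = error_after(A, lambda r: r / np.linalg.norm(A, 2))  # optimally scaled gradient step
    pre = error_after(A, lambda r: np.linalg.solve(A, r))       # "ideal" preconditioner T = A^{-1}
```

With the exact solve the update reduces to inverse iteration, whose rate depends only on the eigenvalue ratio, not on the mesh — the phenomenon the abstract describes.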
A Multigrid Method for the Complex Helmholtz Eigenvalue Problem
, 1998
Abstract
Introduction

The paper deals with the solution of the eigenvalue problem of the complex Helmholtz equation. We present an adaptive multigrid method for solving the non-selfadjoint algebraic eigenproblem arising from discretization with finite elements. A technologically relevant numerical example, the simulation of an integrated optical component containing Multi Quantum Well layers, is included. The task is to find a few eigenvalues λ and corresponding eigenfunctions u of the Helmholtz equation with Dirichlet boundary condition

-Δu(x, y) - f(x, y) u(x, y) = λ u(x, y),  (x, y) ∈ Ω,
u(x, y) = 0,  (x, y) ∈ ∂Ω,

where the region Ω ...
Conjugate Gradient Type Methods for Solving Large Scale Eigenvalue Problems
, 2010
"... Let us recall that for given symmetric A, B ∈ R n×n and B positive definite, the Rayleigh Quotient for the matrix pencil A − λB is defined by ρ(x) = xTAx ..."
Abstract
Let us recall that for given symmetric A, B ∈ R^{n×n} with B positive definite, the Rayleigh quotient for the matrix pencil A - λB is defined by ρ(x) = (x^T A x)/(x^T B x) ...
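The pencil Rayleigh quotient this abstract starts from is a one-liner; the matrices below are illustrative.

```python
import numpy as np

def rayleigh_quotient(A, B, x):
    """rho(x) = (x^T A x) / (x^T B x) for the matrix pencil A - lam*B."""
    return (x @ A @ x) / (x @ B @ x)

A = np.diag([1.0, 4.0])
B = np.eye(2)
# At an eigenvector the quotient equals the corresponding eigenvalue:
rho = rayleigh_quotient(A, B, np.array([0.0, 1.0]))  # → 4.0
```

Its stationary points are exactly the eigenvectors of the pencil, which is what conjugate gradient type eigensolvers exploit.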