Results 1–10 of 89
Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds
Journal of Machine Learning Research, 2003
Cited by 252 (8 self)
Abstract:
The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation.
Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method
SIAM J. Sci. Comput., 2001
Cited by 82 (12 self)
Abstract:
We describe new algorithms of the locally optimal block preconditioned conjugate gradient (LOBPCG) method for symmetric eigenvalue problems, based on a local optimization of a three-term recurrence, and suggest several other new methods. To be able to compare numerically different methods in the class, with different preconditioners, we propose a common system of model tests, using random preconditioners and initial guesses. As the "ideal" control algorithm, we advocate the standard preconditioned conjugate gradient method for finding an eigenvector as an element of the nullspace of the corresponding homogeneous system of linear equations, under the assumption that the eigenvalue is known. We recommend that every new preconditioned eigensolver be compared with this "ideal" algorithm on our model test problems in terms of the speed of convergence, costs of every iteration, and memory requirements. We provide such a comparison for our LOBPCG method. Numerical results establish that our algorithm is practically as efficient as the "ideal" algorithm when the same preconditioner is used in both methods. We also show numerically that the LOBPCG method provides approximations to the first eigenpairs of about the same quality as those by the much more expensive global optimization method on the same generalized block Krylov subspace. We propose a new version of block Davidson's method as a generalization of the LOBPCG method. Finally, direct numerical comparisons with the Jacobi-Davidson method show that our method is more robust and converges almost twice as fast.
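An LOBPCG implementation is available in SciPy as `scipy.sparse.linalg.lobpcg`; a minimal sketch on a 1-D Laplacian with a simple diagonal preconditioner (the matrix, preconditioner, and random initial block are illustrative stand-ins, loosely echoing the paper's random model tests, not its actual benchmarks):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# Symmetric test matrix: 1-D Laplacian (tridiagonal, positive definite).
n = 50
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Random block of initial guesses for the 4 smallest eigenpairs.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))

# Jacobi (diagonal) preconditioner -- a trivial stand-in for the random
# preconditioners used in the paper's model tests.
Minv = diags(1.0 / A.diagonal())

# Each iteration optimizes locally over the three-term recurrence
# span {current iterate, preconditioned residual, previous direction}.
vals, vecs = lobpcg(A, X, M=Minv, largest=False, tol=1e-9, maxiter=500)
```

The returned `vals` approximate the four smallest eigenvalues of the Laplacian, which are known in closed form as 2 - 2 cos(k*pi/(n+1)).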
A Jacobi-Davidson Iteration Method for Linear Eigenvalue Problems
SIAM J. Matrix Anal. Appl., 2000
Cited by 63 (6 self)
Abstract:
In this paper we propose a new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors. The method is based on an old and almost unknown method of Jacobi. Jacobi's approach, combined with Davidson's method, leads to a new method that has improved convergence properties and that may be used for general matrices. We also propose a variant of the new method that may be useful for the computation of nonextremal eigenvalues as well. Key words: eigenvalues and eigenvectors, Davidson's method, Jacobi iterations, harmonic Ritz values.
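A dense, unpreconditioned sketch of the iteration the abstract describes — Rayleigh-Ritz extraction on a growing subspace plus a Jacobi-style correction equation — might look as follows. All names are illustrative, and the exact least-squares solve of the correction equation is a shortcut; a practical solver would solve it only approximately with a preconditioned inner iteration:

```python
import numpy as np

def jacobi_davidson_smallest(A, tol=1e-10, maxiter=100, seed=0):
    """Illustrative Jacobi-Davidson sketch: smallest eigenpair of symmetric A."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    V = (v / np.linalg.norm(v)).reshape(n, 1)
    I = np.eye(n)
    for _ in range(maxiter):
        # Rayleigh-Ritz extraction on the current search subspace.
        w, S = np.linalg.eigh(V.T @ A @ V)
        theta, u = w[0], V @ S[:, 0]          # smallest Ritz pair
        r = A @ u - theta * u
        if np.linalg.norm(r) < tol or V.shape[1] == n:
            break
        # Jacobi correction equation, here solved "exactly":
        #   (I - u u^T)(A - theta I)(I - u u^T) t = -r,  t orthogonal to u.
        # The projected operator is singular; lstsq returns the minimum-norm
        # solution, which lies in the orthogonal complement of u.
        P = I - np.outer(u, u)
        t = np.linalg.lstsq(P @ (A - theta * I) @ P, -r, rcond=None)[0]
        # Orthogonalize against V (two passes) and expand the subspace.
        for _ in range(2):
            t -= V @ (V.T @ t)
        nt = np.linalg.norm(t)
        if nt < 1e-12:                        # stagnation: random new direction
            t = rng.standard_normal(n)
            for _ in range(2):
                t -= V @ (V.T @ t)
            nt = np.linalg.norm(t)
        V = np.hstack([V, (t / nt).reshape(n, 1)])
    return theta, u
```

With the exact correction solve this behaves like Rayleigh quotient iteration accelerated by subspace accumulation, which is the source of the improved convergence the abstract mentions.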
Dynamic Thick Restarting of the Davidson, and the Implicitly Restarted Arnoldi Methods
SIAM J. Sci. Comput., 1996
Cited by 45 (21 self)
Abstract:
The Davidson method is a popular preconditioned variant of the Arnoldi method for solving large eigenvalue problems. For theoretical as well as practical reasons, the two methods are often used with restarting. Frequently, information is saved through approximated eigenvectors to compensate for the convergence impairment caused by restarting. We call this scheme of retaining more eigenvectors than needed 'thick restarting', and prove that thick restarted, non-preconditioned Davidson is equivalent to the implicitly restarted Arnoldi. We also establish a relation between thick restarted Davidson and a Davidson method applied on a deflated system. The theory is used to address the question of which and how many eigenvectors to retain, and motivates the development of a dynamic thick restarting scheme for the symmetric case, which can be used in both Davidson and implicitly restarted Arnoldi. Several experiments demonstrate the efficiency and robustness of the scheme.
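The implicitly restarted Arnoldi/Lanczos method on one side of this equivalence is what ARPACK implements, exposed in SciPy as `scipy.sparse.linalg.eigsh`; its `ncv` parameter caps the subspace size before each implicit restart, playing the role of the restart dimension discussed here. A small illustrative run (the matrix is a stand-in, not from the paper):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D Laplacian as a symmetric test matrix (illustrative choice).
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# ncv bounds the working subspace: whenever it reaches ncv vectors,
# ARPACK implicitly compresses it back onto the wanted Ritz information --
# the implicit counterpart of the thick restarting analyzed in the paper.
vals, vecs = eigsh(A, k=4, which="LA", ncv=20)
```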
An Arnoldi-Schur Algorithm for Large Eigenproblems
2000
Cited by 36 (2 self)
Abstract:
Sorensen's iteratively restarted Arnoldi algorithm is one of the most successful and flexible methods for finding a few eigenpairs of a large matrix. However, the need to preserve the structure of the Arnoldi decomposition, on which the algorithm is based, restricts the range of transformations that can be performed on it. In consequence, it is difficult to deflate converged Ritz vectors from the decomposition. Moreover, the potential forward instability of the implicit QR algorithm can cause unwanted Ritz vectors to persist in the computation. In this paper we introduce a generalized Arnoldi decomposition that solves both problems in a natural and efficient manner.
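For reference, the Arnoldi decomposition whose structure must be preserved is A V_k = V_{k+1} H_k, with V_{k+1} orthonormal and H_k upper Hessenberg of size (k+1) x k. A minimal construction of that decomposition (just the underlying factorization, not the restarting algorithm itself):

```python
import numpy as np

def arnoldi(A, v0, k):
    """Build an Arnoldi decomposition A @ V[:, :k] == V @ H,
    with V of shape (n, k+1) orthonormal and H of shape (k+1, k)
    upper Hessenberg (modified Gram-Schmidt, no breakdown handling)."""
    n = len(v0)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # orthogonalize against previous basis
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

Any restart scheme must transform V and H while keeping exactly this relation and the Hessenberg shape intact, which is the rigidity the paper's generalized decomposition relaxes.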
Combination of Jacobi-Davidson and conjugate gradients for the partial symmetric eigenproblem
Numer. Linear Algebra Appl., 2002
Cited by 32 (6 self)
Abstract:
To compute the smallest eigenvalues and associated eigenvectors of a real symmetric matrix, we consider the Jacobi–Davidson method with inner preconditioned conjugate gradient iterations for the arising linear systems. We show that the coefficient matrix of these systems is indeed positive definite with the smallest eigenvalue bounded away from zero. We also establish a relation between the residual norm reduction in these inner linear systems and the convergence of the outer process towards the desired eigenpair. From a theoretical point of view, this allows us to prove the optimality of the method, in the sense that solving the eigenproblem implies only a moderate overhead compared with solving a linear system. From a practical point of view, this allows us to set up a stopping strategy for the inner iterations that minimizes this overhead by exiting precisely at the moment where further progress would be useless with respect to the convergence of the outer process. These results are numerically illustrated on a model example. Direct comparison with some other eigensolvers is also provided.
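The positive-definiteness claim can be checked numerically: when u approximates the smallest eigenvector and theta is its Rayleigh quotient, the inner coefficient matrix (I - uu^T)(A - theta I)(I - uu^T) has one trivial zero eigenvalue (along u) while its restriction to the orthogonal complement of u is positive definite. A synthetic example (matrix and perturbation size are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
# Synthetic symmetric matrix with a well-separated smallest eigenvalue.
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
lam = np.sort(rng.uniform(1.0, 10.0, n))
lam[0] = 0.5                               # smallest eigenvalue, gap >= 0.5
A = Q @ np.diag(lam) @ Q.T

# Approximate smallest eigenvector and its Rayleigh quotient.
u = Q[:, 0] + 0.01 * rng.standard_normal(n)
u /= np.linalg.norm(u)
theta = u @ A @ u

# Coefficient matrix of the inner (correction-equation) linear systems.
P = np.eye(n) - np.outer(u, u)
M = P @ (A - theta * np.eye(n)) @ P
evals = np.sort(np.linalg.eigvalsh(M))
# evals[0] ~ 0 is the trivial eigenvalue along u; on the orthogonal
# complement the smallest eigenvalue is ~ lam[1] - theta > 0.
```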
Efficient expansion of subspaces in the Jacobi-Davidson method for standard and generalized eigenproblems
1998
Cited by 26 (6 self)
Abstract:
We discuss approaches for an efficient handling of the correction equation in the Jacobi-Davidson method. The correction equation is effective in a subspace orthogonal to the current eigenvector approximation. The operator in the correction equation is a dense matrix, but it is composed from three factors that allow for a sparse representation. If the given matrix eigenproblem is sparse, then one often aims for the construction of a preconditioner for that matrix. We discuss how to restrict this preconditioner effectively to the subspace orthogonal to the current eigenvector. The correction equation itself is formulated in terms of approximations for an eigenpair. In order to avoid misconvergence, one has to make the right selection for these approximations, and this aspect is discussed as well.
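Restricting a preconditioner K to the subspace orthogonal to u admits a cheap implementation via a rank-one correction: solving (I - uu^T) K (I - uu^T) t = r with t orthogonal to u needs only applications of K^{-1}, and K^{-1}u can be cached across inner iterations. This is a standard construction; `solve_K` below is a stand-in for any preconditioner solve:

```python
import numpy as np

def projected_prec_solve(solve_K, u, r):
    """Apply the preconditioner restricted to the complement of u:
    solves (I - uu^T) K (I - uu^T) t = r with t orthogonal to u,
    assuming r is orthogonal to u and u has unit norm."""
    Kinv_u = solve_K(u)        # cacheable: u changes only per outer step
    Kinv_r = solve_K(r)
    alpha = (u @ Kinv_r) / (u @ Kinv_u)
    return Kinv_r - alpha * Kinv_u
```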
Nonlinear eigenvalue problems: A challenge for modern eigenvalue methods
GAMM Reports
Cited by 26 (2 self)
Abstract:
We discuss the state of the art in numerical solution methods for large-scale polynomial or rational eigenvalue problems. We present the currently available solution methods, such as the Jacobi-Davidson, Arnoldi, or rational Krylov methods, and analyze their properties. We briefly introduce a new linearization technique and demonstrate how it can be used to improve structure preservation and, with this, the accuracy and efficiency of linearization-based methods. We present several recent applications where structured and unstructured nonlinear eigenvalue problems arise, and some numerical results.
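Linearization turns a polynomial eigenproblem into a linear pencil of larger dimension. For a quadratic problem (lam^2 M + lam C + K) x = 0, the classical first companion form doubles the size; the sketch below uses random stand-in matrices and this classical (non-structure-preserving) linearization, not the new technique the abstract introduces:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = np.eye(n)                                  # monic quadratic for simplicity
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# First companion linearization of the quadratic pencil:
#   (lam * [[M, 0], [0, I]] + [[C, K], [-I, 0]]) @ [lam*x; x] = 0
Z, I = np.zeros((n, n)), np.eye(n)
Apen = np.block([[C, K], [-I, Z]])
Bpen = np.block([[M, Z], [Z, I]])

# The 2n eigenvalues of the quadratic problem are those of -Bpen^{-1} Apen.
lam = np.linalg.eigvals(np.linalg.solve(Bpen, -Apen))
```

Each computed lam makes lam^2*M + lam*C + K (numerically) singular, which is the defining property of a quadratic eigenvalue.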
A comparison of eigensolvers for large-scale 3D modal analysis using AMG-preconditioned iterative methods
Int. J. Numer. Meth. Engng, 2005
Cited by 25 (1 self)
Abstract:
The goal of our paper is to compare a number of algorithms for computing a large number of eigenvectors of the generalized symmetric eigenvalue problem arising from a modal analysis of elastic structures. The shift-invert Lanczos algorithm has emerged as the workhorse for the solution of this generalized eigenvalue problem; however, a sparse direct factorization is required for the resulting set of linear equations. Instead, our paper considers the use of preconditioned iterative methods. We present a brief review of available preconditioned eigensolvers, followed by a numerical comparison on three problems using a scalable algebraic multigrid (AMG) preconditioner.
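The shift-invert Lanczos baseline is accessible through ARPACK via `scipy.sparse.linalg.eigsh`: passing `sigma` triggers a sparse direct factorization of K - sigma*M, which is exactly the cost the paper seeks to avoid with AMG-preconditioned iterations. The matrices below are illustrative stand-ins for a stiffness/mass pair, not the paper's test problems:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigsh

n = 300
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # "stiffness"
Mass = 0.5 * identity(n, format="csc")                                # lumped "mass"

# Generalized problem K x = lam * Mass x in shift-invert mode:
# K - sigma*Mass is factored once (sparse direct), then each Lanczos
# step costs one triangular solve -- fast, but the factorization itself
# is what limits scalability on large 3D meshes.
vals, vecs = eigsh(K, k=4, M=Mass, sigma=0.0, which="LM")
```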
Using Generalized Cayley Transformations Within An Inexact Rational Krylov Sequence Method
SIAM J. Matrix Anal. Appl.
Cited by 23 (3 self)
Abstract:
The rational Krylov sequence (RKS) method is a generalization of Arnoldi's method. It constructs an orthogonal reduction of a matrix pencil into an upper Hessenberg pencil. The RKS method is useful when the matrix pencil may be efficiently factored. This article considers approximately solving the resulting linear systems with iterative methods. We show that a Cayley transformation leads to a more efficient and robust eigensolver than the usual shift-invert transformation when the linear systems are solved inexactly within the RKS method. A relationship with the recently introduced Jacobi-Davidson method is also established.
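The two transformations being compared act on the spectrum differently: shift-invert maps an eigenvalue lam to 1/(lam - sigma), while the generalized Cayley transform (A - sigma I)^{-1}(A - mu I) maps it to (lam - mu)/(lam - sigma), amplifying eigenvalues near the pole sigma and damping those near the zero mu toward 0. A small numerical check of this mapping (the matrix and the shifts sigma, mu are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                       # symmetric test matrix (illustrative)
sigma, mu = -10.0, 10.0                 # pole and zero of the Cayley map

lam = np.linalg.eigvalsh(A)
# Generalized Cayley transform: C = (A - sigma I)^{-1} (A - mu I).
I = np.eye(n)
C = np.linalg.solve(A - sigma * I, A - mu * I)
nu = np.sort(np.linalg.eigvals(C).real)

# C shares eigenvectors with A; its eigenvalues are the mapped values
# (lam - mu)/(lam - sigma). The damping near mu is what keeps inexact
# inner solves well behaved compared with plain shift-invert.
mapped = np.sort((lam - mu) / (lam - sigma))
```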