Results 1–10 of 29
Nonlinear dimensionality reduction by locally linear embedding
SCIENCE, 2000
"... Many areas of science ..."
Krylov Subspace Techniques for Reduced-Order Modeling of Nonlinear Dynamical Systems
Appl. Numer. Math., 2002
Abstract

Cited by 51 (3 self)
Means of applying Krylov subspace techniques for adaptively extracting accurate reduced-order models of large-scale nonlinear dynamical systems are a relatively open problem, and there has been much recent interest in developing such techniques. We focus on a bilinearization method, which extends Krylov subspace techniques for linear systems. In this approach, the nonlinear system is first approximated by a bilinear system through Carleman bilinearization. A reduced-order bilinear system is then constructed in such a way that it matches a certain number of multimoments corresponding to the first few kernels of the Volterra–Wiener representation of the bilinear system. It is shown that the two-sided Krylov subspace technique matches significantly more multimoments than the corresponding one-sided technique.
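The moment-matching idea underlying this approach is easiest to see in the linear case. The sketch below is a minimal NumPy illustration on randomly generated test matrices (not the paper's bilinear method): it builds a one-sided Arnoldi basis for the Krylov subspace K_k(A⁻¹, A⁻¹b) and checks that the projected reduced-order model matches the first k moments of the transfer function at s = 0.

```python
import numpy as np

def arnoldi(M, v, k):
    # Orthonormal basis of the Krylov subspace span{v, Mv, ..., M^(k-1) v}.
    V = np.zeros((len(v), k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, k):
        w = M @ V[:, j - 1]
        for i in range(j):                    # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(0)
n, k = 50, 6
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # synthetic system matrix
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Moments of H(s) = c^T (sI - A)^{-1} b at s = 0 are -c^T A^{-(j+1)} b,
# so the relevant Krylov subspace is K_k(A^{-1}, A^{-1} b).
Ainv = np.linalg.inv(A)
V = arnoldi(Ainv, Ainv @ b, k)

Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c   # projected reduced-order model

# The one-sided projection matches the first k moments.
Arinv = np.linalg.inv(Ar)
for j in range(k):
    m_full = c @ np.linalg.matrix_power(Ainv, j + 1) @ b
    m_red = cr @ np.linalg.matrix_power(Arinv, j + 1) @ br
    assert abs(m_full - m_red) <= 1e-8 * max(1.0, abs(m_full))
```

A two-sided variant would build a second basis from the output vector c and match roughly twice as many moments with the same reduced dimension, which is the distinction the abstract draws for the bilinear setting.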
A geometric theory for preconditioned inverse iteration III: A short and sharp convergence estimate for generalized eigenvalue problems
2003
Abstract

Cited by 35 (8 self)
In two previous papers by Neymeyr [Linear Algebra Appl. 322 (1–3) (2001) 61; 322 (1–3) (2001) 87], a sharp, but cumbersome, convergence rate estimate was proved for a simple preconditioned eigensolver that computes the smallest eigenvalue together with the corresponding eigenvector of a symmetric positive definite matrix, using a preconditioned gradient minimization of the Rayleigh quotient. In the present paper, we discover and prove a much shorter and more elegant (but still sharp in decisive quantities) convergence rate estimate of the same method that also holds for a generalized symmetric definite eigenvalue problem. The new estimate is simple enough to stimulate a search for a more straightforward proof technique that could help to investigate such a practically important method as the locally optimal block preconditioned conjugate gradient eigensolver.
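The eigensolver analyzed here is simple to state: one preconditioned gradient step per iteration, x ← x − B⁻¹(Ax − λ(x)x), followed by normalization. Below is a minimal NumPy sketch on a synthetic SPD matrix, with a Jacobi (diagonal) preconditioner chosen purely for illustration; it is not tied to the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# SPD test matrix: well-separated diagonal plus a small symmetric perturbation.
E = 0.01 * rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1)) + (E + E.T) / 2

Binv = 1.0 / np.diag(A)   # Jacobi preconditioner, an illustrative choice

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(200):
    lam = x @ A @ x                      # Rayleigh quotient (x has unit norm)
    x = x - Binv * (A @ x - lam * x)     # preconditioned gradient step
    x /= np.linalg.norm(x)

lam = x @ A @ x
# The Rayleigh quotient converges to the smallest eigenvalue.
assert abs(lam - np.linalg.eigvalsh(A)[0]) < 1e-8
```

The convergence rate estimates the abstract refers to bound how fast the Rayleigh quotient in this loop decreases, in terms of the quality of the preconditioner B.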
A two-directional Arnoldi process and its application to parametric . . .
JOURNAL OF COMPUTATIONAL AND APPLIED, 2009
Tits. Newton-KKT interior-point methods for indefinite quadratic programming
Comput. Optim. Appl.
Abstract

Cited by 6 (1 self)
Two interior-point algorithms are proposed and analyzed for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the “primal” variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton) …
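Stripped of the interior-point machinery for inequality constraints, the core Newton-KKT idea is a Newton step on the KKT optimality conditions. A minimal sketch for an equality-constrained QP with synthetic data (the Hessian is made positive definite here so the minimizer is unique, whereas the paper targets the indefinite case):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 2
M = rng.standard_normal((n, n))
H = (M + M.T) / 2 + 5.0 * np.eye(n)   # made positive definite for a unique
                                       # minimizer; the paper treats the
                                       # indefinite case
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# KKT conditions for  min 0.5 x'Hx + c'x  s.t.  Ax = b:
#   H x + A' y = -c,   A x = b.
# They are linear in (x, y), so a single Newton step on the KKT system
# solves this equality-constrained QP exactly.
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x, y = sol[:n], sol[n:]

assert np.allclose(A @ x, b)               # primal feasibility
assert np.allclose(H @ x + A.T @ y, -c)    # stationarity
```

With inequality constraints, the interior-point algorithms of the paper solve a perturbed version of such a KKT system at every iteration, with barrier-type terms keeping the iterates strictly feasible.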
A Chebyshev–Davidson algorithm for large symmetric eigenvalue problems
Abstract

Cited by 4 (3 self)
A polynomial filtered Davidson-type algorithm is proposed for solving symmetric eigenproblems. The correction equation of the Davidson approach is replaced by a polynomial filtering step. The new approach has better global convergence and robustness properties than standard Davidson-type methods. A typical filter, and the one used in this paper, is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing both the number of steps required for convergence and the cost of orthogonalizations and restarts. Comparisons with the JDQR, JDCG, and LOBPCG methods are presented, as well as comparisons with the well-known ARPACK package.
Key words: polynomial filter, Davidson-type method, global convergence, Krylov subspace, correction equation, eigenproblem.
AMS subject classifications: 15A18, 15A23, 15A90, 65F15, 65F25, 65F50.
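The amplification effect of a Chebyshev filter is easy to demonstrate: scaled so that the unwanted part of the spectrum maps to [−1, 1], the polynomial stays bounded there while growing rapidly outside it. A small NumPy sketch on a diagonal test matrix, using the three-term Chebyshev recurrence (the interval endpoints a, b are assumed known here, whereas practical codes estimate them):

```python
import numpy as np

def cheb_filter(A, x, deg, a, b):
    # Apply the degree-`deg` Chebyshev polynomial in A, scaled so the
    # unwanted interval [a, b] maps to [-1, 1]; components of x along
    # eigenvectors with eigenvalues below `a` are strongly amplified.
    e, c = (b - a) / 2.0, (b + a) / 2.0
    y_prev, y = x, (A @ x - c * x) / e           # T_0 x and T_1 x
    for _ in range(2, deg + 1):                  # three-term recurrence
        y_prev, y = y, 2.0 * (A @ y - c * y) / e - y_prev
    return y

rng = np.random.default_rng(2)
n = 200
eigs = np.linspace(1.0, 100.0, n)
A = np.diag(eigs)                 # diagonal test matrix with known spectrum

x = rng.standard_normal(n)
y = cheb_filter(A, x, deg=30, a=5.0, b=100.0)
y /= np.linalg.norm(y)

# Nearly all of the filtered vector lies in the eigenspace below a = 5.
assert np.linalg.norm(y[eigs < 5.0]) > 0.99
```

In a Chebyshev–Davidson iteration, a vector filtered this way replaces the solution of the correction equation as the next candidate to append to the search subspace.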
Bounds for Eigenvalues of Matrix Polynomials
Lin. Alg. Appl., 2001
Abstract

Cited by 4 (1 self)
Upper and lower bounds are derived for the absolute values of the eigenvalues of a matrix polynomial (or matrix). The bounds are based on norms of the coefficient matrices and involve the inverses of the leading and trailing coefficient matrices. They generalize various existing bounds for scalar polynomials and single matrices. A variety of tools are used in the derivations, including block companion matrices, Gershgorin's theorem, the numerical radius, and associated scalar polynomials. Numerical experiments show that the bounds can be surprisingly sharp on practical problems.
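The block companion construction mentioned here can be sketched directly. The toy NumPy example below (random coefficient matrices, degree 2) forms the companion linearization of P(λ) = λ²A₂ + λA₁ + A₀, computes its eigenvalues, and verifies them against the trivial norm bound |λ| ≤ ‖C‖₂; the paper's bounds sharpen this kind of estimate using the coefficient norms directly.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# Block companion linearization of P(lam) = lam^2 A2 + lam A1 + A0,
# assuming the leading coefficient A2 is nonsingular.
A2inv = np.linalg.inv(A2)
C = np.block([
    [np.zeros((n, n)), np.eye(n)],
    [-A2inv @ A0,      -A2inv @ A1],
])
eigs = np.linalg.eigvals(C)     # the 2n eigenvalues of P

# Trivial norm-based upper bound: |lam| <= ||C||_2 for every eigenvalue.
assert np.all(np.abs(eigs) <= np.linalg.norm(C, 2) + 1e-10)

# Sanity check: P(lam) is (numerically) singular at each eigenvalue.
for lam in eigs:
    s = np.linalg.svd(lam**2 * A2 + lam * A1 + A0, compute_uv=False)
    assert s[-1] < 1e-8 * s[0]
```

Lower bounds on |λ| follow by the same device applied to the reversed polynomial λ²A₀ + λA₁ + A₂, which is why the trailing coefficient's inverse appears in the paper's bounds.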
JADAMILU: a software code for computing selected eigenvalues of large sparse symmetric matrices
2007
EFFICIENT PRECONDITIONED INNER SOLVES FOR INEXACT RAYLEIGH QUOTIENT ITERATION AND THEIR CONNECTIONS TO THE SINGLE-VECTOR JACOBI–DAVIDSON METHOD
Abstract

Cited by 2 (0 self)
We study inexact Rayleigh quotient iteration (IRQI) for computing a simple interior eigenpair of the generalized eigenvalue problem Av = λBv, providing new insights into a special type of preconditioner with “tuning” for the efficient iterative solution of the shifted linear systems that arise in this algorithm. We first give a new convergence analysis of IRQI, showing that locally cubic and quadratic convergence can be achieved for Hermitian and non-Hermitian problems, respectively, if the shifted linear systems are solved by a generic Krylov subspace method with a tuned preconditioner to a reasonably small fixed tolerance. We then refine the study by Freitag and Spence [Linear Algebra Appl., 428 (2008), pp. 2049–2060] on the equivalence of the inner solves of IRQI and the single-vector Jacobi–Davidson method where a full orthogonalization method with a tuned preconditioner is used as the inner solver. We also provide some new perspectives on the tuning strategy, showing that tuning is essentially needed only in the first inner iteration in the non-Hermitian case. Based on this observation, we propose a flexible GMRES algorithm with a special configuration in the first inner step, and show that this method is as efficient as GMRES with the tuned preconditioner.
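The outer iteration under study is classical Rayleigh quotient iteration; the paper's contribution concerns solving the shifted systems inexactly with tuned preconditioners. As a baseline, here is a minimal NumPy sketch of the exact-solve version for a standard Hermitian problem (the B = I case), illustrating the fast local convergence:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                 # Hermitian test matrix

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(15):
    rho = x @ A @ x                               # Rayleigh quotient shift
    r = A @ x - rho * x
    if np.linalg.norm(r) < 1e-12:                 # converged to an eigenpair
        break
    y = np.linalg.solve(A - rho * np.eye(n), x)   # exact shifted solve
    x = y / np.linalg.norm(y)

# The residual collapses in a handful of iterations (cubic local
# convergence in the Hermitian case).
rho = x @ A @ x
assert np.linalg.norm(A @ x - rho * x) < 1e-10
```

In IRQI, the `solve` above is replaced by a few steps of a preconditioned Krylov method; the "tuning" modifies the preconditioner so that these inexact solves do not destroy the convergence rates quoted in the abstract.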
Performance evaluation of eigensolvers in nanostructure computations
In Proc. IEEE/ACM HPCNano05 Workshop, 2006
Abstract

Cited by 2 (1 self)
We are concerned with the computation of electronic and optical properties of quantum dots. Using the Energy SCAN (ESCAN) method with empirical pseudopotentials, we compute interior eigenstates around the band gap which determine these properties. Numerically, this interior Hermitian eigenvalue problem poses several challenges, with respect to both accuracy and efficiency. Using these criteria, we evaluate several state-of-the-art preconditioned iterative eigensolvers on a range of CdSe quantum dots of various sizes. All the iterative eigensolvers seek the minimal eigenvalues of the folded operator with a reference shift in the band gap. The tested methods include standard Conjugate Gradient (CG)-based Rayleigh quotient minimization, Locally Optimal Block Preconditioned CG (LOBPCG), and two variants of the Jacobi–Davidson method: JDQMR and GD+1. Our experimental results show that the Jacobi–Davidson method is often faster than the CG-based method.
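The "folded operator" device mentioned here (the folded spectrum method) converts the interior eigenvalue problem into an extremal one: with a shift σ placed in the band gap, the eigenvalue of A closest to σ becomes the smallest eigenvalue of (A − σI)². A small dense NumPy illustration, with a random Hermitian matrix standing in for the Hamiltonian and a full eigendecomposition standing in for the iterative solvers compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
M = rng.standard_normal((n, n))
A = (M + M.T) / 2          # Hermitian stand-in for the Hamiltonian

sigma = 0.0                # reference shift, playing the role of a point
                           # inside the band gap
S = A - sigma * np.eye(n)
F = S @ S                  # folded operator (A - sigma I)^2

# The interior eigenvalue of A closest to sigma becomes the *smallest*
# eigenvalue of F, so extremal (minimization-based) eigensolvers apply.
x = np.linalg.eigh(F)[1][:, 0]
lam = x @ A @ x

w = np.linalg.eigvalsh(A)
assert abs(lam - w[np.argmin(np.abs(w - sigma))]) < 1e-6
```

The price of folding is that F is much worse conditioned than A near σ, which is why the choice of preconditioned eigensolver (CG-based minimization, LOBPCG, or Jacobi–Davidson variants) matters so much in these computations.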