Results 1 - 10 of 18
Nonlinear dimensionality reduction by locally linear embedding
 SCIENCE
, 2000
"... Many areas of science ..."
Krylov Subspace Techniques for Reduced-Order Modeling of Nonlinear Dynamical Systems
 Appl. Numer. Math
, 2002
Abstract

Cited by 50 (3 self)
Applying Krylov subspace techniques to adaptively extract accurate reduced-order models of large-scale nonlinear dynamical systems is a relatively open problem, and there has been considerable recent interest in developing such techniques. We focus on a bilinearization method, which extends Krylov subspace techniques for linear systems. In this approach, the nonlinear system is first approximated by a bilinear system through Carleman bilinearization. Then a reduced-order bilinear system is constructed in such a way that it matches a certain number of multimoments corresponding to the first few kernels of the Volterra-Wiener representation of the bilinear system. It is shown that the two-sided Krylov subspace technique matches significantly more multimoments than the corresponding one-sided technique.
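For background, the moment-matching mechanism that this bilinearization method extends can be sketched on a plain linear model c^T(sI - A)^{-1}b: a one-sided Krylov projection reproduces the first q moments m_k = c^T A^{-(k+1)} b. This is a minimal numerical sketch with an arbitrary SPD test matrix, not the bilinear multimoment algorithm of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # arbitrary SPD test matrix
b, c = rng.standard_normal(n), rng.standard_normal(n)

# Orthonormal basis V of the Krylov subspace span{A^-1 b, ..., A^-q b}
V = np.zeros((n, q))
w = np.linalg.solve(A, b)
for j in range(q):
    for i in range(j):                   # Gram-Schmidt against earlier columns
        w = w - (V[:, i] @ w) * V[:, i]
    V[:, j] = w / np.linalg.norm(w)
    w = np.linalg.solve(A, V[:, j])

# One-sided (Galerkin) projection of the full model onto range(V)
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c

def moments(A, b, c, k):
    m, x = [], b
    for _ in range(k):
        x = np.linalg.solve(A, x)        # x = A^-(j+1) b
        m.append(c @ x)
    return np.array(m)

print(moments(A, b, c, q))               # first q moments of the full model...
print(moments(Ar, br, cr, q))            # ...are reproduced by the reduced one
```

The two-sided variant discussed in the abstract would use a second basis built from A^{-T} and c, matching roughly twice as many moments for the same reduced order.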
A geometric theory for preconditioned inverse iteration III: A short and sharp convergence estimate for generalized eigenvalue problems
, 2003
Abstract

Cited by 33 (8 self)
In two previous papers by Neymeyr [Linear Algebra Appl. 322 (1-3) (2001) 61; 322 (1-3) (2001) 87], a sharp, but cumbersome, convergence rate estimate was proved for a simple preconditioned eigensolver, which computes the smallest eigenvalue together with the corresponding eigenvector of a symmetric positive definite matrix, using a preconditioned gradient minimization of the Rayleigh quotient. In the present paper, we discover and prove a much shorter and more elegant (but still sharp in the decisive quantities) convergence rate estimate of the same method that also holds for a generalized symmetric definite eigenvalue problem. The new estimate is simple enough to stimulate a search for a more straightforward proof technique that could be helpful to investigate such a practically important method as the locally optimal block preconditioned conjugate gradient eigensolver.
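The iteration being analyzed here, preconditioned gradient minimization of the Rayleigh quotient, is short enough to sketch. This is a toy standard (not generalized) eigenproblem; the nearly diagonal SPD matrix and the Jacobi preconditioner are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
S = rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1.0)) + 0.05 * (S + S.T)  # SPD, nearly diagonal
Tinv = 1.0 / np.diag(A)               # Jacobi preconditioner T ~ A^-1 (as a vector)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(300):
    rho = x @ A @ x                   # Rayleigh quotient (x has unit norm)
    x = x - Tinv * (A @ x - rho * x)  # preconditioned gradient step on the residual
    x /= np.linalg.norm(x)
rho = x @ A @ x

print(rho, np.linalg.eigvalsh(A)[0])  # the iterate reaches the smallest eigenvalue
```

The estimates in the paper bound how fast rho decreases per step in terms of the preconditioner quality and the gap between the two smallest eigenvalues.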
Tits. Newton-KKT interior-point methods for indefinite quadratic programming
 Comput. Optim. Appl
Abstract

Cited by 7 (1 self)
Two interior-point algorithms are proposed and analyzed for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the “primal” variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton) ...
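In the special case of an equality-constrained QP (no inequalities, hence none of the interior-point machinery of the paper), a single Newton step on the KKT conditions already solves the problem. This toy sketch with made-up data shows the kind of KKT system from which such primal and multiplier search directions are drawn:

```python
import numpy as np

# Equality-constrained QP:  min 1/2 x^T H x + g^T x   s.t.  A x = b
H = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite here, for simplicity
g = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Stationarity H x + g + A^T lam = 0 and feasibility A x = b form one linear system
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])      # KKT matrix
sol = np.linalg.solve(K, np.concatenate([-g, b]))
x, lam = sol[:2], sol[2:]
print(x, A @ x)                                      # x = [0.2 0.8], A x = [1.]
```

With indefinite H and inequality constraints, the paper's algorithms instead take damped Newton (or quasi-Newton) steps on a perturbed version of this system.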
A two-directional Arnoldi process and its application to parametric . . .
 JOURNAL OF COMPUTATIONAL AND APPLIED
, 2009
JADAMILU: a software code for computing selected eigenvalues of large sparse symmetric matrices
, 2007
Bounds for Eigenvalues of Matrix Polynomials
 Lin. Alg. Appl
, 2001
Abstract

Cited by 4 (1 self)
Upper and lower bounds are derived for the absolute values of the eigenvalues of a matrix polynomial (or matrix). The bounds are based on norms of the coefficient matrices and involve the inverses of the leading and trailing coefficient matrices. They generalize various existing bounds for scalar polynomials and single matrices. A variety of tools are used in the derivations, including block companion matrices, Gershgorin's theorem, the numerical radius, and associated scalar polynomials. Numerical experiments show that the bounds can be surprisingly sharp on practical problems.
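The block companion construction mentioned in the abstract can be sketched for a quadratic matrix polynomial. This is a minimal numerical illustration with arbitrary random coefficients; the only bound checked here is the crude "spectral radius is at most any operator norm" inequality, which the paper's bounds refine considerably:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 4
A0, A1 = rng.standard_normal((k, k)), rng.standard_normal((k, k))
A2 = np.eye(k) + 0.1 * rng.standard_normal((k, k))   # leading coefficient, invertible

# Monicize and linearize: the eigenvalues of P(z) = z^2 A2 + z A1 + A0
# are the eigenvalues of the 2k-by-2k block companion matrix C.
B1, B0 = np.linalg.solve(A2, A1), np.linalg.solve(A2, A0)
C = np.block([[-B1, -B0],
              [np.eye(k), np.zeros((k, k))]])
eigs = np.linalg.eigvals(C)

# Sanity checks: P(z) is (numerically) singular at each computed eigenvalue,
# and every eigenvalue obeys |z| <= ||C||_2.
for z in eigs:
    P = z**2 * A2 + z * A1 + A0
    print(np.linalg.svd(P, compute_uv=False)[-1])    # smallest singular value, ~0
print(max(abs(eigs)) <= np.linalg.norm(C, 2))        # True
```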
A Chebyshev-Davidson algorithm for large symmetric eigenvalue problems
Abstract

Cited by 4 (3 self)
A polynomial filtered Davidson-type algorithm is proposed for solving symmetric eigenproblems. The correction-equation of the Davidson approach is replaced by a polynomial filtering step. The new approach has better global convergence and robustness properties when compared with standard Davidson-type methods. A typical filter, the one used in this paper, is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing the number of steps required for convergence and the cost resulting from orthogonalizations and restarts. Comparisons with the JDQR, JDCG and LOBPCG methods are presented, as well as comparisons with the well-known ARPACK package. Key words. Polynomial filter, Davidson-type method, global convergence, Krylov subspace, correction-equation, eigenproblem. AMS subject classifications. 15A18, 15A23, 15A90, 65F15, 65F25, 65F50
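The core of such a filter, a three-term Chebyshev recurrence that stays bounded on the unwanted interval [a, b] while growing rapidly on eigencomponents below it, can be sketched as follows. This is an illustrative unscaled variant on a toy diagonal matrix (the interval and degree are arbitrary choices, not the scaled filter of an actual Chebyshev-Davidson code):

```python
import numpy as np

def cheb_filter(A, x, m, a, b):
    """Apply T_m((A - c I) / e) to x, where [a, b] maps to [-1, 1]:
    components with eigenvalues in [a, b] stay bounded (|T_m| <= 1 there),
    while components below a are amplified like cosh(m * arccosh(|t|))."""
    e, c = (b - a) / 2.0, (b + a) / 2.0
    y_prev, y = x, (A @ x - c * x) / e           # degrees 0 and 1
    for _ in range(2, m + 1):                    # three-term recurrence up to degree m
        y_prev, y = y, 2.0 * (A @ y - c * y) / e - y_prev
    return y / np.linalg.norm(y)

A = np.diag(np.arange(1.0, 11.0))        # toy spectrum 1, 2, ..., 10
x = np.ones(10) / np.sqrt(10.0)          # equal weight on every eigenvector
y = cheb_filter(A, x, 20, 2.0, 10.0)     # damp [2, 10], amplify eigenvalue 1
print(abs(y[0]))                         # ~1: y is nearly the lowest eigenvector
```

In the Davidson-type algorithm, vectors filtered this way replace the solutions of the correction-equation when expanding the search subspace.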
Parallel, Multigrain Iterative Solvers for Hiding Network Latencies on MPP's and Networks of Clusters
, 2003
Abstract

Cited by 2 (0 self)
Parallel iterative solvers are often the only means of solving large linear systems and eigenproblems.
Performance evaluation of eigensolvers in nanostructure computations
 In Proc. IEEE/ACM HPCNano05 Workshop
, 2006
Abstract

Cited by 2 (1 self)
Abstract — We are concerned with the computation of electronic and optical properties of quantum dots. Using the Energy SCAN (ESCAN) method with empirical pseudopotentials, we compute the interior eigenstates around the band gap which determine their properties. Numerically, this interior Hermitian eigenvalue problem poses several challenges, with respect to both accuracy and efficiency. Using these criteria, we evaluate several state-of-the-art preconditioned iterative eigensolvers on a range of CdSe quantum dots of various sizes. All the iterative eigensolvers seek the minimal eigenvalues of the folded operator with a reference shift in the band gap. The tested methods include standard Conjugate-Gradient (CG) based Rayleigh-quotient minimization, Locally Optimal Block-Preconditioned CG (LOBPCG), and two variants of the Jacobi-Davidson method: JDQMR and GD+1. Our experimental results show that the Jacobi-Davidson method is often faster than the CG-based method.
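The folded-operator idea mentioned in the abstract — the eigenvalue of A closest to a shift sigma becomes the minimal eigenvalue of (A - sigma I)^2 — can be sketched with a dense toy operator whose spectrum is chosen by hand (ESCAN, of course, applies the folded operator matrix-free at much larger scale):

```python
import numpy as np

rng = np.random.default_rng(1)
d = np.arange(-4.0, 5.0)                 # known spectrum: -4, -3, ..., 4
Q, _ = np.linalg.qr(rng.standard_normal((9, 9)))
A = Q @ np.diag(d) @ Q.T                 # Hermitian operator with that spectrum

sigma = 1.3                              # shift inside the spectrum; 1.0 is closest
F = (A - sigma * np.eye(9)) @ (A - sigma * np.eye(9))   # folded operator

w, V = np.linalg.eigh(F)
v = V[:, 0]                              # eigenvector of F's minimal eigenvalue
print(v @ A @ v)                         # ~1.0: the eigenvalue of A nearest sigma
```

This is what lets exterior-eigenvalue solvers such as CG minimization, LOBPCG, and Jacobi-Davidson reach interior states around the band gap.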