Results 1–10 of 10
Automatic preconditioning by limited memory quasi-Newton updating
SIAM J. Optim.
"... The paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations Ax = bi with di erent right hand side vectors, or for solving a sequence of slowly varying systems Akx = bk. The preconditioner has the form of a limited memory quasiNewton m ..."
Abstract

Cited by 30 (2 self)
The paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations Ax = b_i with different right-hand-side vectors, or for solving a sequence of slowly varying systems A_k x = b_k. The preconditioner has the form of a limited memory quasi-Newton matrix and is generated using information from the CG iteration. The automatic preconditioner does not require explicit knowledge of the coefficient matrix A and is therefore suitable for problems where only products of A times a vector can be computed. Numerical experiments indicate that the preconditioner has most to offer when these matrix-vector products are expensive to compute, and when low accuracy in the solution is required. The effectiveness of the preconditioner is tested within a Hessian-free Newton method for optimization, and by solving certain linear systems arising in finite element models.
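The idea described in the abstract — harvesting pairs (s, y) = (αp, αAp) from a matrix-free CG run and reusing them as a limited-memory quasi-Newton approximation of A⁻¹ for the next right-hand side — can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names `pcg` and `lbfgs_apply` are ours, and the two-loop recursion is the standard L-BFGS form.

```python
import numpy as np

def pcg(A_mv, b, M_inv=None, tol=1e-8, maxit=200):
    """Preconditioned CG that touches A only through products A_mv(v).
    Collects (s, y) pairs from the iteration for later reuse."""
    n = b.size
    x = np.zeros(n)
    r = b.copy()
    z = M_inv(r) if M_inv else r.copy()
    p = z.copy()
    pairs = []
    for _ in range(maxit):
        Ap = A_mv(p)
        alpha = (r @ z) / (p @ Ap)
        pairs.append((alpha * p, alpha * Ap))   # s = alpha*p, y = A s
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            break
        z_new = M_inv(r_new) if M_inv else r_new.copy()
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, pairs

def lbfgs_apply(pairs, v, m=5):
    """Two-loop recursion: apply the limited-memory quasi-Newton
    approximation of A^{-1}, built from the last m CG pairs, to v."""
    S = pairs[-m:]
    q = v.copy()
    alphas = []
    for s, y in reversed(S):            # newest pair first
        a = (s @ q) / (s @ y)
        alphas.append(a)
        q -= a * y
    s, y = S[-1]
    q *= (s @ y) / (y @ y)              # initial scaling of H0
    for (s, y), a in zip(S, reversed(alphas)):  # oldest pair first
        b_ = (y @ q) / (s @ y)
        q += (a - b_) * s
    return q
```

A second solve with a nearby right-hand side can then pass `M_inv=lambda r: lbfgs_apply(pairs, r)` to `pcg`, which is the reuse pattern the abstract describes.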
CRPC Research into Linear Algebra Software for High Performance Computers
, 1994
"... In this paper we look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for highperformance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library ..."
Abstract

Cited by 4 (2 self)
In this paper we look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for high-performance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library for performing dense and banded linear algebra computations, and was designed to run efficiently on high-performance computers. We focus on the design of the distributed memory version of LAPACK, and on an object-oriented interface to LAPACK. The templates project aims at making the task of developing sparse linear algebra software simpler and easier. Reusable software templates are provided that the user can then customize to modify and optimize a particular algorithm, and hence build more complex applications. ARPACK is a software package for solving large-scale eigenvalue problems, and is based on an implicitly restarted variant of the Arnoldi scheme. The paper focuses on issues impact...
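ARPACK's implicitly restarted Arnoldi/Lanczos iteration is the engine behind SciPy's sparse eigensolvers, so a minimal modern usage looks like this (the 1-D Laplacian test matrix is our own choice, not taken from the paper):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D discrete Laplacian: a standard large, sparse, symmetric test matrix.
n = 500
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# eigsh wraps ARPACK's implicitly restarted Lanczos method; shift-invert
# around sigma=0 targets the smallest eigenvalues.
vals, vecs = eigsh(A, k=4, sigma=0)

# For this matrix the eigenvalues are known analytically:
# 2 - 2*cos(k*pi/(n+1)), k = 1..n.
exact = 2.0 - 2.0 * np.cos(np.arange(1, 5) * np.pi / (n + 1))
```

Shift-invert is the usual way to make ARPACK effective for interior or smallest eigenvalues, at the cost of one sparse factorization.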
Serial and Parallel Krylov Methods for Implicit Finite Difference Schemes Arising in Multivariate Option Pricing
, 2001
"... This paper investigates computational and implementation issues for the valuation of options on three underlying assets, focusing on the use of the finite difference methods. We demonstrate that implicit methods, which have good convergence and stability properties, can now be implemented efficientl ..."
Abstract

Cited by 1 (0 self)
This paper investigates computational and implementation issues for the valuation of options on three underlying assets, focusing on the use of finite difference methods. We demonstrate that implicit methods, which have good convergence and stability properties, can now be implemented efficiently, thanks to recent developments in techniques for solving large, sparse linear systems. In the trivariate option valuation problem, we use nonstationary iterative methods (also called Krylov methods) to solve the large, sparse linear systems arising from the implicit methods. Krylov methods are investigated in both serial and parallel implementations. Computational results show that the parallel implementation is particularly efficient when a fine spatial grid is needed.
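The core computation the abstract describes — an implicit finite-difference time step reduced to a sparse linear solve handled by a Krylov method — can be illustrated on a 1-D diffusion model problem (our own toy example, not the paper's trivariate pricing grid):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import bicgstab

# One backward-Euler (implicit) step for u_t = u_xx on an interior grid:
# solve (I - dt/dx^2 * L) u_new = u_old with a Krylov method.
n, dt = 200, 1e-3
dx = 1.0 / (n + 1)
L = diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
A = identity(n, format="csr") - (dt / dx**2) * L

x = np.linspace(dx, 1.0 - dx, n)
u = np.sin(np.pi * x)                  # initial condition
u_new, info = bicgstab(A, u)           # info == 0 signals convergence
```

In a real pricing code the same solve is repeated at every time step, which is why an efficient (and, as the paper notes, parallelizable) Krylov solver dominates the overall cost.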
Chapter XI
"... Introduction In theory and practical applications one may encounter eigenproblems that are more complicated than the standard and generalized eigenproblems discussed in previous chapters. In this chapter we will discuss a particular class of eigenproblems: polynomial eigenproblems, with focus on th ..."
Abstract
Introduction In theory and practical applications one may encounter eigenproblems that are more complicated than the standard and generalized eigenproblems discussed in previous chapters. In this chapter we will discuss a particular class of eigenproblems: polynomial eigenproblems, with focus on the quadratic case. Furthermore, we will discuss briefly a so-called constrained eigenproblem. In Section 56 we will pay attention to the important class of quadratic eigenproblems, with a small sidestep to higher-order polynomial eigenproblems. These quadratic eigenproblems are of the form (λ²M + λC + K)x = 0, (55.1) where M, C, and K are given square matrices of order n. Solutions λ, x, with λ a scalar and x ≠ 0 an n-vector, are the eigenvalues and eigenvectors of the given problem. There are three basic approaches for the solution of a quadratic eigenproblem: • Rewrite the problem as a generalized eigenvalue problem of order 2n, see Section 56. A drawback of this approach is that the di...
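The first approach mentioned — rewriting (λ²M + λC + K)x = 0 as a generalized eigenproblem of order 2n — is usually done with a companion linearization: with z = [x; λx], one solves A z = λ B z where A = [[0, I], [-K, -C]] and B = [[I, 0], [0, M]]. A dense toy sketch (our own helper name `quadeig`; the chapter is concerned with the large sparse case):

```python
import numpy as np
from scipy.linalg import eig

def quadeig(M, C, K):
    """Solve (lam^2 M + lam C + K) x = 0 via the first companion
    linearization A z = lam B z of order 2n, with z = [x; lam*x]."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    lams, V = eig(A, B)
    return lams, V[:n, :]       # top block of each z carries x

# Small random example with nonsingular M (all eigenvalues finite).
rng = np.random.default_rng(1)
n = 4
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
lams, X = quadeig(M, C, K)
```

The block rows of the linearization reproduce the original problem: the first row says y = λx, and substituting into the second, −Kx − Cy = λMy, gives back (λ²M + λC + K)x = 0.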
Exploring the performance limits of simultaneous multithreading for memory intensive applications
2. Specification
"... nag sparse sym chol sol (f11jcc) solves a real sparse symmetric system of linear equations, represented in symmetric coordinate storage format, using a conjugate gradient or Lanczos method, with incomplete Cholesky preconditioning. ..."
Abstract
nag_sparse_sym_chol_sol (f11jcc) solves a real sparse symmetric system of linear equations, represented in symmetric coordinate storage format, using a conjugate gradient or Lanczos method, with incomplete Cholesky preconditioning.