Results 1-10 of 17
Lanczos-type solvers for nonsymmetric linear systems of equations
Acta Numer., 1997
Abstract

Cited by 37 (11 self)
Among the iterative methods for solving large linear systems with a sparse (or, possibly, structured) nonsymmetric matrix, those based on the Lanczos process feature short recurrences for generating the Krylov space, which means low cost and low memory requirements. This review article introduces the reader not only to the basic forms of the Lanczos process and some of the related theory, but also describes in detail a number of solvers based on it, including those considered the most efficient. Possible breakdowns of the algorithms and ways to cure them by look-ahead are also discussed.
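The short-recurrence property this abstract highlights can be seen in the biconjugate gradient (BiCG) method, one of the Lanczos-based solvers such reviews cover. The plain-Python sketch below (the toy matrix, tolerance, and helper names are illustrative choices here, not taken from the paper) keeps only a fixed handful of vectors between iterations, so memory stays O(n) no matter how many steps are run:

```python
# Biconjugate gradient (BiCG): a Lanczos-based solver for nonsymmetric
# systems Ax = b. Only a few vectors (x, r, rs, p, ps) are carried from
# one iteration to the next -- the "short recurrences" -- so memory does
# not grow with the iteration count.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def bicg(A, b, tol=1e-10, maxiter=100):
    n = len(b)
    x = [0.0] * n
    r = list(b)               # residual b - A x  (x = 0 initially)
    rs = list(b)              # shadow residual for the transposed system
    p, ps = list(r), list(rs)
    rho = dot(rs, r)
    At = transpose(A)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        # rho or dot(ps, Ap) hitting zero is the "breakdown" that
        # look-ahead variants are designed to step past.
        alpha = rho / dot(ps, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        Atps = matvec(At, ps)
        rs = [ri - alpha * ai for ri, ai in zip(rs, Atps)]
        rho_new = dot(rs, r)
        beta = rho_new / rho
        rho = rho_new
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        ps = [ri + beta * pi for ri, pi in zip(rs, ps)]
    return x

# Toy nonsymmetric system; the exact solution is x = [0.1, 0.6].
A = [[4.0, 1.0], [2.0, 3.0]]
b = [1.0, 2.0]
x = bicg(A, b)
```

Note the two inner products `rho` and `dot(ps, Ap)` can vanish even for a nonsingular system; those are the breakdowns the abstract mentions, and look-ahead exists precisely to get past them.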
Automatic preconditioning by limited-memory quasi-Newton updating
SIAM J. Optim.
Abstract

Cited by 34 (2 self)
The paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations Ax = b_i with different right-hand-side vectors, or for solving a sequence of slowly varying systems A_k x = b_k. The preconditioner has the form of a limited-memory quasi-Newton matrix and is generated using information from the CG iteration. The automatic preconditioner does not require explicit knowledge of the coefficient matrix A and is therefore suitable for problems where only products of A times a vector can be computed. Numerical experiments indicate that the preconditioner has most to offer when these matrix-vector products are expensive to compute and when low accuracy in the solution is required. The effectiveness of the preconditioner is tested within a Hessian-free Newton method for optimization, and by solving certain linear systems arising in finite element models.
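The paper's quasi-Newton preconditioner is built from CG iterates; the sketch below shows only the generic preconditioned-CG loop such a preconditioner plugs into, with a simple Jacobi (diagonal) preconditioner standing in for the limited-memory quasi-Newton matrix (the toy matrix and all names are illustrative choices here, not from the paper). Note that the loop touches A only through matrix-vector products, the property the abstract highlights:

```python
# Preconditioned conjugate gradient (PCG) for an SPD system Ax = b.
# The coefficient matrix enters only through matvec(A, v), so any
# preconditioner that can merely be *applied* to a vector (here:
# Jacobi, i.e. divide by the diagonal) can take the place of the
# limited-memory quasi-Newton matrix described in the abstract.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def pcg(A, b, tol=1e-10, maxiter=100):
    n = len(b)
    diag = [A[i][i] for i in range(n)]          # Jacobi preconditioner
    apply_prec = lambda r: [ri / di for ri, di in zip(r, diag)]
    x = [0.0] * n
    r = list(b)                                 # residual (x = 0 initially)
    z = apply_prec(r)
    p = list(z)
    rz = dot(r, z)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = apply_prec(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Toy SPD system; the exact solution is x = [1/11, 7/11].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b)
```

Swapping in a better preconditioner means replacing only `apply_prec`; the rest of the loop, and in particular its matrix-free access to A, is unchanged.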
CRPC Research into Linear Algebra Software for High Performance Computers
, 1994
Abstract

Cited by 4 (2 self)
In this paper we look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for high-performance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library for performing dense and banded linear algebra computations, and was designed to run efficiently on high-performance computers. We focus on the design of the distributed memory version of LAPACK, and on an object-oriented interface to LAPACK. The templates project aims at making the task of developing sparse linear algebra software simpler and easier. Reusable software templates are provided that the user can customize to modify and optimize a particular algorithm, and hence build more complex applications. ARPACK is a software package for solving large-scale eigenvalue problems, and is based on an implicitly restarted variant of the Arnoldi scheme. The paper focuses on issues impact...
Analyzing Data Structures for Parallel Sparse Direct Solvers: Pivoting and Fill-In
Proceedings of the Sixth Workshop on Compilers for Parallel Computers (CPC'96), special issue of the volume Konferenzen des Forschungszentrums Jülich, Vol. 21
, 1996
Abstract

Cited by 2 (2 self)
This paper addresses the problem of parallelizing sparse direct methods for the solution of linear systems on distributed memory multiprocessors. Sparse direct solvers include pivoting operations and suffer from fill-in, problems that turn efficient parallelization into a challenging task. We present data structures for storing the sparse matrices that allow both problems to be handled efficiently. These data structures have been evaluated on a Cray T3D, implementing in particular LU and QR factorizations as examples of direct solvers. Each of the data representations considered requires handling indirections for data accesses, pointer referencing, and dynamic data creation. All of...
Serial and Parallel Krylov Methods for Implicit Finite Difference Schemes Arising in Multivariate Option Pricing
, 2001
Abstract

Cited by 1 (0 self)
This paper investigates computational and implementation issues in the valuation of options on three underlying assets, focusing on the use of finite difference methods. We demonstrate that implicit methods, which have good convergence and stability properties, can now be implemented efficiently thanks to recent developments in techniques for solving large, sparse linear systems. In the trivariate option valuation problem, we use nonstationary iterative methods (also called Krylov methods) to solve the large, sparse linear systems that arise in implicit methods. Krylov methods are investigated in both serial and parallel implementations. Computational results show that the parallel implementation is particularly efficient when a fine spatial grid is needed.
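As a minimal illustration of the setup this abstract describes (one spatial dimension rather than the paper's three; the grid sizes, time step, and plain-CG solver are stand-ins chosen here, not the paper's scheme), a backward-Euler step for the heat equation u_t = u_xx leads to a sparse SPD system (I + lam*T) u_new = u_old, with T the 1-2-1 second-difference stencil. A Krylov method solves it using nothing but stencil applications, so the matrix is never formed:

```python
import math

# One backward-Euler (implicit) time step for u_t = u_xx on a uniform
# interior grid with zero boundary values: solve (I + lam*T) u_new = u_old,
# where T applies the tridiagonal 1D Laplacian stencil [-1, 2, -1] and
# lam = dt/dx**2. The system matrix is SPD, so plain CG applies, and it
# is accessed only through apply_system (a matrix-free matvec).

def apply_system(u, lam):
    n = len(u)
    out = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        out.append(u[i] + lam * (2.0 * u[i] - left - right))
    return out

def cg(b, lam, tol=1e-10, maxiter=1000):
    n = len(b)
    x = [0.0] * n
    r = list(b)                  # residual (x = 0 initially)
    p = list(r)
    rr = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = apply_system(p, lam)
        alpha = rr / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new < tol * tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

n, dt = 50, 1e-3
dx = 1.0 / (n + 1)
lam = dt / dx ** 2
u_old = [math.sin(math.pi * (i + 1) * dx) for i in range(n)]  # initial profile
u_new = cg(u_old, lam)
# How well u_new satisfies (I + lam*T) u_new = u_old:
residual = max(abs(ai - bi) for ai, bi in
               zip(apply_system(u_new, lam), u_old))
```

Because the implicit step only ever needs the stencil application, the same structure carries over to the trivariate case, where the stencil couples neighbours in three directions and the system becomes too large for direct factorization.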
Abstract
An XFEM/level set approach to modelling surface/interface effects and to computing the size-dependent effective properties of nanocomposites
Nutrient Consumption in Biofilms
"... The final manuscript will be submitted in TeX format ..."