Results 1-9 of 9
An Evaluation of Software for Computing Eigenvalues of Sparse Nonsymmetric Matrices
, 1996
Abstract

Cited by 31 (6 self)
The past few years have seen a significant increase in research into numerical methods for computing selected eigenvalues of large sparse nonsymmetric matrices. This research has begun to lead to the development of high-quality mathematical software. The software includes codes that implement subspace iteration methods, Arnoldi-based algorithms, and nonsymmetric Lanczos methods. The aim of the current study is to evaluate this state-of-the-art software. In this study we consider subspace iteration and Arnoldi codes. We look at the key features of the codes and their ease of use. Then, using a wide range of test problems, we compare the performance of the codes in terms of storage requirements, execution times, accuracy, and reliability. We also consider their suitability for solving large-scale industrial problems. Based on ...
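As context for the codes evaluated above, subspace iteration in its simplest form can be sketched as follows. This is a minimal illustration of the underlying technique, not any of the evaluated packages; the function name and parameters are our own:

```python
import numpy as np

def subspace_iteration(A, m, iters=200):
    """Basic subspace iteration: approximate the m dominant eigenvalues
    of A by repeatedly applying A to an orthonormal basis and
    re-orthonormalizing with a QR factorization."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((n, m)))  # random starting basis
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)                    # multiply and re-orthonormalize
    # Rayleigh-Ritz projection onto span(Q) gives the eigenvalue estimates
    return np.linalg.eigvals(Q.T @ A @ Q)

# Toy example with known eigenvalues 5, 4, 3, 2, 1
A = np.diag([5.0, 4.0, 3.0, 2.0, 1.0])
vals = np.sort(subspace_iteration(A, 2).real)[::-1]   # should approach 5 and 4
```

Production codes of this family add deflation, locking, and Chebyshev acceleration on top of this basic loop.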
On Approximate-Inverse Preconditioners
, 1995
Abstract

Cited by 25 (0 self)
We investigate the use of sparse approximate-inverse preconditioners for the iterative solution of unsymmetric linear systems of equations. Such methods are of particular interest because of the considerable scope for parallelization. We propose a number of enhancements which may improve their performance. When run in a sequential environment, these methods can perform unfavourably when compared with other techniques. However, they can be successful when other methods fail, and simulations indicate that they can be competitive when considered in a parallel environment. 1 Current reports available by anonymous ftp from joyousgard.cc.rl.ac.uk (internet 130.246.9.91) in the directory "pub/reports". Computing and Information Systems Department, Atlas Centre, Rutherford Appleton Laboratory, Oxfordshire OX11 0QX, England. June 23, 1995. 1 Introduction Suppose that A is a real n by n unsymmetric matrix, whose columns are a_j, 1 ≤ j ≤ n. We are principally concerned wit...
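The column-wise construction that gives such preconditioners their scope for parallelization can be sketched as follows. This is a minimal dense illustration of the general idea, assuming a prescribed sparsity pattern; the names are hypothetical and this is not the authors' code:

```python
import numpy as np

def spai_columns(A, pattern):
    """Sketch of a sparse approximate-inverse preconditioner M ~ A^{-1}:
    each column m_j minimizes ||A m_j - e_j||_2 over a fixed sparsity
    pattern (supplied as a list of allowed row-index sets). The n small
    least-squares problems are mutually independent, which is where the
    parallelism comes from."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = pattern[j]                      # allowed nonzero rows of column j
        e = np.zeros(n)
        e[j] = 1.0
        mj, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = mj
    return M

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
# Illustrative pattern: allow the same nonzeros as the columns of A
pattern = [np.nonzero(A[:, j])[0] for j in range(3)]
M = spai_columns(A, pattern)                # then A M is close to the identity
```

A production code would solve each small problem on sparse submatrices and possibly grow the pattern adaptively; the dense `lstsq` here only shows the structure of the computation.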
Implicitly restarted Arnoldi methods and eigenvalues of the discretized Navier-Stokes equations.
 SIAM J. Matrix Anal. Appl
, 1997
Abstract

Cited by 22 (3 self)
We are concerned with finding a few eigenvalues of the large sparse nonsymmetric generalized eigenvalue problem Ax = λBx that arises in stability studies of incompressible fluid flow. The matrices have a block structure that is typical of mixed finite-element discretizations for such problems. We examine the use of shift-invert and Cayley transformations in conjunction with the implicitly restarted Arnoldi method, along with using a semi-inner product induced by B and purification techniques. Numerical results are presented for some model problems arising from the ENTWIFE finite-element package. Our conclusion is that, with careful implementation, implicitly restarted Arnoldi methods are reliable for linear stability analysis. AMS classification: Primary 65F15; Secondary 65F50. Key Words: eigenvalues, sparse nonsymmetric matrices, Arnoldi's method. 1 Introduction Mixed finite-element discretizations of time-dependent equations modelling incompressible fluid flow problems ty...
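The shift-invert transformation at the heart of this approach can be checked on a small dense example. This is a sketch of the spectral mapping only, not the authors' implementation; the matrices here are random stand-ins for a discretized problem:

```python
import numpy as np

# Small stand-in generalized problem A x = lambda B x (random, purely
# illustrative). The shift-invert operator C = (A - sigma B)^{-1} B maps
# each eigenvalue lambda to mu = 1/(lambda - sigma), so eigenvalues
# closest to the shift sigma become dominant, and Arnoldi applied to C
# finds them first.
rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # nonsingular "mass" matrix

lam = np.linalg.eigvals(np.linalg.solve(B, A))      # eigenvalues of the pencil
sigma = 0.5
C = np.linalg.solve(A - sigma * B, B)               # shift-invert operator
mu = np.linalg.eigvals(C)                           # transformed spectrum
target = 1.0 / (lam - sigma)                        # predicted mapping
```

In the sparse setting one never forms C explicitly: each Arnoldi step applies B and solves one system with the factored matrix A − σB.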
A Primal-Dual Algorithm for Minimizing a Non-Convex Function Subject to Bound and Linear Equality Constraints
, 1996
Abstract

Cited by 16 (0 self)
A new primal-dual algorithm is proposed for the minimization of non-convex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primal-dual step and a Newton-like step in order to ensure descent on a suitable merit function. Convergence of a well-defined subsequence of iterates is proved from arbitrary starting points. Algorithmic variants are discussed and preliminary numerical results presented. 1 IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA. Email: arconn@watson.ibm.com 2 Department for Computation and Information, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England, EU. Email: nimg@letterbox.rl.ac.uk 3 Current reports available by anonymous ftp from joyousgard.cc.rl.ac.uk (internet 130.246.9.91) in the directory "pub/reports". 4 Department of Mathematics, Facultés Universitaires ND de la Paix, 61, rue de Bruxelles, B-5000 Namur, Belgium, EU. Email: pht@ma...
Performance Issues for Frontal Schemes on a Cache-Based High Performance Computer
, 1997
Abstract

Cited by 8 (7 self)
We consider the implementation of a frontal code for the solution of large sparse unsymmetric linear systems on a high performance computer where data must be in the cache before arithmetic operations can be performed on it. In particular, we show how we can modify the frontal solution algorithm to enhance the proportion of arithmetic operations performed using Level 3 BLAS, thus enabling better reuse of data in the cache. We illustrate the effects of this on Silicon Graphics Power Challenge machines using problems which arise in real engineering and industrial applications. Keywords: unsymmetric sparse matrices, frontal solver, direct methods, finite elements, BLAS, computational kernels. AMS(MOS) subject classifications: 65F05, 65F50. 1 Current reports available by anonymous ftp from matisa.cc.rl.ac.uk in the directory pub/reports. This report is in file cdsRAL97001.ps.gz. 2 Address: AEA Technology, Harwell, Didcot, Oxon OX11 0RA, England. Department for Computation and Informa...
Iterative Methods for Ill-Conditioned Linear Systems From Optimization
, 1998
Abstract

Cited by 5 (1 self)
Preconditioned conjugate-gradient methods are proposed for solving the ill-conditioned linear systems which arise in penalty and barrier methods for nonlinear minimization. The preconditioners are chosen so as to isolate the dominant cause of ill conditioning. The methods are stabilized using a restricted form of iterative refinement. Numerical results illustrate the approaches considered. 1 Email: n.gould@rl.ac.uk 2 Current reports available from "http://www.rl.ac.uk/departments/ccd/numerical/reports/reports.html". Department for Computation and Information, Atlas Centre, Rutherford Appleton Laboratory, Oxfordshire OX11 0QX. August 26, 1998. 1 Introduction Let A and H be, respectively, full-rank m by n (m ≤ n) and symmetric n by n real matrices. Suppose furthermore that any nonzero coefficients in this data are modest, that is, the data is O(1). We consider the iterative solution of the linear system (H + A^T D^{-1} A)x = b (1.1), where b is modest an...
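The preconditioned conjugate-gradient iteration that this work builds on can be sketched in textbook form. This is a generic illustration, not the paper's stabilized variant; the diagonal test problem and names are our own:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=200):
    """Textbook preconditioned conjugate gradients for A x = b with
    symmetric positive-definite A; M_inv applies an approximation of
    A^{-1} to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    z = M_inv(r)                       # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # new search direction
        rz = rz_new
    return x

# Ill-conditioned SPD test matrix: diagonal entries spread over 12 orders
# of magnitude. A diagonal preconditioner removes this dominant cause of
# ill conditioning, echoing the strategy the abstract describes.
d = np.array([1e8, 1e4, 1.0, 1e-4])
A = np.diag(d)
b = np.ones(4)
x = pcg(A, b, lambda r: r / d)
```

Without the preconditioner, plain CG on this system would need many iterations and lose accuracy; with it, convergence here is immediate.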
MA62: A frontal code for sparse positive-definite symmetric systems from finite-element applications.
, 1997
Abstract

Cited by 4 (3 self)
We describe the design, implementation, and performance of a frontal code for the solution of large sparse symmetric systems of linear finite-element equations. The code is intended primarily for positive-definite systems since numerical pivoting is not performed. The resulting software package, MA62, will be included in Release 13 of the Harwell Subroutine Library (HSL). We illustrate the performance of our new code on a range of problems arising from real engineering and industrial applications. The performance of the code is compared with that of the HSL general frontal solver MA42 and with other positive-definite codes from the Harwell Subroutine Library. Keywords: sparse symmetric linear equations, symmetric frontal method, Gaussian elimination, finite-element equations, Level 3 BLAS. AMS(MSC 1991) subject classifications: 65F05, 65F50. Running title: Symmetric frontal code. Current reports available by anonymous ftp from matisa.cc.rl.ac.uk (internet 130.246.8.22) in the direct...
Numerical experience with a reduced Hessian method for large-scale constrained optimization
 Research Report (in preparation), EE and CS, Northwestern
, 1993
Abstract

Cited by 1 (0 self)
The reduced Hessian SQP algorithm presented in [2] is developed in this paper into a practical method for large-scale optimization. The novelty of the algorithm lies in the incorporation of a correction vector that approximates the cross term Z^T W Y p_Y. This improves the stability and robustness of the algorithm without increasing its computational cost. The paper studies how to implement the algorithm efficiently, and presents a set of tests illustrating its numerical performance. An analytic example, showing the benefits of the correction term, is also presented.