Results 1-10 of 17
Continuation and Path Following
, 1992
"... CONTENTS 1 Introduction 1 2 The Basics of PredictorCorrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 PiecewiseLinear Methods 34 6 Complexity 41 7 Available Software 44 References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful ..."
Abstract

Cited by 70 (6 self)
CONTENTS: 1 Introduction; 2 The Basics of Predictor-Corrector Path Following; 3 Aspects of Implementations; 4 Applications; 5 Piecewise-Linear Methods; 6 Complexity; 7 Available Software; References. 1. Introduction. Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated works as those of Poincaré (1881-1886), Klein (1882-1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy invariance of degree. The use of deformations to solve nonlinear systems of equations may be traced back at least to Lahaye (1934). The classical embedding methods were the ... [Footnote residue: partially supported by the National Science Foundation via grant DMS-9104058; preprint, Colorado State University, August; running header "E. Allgower and K. Georg".]
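The predictor-corrector idea this survey develops can be illustrated on a scalar equation. The sketch below is our own minimal illustration, not the survey's implementation: it tracks the Newton homotopy H(x, t) = f(x) - (1 - t) f(x0) from t = 0 (where x0 is a solution by construction) to t = 1 (where H reduces to f), using a trivial predictor and a Newton corrector at each step. All function names are ours.

```python
# Minimal predictor-corrector continuation sketch (illustrative only):
# follow H(x, t) = f(x) - (1 - t) * f(x0) from t = 0 to t = 1.

def continuation(f, df, x0, steps=50, newton_iters=5):
    """Trivial predictor (keep x), Newton corrector on H(., t) at each step."""
    fx0 = f(x0)
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):      # corrector: Newton on H(., t) = 0
            h = f(x) - (1.0 - t) * fx0
            x -= h / df(x)
    return x

# Usage: a cubic with a single real root; at t = 1 we recover f(x) = 0.
f = lambda x: x**3 - x - 2.0
df = lambda x: 3.0 * x**2 - 1.0
root = continuation(f, df, x0=1.0)
```

With a well-conditioned path (df nonzero along it, as here), a handful of corrector steps per t-increment keeps the iterate on the solution curve.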
A Parallelizable Eigensolver for Real Diagonalizable Matrices with Real Eigenvalues
, 1991
"... . In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed and is followed by a discussion of the numerical consi ..."
Abstract

Cited by 25 (6 self)
In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed and is followed by a discussion of the numerical considerations of the actual implementation. The numerical algorithm has been tested on thousands of matrices on both a Cray-2 and an IBM RS/6000 Model 580 workstation. The results of these tests are presented. Finally, issues concerning the parallel implementation of the algorithm are discussed. The algorithm's heavy reliance on matrix-matrix multiplication, coupled with the divide-and-conquer nature of this algorithm, should yield a highly parallelizable algorithm. 1. Introduction. Computation of all the eigenvalues and eigenvectors of a dense matrix is essential for solving problems in many fields. The ever-increasing computational power available from modern supercomputers offers the potenti...
A Serial Implementation of Cuppen's Divide and Conquer Algorithm for the Symmetric Eigenvalue Problem
, 1994
"... This report discusses a serial implementation of Cuppen's divide and conquer algorithm for computing all eigenvalues and eigenvectors of a real symmetric matrix T = Q Q T. This method is compared with the LAPACK implementations of QR, bisection/inverse iteration, and rootfree QR/inverse iteration t ..."
Abstract

Cited by 24 (0 self)
This report discusses a serial implementation of Cuppen's divide and conquer algorithm for computing all eigenvalues and eigenvectors of a real symmetric matrix T = QΛQ^T. This method is compared with the LAPACK implementations of QR, bisection/inverse iteration, and root-free QR/inverse iteration to find all of the eigenvalues and eigenvectors. On a DEC Alpha using optimized Basic Linear Algebra Subroutines (BLAS), divide and conquer was uniformly the fastest algorithm by a large margin for large tridiagonal eigenproblems. When Fortran BLAS were used, bisection/inverse iteration was somewhat faster (up to a factor of 2) for very large matrices (n ≥ 500) without clustered eigenvalues. When eigenvalues were clustered, divide and conquer was up to 80 times faster. The speedups over QR were so large in the tridiagonal case that the overall problem, including reduction to tridiagonal form, sped up by a factor of 2.5 over QR for n ≥ 500. Nearly universally, the matrix of eigenvectors generated by divide and con...
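As an aside, divide and conquer in Cuppen's style is what LAPACK's dsyevd-family routines (reached, to our understanding, through numpy.linalg.eigh) apply when eigenvectors are requested. A minimal check of the factorization T = QΛQ^T on a small tridiagonal matrix:

```python
import numpy as np

# Illustration only: form a symmetric tridiagonal T and verify the
# eigendecomposition T = Q @ diag(lam) @ Q.T returned by numpy.linalg.eigh.
n = 8
T = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) \
                             + np.diag(np.full(n - 1, -1.0), -1)

lam, Q = np.linalg.eigh(T)        # eigenvalues ascending, Q orthogonal
residual = np.linalg.norm(T - Q @ np.diag(lam) @ Q.T)
orth = np.linalg.norm(Q.T @ Q - np.eye(n))
```

For this matrix the eigenvalues are known in closed form, 2 - 2 cos(kπ/(n+1)), which makes it a convenient accuracy test case.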
Laguerre's Iteration In Solving The Symmetric Tridiagonal Eigenproblem - Revisited
 SIAM J. Sci. Comput
, 1992
"... . In this paper we present an algorithm for the eigenvalue problem of symmetric tridiagonal matrices. Our algorithm employs the determinant evaluation, splitandmerge strategy and Laguerre's iteration. The method directly evaluates eigenvalues and uses inverse iteration as an option when eigenvecto ..."
Abstract

Cited by 19 (6 self)
In this paper we present an algorithm for the eigenvalue problem of symmetric tridiagonal matrices. Our algorithm employs determinant evaluation, a split-and-merge strategy and Laguerre's iteration. The method directly evaluates eigenvalues and uses inverse iteration as an option when eigenvectors are needed. This algorithm combines the advantages of existing algorithms such as QR, bisection/multisection and Cuppen's divide-and-conquer method. It is fully parallel, and competitive in speed with the most efficient QR algorithm in serial mode. On the other hand, our algorithm is as accurate as any standard algorithm for the symmetric tridiagonal eigenproblem and offers the flexibility of evaluating a partial spectrum. Key words: eigenvalue, Laguerre's iteration, symmetric tridiagonal matrix. 1. Introduction. For a symmetric tridiagonal matrix T with nonzero subdiagonal entries, the eigenvalues of T, or the zeros of its characteristic polynomial f(λ) = det[T − λI] (1.1), are all rea...
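Laguerre's iteration itself is compact. The sketch below uses the generic polynomial form with explicit derivative callbacks; the paper instead evaluates f and its derivatives through determinant recurrences, which we do not reproduce here.

```python
import cmath

# Laguerre's iteration for a degree-n polynomial p with derivatives dp, ddp
# (generic form; cubically convergent, and for polynomials with all-real
# roots it converges to a root from any real starting point).
def laguerre(p, dp, ddp, n, x, iters=50, tol=1e-12):
    for _ in range(iters):
        px = p(x)
        if abs(px) < tol:
            break
        G = dp(x) / px
        H = G * G - ddp(x) / px
        s = cmath.sqrt((n - 1) * (n * H - G * G))
        d1, d2 = G + s, G - s
        denom = d1 if abs(d1) >= abs(d2) else d2   # larger-magnitude denominator
        x = x - n / denom
    return x

# Usage: T = [[2, 1], [1, 2]] has f(lambda) = lambda^2 - 4*lambda + 3,
# i.e. eigenvalues 1 and 3; starting from 0, Laguerre finds the nearer root.
p   = lambda x: x * x - 4 * x + 3
dp  = lambda x: 2 * x - 4
ddp = lambda x: 2.0
root = laguerre(p, dp, ddp, n=2, x=0.0)   # -> 1.0
```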
Preconditioned Eigensolvers - An Oxymoron?
, 1998
"... A short survey of some results on preconditioned iterative methods for symmetric eigenvalue problems is presented. The survey is by no means complete and reflects the author's personal interests and biases, with emphasis on author's own contributions. The author surveys most of the important theoret ..."
Abstract

Cited by 18 (3 self)
A short survey of some results on preconditioned iterative methods for symmetric eigenvalue problems is presented. The survey is by no means complete and reflects the author's personal interests and biases, with emphasis on the author's own contributions. The author surveys most of the important theoretical results and ideas which have appeared in the Soviet literature, adding references to work published in the western literature mainly to preserve the integrity of the topic. The aim of this paper is to introduce a systematic classification of preconditioned eigensolvers, separating the choice of a preconditioner from the choice of an iterative method. A formal definition of a preconditioned eigensolver is given. Recent developments in the area, in particular those concerning Davidson's method, are mainly ignored. Domain decomposition methods for eigenproblems are included in the framework of preconditioned eigensolvers.
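A minimal instance of the kind of method this classification covers, sketched under our own assumptions rather than taken from the paper: preconditioned inverse iteration x ← x − B⁻¹(Ax − μx) with the Rayleigh quotient μ and a Jacobi (diagonal) preconditioner B = diag(A).

```python
import numpy as np

# Preconditioned inverse iteration sketch (illustrative assumptions:
# test matrix, Jacobi preconditioner, fixed iteration count).
n = 10
A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) \
                             + np.diag(np.full(n - 1, -1.0), -1)
Binv = 1.0 / np.diag(A)                  # Jacobi preconditioner B = diag(A)

x = np.ones(n)
for _ in range(500):
    x /= np.linalg.norm(x)
    mu = x @ A @ x                       # Rayleigh quotient (since ||x|| = 1)
    x = x - Binv * (A @ x - mu * x)      # preconditioned residual correction

mu = (x @ A @ x) / (x @ x)               # converges to the smallest eigenvalue
```

The split the survey advocates is visible here: swapping Binv for any other symmetric positive definite preconditioner changes nothing else in the iteration.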
A Parallel Implementation of the Invariant Subspace Decomposition Algorithm for Dense Symmetric Matrices
, 1993
"... . We give an overview of the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA) by first describing the algorithm, followed by a discussion of a parallel implementation of SYISDA on the Intel Delta. Our implementation utilizes an optimized parallel matrix multiplication ..."
Abstract

Cited by 12 (2 self)
We give an overview of the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA) by first describing the algorithm, followed by a discussion of a parallel implementation of SYISDA on the Intel Delta. Our implementation utilizes an optimized parallel matrix multiplication implementation we have developed. Load balancing in the costly early stages of the algorithm is accomplished without redistribution of data between stages through the use of the block scattered decomposition. Computation of the invariant subspaces at each stage is done using a new tridiagonalization scheme due to Bischof and Sun. 1. Introduction. Computation of all the eigenvalues and eigenvectors of a dense symmetric matrix is an essential kernel in many applications. The ever-increasing computational power available from parallel computers offers the potential for solving much larger problems than could have been contemplated previously. Hardware scalability of parallel machines is freque...
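The spectral-divide step at the heart of this family of algorithms can be sketched as follows. This is our own illustration, not the SYISDA implementation: we assume the commonly cited smoothing polynomial p(x) = 3x² − 2x³, a midpoint split of the spectrum, and an eigendecomposition (rather than a rank-revealing orthogonalization) to extract the subspace bases.

```python
import numpy as np

# Spectral-divide sketch: map the spectrum of symmetric A into [0, 1],
# iterate p(x) = 3x^2 - 2x^3 (fixed points 0 and 1, repelling at 1/2) so
# B becomes an approximate spectral projector, then split A accordingly.
n = 6
A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) \
                             + np.diag(np.full(n - 1, -1.0), -1)

evals = np.linalg.eigvalsh(A)
lo, hi = evals[0], evals[-1]
B = (A - lo * np.eye(n)) / (hi - lo)     # spectrum in [0, 1], split point -> 0.5

for _ in range(40):
    B2 = B @ B
    B = 3 * B2 - 2 * B2 @ B              # eigenvalues driven toward 0 or 1

w, V = np.linalg.eigh(B)                 # B is now close to a projector
V1, V2 = V[:, w > 0.5], V[:, w <= 0.5]   # invariant-subspace bases
off = np.linalg.norm(V2.T @ A @ V1)      # coupling block should vanish
```

Since B is a polynomial in A, its invariant subspaces are invariant for A as well; the two diagonal blocks V1ᵀAV1 and V2ᵀAV2 are then independent subproblems of half the size, which is the source of the algorithm's divide-and-conquer parallelism.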
The PRISM Project: Infrastructure and Algorithms for Parallel Eigensolvers
, 1994
"... The goal of the PRISM project is the development of infrastructure and algorithms for the parallel solution of eigenvalue problems. We are currently investigating a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). After briefly revie ..."
Abstract

Cited by 12 (6 self)
The goal of the PRISM project is the development of infrastructure and algorithms for the parallel solution of eigenvalue problems. We are currently investigating a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). After briefly reviewing SYISDA, we discuss the algorithmic highlights of a distributed-memory implementation of this approach. These include a fast matrix-matrix multiplication algorithm, a new approach to parallel band reduction and tridiagonalization, and a harness for coordinating the divide-and-conquer parallelism in the problem. We also present performance results of these kernels as well as the overall SYISDA implementation on the Intel Touchstone Delta prototype. 1. Introduction. Computation of eigenvalues and eigenvectors is an essential kernel in many applications, and several promising parallel algorithms have been investigated [29, 24, 3, 27, 21]. The work presented in this paper is part of the PRI...
A Scalable Eigenvalue Solver for Symmetric Tridiagonal Matrices
 in Proceedings of the Sixth SIAM Conference on Parallel Processing
, 1994
"... Both massively parallel computers and clusters of workstations are considered promising platforms for numerical scientific computing. This paper describes the first distributedmemory implementation of the splitmerge algorithm, an eigenvalue solver for symmetric tridiagonal matrices that uses La ..."
Abstract

Cited by 10 (9 self)
Both massively parallel computers and clusters of workstations are considered promising platforms for numerical scientific computing. This paper describes the first distributed-memory implementation of the split-merge algorithm, an eigenvalue solver for symmetric tridiagonal matrices that uses Laguerre's iteration and exploits the separation property in order to create independent subtasks. Implementations of the split-merge algorithm on both an nCUBE 2 hypercube and a cluster of Sun Sparc-10 workstations are described, with emphasis on load balancing, communication overhead, and interaction with other user processes. A performance study demonstrates the advantage of the new algorithm over a parallelization of the well-known bisection algorithm. A comparison of the performance of the nCUBE 2 and cluster implementations supports the claim that workstation clusters offer a cost-effective alternative to massively parallel computers for certain scientific applications. This work...
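The bisection algorithm used as the baseline above rests on a Sturm-count kernel, which is also what makes eigenvalue searches embarrassingly parallel: counts at different shifts are independent. A minimal sketch (our own illustration, not the paper's code): for a symmetric tridiagonal T with diagonal a and off-diagonal b, count the eigenvalues below x from the signs of the LDLᵀ pivots of T − xI.

```python
# Sturm count for a symmetric tridiagonal matrix: number of eigenvalues
# of T strictly less than x, via the signs of the LDL^T pivots of T - xI.
def negcount(a, b, x, eps=1e-300):
    count, d = 0, 1.0
    for i in range(len(a)):
        d = a[i] - x - (b[i - 1] ** 2 / d if i > 0 else 0.0)
        if d == 0.0:
            d = eps                      # guard against exact pivot breakdown
        if d < 0.0:
            count += 1
    return count

# Usage: T = [[2, 1], [1, 2]] has eigenvalues 1 and 3.
a, b = [2.0, 2.0], [1.0]
counts = [negcount(a, b, x) for x in (0.0, 1.5, 4.0)]   # -> [0, 1, 2]
```

Bisection then repeatedly halves an interval [lo, hi], keeping the half whose counts bracket the eigenvalue being sought.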
Application and Accuracy of the Parallel Diagonal Dominant Algorithm
 Parallel Comput
, 1995
"... The Parallel Diagonal Dominant (PDD) algorithm is an efficient tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is extended to solve periodic tridiagonal systems and its scalability is studied. Then the reduced PDD algorithm, which has a smal ..."
Abstract

Cited by 10 (9 self)
The Parallel Diagonal Dominant (PDD) algorithm is an efficient tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First, the PDD algorithm is extended to solve periodic tridiagonal systems and its scalability is studied. Then the reduced PDD algorithm, which has a smaller operation count than that of the conventional sequential algorithm for many applications, is proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric and skew-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and that the PDD and reduced PDD algorithms are good candidates for emerging massively parallel machines. Index Terms: parallel processing, parallel numerical algorithms, scalable computing, tridiagonal systems, Toeplitz systems. Manuscript received April 7, 1993; revised April 7, 1994 and January 27, 1995. This research was supported in part by the National Aeronautics and S...
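For reference, the conventional sequential algorithm against which the reduced PDD operation count is compared is Gaussian elimination specialized to tridiagonal systems, commonly called the Thomas algorithm. A minimal sketch (ours, not the paper's implementation), for a system with lower, main, and upper diagonals:

```python
# Thomas algorithm: O(n) solve of a tridiagonal system. Stable for the
# diagonally dominant systems the PDD algorithm targets (no pivoting).
def thomas(lower, main, upper, rhs):
    n = len(main)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = upper[0] / main[0], rhs[0] / main[0]
    for i in range(1, n):                     # forward elimination
        den = main[i] - lower[i - 1] * c[i - 1]
        c[i] = upper[i] / den if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / den
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Usage: a diagonally dominant 3x3 system with solution [1, 2, 3].
x = thomas([1.0, 1.0], [4.0, 4.0, 4.0], [1.0, 1.0], [6.0, 12.0, 14.0])
```

PDD's contribution is to partition such systems across processors so each solves an independent local subsystem, with (for strongly diagonally dominant matrices) truncatable coupling corrections between partitions.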