Results 1–10 of 27
Applied Numerical Linear Algebra
 Society for Industrial and Applied Mathematics
, 1997
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We rst discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing e cient algorithms. We illustrate ..."
Abstract

Cited by 531 (26 self)
 Add to MetaCart
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band, and sparse matrices.
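The blocking that underlies the survey's matrix-multiplication example can be sketched in a few lines. The function below is our own serial illustration (the name and block size are assumptions, not from the survey); on a parallel machine, each output block would be an independent task.

```python
import numpy as np

def blocked_matmul(A, B, bs=2):
    """Compute C = A @ B one bs-by-bs block of C at a time.

    On a parallel machine each (i, j) block of C is an independent
    task assigned to a processor; here the blocks are visited
    serially to show the decomposition.
    """
    n, m = A.shape[0], B.shape[1]
    C = np.zeros((n, m))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for k in range(0, A.shape[1], bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C
```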
The geometry of algorithms with orthogonality constraints
 SIAM J. MATRIX ANAL. APPL
, 1998
"... In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal proces ..."
Abstract

Cited by 384 (1 self)
 Add to MetaCart
In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structure computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights, allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms, offering a top-level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.
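The flavor of optimization under orthogonality constraints can be illustrated with one Riemannian gradient step on the Stiefel manifold, using a QR retraction, for the model problem of minimizing trace(XᵀAX). This is a minimal sketch of the manifold machinery only, not the paper's Newton or conjugate gradient algorithms; the function name and step size are ours.

```python
import numpy as np

def stiefel_gradient_step(A, X, t=0.05):
    """One Riemannian gradient step for f(X) = trace(X^T A X) on the
    Stiefel manifold {X : X^T X = I}, using a QR retraction.

    A minimal illustration of optimization under orthogonality
    constraints, not the Newton/CG methods of the paper.
    """
    G = 2 * A @ X                       # Euclidean gradient of f
    W = X.T @ G
    grad = G - X @ (W + W.T) / 2        # project onto the tangent space
    Q, _ = np.linalg.qr(X - t * grad)   # retract back to the manifold
    return Q
```

Iterating this step from a random orthonormal start drives X toward an invariant subspace of A associated with its smallest eigenvalues.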
Design of a Parallel Nonsymmetric Eigenroutine Toolbox, Part I
, 1993
"... The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to ..."
Abstract

Cited by 63 (13 self)
 Add to MetaCart
The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to be used in different combinations on different problems and architectures. In this paper, we describe these tools, which include basic block matrix computations, the matrix sign function, two-dimensional bisection, and spectral divide and conquer using the matrix sign function to find selected eigenvalues. We also outline how we deal with ill-conditioning and potential instability. Numerical examples are included. A future paper will discuss error analysis in detail and extensions to the generalized eigenproblem.
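One of the tools named, spectral divide and conquer via the matrix sign function, can be sketched serially: P = (I + sign(A))/2 is the spectral projector onto the invariant subspace for eigenvalues with positive real part, and a rank-revealing QR of P produces an orthogonal Q that block upper-triangularizes A. The sketch below uses SciPy's `signm` and pivoted `qr`; it is our illustration of the idea, not the parallel toolbox implementation.

```python
import numpy as np
from scipy.linalg import signm, qr

def split_spectrum(A):
    """Split the spectrum of A at the imaginary axis via the matrix
    sign function.  Returns an orthogonal Q and the dimension k of
    the invariant subspace for eigenvalues with positive real part;
    Q[:, k:].T @ A @ Q[:, :k] is (numerically) zero.
    """
    n = A.shape[0]
    P = 0.5 * (np.eye(n) + signm(A))   # spectral projector, right half-plane
    k = int(round(np.trace(P)))        # trace of a projector = its rank
    Q, R, piv = qr(P, pivoting=True)   # first k columns of Q span range(P)
    return Q, k
```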
An inverse free parallel spectral divide and conquer algorithm for nonsymmetric eigenproblems
, 1997
"... We discuss an inversefree, highly parallel, spectral divide and conquer algorithm. It can compute either an invariant subspace of a nonsymmetric matrix A, or a pair of left and right deflating subspaces of a regular matrix pencil A − λB. This algorithm is based on earlier ones of Bulgakov, Godunov ..."
Abstract

Cited by 60 (11 self)
 Add to MetaCart
We discuss an inverse-free, highly parallel, spectral divide and conquer algorithm. It can compute either an invariant subspace of a nonsymmetric matrix A, or a pair of left and right deflating subspaces of a regular matrix pencil A − λB. This algorithm is based on earlier ones of Bulgakov, Godunov and Malyshev, but improves on them in several ways. The algorithm uses only easily parallelizable linear algebra building blocks, namely matrix multiplication and QR decomposition, and no matrix inversion. Similar parallel algorithms for the nonsymmetric eigenproblem use the matrix sign function, which requires matrix inversion and is faster, but can be less stable than the new algorithm.
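The core building block of this family of methods is a repeated-squaring step built only from QR and matrix multiplication. The sketch below (names ours, real matrices assumed) shows one step of the Bulgakov–Godunov–Malyshev-style recurrence: QR-factor the stacked matrix [[B], [-A]], partition the orthogonal factor into n-by-n blocks Qij, and form A₁ = Q₁₂ᵀA, B₁ = Q₂₂ᵀB. One can check that B₁⁻¹A₁ = (B⁻¹A)², so the step squares B⁻¹A without ever inverting a matrix.

```python
import numpy as np

def inverse_free_step(A, B):
    """One inverse-free squaring step for the pencil A - lambda*B.

    From the full QR factorization of [[B], [-A]], the last n
    columns of Q are orthogonal to the range of the stack, giving
    Q12^T B = Q22^T A; then A1 = Q12^T A, B1 = Q22^T B satisfies
    B1^{-1} A1 = (B^{-1} A)^2.  (A sketch of the building block,
    not the full algorithm.)
    """
    n = A.shape[0]
    Q, _ = np.linalg.qr(np.vstack([B, -A]), mode="complete")
    return Q[:n, n:].T @ A, Q[n:, n:].T @ B
```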
Nonsymmetric algebraic Riccati equations and Wiener-Hopf factorization for M-matrices
 SIAM J. Matrix Anal. Appl
, 2001
"... Abstract. We consider the nonsymmetric algebraic Riccati equation for which the four coefficient matrices form an Mmatrix. Nonsymmetric algebraic Riccati equations of this type appear in applied probability and transport theory. The minimal nonnegative solution of these equations can be found by Ne ..."
Abstract

Cited by 24 (11 self)
 Add to MetaCart
We consider the nonsymmetric algebraic Riccati equation for which the four coefficient matrices form an M-matrix. Nonsymmetric algebraic Riccati equations of this type appear in applied probability and transport theory. The minimal nonnegative solution of these equations can be found by Newton's method and basic fixed-point iterations. The study of these equations is also closely related to the so-called Wiener-Hopf factorization for M-matrices. We explain how the minimal nonnegative solution can be found by the Schur method and compare the Schur method with Newton's method and some basic fixed-point iterations. The development in this paper parallels that for symmetric algebraic Riccati equations arising in linear quadratic control.
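A basic fixed-point iteration of the kind mentioned can be sketched concretely. We assume the form XCX − XD − AX + B = 0 common in this literature (the abstract does not spell it out): starting from X₀ = 0, each step solves the Sylvester equation AX + XD = B + XₖCXₖ, and when [[D, −C], [−B, A]] is a nonsingular M-matrix the iterates increase monotonically to the minimal nonnegative solution.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def nare_fixed_point(A, B, C, D, iters=100):
    """Basic fixed-point iteration for the nonsymmetric algebraic
    Riccati equation X C X - X D - A X + B = 0 (form assumed here).

    Each step solves the Sylvester equation
        A X_{k+1} + X_{k+1} D = B + X_k C X_k,
    starting from X_0 = 0.
    """
    X = np.zeros_like(B)
    for _ in range(iters):
        X = solve_sylvester(A, D, B + X @ C @ X)
    return X
```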
On the iterative solution of a class of nonsymmetric algebraic Riccati equations
 SIAM J. Matrix Anal. Appl
"... Abstract. We consider the iterative solution of a class of nonsymmetric algebraic Riccati equations, which includes a class of algebraic Riccati equations arising in transport theory. For any equation in this class, Newton’s method and a class of basic fixedpoint iterations can be used to find its ..."
Abstract

Cited by 22 (11 self)
 Add to MetaCart
We consider the iterative solution of a class of nonsymmetric algebraic Riccati equations, which includes a class of algebraic Riccati equations arising in transport theory. For any equation in this class, Newton's method and a class of basic fixed-point iterations can be used to find its minimal positive solution whenever it has a positive solution. The properties of these iterative methods are studied and some practical issues are addressed. An algorithm is then proposed to find the minimal positive solution efficiently. Numerical results are also given.
A structure-preserving doubling algorithm for nonsymmetric algebraic Riccati equation
 Numer. Math
"... In this paper we propose a structurepreserving doubling algorithm (SDA) for computing the minimal nonnegative solutions to the nonsymmetric algebraic Riccati equation (NARE) based on the techniques developed in the symmetric cases. This method allows the simultaneous approximation of the minimal no ..."
Abstract

Cited by 17 (4 self)
 Add to MetaCart
In this paper we propose a structure-preserving doubling algorithm (SDA) for computing the minimal nonnegative solutions of the nonsymmetric algebraic Riccati equation (NARE), based on techniques developed in the symmetric case. The method approximates the minimal nonnegative solutions of the NARE and its dual equation simultaneously, requires only the solution of two linear systems per step, and needs no initial matrix, thus overcoming the drawbacks of the Newton and fixed-point iteration methods. Under suitable conditions, we establish the convergence theory using only elementary matrix theory. The theory shows that the SDA iterates increase monotonically and converge quadratically to the minimal nonnegative solutions of the NARE and its dual equation, respectively. Numerical experiments show that the SDA is feasible and effective, and can outperform the Newton and fixed-point iteration methods.
A Grassmann-Rayleigh Quotient Iteration for Computing Invariant Subspaces
 SIAM REVIEW
, 2002
"... The classical Rayleigh quotient iteration (RQI) allows one to compute a onedimensional invariant subspace of a symmetric matrix A. Here we propose a generalization of the RQI which computes a pdimensional invariant subspace of A. Cubic convergence is preserved and the cost per iteration is low com ..."
Abstract

Cited by 15 (6 self)
 Add to MetaCart
The classical Rayleigh quotient iteration (RQI) allows one to compute a one-dimensional invariant subspace of a symmetric matrix A. Here we propose a generalization of the RQI which computes a p-dimensional invariant subspace of A. Cubic convergence is preserved and the cost per iteration is low compared to other methods proposed in the literature.
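For reference, the classical RQI that the paper generalizes (the p = 1 case) can be sketched as follows; the function name is ours.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, iters=10):
    """Classical Rayleigh quotient iteration for a symmetric A:
    repeatedly solve (A - rho*I) y = x with rho the current Rayleigh
    quotient, then normalize.  Converges cubically to an eigenpair;
    the paper generalizes this from one vector to a p-dimensional
    subspace on the Grassmann manifold.
    """
    x = x0 / np.linalg.norm(x0)
    n = len(x)
    for _ in range(iters):
        rho = x @ A @ x
        try:
            y = np.linalg.solve(A - rho * np.eye(n), x)
        except np.linalg.LinAlgError:
            break  # shift is (numerically) an exact eigenvalue
        x = y / np.linalg.norm(y)
    return x @ A @ x, x
```

Near convergence the shifted system is nearly singular, which is harmless here: the solution direction is still accurate, and normalization absorbs its large norm.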
Using The Matrix Sign Function To Compute Invariant Subspaces
 SIAM J. Matrix Anal. Appl
, 1998
"... . The matrix sign function has several applications in system theory and matrix computations. However, the numericalbehavior of the matrix sign function, and its associated divideand conquer algorithm for computing invariant subspaces, are still not completely understood. In this paper, we present ..."
Abstract

Cited by 14 (1 self)
 Add to MetaCart
The matrix sign function has several applications in system theory and matrix computations. However, the numerical behavior of the matrix sign function, and of its associated divide-and-conquer algorithm for computing invariant subspaces, is still not completely understood. In this paper, we present a new perturbation theory for the matrix sign function, the conditioning of its computation, the numerical stability of the divide-and-conquer algorithm, and iterative refinement schemes. Numerical examples are also presented. An extension of the matrix sign function based algorithm to compute left and right deflating subspaces for a regular pair of matrices is also described.
Key words. matrix sign function, Newton's method, eigenvalue problem, invariant subspace, deflating subspaces
AMS subject classifications. 65F15, 65F35, 65F30, 15A18
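The standard Newton iteration for the matrix sign function, whose conditioning and stability the paper analyzes, is Xₖ₊₁ = (Xₖ + Xₖ⁻¹)/2 with X₀ = A. A minimal sketch (unscaled; practical codes add scaling to accelerate convergence):

```python
import numpy as np

def matrix_sign_newton(A, iters=60, tol=1e-12):
    """Newton's iteration X_{k+1} = (X_k + X_k^{-1}) / 2 for the
    matrix sign function.  Converges quadratically whenever A has no
    purely imaginary eigenvalues.  The limit S satisfies S^2 = I and
    commutes with A, and (I +/- S)/2 are the spectral projectors
    used by sign-based divide-and-conquer.
    """
    X = A.astype(float).copy()
    for _ in range(iters):
        Xn = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xn - X) <= tol * np.linalg.norm(Xn):
            return Xn
        X = Xn
    return X
```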
Newton’s method in floating point arithmetic and iterative refinement of generalized eigenvalue problems
 SIAM J. Matrix Anal. Appl
, 1999
"... Abstract. We examine the behavior of Newton’s method in floatingpoint arithmetic, allowing for extended precision in computation of the residual, inaccurate evaluation of the Jacobian and unstable solution of the linear systems. We bound the limitingaccuracy and the smallest norm of the residual. Th ..."
Abstract

Cited by 14 (2 self)
 Add to MetaCart
We examine the behavior of Newton's method in floating-point arithmetic, allowing for extended precision in computation of the residual, inaccurate evaluation of the Jacobian, and unstable solution of the linear systems. We bound the limiting accuracy and the smallest norm of the residual. The application that motivates this work is iterative refinement for the generalized eigenvalue problem. We show that iterative refinement by Newton's method can be used to improve the forward and backward errors of computed eigenpairs.
Key words. Newton's method, generalized eigenvalue problem, iterative refinement, Cholesky method, backward error, forward error, rounding-error analysis, limiting accuracy, limiting residual
AMS subject classifications. 65F15, 65F35
PII. S0895479899359837
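The kind of refinement described can be sketched for the standard symmetric problem (the paper treats the generalized problem Ax = λBx; restricting to B = I is our simplification). Newton's method is applied to F(v, λ) = [Av − λv; (vᵀv − 1)/2], so each step solves one bordered (n+1)-by-(n+1) linear system:

```python
import numpy as np

def refine_eigenpair(A, lam, v, iters=5):
    """Newton iterative refinement of an approximate eigenpair
    (lam, v) of a matrix A, via F(v, lam) = [A v - lam v; (v'v - 1)/2].

    Starting from a moderately accurate pair, a few quadratically
    convergent steps drive the residual down to roundoff, provided
    lam approximates a simple eigenvalue (so the bordered Jacobian
    is nonsingular).
    """
    n = len(v)
    for _ in range(iters):
        F = np.concatenate([A @ v - lam * v, [(v @ v - 1.0) / 2]])
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = A - lam * np.eye(n)   # d(Av - lam v)/dv
        J[:n, n] = -v                     # d(Av - lam v)/dlam
        J[n, :n] = v                      # d((v'v - 1)/2)/dv
        d = np.linalg.solve(J, -F)
        v = v + d[:n]
        lam = lam + d[n]
    return lam, v
```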