Results 1–10 of 19
Design of a Parallel Nonsymmetric Eigenroutine Toolbox, Part I
, 1993
Cited by 63 (14 self)
The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to be used in different combinations on different problems and architectures. In this paper, we describe these tools, which include basic block matrix computations, the matrix sign function, 2-dimensional bisection, and spectral divide and conquer using the matrix sign function to find selected eigenvalues. We also outline how we deal with ill-conditioning and potential instability. Numerical examples are included. A future paper will discuss error analysis in detail and extensions to the generalized eigenproblem.
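One of the tools named in this abstract, the matrix sign function, is classically computed by the Newton iteration X_{k+1} = (X_k + X_k^{-1})/2. A minimal NumPy sketch of that iteration (an illustration only, not the paper's blocked, parallel implementation):

```python
import numpy as np

def matrix_sign(A, tol=1e-12, maxit=100):
    # Newton iteration X_{k+1} = (X_k + X_k^{-1}) / 2 with X_0 = A.
    # Converges quadratically when A has no eigenvalues on the imaginary axis.
    X = A.astype(float).copy()
    for _ in range(maxit):
        Xnew = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xnew - X, 1) <= tol * np.linalg.norm(Xnew, 1):
            return Xnew
        X = Xnew
    return X
```

The result S = sign(A) satisfies S^2 = I, and (I + S)/2 is the spectral projector used in the spectral divide-and-conquer step mentioned above.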
Using The Matrix Sign Function To Compute Invariant Subspaces
 SIAM J. Matrix Anal. Appl
, 1998
Cited by 14 (1 self)
The matrix sign function has several applications in system theory and matrix computations. However, the numerical behavior of the matrix sign function, and of its associated divide-and-conquer algorithm for computing invariant subspaces, is still not completely understood. In this paper, we present a new perturbation theory for the matrix sign function, the conditioning of its computation, the numerical stability of the divide-and-conquer algorithm, and iterative refinement schemes. Numerical examples are also presented. An extension of the matrix sign function based algorithm to compute left and right deflating subspaces for a regular pair of matrices is also described. Key words. matrix sign function, Newton's method, eigenvalue problem, invariant subspace, deflating subspaces. AMS subject classifications. 65F15, 65F35, 65F30, 15A18. 1. Introduction. Since the matrix sign function was introduced in the early 1970s, it has been the subject of numerous studies and used in many applications...
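The connection between the sign function and invariant subspaces can be sketched concretely (a hedged illustration, not the authors' algorithm or its refined variants): P = (I + sign(A))/2 is the spectral projector onto the invariant subspace for eigenvalues in the open right half-plane, and an orthonormal basis of range(P) spans that subspace.

```python
import numpy as np

def matrix_sign(A, tol=1e-12, maxit=100):
    # Newton iteration for the matrix sign function.
    X = A.astype(float).copy()
    for _ in range(maxit):
        Xn = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xn - X, 1) <= tol * np.linalg.norm(Xn, 1):
            break
        X = Xn
    return Xn

def rhp_invariant_subspace(A):
    # P = (I + sign(A))/2 is the (generally oblique) spectral projector onto
    # the invariant subspace for eigenvalues with positive real part.
    n = A.shape[0]
    P = 0.5 * (np.eye(n) + matrix_sign(A))
    k = int(round(float(np.trace(P))))   # dimension = trace of a projector
    U, s, Vt = np.linalg.svd(P)
    return U[:, :k]                      # orthonormal basis of range(P)
```

The returned Q satisfies A Q = Q (Q^T A Q) up to rounding, which is the defining property of an invariant subspace basis.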
Least squares residuals and minimal residual methods
 SIAM J. Sci. Comput
Cited by 14 (2 self)
Abstract. We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace. Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods. Different bases also lead to different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571–581], the best orthogonalization technique used for computing the basis does not compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than ...
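For orientation, the MR/LS connection in its simplest form: GMRES builds an orthonormal Krylov basis with the Arnoldi process and minimizes the residual by solving a small least squares problem with the Hessenberg matrix. A textbook MGS-GMRES sketch (not the Simpler GMRES variant discussed above):

```python
import numpy as np

def gmres_mr(A, b, m):
    # Arnoldi with modified Gram-Schmidt builds Q, H with A Q_m = Q_{m+1} H;
    # the MR iterate minimizes || beta e_1 - H y ||, an (m+1) x m LS problem.
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:           # lucky breakdown: exact solution
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)[0]
    x = Q[:, :m] @ y
    return x, np.linalg.norm(b - A @ x)
```

With x0 = 0, the residual after m steps is the minimum over the Krylov subspace K_m(A, b); for m = n (and no breakdown) the system is solved to rounding error.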
Choosing Poles So That the Single-Input Pole Placement Problem Is Well-Conditioned, preprint
 SFB 393/96-01, Sonderforschungsbereich 393, Numerische Simulation auf massiv parallelen Rechnern, Fak. f. Mathematik, TU Chemnitz-Zwickau, D-09107
, 1996
Cited by 13 (2 self)
Abstract. We discuss the single-input pole placement problem (SIPP) and analyze how the conditioning of the problem can be estimated and improved if the poles are allowed to vary in specific regions of the complex plane. Under certain assumptions we give formulas as well as bounds for the norm of the feedback gain and the condition number of the closed-loop matrix. Via several numerical examples we demonstrate how these results can be used to estimate the condition number of a given SIPP problem and how to select the poles to improve the conditioning.
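For reference, the SIPP in its most classical form is solved by Ackermann's formula (a standard textbook method; the paper's contribution is the conditioning analysis and the pole selection, not this formula). A small sketch, assuming a controllable pair (A, b):

```python
import numpy as np

def sipp_gain(A, b, poles):
    # Ackermann's formula: K = e_n^T C^{-1} p(A), where C is the
    # controllability matrix and p the desired characteristic polynomial.
    # The closed-loop matrix A - b K then has the prescribed poles.
    n = A.shape[0]
    # Controllability matrix C = [b, Ab, ..., A^{n-1} b]
    C = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    # Evaluate p(A) by Horner's scheme (np.poly returns monic coefficients)
    coeffs = np.poly(poles)
    pA = np.zeros_like(A, dtype=float)
    for c in coeffs:
        pA = pA @ A + c * np.eye(n)
    # Last row of C^{-1} p(A)
    return np.linalg.solve(C, pA)[-1]
```

Ackermann's formula is known to be ill-conditioned when C is nearly singular, which is precisely the kind of sensitivity the abstract's analysis quantifies.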
Approximate diagonalization
 SIAM J. Matrix Anal. Applic
, 2007
Cited by 9 (3 self)
Abstract. We describe a new method of computing functions of highly nonnormal matrices by using the concept of approximate diagonalization. We formulate a conjecture about its efficiency and provide both theoretical and numerical evidence in support of the conjecture. We apply the method to compute arbitrary real powers of highly nonnormal matrices. Key words. Jordan matrices, ill-posedness, regularization, fractional powers, functional calculus, spectral theory.
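The idea can be sketched in a few lines (a naive illustration under the assumption that a tiny random perturbation makes A diagonalizable; the paper's regularized choice of perturbation size and its analysis are the actual contribution):

```python
import numpy as np

def approx_power(A, p, eps=1e-8, rng=None):
    # Approximate diagonalization: perturb A so that it is (numerically)
    # diagonalizable, then form A^p from the eigendecomposition of A + E.
    rng = np.random.default_rng(0) if rng is None else rng
    n = A.shape[0]
    E = rng.standard_normal((n, n))
    E *= eps / np.linalg.norm(E)         # perturbation of norm eps
    w, V = np.linalg.eig(A + E)
    # f(A + E) = V f(Lambda) V^{-1}; here f(x) = x^p for real p
    return V @ np.diag(w.astype(complex) ** p) @ np.linalg.inv(V)
```

The tension the conjecture addresses is visible here: a larger eps makes the eigenvector matrix V better conditioned but moves A + E further from A, so eps must balance the two error sources.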
Construction and Analysis of Polynomial Iterative Methods for Non-Hermitian Systems of Linear Equations
, 1998
Cited by 8 (6 self)
Contents:
1 Introduction: 1.1 What is a PIM?; 1.2 Different types of PIMs; 1.3 Organization and summary of our results.
2 Background: 2.1 Krylov spaces and the Arnoldi process; 2.2 Exterior mapping functions and Faber polynomials; 2.3 Inclusion sets and asymptotic analysis.
3 Inclusion sets generated by the conformal 'bratwurst' maps: 3.1 Derivation of the maps; 3.2 Definition and properties of the 'bratwurst' shape sets; 3.3 Numerical examples.
4 The hybrid ABF method for non-Hermitian linear systems: 4.1 Faber polynomials for the inclusion sets ...
Primal-Dual Interior Point Methods for Semidefinite Programming in Finite Precision
 SIAM J. Optimization
, 1997
Cited by 6 (0 self)
Recently, a number of primal-dual interior-point methods for semidefinite programming have been developed. To reduce the number of floating point operations, each iteration of these methods typically performs block Gaussian elimination with block pivots that are close to singular near the optimal solution. As a result, these methods often exhibit complex numerical properties in practice. We consider numerical issues related to some of these methods. Our error analysis indicates that these methods can be numerically stable if certain coefficient matrices associated with the iterations are well-conditioned, but are unstable otherwise. With this result, we explain why one particular method, the one introduced by Alizadeh, Haeberly, and Overton, is in general more stable than the others. We also explain why the so-called least squares variation, introduced for some of these methods, does not yield more numerical accuracy in general. Finally, we present results from our numerical experiments ...
Stability of block LU factorization
 Numer. Lin. Algebra Applic
, 1995
Cited by 6 (1 self)
Abstract. Many of the currently popular "block algorithms" are scalar algorithms in which the operations have been grouped and reordered into matrix operations. One genuine block algorithm in practical use is block LU factorization, and this has recently been shown by Demmel and Higham to be unstable in general. It is shown here that block LU factorization is stable if A is block diagonally dominant by columns. Moreover, for a general matrix the level of instability in block LU factorization can be bounded in terms of the condition number κ(A) and the growth factor for Gaussian elimination without pivoting. A consequence is that block LU factorization is stable for a matrix A that is symmetric positive definite or point diagonally dominant by rows or columns, as long as A is well-conditioned.
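The algorithm in question, sketched for a matrix that is safely factorable without pivoting (a minimal sketch with square blocks of size r; L is unit lower block triangular and U is block upper triangular with full, untriangularized diagonal blocks):

```python
import numpy as np

def block_lu(A, r):
    # Right-looking block LU factorization, no pivoting: A = L U.
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(0, n, r):
        e = min(k + r, n)
        if e < n:
            U11 = U[k:e, k:e]
            # Block column of L: L21 = A21 * inv(U11), via a transposed solve
            L[e:, k:e] = np.linalg.solve(U11.T, U[e:, k:e].T).T
            # Schur complement update of the trailing matrix
            U[e:, e:] -= L[e:, k:e] @ U[k:e, e:]
            U[e:, k:e] = 0.0
    return L, U
```

Because the diagonal blocks of U are inverted implicitly rather than factored with pivoting, the stability of this scheme depends on the properties of A (e.g. block diagonal dominance by columns), which is exactly what the abstract analyzes.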
Some New Search Directions for Primal-Dual Interior Point Methods in Semidefinite Programming
Cited by 6 (5 self)
Search directions for primal-dual path-following methods for semidefinite programming (SDP) are proposed. These directions have the properties that (1) under certain nondegeneracy and strict complementarity assumptions, the Jacobian matrix of the associated symmetrized Newton equation has bounded condition number along the central path in the limit as the barrier parameter tends to zero; and (2) the Schur complement matrix of the symmetrized Newton equation is symmetric, and the cost of computing this matrix is 2mn^3 + 0.5m^2 n^2 flops, where n and m are the dimensions of the matrix and vector variables of the SDP, respectively. These two properties imply that a path-following method using the proposed directions can achieve the high accuracy typically attained by methods employing the direction proposed by Alizadeh, Haeberly, and Overton (currently the best search direction in terms of accuracy), but each iteration requires at most half the number of flops (to leading order).