Nearly Optimal Algorithms For Canonical Matrix Forms
, 1993
Abstract

Cited by 56 (11 self)
A Las Vegas-type probabilistic algorithm is presented for finding the Frobenius canonical form of an n x n matrix T over any field K. The algorithm requires O~(MM(n)) = MM(n) (log n)^O(1) operations in K, where O(MM(n)) operations in K are sufficient to multiply two n x n matrices over K. This nearly matches the lower bound of Ω(MM(n)) operations in K for this problem, and improves on the O(n^4) operations in K required by the previously best known algorithms. We also demonstrate a fast parallel implementation of our algorithm for the Frobenius form, which is processor-efficient on a PRAM. As an application we give an algorithm to evaluate a polynomial g(x) in K[x] at T which requires only O~(MM(n)) operations in K when deg g < n^2. Other applications include sequential and parallel algorithms for computing the minimal and characteristic polynomials of a matrix and the rational Jordan form of a matrix, for testing whether two matrices are similar, and for matrix powering, all of which are substantially faster than those previously known.
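The polynomial-evaluation application can be contrasted with the straightforward baseline: Horner's rule evaluates g(T) with deg(g) matrix products, i.e. O(deg(g) * MM(n)) operations, which is what the paper's O~(MM(n)) bound for deg g < n^2 improves on. A minimal sketch of that baseline (not the paper's algorithm):

```python
import numpy as np

def poly_at_matrix(coeffs, T):
    """Evaluate g(T) by Horner's rule; coeffs lists g's coefficients
    from highest degree down, e.g. [1, -3, 2] for x^2 - 3x + 2."""
    n = T.shape[0]
    G = np.zeros((n, n))
    for c in coeffs:
        G = G @ T + c * np.eye(n)   # one matrix product per coefficient
    return G

# g(x) = (x - 1)(x - 2) annihilates T = diag(1, 2), so g(T) = 0
T = np.diag([1.0, 2.0])
G = poly_at_matrix([1.0, -3.0, 2.0], T)
```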
On the Time-Space Complexity of Geometric Elimination Procedures
, 1999
Abstract

Cited by 23 (16 self)
In [25] and [22] a new algorithmic concept was introduced for the symbolic solution of a zero-dimensional complete intersection polynomial equation system satisfying a certain generic smoothness condition. The main innovative point of this algorithmic concept is the introduction of a new geometric invariant, called the degree of the input system, and the proof that the most common elimination problems have time complexity polynomial in this degree and in the length of the input.
Random Butterfly Transformations with Applications in Computational Linear Algebra
, 1995
Abstract

Cited by 21 (7 self)
Theory and practice of computational linear algebra differ over the issue of degeneracy. Block matrix decompositions are used heavily in theory, but less in practice, since even when a matrix is nondegenerate (has full rank) its block submatrices can be degenerate. The potential degeneracy of block submatrices can completely prevent practical use of block matrix algorithms. Gaussian elimination is an important example of an algorithm affected by the possibility of degeneracy. While the basic elimination procedure is simple to state and implement, it becomes more complicated with the addition of a pivoting procedure, which handles degenerate matrices having zeros on the diagonal. Pivoting can significantly complicate the algorithm, increase data movement, and reduce speed, particularly on high-performance computers. We propose a randomization scheme that preconditions an input matrix by multiplying it with random matrices, where this multiplication can be performed efficiently. At the e...
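The random multipliers in question are recursive butterfly matrices: products of log2(n) block-diagonal levels, each level built from blocks of the form B = (1/sqrt 2) [[R, S], [R, -S]] with R, S random nonsingular diagonal matrices, so applying a level to a vector costs only O(n). A rough sketch of the construction (dimensions restricted to powers of two; the distribution of the diagonal entries below is an assumption of this sketch, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def butterfly(n):
    """One random butterfly matrix B = (1/sqrt 2) [[R, S], [R, -S]]
    with random nonzero diagonal blocks R, S; n must be even."""
    h = n // 2
    R = np.diag(rng.uniform(0.5, 1.5, h))
    S = np.diag(rng.uniform(0.5, 1.5, h))
    return np.block([[R, S], [R, -S]]) / np.sqrt(2)

def block_diag(mats):
    """Assemble a block-diagonal matrix from a list of square blocks."""
    n = sum(m.shape[0] for m in mats)
    out, i = np.zeros((n, n)), 0
    for m in mats:
        k = m.shape[0]
        out[i:i+k, i:i+k] = m
        i += k
    return out

def recursive_butterfly(n):
    """Product of log2(n) block-diagonal butterfly levels (n a power of 2)."""
    W, size = np.eye(n), n
    while size >= 2:
        W = W @ block_diag([butterfly(size) for _ in range(n // size)])
        size //= 2
    return W

W = recursive_butterfly(4)
```

Since each level has O(n) nonzeros, multiplying a vector through all levels costs O(n log n), which is why the preconditioning step is cheap relative to the elimination itself.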
A Parallel Algorithm For Computing The Extremal Eigenvalues Of Very Large Sparse Matrices
 Lecture Notes in Computer Science
, 1998
Abstract

Cited by 13 (4 self)
Quantum mechanics often gives rise to problems where one needs to find a few eigenvalues of very large sparse matrices. The size of the matrices is such that it is not possible to store them in main memory; instead they must be generated on the fly. In this paper the method of coordinate relaxation is applied to one class of such problems. A parallel algorithm based on graph coloring is proposed. Experimental results on a Cray Origin 2000 computer show that the algorithm converges fast and that it also scales well as more processors are applied. Comparisons show that the convergence of the presented algorithm is much faster on the given test problems than using ARPACK [10].
Key words. sparse matrix algorithms, eigenvalue computation, parallel computing, graph coloring, Cray Origin 2000
AMS subject classifications. 05C50, 05C85, 15A18, 65F15, 65F50, 65Y05, 65Y20
1. Introduction. Frequently problems in quantum mechanics lead to the computation of a small number of extremal eigenval...
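Coordinate relaxation minimizes the Rayleigh quotient over one coordinate at a time; along each coordinate direction the stationarity condition reduces to a scalar quadratic, so every step has a closed form. A serial sketch of that idea (the paper's contribution, running the sweep in parallel via graph coloring while generating matrix entries on the fly, is not shown here):

```python
import numpy as np

def coordinate_relaxation(A, x, sweeps=50):
    """Smallest eigenvalue of a symmetric matrix A by coordinate
    relaxation: exact minimization of the Rayleigh quotient along one
    coordinate direction per step."""
    n = A.shape[0]
    rho = lambda v: (v @ A @ v) / (v @ v)
    for _ in range(sweeps):
        for i in range(n):
            e = np.zeros(n); e[i] = 1.0
            p, s = x @ A @ x, x @ x
            q, d, t = A[i] @ x, A[i, i], x[i]
            # d/da rho(x + a e_i) = 0 reduces to the quadratic
            # (d t - q) a^2 + (d s - p) a + (q s - p t) = 0
            cand = np.roots([d*t - q, d*s - p, q*s - p*t])
            cand = cand[np.isreal(cand)].real
            if cand.size:
                x = x + min(cand, key=lambda a: rho(x + a * e)) * e
                x = x / np.linalg.norm(x)
    return rho(x)

A = np.array([[1.0, 0.3, 0.0],
              [0.3, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = coordinate_relaxation(A, np.ones(3))
```

Because each update touches a single coordinate, steps on coordinates that share no nonzero row entries are independent, which is exactly what a graph coloring of the sparsity pattern exposes for parallel execution.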
The tortoise and the hare restart GMRES
 SIAM Review
Abstract

Cited by 12 (0 self)
Abstract. When solving large nonsymmetric systems of linear equations with the restarted GMRES algorithm, one is inclined to select a relatively large restart parameter in the hope of mimicking the full GMRES process. Surprisingly, cases exist where small values of the restart parameter yield convergence in fewer iterations than larger values. Here, two simple examples are presented where GMRES(1) converges exactly in three iterations, while GMRES(2) stagnates. One of these examples reveals that GMRES(1) convergence can be extremely sensitive to small changes in the initial residual.
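The restart parameter is the Krylov cycle length m in GMRES(m): each cycle runs m Arnoldi steps, minimizes the residual over the resulting Krylov space, and restarts from the updated iterate. A bare-bones sketch to make the parameter concrete (an illustration only; the paper's specific 3-by-3 examples are not reproduced here):

```python
import numpy as np

def gmres_restarted(A, b, m, outer=50, tol=1e-10):
    """GMRES(m): per cycle, an m-step Arnoldi factorization, a small
    least-squares solve min ||beta e1 - H y||, then a restart."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(outer):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        for j in range(m):                      # Arnoldi with Gram-Schmidt
            w = A @ Q[:, j]
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] > 1e-14:
                Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(m + 1); e1[0] = beta
        y = np.linalg.lstsq(H, e1, rcond=None)[0]
        x = x + Q[:, :m] @ y
    return x

rng = np.random.default_rng(0)
A = np.eye(20) + 0.1 * rng.standard_normal((20, 20))
b = rng.standard_normal(20)
x = gmres_restarted(A, b, m=5)
```

Varying m in a harness like this is one way to observe the counterintuitive behavior the paper describes, since nothing in the method forces larger m to converge in fewer cycles.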
Extrapolation Techniques for Ill-Conditioned Linear Systems
 Numer. Math
, 1998
Abstract

Cited by 9 (4 self)
In this paper, the regularized solutions of an ill-conditioned system of linear equations are computed for several values of the regularization parameter. Then, these solutions are extrapolated to the value zero of this parameter by various vector rational extrapolation techniques built for that purpose. These techniques are justified by an analysis of the regularized solutions based on the singular value decomposition and the generalized singular value decomposition. Numerical results illustrate the effectiveness of the procedures.
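The idea can be imitated with scalar polynomial extrapolation per component of the Tikhonov-regularized solution (the paper builds dedicated vector rational extrapolants, which behave far better when the parameter is not small relative to the smallest singular values; everything below, including the test matrix, is a hedged toy illustration, not the paper's method):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized solution x(lam) of min ||A x - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def extrapolate_to_zero(A, b, lams, deg=2):
    """Fit each component of x(lam) by a polynomial in lam and
    evaluate the fit at lam = 0."""
    X = np.column_stack([tikhonov(A, b, l) for l in lams])
    return np.array([np.polyval(np.polyfit(lams, row, deg), 0.0)
                     for row in X])

A = np.array([[1.0, 1.0],
              [1.0, 1.2]])                 # mildly ill-conditioned
x_true = np.array([1.0, -1.0])
b = A @ x_true
lams = np.array([1e-4, 2e-4, 3e-4, 4e-4])
x0 = extrapolate_to_zero(A, b, lams)
```

On this example the extrapolated vector is markedly closer to the exact solution than the regularized solution at the smallest parameter value alone, which is the effect the paper quantifies via the (generalized) SVD expansion of x(lam).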
Variational Analysis Of Some Conjugate Gradient Methods
 East-West J. of Numer. Math
, 1989
Abstract

Cited by 8 (3 self)
A number of conjugate gradient methods are considered for a class of linear systems of real algebraic equations. This class includes all symmetric and certain special nonsymmetric problems, which give rise to three-term recursions. All the algorithms are characterized variationally. This makes it possible to derive error estimates systematically in terms of certain polynomial approximation problems. Bounds are obtained which are functions of the extreme eigenvalues of the basic iteration operator.
Key Words. linear systems, conjugate gradients, variational formulation, sparse matrices, error bounds
AMS(MOS) subject classification. 65F10, 65F50, 65G99
1. Introduction. It is the purpose of this survey article to present some standard and not-so-standard conjugate gradient methods in a common framework. We consider iterative methods for solving n × n linear systems of real algebraic equations Ax = b (1). We will focus our attention on general symmetric, not necessarily definite...
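For the symmetric positive definite case the variational characterization is the familiar one: the k-th conjugate gradient iterate minimizes the A-norm of the error over the k-th Krylov space, which is what yields polynomial-approximation error bounds in terms of the extreme eigenvalues. A textbook sketch for reference:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12, maxiter=500):
    """Plain CG for symmetric positive definite A.  The iterate x_k
    minimizes the A-norm error ||x - x*||_A over x0 + K_k(A, r0)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for _ in range(maxiter):
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)            # exact line minimization
        x += alpha * p
        r -= alpha * Ap
        rr, rr_old = r @ r, rr
        p = r + (rr / rr_old) * p        # A-conjugate search direction
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)            # symmetric positive definite
b = rng.standard_normal(30)
x = conjugate_gradient(A, b)
```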
A Direct Projection Method For Sparse Linear Systems
 SIAM J. SCI. COMPUT
, 1995
Abstract

Cited by 8 (3 self)
An oblique projection method is adapted to solve large, sparse, unstructured systems of linear equations. This row-projection technique is a direct method which can be interpreted as an oblique Kaczmarz-type algorithm, and is also related to other standard solution methods. When a sparsity-preserving pivoting strategy is incorporated, it is demonstrated that the technique can be superior, in terms of both fill-in and arithmetic complexity, to more standard sparse algorithms based on Gaussian elimination. This is especially true for systems arising from stiff ordinary differential equation problems in chemical kinetics studies.
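For contrast with the direct, oblique method described above, the classical (iterative, orthogonal) Kaczmarz scheme projects the current iterate onto the hyperplane of one equation at a time; a compact sketch:

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Classical Kaczmarz row projection: each step replaces x by its
    orthogonal projection onto the hyperplane {y : A[i] @ y = b[i]}."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(0)
A = np.eye(5) + 0.2 * rng.standard_normal((5, 5))
b = rng.standard_normal(5)
x = kaczmarz(A, b)
```

The oblique, direct variant of the paper replaces these orthogonal projections with projections along fixed directions chosen so that the process terminates exactly, rather than converging only in the limit.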
How to Eliminate Pivoting from Gaussian Elimination - By Randomizing Instead
, 1995
Abstract

Cited by 7 (5 self)
Gaussian elimination is probably the best known and most widely used method for solving linear systems, computing determinants, and finding matrix decompositions. While the basic elimination procedure is simple to state and implement, it becomes more complicated with the addition of a pivoting procedure, which handles degenerate matrices having zero elements on the diagonal. Pivoting can significantly complicate the algorithm, increase data movement, and reduce speed, particularly on high-performance computers. In this paper we propose an alternative scheme for performing Gaussian elimination that first preconditions the input matrix by multiplying it with random matrices, whose inverses can be applied subsequently. At the expense of these multiplications, and making the linear system dense if it was not already, this approach makes the system 'nondegenerate' (subsystems have full rank) with probability 1. This preconditioning has the effect of (almost certainly) eliminating the ...
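A toy illustration of the scheme's effect (dense random orthogonal multipliers stand in for the paper's efficiently applicable random matrices; that substitution is an assumption of this sketch, not the paper's construction): the matrix below is nonsingular yet has a zero leading pivot, so elimination without pivoting breaks on it, while the randomly preconditioned matrix factors cleanly with probability 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def lu_nopivot(A):
    """Gaussian elimination with no pivoting; requires every leading
    principal submatrix of A to be nonsingular."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        L[k+1:, k] = A[k+1:, k] / A[k, k]
        A[k+1:, k:] -= np.outer(L[k+1:, k], A[k, k:])
    return L, np.triu(A)

def random_orthogonal(n):
    """Random orthogonal matrix via QR of a Gaussian matrix; its
    inverse is its transpose, so it is cheap to undo afterwards."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # nonsingular, but pivot A[0, 0] = 0
U, V = random_orthogonal(2), random_orthogonal(2)
B = U @ A @ V                   # randomly preconditioned matrix
L, R = lu_nopivot(B)            # succeeds with probability 1
```

To solve A x = c one factors B = U A V, solves B y = U c by the two triangular solves, and recovers x = V y, applying the multipliers' inverses exactly as the abstract describes.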
The Complexity of the Algebraic Eigenproblem
, 1998
Abstract

Cited by 7 (1 self)
The eigenproblem for an n-by-n matrix A is the problem of approximating (within a relative error bound 2^(-b)) all the eigenvalues of the matrix A and computing the associated eigenspaces of all these eigenvalues. We show that the arithmetic complexity of this problem is bounded by O(n^3 + (n log^2 n) log b). If the characteristic and minimal polynomials of the matrix A coincide with each other (which is the case for generic matrices of all classes of general and special matrices that we consider), then the latter deterministic cost bound can be replaced by the randomized bound O(K_A(2n) + n^2 + (n log^2 n) log b), where K_A(2n) denotes the cost of computing the 2n - 1 vectors A^i v, i = 1, ..., 2n - 1, maximized over all n-dimensional vectors v; K_A(2n) = O(M(n) log n), for M(n) = o(n^2.376) denoting the arithmetic complexity of n × n matrix multiplication. This bound on the complexity of the eigenproblem is optimal up to a logar...
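The quantity K_A(2n) above is the cost of a Krylov sequence; for reference, the straightforward computation uses one matrix-vector product per additional vector and never forms powers of A explicitly (the paper's bound exploits structure beyond this naive scheme):

```python
import numpy as np

def krylov_sequence(A, v, k):
    """Columns v, A v, A^2 v, ..., A^{k-1} v, computed with k - 1
    matrix-vector products."""
    cols = [np.asarray(v, dtype=float)]
    for _ in range(k - 1):
        cols.append(A @ cols[-1])      # reuse the previous vector
    return np.column_stack(cols)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
K = krylov_sequence(A, [1.0, 1.0], 4)
```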