Results 1–10 of 78
Nearly Optimal Algorithms For Canonical Matrix Forms
, 1993
Abstract
Cited by 56 (11 self)
A Las Vegas-type probabilistic algorithm is presented for finding the Frobenius canonical form of an n x n matrix T over any field K. The algorithm requires O~(MM(n)) = MM(n)·(log n)^O(1) operations in K, where O(MM(n)) operations in K are sufficient to multiply two n x n matrices over K. This nearly matches the lower bound of Ω(MM(n)) operations in K for this problem, and improves on the O(n^4) operations in K required by the previously best known algorithms. We also demonstrate a fast parallel implementation of our algorithm for the Frobenius form, which is processor-efficient on a PRAM. As an application we give an algorithm to evaluate a polynomial g(x) in K[x] at T which requires only O~(MM(n)) operations in K when deg g < n^2. Other applications include sequential and parallel algorithms for computing the minimal and characteristic polynomials of a matrix, computing the rational Jordan form of a matrix, testing whether two matrices are similar, and matrix powering, all substantially faster than those previously known.
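For background on the entry above: the Frobenius (rational canonical) form is block diagonal with companion blocks of the invariant factors. The sketch below, a hypothetical helper rather than the paper's nearly-optimal algorithm, builds a single companion block and checks that its eigenvalues are the roots of the defining polynomial; the coefficient convention (low-order coefficients first) is an assumption of this example.

```python
import numpy as np

def companion(p):
    """Companion (Frobenius) block of the monic polynomial
    x^n + p[n-1]*x^(n-1) + ... + p[1]*x + p[0],
    given its low-order coefficients p[0..n-1]."""
    n = len(p)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # ones on the subdiagonal
    C[:, -1] = -np.asarray(p, dtype=float)
    return C

# Companion block of x^3 - 2x - 5 (coefficients p = [-5, -2, 0]):
C = companion([-5.0, -2.0, 0.0])
# Its characteristic polynomial is x^3 - 2x - 5, so its eigenvalues
# are exactly that polynomial's roots.
eigs = np.linalg.eigvals(C)
```

A general matrix is similar to a direct sum of such blocks; the paper's contribution is computing that form in nearly matrix-multiplication time.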
On the Time-Space Complexity of Geometric Elimination Procedures
, 1999
Abstract
Cited by 26 (17 self)
In [25] and [22] a new algorithmic concept was introduced for the symbolic solution of a zero-dimensional complete intersection polynomial equation system satisfying a certain generic smoothness condition. The main innovative point of this algorithmic concept consists in the introduction of a new geometric invariant, called the degree of the input system, and the proof that the most common elimination problems have time complexity which is polynomial in this degree and the length of the input.
Random Butterfly Transformations with Applications in Computational Linear Algebra
, 1995
Abstract
Cited by 21 (7 self)
Theory and practice of computational linear algebra differ over the issue of degeneracy. Block matrix decompositions are used heavily in theory, but less in practice, since even when a matrix is nondegenerate (has full rank) its block submatrices can be degenerate. The potential degeneracy of block submatrices can completely prevent practical use of block matrix algorithms. Gaussian elimination is an important example of an algorithm affected by the possibility of degeneracy. While the basic elimination procedure is simple to state and implement, it becomes more complicated with the addition of a pivoting procedure, which handles degenerate matrices having zeros on the diagonal. Pivoting can significantly complicate the algorithm, increase data movement, and reduce speed, particularly on high-performance computers. We propose a randomization scheme that preconditions an input matrix by multiplying it with random matrices, where this multiplication can be performed efficiently. At the e...
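A minimal sketch of the randomization idea in the abstract above: a nonsingular matrix with a zero pivot defeats pivot-free Gaussian elimination, but after multiplying by random matrices on both sides elimination goes through. Dense Gaussian random matrices are used here purely for illustration; the paper's butterfly matrices are structured so that the products are cheap to form.

```python
import numpy as np

rng = np.random.default_rng(0)

def lu_no_pivot(A):
    """Gaussian elimination without pivoting; raises on a zero pivot."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n):
        if abs(A[k, k]) < 1e-12:
            raise ZeroDivisionError(f"zero pivot at step {k}")
        A[k + 1:, k] /= A[k, k]                         # multipliers
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A    # unit-lower L (strict lower part) and U packed together

# Nonsingular, but elimination fails immediately on the zero pivot:
A = np.array([[0.0, 1.0], [1.0, 0.0]])
failed = False
try:
    lu_no_pivot(A)
except ZeroDivisionError:
    failed = True

# Precondition with random matrices: with probability 1 the product
# U A V has nonsingular leading blocks, so no pivoting is needed.
U = rng.standard_normal((2, 2))
V = rng.standard_normal((2, 2))
B = U @ A @ V
LU = lu_no_pivot(B)
```

After solving the preconditioned system one undoes the random transformations to recover the solution of the original one.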
The tortoise and the hare restart GMRES
 SIAM Review
Abstract
Cited by 14 (0 self)
When solving large nonsymmetric systems of linear equations with the restarted GMRES algorithm, one is inclined to select a relatively large restart parameter in the hope of mimicking the full GMRES process. Surprisingly, cases exist where small values of the restart parameter yield convergence in fewer iterations than larger values. Here, two simple examples are presented where GMRES(1) converges exactly in three iterations, while GMRES(2) stagnates. One of these examples reveals that GMRES(1) convergence can be extremely sensitive to small changes in the initial residual.
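To make the restart parameter concrete: GMRES(1) restarts after every step, so each step just minimizes the residual over one scalar. A minimal sketch follows, run on an illustrative symmetric positive definite system of my own choosing (the paper's stagnation examples are nonsymmetric and are not reproduced here).

```python
import numpy as np

def gmres1(A, b, x0, iters):
    """GMRES(1): restart after every step, so each step minimizes
    ||b - A(x + a*r)|| over the single scalar a."""
    x = x0.astype(float).copy()
    res = []
    for _ in range(iters):
        r = b - A @ x
        Ar = A @ r
        denom = Ar @ Ar
        if denom == 0.0:            # exact solution reached
            break
        a = (Ar @ r) / denom        # a == 0 would mean the method stagnates
        x = x + a * r
        res.append(np.linalg.norm(b - A @ x))
    return x, res

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, res = gmres1(A, b, np.zeros(2), 200)
```

On such definite systems the residual decreases monotonically; the paper's point is that for nonsymmetric A this cheap method can, counterintuitively, beat GMRES(2).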
A Parallel Algorithm For Computing The Extremal Eigenvalues Of Very Large Sparse Matrices
 Lecture Notes in Computer Science
, 1998
Abstract
Cited by 13 (4 self)
Quantum mechanics often gives rise to problems where one needs to find a few eigenvalues of very large sparse matrices. The size of the matrices is such that it is not possible to store them in main memory; instead they must be generated on the fly. In this paper the method of coordinate relaxation is applied to one class of such problems. A parallel algorithm based on graph coloring is proposed. Experimental results on a Cray Origin 2000 computer show that the algorithm converges fast and that it also scales well as more processors are applied. Comparisons show that the convergence of the presented algorithm is much faster on the given test problems than using ARPACK [10]. Key words. sparse matrix algorithms, eigenvalue computation, parallel computing, graph coloring, Cray Origin 2000 AMS subject classifications. 05C50, 05C85, 15A18, 65F15, 65F50, 65Y05, 65Y20 1. Introduction. Frequently problems in quantum mechanics lead to the computation of a small number of extremal eigenval...
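A sequential sketch of the coordinate relaxation idea, under the assumption of a symmetric matrix: along each coordinate direction e_i the Rayleigh quotient of x + t*e_i is minimized exactly, the optimal t being a root of a scalar quadratic. This is only the serial kernel; the paper's contribution is parallelizing such updates via graph coloring.

```python
import numpy as np

def coordinate_relaxation(A, sweeps):
    """Coordinate relaxation for the smallest eigenvalue of a symmetric
    matrix A: sweep over coordinates and, along each direction e_i,
    take the step t minimizing the Rayleigh quotient rho(x + t*e_i)."""
    n = A.shape[0]
    x = np.ones(n)
    for _ in range(sweeps):
        for i in range(n):
            Ax = A @ x
            xx, xAx = x @ x, x @ Ax
            # d/dt rho(x + t e_i) = 0  <=>  a*t^2 + b*t + c = 0
            a = A[i, i] * x[i] - Ax[i]
            b = A[i, i] * xx - xAx
            c = Ax[i] * xx - x[i] * xAx

            def rho(t):
                y = x.copy()
                y[i] += t
                return (y @ A @ y) / (y @ y)

            if abs(a) > 1e-30:
                d = np.sqrt(max(b * b - 4.0 * a * c, 0.0))
                cands = [0.0, (-b - d) / (2 * a), (-b + d) / (2 * a)]
            elif abs(b) > 1e-30:
                cands = [0.0, -c / b]
            else:
                cands = [0.0]
            # including t = 0 guarantees the quotient never increases
            x[i] += min(cands, key=rho)
    lam = (x @ A @ x) / (x @ x)
    return lam, x / np.linalg.norm(x)

# 1D discrete Laplacian: eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2)
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
lam, v = coordinate_relaxation(A, 200)
```

Each update touches a single coordinate, which is what makes the method attractive when the matrix is generated on the fly rather than stored.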
Extrapolation Techniques for Ill-Conditioned Linear Systems
 Numer. Math
, 1998
Abstract
Cited by 11 (4 self)
In this paper, the regularized solutions of an ill-conditioned system of linear equations are computed for several values of the regularization parameter λ. Then, these solutions are extrapolated at λ = 0 by various vector rational extrapolation techniques built for that purpose. These techniques are justified by an analysis of the regularized solutions based on the singular value decomposition and the generalized singular value decomposition. Numerical results illustrate the effectiveness of the procedures.
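The idea above can be illustrated with the simplest possible extrapolant. Assuming Tikhonov regularization and noise-free data, the error of x(λ) is O(λ), so a linear (Richardson-type) extrapolation to λ = 0 cancels the leading term; the paper itself uses more powerful vector rational extrapolation, which this sketch does not attempt.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized solution x(lam) = argmin ||A x - b||^2 + lam*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned test system with a known exact solution (noise-free b).
n = 6
A = np.vander(np.linspace(0.5, 1.5, n), n)   # Vandermonde: ill-conditioned
x_true = np.ones(n)
b = A @ x_true

lam = 1e-6
x1 = tikhonov(A, b, lam)
x2 = tikhonov(A, b, lam / 2)

# Error of x(lam) is O(lam), so 2*x(lam/2) - x(lam), the linear
# extrapolant at lam = 0, cancels the leading error term.
x_extrap = 2 * x2 - x1
```

With noisy data the extrapolation must be balanced against noise amplification, which is where the SVD-based analysis in the paper comes in.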
Preconditioning sparse nonsymmetric linear systems with the Sherman-Morrison formula
 SIAM J. SCI. COMPUT
, 2003
Abstract
Cited by 8 (2 self)
Let Ax = b be a large, sparse, nonsymmetric system of linear equations. A new sparse approximate inverse preconditioning technique for such a class of systems is proposed. We show how the matrix A0^(-1) − A^(-1), where A0 is a nonsingular matrix whose inverse is known or easy to compute, can be factorized in the form UΩV^T using the Sherman–Morrison formula. When this factorization process is done incompletely, an approximate factorization may be obtained and used as a preconditioner for Krylov iterative methods. For A0 = sI_n, where I_n is the identity matrix and s is a positive scalar, the existence of the preconditioner for M-matrices is proved. In addition, some numerical experiments obtained for a representative set of matrices are presented. Results show that our approach is comparable with other existing approximate inverse techniques.
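The identity the factorization above rests on is the rank-one Sherman-Morrison formula, (B + u v^T)^(-1) = B^(-1) − B^(-1)u v^T B^(-1) / (1 + v^T B^(-1) u). A quick numerical check of the identity (not the paper's incomplete-factorization preconditioner); v = u and an SPD diagonal B are chosen only so the denominator is guaranteed nonzero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = np.diag(np.arange(1.0, n + 1))       # SPD base with a known inverse
u = rng.standard_normal(n)
v = u                                     # keeps 1 + v^T B^{-1} u > 1

Binv = np.diag(1.0 / np.arange(1.0, n + 1))
denom = 1.0 + v @ Binv @ u                # nonzero <=> B + u v^T invertible
SM = Binv - np.outer(Binv @ u, v @ Binv) / denom

direct = np.linalg.inv(B + np.outer(u, v))
```

The paper applies this update repeatedly: writing A − A0 as a sum of rank-one terms yields the UΩV^T factorization of A0^(-1) − A^(-1), which is then truncated to get a sparse preconditioner.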
Variational Analysis Of Some Conjugate Gradient Methods
 East-West J. of Numer. Math
, 1989
Abstract
Cited by 8 (3 self)
A number of conjugate gradient methods are considered for a class of linear systems of real algebraic equations. This class includes all symmetric and certain special nonsymmetric problems, which give rise to three-term recursions. All the algorithms are characterized variationally. This makes it possible to derive error estimates systematically in terms of certain polynomial approximation problems. Bounds are obtained, which are functions of the extreme eigenvalues of the basic iteration operator. Key Words. linear systems, conjugate gradients, variational formulation, sparse matrices, error bounds AMS(MOS) subject classification. 65F10, 65F50, 65G99 1. Introduction. It is the purpose of this survey article to present some standard and not-so-standard conjugate gradient methods in a common framework. We consider iterative methods for solving n × n linear systems of real algebraic equations Ax = b. (1) We will focus our attention on general symmetric, not necessarily definite...
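As a concrete instance of the variational viewpoint above: the textbook conjugate gradient method, whose k-th iterate minimizes the A-norm of the error over the k-th Krylov subspace. This is a standard sketch for symmetric positive definite A, not a reproduction of the survey's more general framework.

```python
import numpy as np

def cg(A, b, iters):
    """Textbook conjugate gradients for symmetric positive definite A.
    The k-th iterate minimizes the A-norm of the error over the k-th
    Krylov subspace -- the variational characterization from which the
    polynomial error bounds follow."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-30:                 # converged
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b, 2)    # exact in at most n = 2 steps in exact arithmetic
```

Translating the minimization property into a polynomial approximation problem over the spectrum is exactly how bounds in terms of the extreme eigenvalues are derived.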
A Direct Projection Method For Sparse Linear Systems
 SIAM J. SCI. COMPUT
, 1995
Abstract
Cited by 8 (3 self)
An oblique projection method is adapted to solve large, sparse, unstructured systems of linear equations. This row-projection technique is a direct method which can be interpreted as an oblique Kaczmarz-type algorithm, and is also related to other standard solution methods. When a sparsity-preserving pivoting strategy is incorporated, it is demonstrated that the technique can be superior, in terms of both fill-in and arithmetic complexity, to more standard sparse algorithms based on Gaussian elimination. This is especially true for systems arising from stiff ordinary differential equation problems in chemical kinetics studies.
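For readers unfamiliar with the Kaczmarz connection mentioned above: the classical (iterative) Kaczmarz method cyclically projects the iterate onto the hyperplane of each equation. The paper's method is a direct, oblique-projection variant of this idea; the sketch below shows only the classical orthogonal version on a toy system.

```python
import numpy as np

def kaczmarz(A, b, sweeps):
    """Classical Kaczmarz method: cyclically project the iterate onto
    the hyperplane of each equation a_i^T x = b_i."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            ai = A[i]
            x = x + ((b[i] - ai @ x) / (ai @ ai)) * ai
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])    # solution (1, 2)
b = np.array([5.0, 5.0])
x = kaczmarz(A, b, 200)
```

Replacing the orthogonal projections with oblique ones, chosen so the process terminates exactly, is what turns this iteration into the direct method of the paper.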
GIANFRANCO CIMMINO’S CONTRIBUTIONS TO NUMERICAL MATHEMATICS
, 2004
Abstract
Cited by 7 (1 self)
Gianfranco Cimmino (1908–1989) authored several papers in the field of numerical analysis, and particularly in the area of matrix computations. His most important contribution in this field is the iterative method for solving linear algebraic systems that bears his name, published in 1938. This paper reviews Cimmino’s main contributions to numerical mathematics, together with subsequent developments inspired by his work. Some background information on Italian mathematics and on Mauro Picone’s Istituto Nazionale per le Applicazioni del Calcolo, where Cimmino’s early numerical work took place, is provided. The lasting importance of Cimmino’s work in various application areas is demonstrated by an analysis of citation patterns in the broad technical and scientific literature.
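The method the survey above discusses is easy to state: unlike Kaczmarz's sequential projections, Cimmino's 1938 iteration projects the current iterate onto all equation hyperplanes simultaneously and averages the projections, which is why it parallelizes so naturally. A minimal sketch with equal weights (one common formulation; weighting choices vary across the literature):

```python
import numpy as np

def cimmino(A, b, iters):
    """Cimmino's iteration: project the current iterate onto every
    hyperplane a_i^T x = b_i simultaneously and move to the average
    of the m projections."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms2 = (A * A).sum(axis=1)
    for _ in range(iters):
        resid = b - A @ x                    # residual of each equation
        x = x + (A.T @ (resid / row_norms2)) / m
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])       # solution (1, 2)
b = np.array([5.0, 5.0])
x = cimmino(A, b, 300)
```

Because every row is processed independently within a step, the method resurfaced decades later in parallel computing and tomographic image reconstruction, which accounts for much of the citation pattern the paper analyzes.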