Results 1–10 of 14
On the complexity of polynomial matrix computations
 Proceedings of the 2003 International Symposium on Symbolic and Algebraic Computation
, 2003
A study of Coppersmith's block Wiedemann algorithm using matrix polynomials
 LMC-IMAG, Report #975-IM
, 1997
Abstract

Cited by 24 (7 self)
We analyse a randomized block algorithm proposed by Coppersmith for solving large sparse systems of linear equations, Aw = 0, over a finite field K = GF(q). It is a modification of an algorithm of Wiedemann. Coppersmith has given heuristic arguments to explain why the algorithm works, but it was an open question to prove that it produces a solution, with positive probability, for small finite fields, e.g. for K = GF(2). We answer this question nearly completely. The algorithm uses two random matrices X and Y of dimensions m × N and N × n. Over any finite field, we show how the parameters m and n of the algorithm may be tuned so that, for any input system, a solution is computed with high probability. Conversely, for certain particular input systems, we show that the conditions on the input parameters may be relaxed while still ensuring success. We also improve the probability bound of Kaltofen in the case of large cardinality fields. Lastly, for the sake of completeness of the...
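The underlying (non-block) Wiedemann idea can be sketched briefly: project the Krylov sequence A^i y with a random row vector x and recover the minimal linear recurrence of the projected sequence with Berlekamp-Massey. The sketch below is a minimal illustration over GF(2), not Coppersmith's block variant; the matrix size and random seed are arbitrary choices.

```python
import random

def matvec_gf2(A, v):
    """A @ v over GF(2)."""
    return [sum(a & x for a, x in zip(row, v)) % 2 for row in A]

def berlekamp_massey_gf2(s):
    """Connection polynomial c (c[0] = 1) of the shortest recurrence
    sum_j c[j] * s[i-j] = 0 (mod 2), valid for all i >= len(c) - 1."""
    n = len(s)
    c, b = [0] * (n + 1), [0] * (n + 1)
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]                       # discrepancy at position i
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t, shift = c[:], i - m
            for j in range(n + 1 - shift):
                c[shift + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return c[:L + 1]

random.seed(1)
N = 6
A = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
x = [random.randint(0, 1) for _ in range(N)]
y = [random.randint(0, 1) for _ in range(N)]

# Projected Krylov sequence s_i = x . A^i y; 2N terms suffice since the
# minimal polynomial of A has degree at most N.
s, v = [], y[:]
for _ in range(2 * N):
    s.append(sum(xi & vi for xi, vi in zip(x, v)) % 2)
    v = matvec_gf2(A, v)

c = berlekamp_massey_gf2(s)
# The recovered recurrence annihilates the whole sequence.
assert all(sum(c[j] & s[i - j] for j in range(len(c))) % 2 == 0
           for i in range(len(c) - 1, len(s)))
```

The block version replaces the single projections x and y with m × N and N × n random blocks, which is what allows success probability to be controlled even over GF(2).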
Fractionfree Computation of Matrix Rational Interpolants and Matrix GCDs
, 2000
Abstract

Cited by 17 (4 self)
We present a new set of algorithms for the computation of matrix rational interpolants and one-sided matrix greatest common divisors. Examples of these interpolants include Padé approximants, Newton-Padé, Padé-Hermite, simultaneous Padé and, more generally, M-Padé approximants, along with their matrix generalizations. The algorithms are fast and compute all solutions to a given problem. Solutions for all (possibly singular) subproblems along off-diagonal paths in a solution table are also computed by stepping around singular blocks on some path corresponding to the "closest" regular interpolation problems. The algorithms are suitable for computation in exact arithmetic domains, where growth of coefficients in intermediate computations is a central concern. This coefficient growth is avoided by using fraction-free methods. At the same time, the methods are fast in the sense that they are at least an order of magnitude faster than existing fraction-free methods for the corresponding problems. The methods make use of linear systems having a special striped Krylov structure. Key words: Hermite-Padé approximant, simultaneous Padé approximant, striped Krylov matrices, fraction-free arithmetic. Subject classifications: AMS(MOS) 65D05, 41A21; CR G.1.2.
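The fraction-free principle can be illustrated on a simpler, classical problem than the paper's interpolation setting: Bareiss elimination on an integer matrix, where every division is exact, so intermediate values stay in the ring and coefficient growth is controlled. The example matrix is arbitrary.

```python
def bareiss_det(M):
    """Determinant of a square integer matrix by fraction-free (Bareiss)
    elimination: every // below is an exact integer division."""
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                    # pivot by row swap
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r] = A[r], A[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Sylvester's identity guarantees prev divides this exactly.
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

print(bareiss_det([[3, 1, 4], [1, 5, 9], [2, 6, 5]]))   # → -90
```

Each intermediate entry is itself a minor of the input, which is what bounds its size; naive fraction-based elimination gives no such bound.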
Shifted Normal Forms of Polynomial Matrices
 Proceeding of International Symposium on Symbolic and Algebraic Computation, ISSAC’99
, 1999
Abstract

Cited by 16 (9 self)
In this paper we study the problem of transforming, via invertible column operations, a matrix polynomial into a variety of shifted forms. Examples of forms covered in our framework include a column reduced form, a triangular form, a Hermite normal form and a Popov normal form, along with their shifted counterparts. By obtaining degree bounds for unimodular multipliers of shifted Popov forms, we are able to embed the problem of computing a normal form into one of determining a shifted form of a minimal polynomial basis for an associated matrix polynomial. Shifted minimal polynomial bases can be computed via sigma bases [1, 2] and in Popov form via Mahler systems [5]. The latter method gives a fraction-free algorithm for computing matrix normal forms. Key words: Popov form, Hermite normal form. 1 Introduction. Matrix polynomial arithmetic is fundamental to many applications in science and engineering. It is encountered in linear systems theory [12], in determining minimal partial realization...
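To make one of the forms above concrete: a matrix polynomial is column reduced exactly when its leading column coefficient matrix is nonsingular. A small self-contained check, with polynomials as coefficient lists (lowest degree first) and example matrices chosen for illustration only:

```python
def deg(p):
    """Degree of a coefficient list (lowest degree first); -1 for zero."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def leading_col_matrix(P):
    """Column degrees and leading column coefficient matrix:
    entry (i, j) is the coefficient of x^cdeg[j] in P[i][j]."""
    rows, cols = len(P), len(P[0])
    cdeg = [max(deg(P[i][j]) for i in range(rows)) for j in range(cols)]
    L = [[P[i][j][cdeg[j]] if 0 <= cdeg[j] < len(P[i][j]) else 0
          for j in range(cols)] for i in range(rows)]
    return cdeg, L

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# P(x) = [[x^2 + 1, x], [x, 1]]: leading column matrix is singular,
# so P is NOT column reduced.
P = [[[1, 0, 1], [0, 1]], [[0, 1], [1]]]
cdegP, LP = leading_col_matrix(P)
print(cdegP, det2(LP))   # → [2, 1] 0

# Q(x) = [[x^2 + 1, 0], [x, 1]]: leading column matrix is the identity,
# so Q IS column reduced.
Q = [[[1, 0, 1], [0]], [[0, 1], [1]]]
cdegQ, LQ = leading_col_matrix(Q)
print(cdegQ, det2(LQ))   # → [2, 0] 1
```

The shifted variants in the paper weight each row degree by a shift vector before taking the column maxima; the check itself is otherwise the same.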
Normal Forms for General Polynomial Matrices
, 2001
Abstract

Cited by 15 (10 self)
We present an algorithm for the computation of a shifted Popov normal form of a rectangular polynomial matrix. For specific input shifts, we obtain methods for computing the matrix greatest common divisor of two matrix polynomials (in normal form) or such polynomial normal forms as the classical Popov form and the Hermite normal form.
Reliable Numerical Methods for Polynomial Matrix Triangularization
 IEEE Transactions on Automatic Control
, 1999
Abstract

Cited by 12 (6 self)
Numerical procedures are proposed for triangularizing polynomial matrices over the field of polynomial fractions and over the ring of polynomials. They are based on two standard polynomial techniques: Sylvester matrices and interpolation. In contrast to other triangularization methods, the algorithms described in this paper rely only on well-established, numerically reliable tools. They can also be used for greatest common divisor extraction, polynomial rank evaluation or polynomial null-space computation. Key words: triangularization, polynomial matrices, numerical methods.
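The Sylvester-matrix technique mentioned here reduces questions about polynomials to constant linear algebra. A minimal sketch (exact integer arithmetic rather than the paper's floating-point setting, with illustrative polynomials): the determinant of the Sylvester matrix is the resultant, which vanishes exactly when the two polynomials share a nontrivial factor.

```python
def sylvester(p, q):
    """Sylvester matrix of p and q, coefficient lists highest degree
    first (deg p = len(p) - 1, deg q = len(q) - 1)."""
    m, n = len(p) - 1, len(q) - 1
    S = [[0] * i + p + [0] * (n - 1 - i) for i in range(n)]
    S += [[0] * i + q + [0] * (m - 1 - i) for i in range(m)]
    return S

def det(M):
    """Laplace expansion: exponential, but exact and fine for the
    small matrices used here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * a * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j, a in enumerate(M[0]) if a)

# x^2 - 1 and x^2 - 3x + 2 share the factor x - 1: resultant 0.
print(det(sylvester([1, 0, -1], [1, -3, 2])))   # → 0
# x^2 + 1 and x - 1 are coprime over Q: nonzero resultant.
print(det(sylvester([1, 0, 1], [1, -1])))       # → 2
```

In the numerical setting the paper targets, one works with the rank and null space of such structured matrices (via SVD or QR) instead of exact determinants.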
A Linear Space Algorithm for Computing the Hermite Normal Form
 Proceedings ISSAC 2001, Lecture Notes in Computer Sci., 2146
, 2001
Abstract

Cited by 9 (1 self)
Computing the Hermite normal form of an n × n integer matrix using the best current algorithms typically requires O(n^3 log M) space, where M is a bound on the entries of the input matrix. Although polynomial in the input size (which is O(n^2 log M)), this space blowup can easily become a serious issue in practice when working on big integer matrices. In this paper we present a new algorithm for computing the Hermite normal form which uses only O(n^2 log M) space (i.e., essentially the same as the input size). When implemented using standard algorithms for integer and matrix multiplication, our algorithm has the same time complexity as the asymptotically fastest (but space-inefficient) algorithms. We also present a heuristic algorithm for HNF that achieves a substantial speedup when run on randomly generated input matrices.
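To make the object concrete, here is a naive textbook HNF for small integer matrices via unimodular row operations (row swaps and integer combinations). This sketch is not the paper's algorithm; its intermediate entries can blow up, which is exactly the growth that motivates the O(n^2 log M)-space method.

```python
def hnf(A):
    """Row-style Hermite normal form of an integer matrix: upper
    triangular, positive pivots, entries above each pivot reduced mod
    that pivot. Naive version, prone to intermediate growth."""
    H = [row[:] for row in A]
    mrows, ncols = len(H), len(H[0])
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, mrows) if H[i][c] != 0), None)
        if piv is None:
            continue
        H[r], H[piv] = H[piv], H[r]
        for i in range(r + 1, mrows):        # Euclid on column entries
            while H[i][c] != 0:
                q = H[r][c] // H[i][c]
                H[r], H[i] = H[i], [a - q * b for a, b in zip(H[r], H[i])]
        if H[r][c] < 0:                      # normalize pivot sign
            H[r] = [-a for a in H[r]]
        for i in range(r):                   # reduce entries above pivot
            q = H[i][c] // H[r][c]
            H[i] = [a - q * b for a, b in zip(H[i], H[r])]
        r += 1
    return H

print(hnf([[4, 4], [6, 8]]))   # → [[2, 0], [0, 4]]
```

Note that the determinant is preserved up to sign (here 2 · 4 = |det| = 8), a quick sanity check for any HNF routine.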
Algorithms for Normal Forms for Matrices of Polynomials and Ore Polynomials
, 2003
Abstract

Cited by 6 (3 self)
In this thesis we study algorithms for computing normal forms for matrices of Ore polynomials while controlling coefficient growth. By formulating row reduction as a linear algebra problem, we obtain a fraction-free algorithm for row reduction for matrices of Ore polynomials. The algorithm allows us to compute the rank and a basis of the left nullspace of the input matrix. When the input is restricted to matrices of shift polynomials and ordinary polynomials, we obtain fraction-free algorithms for computing row-reduced forms and weak Popov forms. These algorithms can be used to compute a greatest common right divisor and a least common left multiple of such matrices. Our fraction-free row reduction algorithm can be viewed as a generalization of subresultant algorithms. The linear algebra formulation allows us to obtain bounds on the size of the intermediate results and to analyze the complexity of our algorithms. We then make use of the fraction-free algorithm as a basis to formulate modular algorithms for computing a row-reduced form, a weak Popov form, and the Popov form of a polynomial matrix. By examining the linear algebra formulation, we develop criteria for detecting unlucky homomorphisms and determining the number of homomorphic images required.
Fast parallel algorithms for matrix reduction to normal forms
 IN ENGINEERING, COMMUNICATION AND CONTROL
, 1997
Abstract

Cited by 4 (2 self)
We investigate fast parallel algorithms to compute normal forms of matrices and the corresponding transformations. Given a matrix B in M_n(K), where K is an arbitrary commutative field, we establish that computing a similarity transformation P such that F = P⁻¹BP is in Frobenius normal form can be done in NC²_K. Using a reduction to this first problem, a similar fact is then proved for the Smith normal form S(x) of a polynomial matrix A(x) in M_n(K[x]); computing unimodular matrices U(x) and V(x) such that S(x) = U(x)A(x)V(x) can be done in NC²_K. We get that over concrete fields such as the rationals, these problems are in NC². Using our previous results we have thus established that the problems of computing transformations over a field extension for the Jordan normal form, and transformations over the input field for the Frobenius and the Smith normal form, are all in NC²_K. As a corollary we establish a polynomial-time sequential algorithm to compute transformations for the Smith form over K[x].
Triangular Factorization of Polynomial Matrices
, 2000
Abstract

Cited by 2 (1 self)
...ultiple of a determinant. The modular arithmetic avoids exponential growth of intermediate expressions ([4, 5, 7, 10, 17]). • Coefficient methods for polynomial matrices, translating the Hermite problem to a problem over the coefficient ring ([2, 13, 18]). Some of these algorithms use fast matrix and/or fast polynomial arithmetic to improve their complexity. The best known complexity result is O(n^θ (nd)^(1+ε)) field operations, where θ is the exponent for matrix multiplication ([17]). The algorithm we present here belongs to the first class. Unlike most methods in this class, we do not follow the greedy approach of Gaussian elimination to triangularize the input matrix. By using lattice basis reduction à la the Popov form (see below) we can guarantee good bounds on the degrees of intermediate polynomials; an amortized analysis establishes the worst case...