Results 1–10 of 34
On efficient sparse integer matrix Smith normal form computations
, 2001
"... We present a new algorithm to compute the Integer Smith normal form of large sparse matrices. We reduce the computation of the Smith form to independent, and therefore parallel, computations modulo powers of wordsize primes. Consequently, the algorithm does not suffer from coefficient growth. W ..."
Abstract

Cited by 37 (15 self)
 Add to MetaCart
We present a new algorithm to compute the Integer Smith normal form of large sparse matrices. We reduce the computation of the Smith form to independent, and therefore parallel, computations modulo powers of wordsize primes. Consequently, the algorithm does not suffer from coefficient growth. We have implemented several variants of this algorithm (Elimination and/or BlackBox techniques) since practical performance depends strongly on the memory available. Our method has proven useful in algebraic topology for the computation of the homology of some large simplicial complexes.
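To make the object being computed concrete, here is a tiny sketch that obtains the invariant factors of an integer matrix from determinantal divisors (s_k = D_k / D_{k-1}, where D_k is the gcd of all k × k minors). The function names are mine, and this exponential-time dense method is only a readable specification of the Smith form, not the sparse modular algorithm the paper describes:

```python
from itertools import combinations
from math import gcd

def det(M):
    """Integer determinant by cofactor expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def smith_invariants(A):
    """Invariant factors of an integer matrix via determinantal divisors:
    s_k = D_k / D_{k-1}, where D_k is the gcd of all k x k minors.
    Exponential in the matrix size -- a specification of the Smith form,
    not the paper's sparse algorithm modulo prime powers."""
    m, n = len(A), len(A[0])
    divisors = [1]  # D_0 = 1
    for k in range(1, min(m, n) + 1):
        g = 0
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                g = gcd(g, det([[A[r][c] for c in cols] for r in rows]))
        if g == 0:  # all k x k minors vanish: rank reached
            break
        divisors.append(g)
    return [divisors[k] // divisors[k - 1] for k in range(1, len(divisors))]
```

For example, `smith_invariants([[2, 4], [6, 8]])` returns the invariant factors of that matrix, each dividing the next.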
Asymptotically Fast Computation of Hermite Normal Forms of Integer Matrices
 Proc. Int'l. Symp. on Symbolic and Algebraic Computation: ISSAC '96
, 1996
"... This paper presents a new algorithm for computing the Hermite normal form H of an A 2 ZZ n\Thetam of rank m together with a unimodular premultiplier matrix U such that UA = H. Our algorithm requires O~(m `\Gamma1 nM(m log jjAjj)) bit operations to produce both H and U . Here, jjAjj = max ij j ..."
Abstract

Cited by 36 (9 self)
 Add to MetaCart
This paper presents a new algorithm for computing the Hermite normal form H of an A ∈ Z^(n×m) of rank m together with a unimodular premultiplier matrix U such that UA = H. Our algorithm requires O~(m^(θ−1) n M(m log ‖A‖)) bit operations to produce both H and U. Here, ‖A‖ = max_ij |A_ij|, M(t) bit operations are sufficient to multiply two ⌈t⌉-bit integers, and θ is the exponent for matrix multiplication over rings: two m × m matrices over a ring R can be multiplied in O(m^θ) ring operations from R. The previously fastest algorithm, of Hafner & McCurley, requires O~(m^2 n M(m log ‖A‖)) bit operations to produce H, but does not produce a unimodular matrix U which satisfies UA = H. Previous methods require on the order of O~(n^3 M(m log ‖A‖)) bit operations to produce a U; our algorithm improves on this significantly in both a theoretical and practical sense. 1 Introduction. A fundamental notion for matrices over rings is left equivalence. Two n ×...
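As a toy illustration of the objects H and U above, here is a textbook extended-gcd elimination (function names mine). Intermediate entries in this naive version can grow large; controlling exactly that cost is the point of the paper's asymptotically fast method:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = x*a + y*b."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

def hermite_normal_form(A):
    """Row Hermite normal form H of an integer matrix, with a unimodular
    U such that U*A = H.  Naive elimination; no coefficient control."""
    m, n = len(A), len(A[0])
    H = [row[:] for row in A]
    U = [[int(i == j) for j in range(m)] for i in range(m)]
    row = 0
    for col in range(n):
        if row == m:
            break
        for r in range(row + 1, m):
            if H[r][col] == 0:
                continue
            a, b = H[row][col], H[r][col]
            g, x, y = extended_gcd(a, b)
            p, q = a // g, b // g
            # 2x2 row transform [[x, y], [-q, p]] has determinant
            # x*p + y*q = (x*a + y*b)/g = 1, hence is unimodular.
            for M in (H, U):
                M[row], M[r] = (
                    [x * u + y * v for u, v in zip(M[row], M[r])],
                    [-q * u + p * v for u, v in zip(M[row], M[r])],
                )
        if H[row][col] == 0:
            continue              # no pivot in this column
        if H[row][col] < 0:       # make the pivot positive
            H[row] = [-v for v in H[row]]
            U[row] = [-v for v in U[row]]
        for r in range(row):      # reduce entries above the pivot
            q = H[r][col] // H[row][col]
            H[r] = [u - q * v for u, v in zip(H[r], H[row])]
            U[r] = [u - q * v for u, v in zip(U[r], U[row])]
        row += 1
    return H, U
```

Applying it to `[[2, 4], [6, 8]]` yields an upper triangular H with positive pivots and reduced off-diagonal entries, together with a U of determinant ±1 satisfying U·A = H.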
Parallel algorithms for matrix normal forms
 Linear Algebra and its Applications 136
, 1990
"... Here we offer a new randomized parallel algorithm that determines the Smith normal form of a matrix with entries being univariate polynomials with coefficients in an arbitrary field. The algorithm has two important advantages over our previous one: the multipliers relating the Smith form to the inpu ..."
Abstract

Cited by 32 (2 self)
 Add to MetaCart
Here we offer a new randomized parallel algorithm that determines the Smith normal form of a matrix with entries being univariate polynomials with coefficients in an arbitrary field. The algorithm has two important advantages over our previous one: the multipliers relating the Smith form to the input matrix are computed, and the algorithm is probabilistic of Las Vegas type, i.e., always finds the correct answer. The Smith form algorithm is also a good sequential algorithm. Our algorithm reduces the problem of Smith form computation to two Hermite form computations. Thus the Smith form problem has complexity asymptotically that of the Hermite form problem. We also construct fast parallel algorithms for Jordan normal form and testing similarity of matrices. Both the similarity and nonsimilarity problems are in the complexity class RNC for the usual coefficient fields, i.e., they can be probabilistically decided in polylogarithmic time using polynomially many processors. 1. Introduction. The different normal forms of matrices (Hermite, Smith, and Jordan normal forms) are widely used in many different branches of science and engineering. Sequential algorithms for
On Lattice Reduction for Polynomial Matrices
 Journal of Symbolic Computation
, 2000
"... A simple algorithm for transformation to weak Popov form  essentially lattice reduction for polynomial matrices  is described and analyzed. The algorithm is adapted and applied to various tasks involving polynomial matrices: rank profile and determinant computation; unimodular triangular factori ..."
Abstract

Cited by 23 (1 self)
 Add to MetaCart
A simple algorithm for transformation to weak Popov form (essentially lattice reduction for polynomial matrices) is described and analyzed. The algorithm is adapted and applied to various tasks involving polynomial matrices: rank profile and determinant computation; unimodular triangular factorization; transformation to Hermite and Popov canonical form; rational and Diophantine linear system solving; short vector computation.
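A sketch of the simple row transformation the abstract refers to: whenever two rows share a pivot column (the rightmost entry of maximal degree), cancel the leading term of the higher-degree row with a shifted scalar multiple of the other. The representation (coefficient lists over Q) and all helper names are mine:

```python
from fractions import Fraction

def deg(p):
    """Degree of a polynomial given as a low-to-high coefficient list."""
    for i in range(len(p) - 1, -1, -1):
        if p[i]:
            return i
    return -1

def pivot(row):
    """Pivot of a row: (rightmost column attaining the row degree, degree)."""
    d = max(deg(p) for p in row)
    if d < 0:
        return None, -1
    return max(j for j, p in enumerate(row) if deg(p) == d), d

def weak_popov(M):
    """Reduce a matrix of polynomials over Q to weak Popov form (all row
    pivots in distinct columns) by repeated simple transformations."""
    M = [[list(map(Fraction, p)) for p in row] for row in M]
    while True:
        seen, clash = {}, None    # pivot column -> (row index, degree)
        for i, row in enumerate(M):
            j, d = pivot(row)
            if j is None:
                continue
            if j in seen:
                i2, d2 = seen[j]
                clash = (i, i2) if d >= d2 else (i2, i)
                break
            seen[j] = (i, d)
        if clash is None:
            return M
        hi, lo = clash            # reduce the higher-degree row by the other
        j, d_lo = pivot(M[lo])
        _, d_hi = pivot(M[hi])
        shift = d_hi - d_lo
        c = M[hi][j][d_hi] / M[lo][j][d_lo]
        for col in range(len(M[0])):
            p, q = M[hi][col], M[lo][col]
            p.extend([Fraction(0)] * (len(q) + shift - len(p)))
            for k, qc in enumerate(q):    # p -= c * x^shift * q
                p[k + shift] -= c * qc
```

For instance, on [[x, 1], [x², 0]] one transformation (row2 −= x·row1) moves the second row's pivot out of the first column.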
On Computing the Homology Type of a Triangulation
, 1994
"... :We analyze an algorithm for computing the homology type of a triangulation. By triangulation we mean a finite simplicial complex; its homology type is given by its homology groups (with integer coefficients). The algorithm could be used in computeraided design to tell whether two finiteelement me ..."
Abstract

Cited by 23 (0 self)
 Add to MetaCart
We analyze an algorithm for computing the homology type of a triangulation. By triangulation we mean a finite simplicial complex; its homology type is given by its homology groups (with integer coefficients). The algorithm could be used in computer-aided design to tell whether two finite-element meshes or Bézier-spline surfaces are of the same "topological type," and whether they can be embedded in R³. Homology computation is a purely combinatorial problem of considerable intrinsic interest. While the worst-case bounds we obtain for this algorithm are poor, we argue that many triangulations (in general) and virtually all triangulations in design are very "sparse," in a sense we make precise. We formalize this sparseness measure, and perform a probabilistic analysis of the sparse case to show that the expected running time of the algorithm is roughly quadratic in the geometric complexity (number of simplices) and linear in the dimension.
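To make "homology of a simplicial complex" concrete, here is a small sketch computing rational Betti numbers from ranks of boundary matrices (b_k = #k-simplices − rank ∂_k − rank ∂_{k+1}). Names are mine; note that working over Q discards torsion, whereas the paper computes integer homology groups (which is where Smith normal form is needed):

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix over Q by Gaussian elimination."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col]:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def boundary_matrix(k_simplices, km1_simplices):
    """Matrix of the boundary map: each k-simplex (a sorted vertex tuple)
    maps to the alternating sum of its (k-1)-dimensional faces."""
    index = {s: i for i, s in enumerate(km1_simplices)}
    M = [[0] * len(k_simplices) for _ in km1_simplices]
    for j, s in enumerate(k_simplices):
        for i in range(len(s)):
            face = s[:i] + s[i + 1:]
            M[index[face]][j] = (-1) ** i
    return M

def betti_numbers(by_dim):
    """Rational Betti numbers of a complex given as {dim: [simplices]}."""
    dims = sorted(by_dim)
    ranks = {k: rank(boundary_matrix(by_dim[k], by_dim[k - 1]))
             if k - 1 in by_dim else 0 for k in dims}
    return [len(by_dim[k]) - ranks[k] - ranks.get(k + 1, 0) for k in dims]
```

On the hollow triangle (three vertices, three edges) this yields b_0 = 1 and b_1 = 1, the homology of a circle; filling in the 2-simplex kills the 1-cycle.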
Computing Hermite and Smith Normal Forms of Triangular Integer Matrices
 Linear Algebra Appl
, 1996
"... This paper considers the problem of transforming a triangular integer input matrix to canonical Hermite and Smith normal form. We provide algorithms and prove deterministic running times for both transformation problems that are linear (hence optimal) in the matrix dimension. The algorithms are easi ..."
Abstract

Cited by 20 (4 self)
 Add to MetaCart
This paper considers the problem of transforming a triangular integer input matrix to canonical Hermite and Smith normal form. We provide algorithms and prove deterministic running times for both transformation problems that are linear (hence optimal) in the matrix dimension. The algorithms are easily implemented, assume standard integer multiplication, and admit excellent performance in practice. The results presented here lead to faster practical algorithms for computing the Hermite and Smith normal form of an arbitrary (non-triangular) integer input matrix. 1 Introduction. It follows from Hermite [Her51] that any m × n rank n integer matrix A can be transformed using a sequence of integer row operations to an upper triangular matrix H that has jth diagonal entry h_j positive for 1 ≤ j ≤ n and off-diagonal entries h̄_ij satisfying 0 ≤ h̄_ij < h_j for 1 ≤ i < j ≤ n. The matrix H, called the Hermite normal form of A, always exists and is unique. In this paper we consider the...
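The triangular-to-Hermite step itself is easy to state: normalize each pivot's sign, then reduce every entry above it modulo the pivot. A minimal sketch (naive in the number of word operations; the paper's contribution is a bit-complexity linear in the dimension, which this sketch does not achieve):

```python
def hermite_of_triangular(T):
    """Hermite normal form of a nonsingular upper triangular integer
    matrix: make each diagonal entry positive, then reduce every entry
    above it in its column modulo that diagonal entry."""
    n = len(T)
    H = [row[:] for row in T]
    for j in range(n):
        if H[j][j] < 0:                    # normalize the pivot sign
            H[j] = [-v for v in H[j]]
        for i in range(j):                 # reduce entries above pivot j
            q = H[i][j] // H[j][j]         # floor: 0 <= remainder < H[j][j]
            if q:
                H[i] = [a - q * b for a, b in zip(H[i], H[j])]
    return H
```

Row j has zeros in columns before j, so sweeping columns left to right never disturbs an already-reduced column.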
Toric Intersection Theory for Affine Root Counting
 Journal of Pure and Applied Algebra
, 1997
"... Given any polynomial system with xed monomial term structure, we give explicit formulae for the generic number of roots (over any algebraically closed eld) with specied coordinate vanishing restrictions. For the case of ane space minus an arbitrary union of coordinate hyperplanes, these formulae ..."
Abstract

Cited by 19 (7 self)
 Add to MetaCart
Given any polynomial system with fixed monomial term structure, we give explicit formulae for the generic number of roots (over any algebraically closed field) with specified coordinate vanishing restrictions. For the case of affine space minus an arbitrary union of coordinate hyperplanes, these formulae are also the tightest possible upper bounds on the number of isolated roots. We also characterize, in terms of sparse resultants, precisely when these upper bounds are attained. Finally, we reformulate and extend some of the prior combinatorial results of the author on which subsets of coefficients must be chosen generically for our formulae to be exact. Our underlying framework provides a new toric variety setting for computational intersection theory in affine space minus an arbitrary union of coordinate hyperplanes. We thus show that, at least for root counting, it is better to work in a naturally associated toric compactification instead of always resorting to products of projective spaces.
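For orientation, the classical torus-only bound that such affine root-counting formulae refine is Bernstein's theorem; a minimal statement in LaTeX (the notation P_i for the Newton polytope of f_i is mine, not taken from the paper):

```latex
% Bernstein's theorem: for f_1, ..., f_n with Newton polytopes
% P_1, ..., P_n, the number of isolated roots in the torus is bounded
% by the mixed volume, with equality for generic coefficients.
\[
  \#\left\{\, x \in (\overline{K}^{\,*})^{n} : f_1(x) = \cdots = f_n(x) = 0 \,\right\}
  \;\le\; \mathrm{MV}(P_1, \ldots, P_n)
\]
```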
Normal Forms for General Polynomial Matrices
, 2001
"... We present an algorithm for the computation of a shifted Popov Normal Form of a rectangular polynomial matrix. For speci c input shifts, we obtain methods for computing the matrix greatest common divisor of two matrix polynomials (in normal form) or such polynomial normal form computation as the ..."
Abstract

Cited by 16 (11 self)
 Add to MetaCart
We present an algorithm for the computation of a shifted Popov Normal Form of a rectangular polynomial matrix. For specific input shifts, we obtain methods for computing the matrix greatest common divisor of two matrix polynomials (in normal form) or such polynomial normal form computations as the classical Popov form and the Hermite Normal Form.
On the Hardness of the Shortest Vector Problem
, 1998
"... An ndimensional lattice is the set of all integral linear combinations of n linearly independent vectors in R^m. One of the most studied algorithmic problems on lattices is the shortest vector problem (SVP): given a lattice, find the shortest nonzero vector in it. We prove that the shortest vector ..."
Abstract

Cited by 12 (1 self)
 Add to MetaCart
An n-dimensional lattice is the set of all integral linear combinations of n linearly independent vectors in R^m. One of the most studied algorithmic problems on lattices is the shortest vector problem (SVP): given a lattice, find the shortest nonzero vector in it. We prove that the shortest vector problem is NP-hard (for randomized reductions) to approximate within some constant factor greater than 1 in any norm l_p (p > 1). In particular, we prove the NP-hardness of approximating SVP in the Euclidean norm within any factor less than sqrt 2. The same NP-hardness results hold for deterministic non-uniform reductions. A deterministic uniform reduction is also given under a reasonable number-theoretic conjecture concerning the distribution of smooth numbers. In proving the NP-hardness of SVP we develop a number of technical tools that might be of independent interest. In particular, a lattice packing is constructed with the property that the number of unit spheres contained in an n-dimensional ball of radius greater than 1 + sqrt 2 grows exponentially in n, and a new constructive version of Sauer's lemma (a combinatorial result somehow related to the notion of VC-dimension) is presented, considerably simplifying all previously known constructions.
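To make the problem statement concrete, here is a brute-force toy that enumerates integer combinations of a basis within a fixed coefficient box. The `bound` parameter is an arbitrary choice of mine, and vectors with coefficients outside the box are missed; this only illustrates the search space whose hardness the paper establishes, not a real SVP solver:

```python
from itertools import product
from math import inf

def shortest_vector(basis, bound=3):
    """Brute-force SVP toy: try all integer coefficient vectors with
    entries in [-bound, bound] and return a shortest nonzero lattice
    vector found, with its squared Euclidean norm.  Exponential in the
    lattice dimension, and incomplete if the true shortest vector needs
    coefficients outside the box."""
    n, m = len(basis), len(basis[0])
    best, best_norm2 = None, inf
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        if not any(coeffs):
            continue  # skip the zero vector
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(m)]
        norm2 = sum(x * x for x in v)
        if norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, best_norm2
```

For the basis [[1, 2], [0, 3]] the vector (1, −1) = 1·(1, 2) − 1·(0, 3) has squared norm 2, shorter than either basis vector.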