Results 1–10 of 28
On The Complexity Of Computing Determinants
 Computational Complexity
, 2001
Abstract

Cited by 63 (21 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^{3.2+o(1)} log‖A‖^{1+o(1)}) and (n^{2.697263} log‖A‖^{1+o(1)}) bit operations; here ‖A‖ denotes the largest entry in absolute value and the exponent adjustment by "+o(1)" captures additional factors C1 (log n)^{C2} (log log ‖A‖)^{C3} for positive real constants C1, C2, C3. The bit complexity (n^{3.2+o(1)}) results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^{3.2+o(1)} and O(n^{2.697263}) ring additions, subtractions and multiplications.
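The division-free setting in the second part can be illustrated with the classical Berkowitz algorithm — not this paper's asymptotically faster method, but a standard O(n^4) computation of the characteristic polynomial that uses only ring additions, subtractions and multiplications. A minimal Python sketch:

```python
def berkowitz_charpoly(A):
    """Coefficients of det(xI - A), leading coefficient first, using only
    ring +, -, * (no divisions): Berkowitz's algorithm."""
    n = len(A)
    p = [1, -A[0][0]]  # characteristic polynomial of the 1x1 leading block
    for r in range(2, n + 1):
        M = [row[:r - 1] for row in A[:r - 1]]   # leading (r-1)x(r-1) block
        R = A[r - 1][:r - 1]                     # row below it
        C = [A[i][r - 1] for i in range(r - 1)]  # column to its right
        a = A[r - 1][r - 1]
        # s_k = R . M^k . C for k = 0 .. r-2, computed without divisions
        s, v = [], C[:]
        for _ in range(r - 1):
            s.append(sum(R[i] * v[i] for i in range(r - 1)))
            v = [sum(M[i][j] * v[j] for j in range(r - 1)) for i in range(r - 1)]
        col = [1, -a] + [-x for x in s]  # first column of the Toeplitz matrix
        q = [0] * (r + 1)                # q = T . p (lower-triangular Toeplitz)
        for i in range(r + 1):
            for j in range(min(i + 1, r)):
                q[i] += col[i - j] * p[j]
        p = q
    return p  # det(A) = (-1)^n * p[n]
```

For A = [[1, 2], [3, 4]] this returns [1, -5, -2], i.e. x^2 - 5x - 2, so det(A) = (-1)^2 · (-2) = -2.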
LinBox: A Generic Library For Exact Linear Algebra
, 2002
Abstract

Cited by 39 (12 self)
Figure 1: Black box design. The LinBox black box matrix archetype is simpler than the field archetype because the design constraints are less stringent. As with the field type, we need a common object interface to describe how algorithms are to access black box matrices, but it only requires functions to access the matrix's dimensions and to apply the matrix or its transpose to a vector. Thus our black box matrix archetype is simply an abstract class, and all actual black box matrices are subclasses of the archetype class. We note that the overhead involved with this inheritance mechanism is negligible in comparison with the execution time of the methods, unlike for our field element types.
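LinBox itself is C++; the interface contract described here — dimensions plus apply/apply-transpose, nothing else — can be sketched in Python (the names below are illustrative, not LinBox's actual API):

```python
from abc import ABC, abstractmethod

class BlackBoxMatrix(ABC):
    """A matrix known only through matrix-vector products,
    mirroring the black box archetype described above."""
    @abstractmethod
    def rowdim(self): ...
    @abstractmethod
    def coldim(self): ...
    @abstractmethod
    def apply(self, v):            # returns A @ v
        ...
    @abstractmethod
    def apply_transpose(self, v):  # returns A^T @ v
        ...

class Diagonal(BlackBoxMatrix):
    """O(n)-storage example: a diagonal matrix as a black box."""
    def __init__(self, d):
        self.d = d
    def rowdim(self):
        return len(self.d)
    def coldim(self):
        return len(self.d)
    def apply(self, v):
        return [di * vi for di, vi in zip(self.d, v)]
    def apply_transpose(self, v):
        return self.apply(v)  # diagonal matrices are symmetric

# An algorithm written against the interface never sees the entries:
A = Diagonal([1, 2, 3])
print(A.apply([1, 1, 1]))  # -> [1, 2, 3]
```

The design point carries over directly: iterative solvers (Wiedemann, Lanczos) only ever call apply, so a sparse or structured matrix costs O(n) memory instead of O(n^2).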
Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix
, 2005
Abstract

Cited by 20 (4 self)
We reduce the problem of computing the rank and a nullspace basis of a univariate polynomial matrix to polynomial matrix multiplication. For an input n×n matrix of degree d over a field K we give a rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d. If the latter multiplication is done in MM(n, d) = O˜(n^ω d) operations, with ω the exponent of matrix multiplication over K, then the algorithm uses O˜(MM(n, d)) operations in K. For m×n matrices of rank r and degree d, the cost expression is O˜(nmr^{ω−2} d). The soft-O notation O˜ indicates some missing logarithmic factors. The method is randomized with Las Vegas certification. We achieve our results in part through a combination of matrix Hensel high-order lifting and matrix minimal fraction reconstruction, and through the computation of minimal or small degree vectors in the nullspace seen as a K[x]-module.
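As a contrast to the lifting-based algorithm, the simplest Monte Carlo approach to the rank alone: the rank of a polynomial matrix over K(x) equals its rank after evaluating x at a random field element, except at unlucky points (zeros of some minor). A Python sketch over Z/pZ (function names and constants are illustrative):

```python
import random

def eval_poly(coeffs, x, p):
    """Evaluate a polynomial (coefficient list, constant term first) at x mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def rank_mod_p(M, p):
    """Rank of a scalar matrix over Z/pZ by Gaussian elimination."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)
        for r in range(rows):
            if r != rank and M[r][c] % p:
                f = M[r][c] * inv % p
                M[r] = [(M[r][j] - f * M[rank][j]) % p for j in range(cols)]
        rank += 1
    return rank

def polymatrix_rank(P, p=10007, trials=3, seed=0):
    """Monte Carlo rank of a matrix of polynomials (each entry a coeff list)."""
    rng = random.Random(seed)
    best = 0
    for _ in range(trials):  # evaluation can only underestimate the rank
        x = rng.randrange(1, p)
        best = max(best, rank_mod_p([[eval_poly(e, x, p) for e in row] for row in P], p))
    return best

# [[x, 2x], [2x, 4x]] has rank 1; [[1, 0], [0, 1+x]] has rank 2
print(polymatrix_rank([[[0, 1], [0, 2]], [[0, 2], [0, 4]]]))  # -> 1
```

This is one-sided in the same sense as the paper's randomization: an evaluation never overestimates the rank, and a random point underestimates it only with small probability.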
Efficient computation of the characteristic polynomial
 Proceedings of the 2005 International Symposium on Symbolic and Algebraic Computation
, 2005
Abstract

Cited by 18 (13 self)
We deal with the computation of the characteristic polynomial of dense matrices over word-size finite fields and over the integers. We first present two algorithms for finite fields: one is based on Krylov iterates and Gaussian elimination. We compare it to an improvement of the second algorithm of Keller-Gehrig. Then we show that a generalization of Keller-Gehrig's third algorithm could improve both complexity and computational time. We use these results as a basis for the computation of the characteristic polynomial of integer matrices. We first use early termination and Chinese remaindering for dense matrices. Then a probabilistic approach, based on the integer minimal polynomial and Hensel factorization, is particularly well suited to sparse and/or structured matrices.
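The Krylov-plus-elimination idea can be sketched on the simpler subproblem of the minimal polynomial of a vector: build the iterates v, Av, A²v, ... and use Gaussian elimination to detect the first linear dependence. A toy Python version over Z/pZ (not the blocked algorithm of the paper; helper names are illustrative):

```python
def solve_mod(cols, b, p):
    """Solve sum_j x_j * cols[j] = b over Z/pZ; return x or None."""
    n, k = len(b), len(cols)
    aug = [[cols[j][i] % p for j in range(k)] + [b[i] % p] for i in range(n)]
    piv_cols, row = [], 0
    for c in range(k):
        piv = next((r for r in range(row, n) if aug[r][c]), None)
        if piv is None:
            continue
        aug[row], aug[piv] = aug[piv], aug[row]
        inv = pow(aug[row][c], p - 2, p)
        aug[row] = [a * inv % p for a in aug[row]]
        for r in range(n):
            if r != row and aug[r][c]:
                f = aug[r][c]
                aug[r] = [(a - f * b2) % p for a, b2 in zip(aug[r], aug[row])]
        piv_cols.append(c)
        row += 1
    if any(aug[r][k] for r in range(row, n)):  # inconsistent system
        return None
    x = [0] * k
    for r, c in enumerate(piv_cols):
        x[c] = aug[r][k]
    return x

def minpoly_of_vector(A, v, p):
    """Minimal polynomial of v w.r.t. A over Z/pZ, leading coeff first."""
    n = len(A)
    its = [v[:]]
    for _ in range(n):  # Krylov iterates v, Av, ..., A^n v
        w = its[-1]
        its.append([sum(A[i][j] * w[j] for j in range(n)) % p for i in range(n)])
    for k in range(1, n + 1):  # first k with A^k v dependent on earlier iterates
        c = solve_mod(its[:k], its[k], p)
        if c is not None:
            return [1] + [(-c[k - 1 - i]) % p for i in range(k)]
    return None

# A = diag(2, 3) mod 5: minimal polynomial (x-2)(x-3) = x^2 + 1 (mod 5)
print(minpoly_of_vector([[2, 0], [0, 3]], [1, 1], 5))  # -> [1, 0, 1]
```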
An optimal bloom filter replacement based on matrix solving
 In CSR
, 2009
Abstract

Cited by 18 (0 self)
We suggest a method for holding a dictionary data structure, which maps keys to values, in the spirit of Bloom filters. The space requirements of the dictionary we suggest are much smaller than those of a hash table. We allow storing n keys, each mapped to a value which is a string of k bits. Our suggested method requires nk + o(n) bits of space to store the dictionary, O(n) time to produce the data structure, and allows answering a membership query in O(1) memory probes. The dictionary size does not depend on the size of the keys. However, reducing the space requirements of the data structure comes at a certain cost: our dictionary has a small probability of a one-sided error. When attempting to obtain the value for a key that is stored in the dictionary we always get the correct answer. However, when testing for membership of an element that is not stored in the dictionary, we may get an incorrect answer, and when requesting the value of such an element we may get a certain random value. Our method is based on solving equations in GF(2^k) and using several hash functions. Another significant advantage of our suggested method is that we do not require sophisticated hash functions; we only require pairwise independent hash functions. We also suggest a data structure that requires only nk bits of space, has O(n^2) preprocessing time, and has O(log n) query time; however, this data structure requires uniform hash functions. In order to replace a Bloom filter of n elements with an error probability of 2^−k, we require nk + o(n) memory bits, O(1) query time, O(n) preprocessing time, and only pairwise independent hash functions. Even the most advanced previously known Bloom filter would require nk + O(n) space and uniform hash functions, so our method is significantly less space-consuming, especially when k is small. Our suggested dictionary can replace Bloom filters, and has many applications. A few application examples are dictionaries for storing bad passwords, differential files in databases, Internet caching and distributed storage systems.
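For k = 1 the construction reduces to linear algebra over GF(2): each key probes a few table positions, and the table bits are chosen so that the XOR of the probed cells equals the key's value. A sketch of that core idea (the hash choice, probe count, and retry-on-failure policy are illustrative, not the paper's exact scheme):

```python
import random

PROBES = 3  # probe positions per key (illustrative constant)

def _probe_mask(key, m, seed):
    """Positions probed by a key, folded into an m-bit mask (illustrative hashes)."""
    r = random.Random(f"{seed}:{key}")
    mask = 0
    for _ in range(PROBES):
        mask ^= 1 << r.randrange(m)  # XOR: duplicate probes cancel consistently
    return mask

def build_table(items, m, seed=1):
    """Solve, over GF(2), one equation per key: XOR of probed cells == value bit.
    Returns the m-bit table, or None if this seed gives an inconsistent system."""
    pivots = {}  # pivot bit position -> reduced (mask, bit) row
    for key, bit in items.items():
        mask = _probe_mask(key, m, seed)
        while mask:
            top = mask.bit_length() - 1
            if top not in pivots:
                pivots[top] = (mask, bit)
                break
            pm, pb = pivots[top]
            mask ^= pm
            bit ^= pb
        else:
            if bit:
                return None  # unlucky hashes; retry with another seed
    table = [0] * m
    for top in sorted(pivots):  # back-substitute, lowest pivot first
        mask, bit = pivots[top]
        acc, rest = bit, mask ^ (1 << top)
        while rest:
            low = rest & -rest
            acc ^= table[low.bit_length() - 1]
            rest ^= low
        table[top] = acc
    return table

def query(table, key, seed=1):
    """XOR of the probed cells; always correct for stored keys."""
    acc, rest = 0, _probe_mask(key, len(table), seed)
    while rest:
        low = rest & -rest
        acc ^= table[low.bit_length() - 1]
        rest ^= low
    return acc
```

Stored keys always read back correctly because their equations are satisfied exactly; a key that was never stored just XORs arbitrary cells, giving the one-sided error described above.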
An Output-Sensitive Variant of the Baby Steps/Giant Steps Determinant Algorithm
, 2001
"... This paper provides an adaptive version of the unblocked baby steps/giant steps algorithm [20, Section 2]. The result is most easily stated when b # where # is the determinant to be computed and # with 1 is not known. Note that by Hadamard's bound # # n(b + log 2 (n)/2), so # = ..."
Abstract

Cited by 14 (2 self)
This paper provides an adaptive version of the unblocked baby steps/giant steps algorithm [20, Section 2]. The result is most easily stated in terms of the determinant Δ to be computed and the bit length b of the matrix entries, where the size of Δ is not known in advance. Note that by Hadamard's bound log2|Δ| ≤ n(b + log2(n)/2), so the stated complexity covers the worst case. We describe a Monte Carlo algorithm that produces Δ in a number of bit operations that depends on the size of Δ rather than on the worst-case bound, again with standard matrix arithmetic. The corresponding bit complexity of the early termination Gaussian elimination method is always more, as is that of the algorithm by [10].
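The output sensitivity can be illustrated with the standard early-termination Chinese remaindering approach (a simplification, not this paper's baby steps/giant steps method): compute det A modulo successive primes and stop once the reconstructed value stabilizes, so a small determinant finishes after a few primes while Hadamard's bound caps the count in the worst case.

```python
from math import isqrt

def det_mod_p(A, p):
    """Determinant over Z/pZ by Gaussian elimination."""
    M = [[x % p for x in row] for row in A]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det % p  # row swap flips the sign
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            M[r] = [(a - f * b) % p for a, b in zip(M[r], M[c])]
    return det

def primes_from(start):
    n = start
    while True:
        n += 1
        if all(n % q for q in range(2, isqrt(n) + 1)):
            yield n

def det_early_termination(A, stable=2):
    """Exact integer determinant by CRT over growing prime products, stopping
    once the symmetric residue is unchanged for `stable` extra primes."""
    res, mod = 0, 1
    prev, hits = None, 0
    for p in primes_from(10 ** 6):
        d = det_mod_p(A, p)
        t = (d - res) * pow(mod, -1, p) % p          # CRT lift to mod*p
        res, mod = res + mod * t, mod * p
        cand = res if 2 * res <= mod else res - mod  # symmetric residue
        if cand == prev:
            hits += 1
            if hits >= stable:
                return cand  # heuristic stop; Hadamard's bound caps the prime count
        else:
            prev, hits = cand, 0
```

For an n×n matrix with b-bit entries, Hadamard's bound log2|Δ| ≤ n(b + log2(n)/2) limits how many primes can ever be needed; the early-termination test is what makes the cost track the actual size of Δ.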
Computation of Discrete Logarithms in F_{2^607}
"... We describe in this article how we have been able to extend the record for computations of discrete logarithms in characteristic 2 from the previous record over F 2 503 to a newer mark of F 2 607 , using Coppersmith's algorithm. This has been made possible by several practical improvements ..."
Abstract

Cited by 10 (0 self)
We describe in this article how we have been able to extend the record for computations of discrete logarithms in characteristic 2 from the previous record over F_{2^503} to a newer mark of F_{2^607}, using Coppersmith's algorithm. This has been made possible by several practical improvements to the algorithm. Although the computations have been carried out on fairly standard hardware, our opinion is that we are nearing the current limits of the manageable sizes for this algorithm, and that going substantially further will require deeper improvements to the method.
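Coppersmith's index-calculus algorithm is far too involved for a snippet, but the generic baby-step/giant-step method — a different, exponential-time algorithm — shows concretely what "computing a discrete logarithm" means:

```python
from math import isqrt

def bsgs_dlog(g, h, p, order):
    """Smallest x with g^x = h (mod p), where g has the given multiplicative
    order. Generic O(sqrt(order)) baby-step/giant-step; NOT Coppersmith's
    subexponential index-calculus method."""
    m = isqrt(order) + 1
    baby = {}
    e = 1
    for j in range(m):          # baby steps: tabulate g^j
        baby.setdefault(e, j)
        e = e * g % p
    giant = pow(g, -m, p)       # g^(-m) mod p
    e = h
    for i in range(m):          # giant steps: h * g^(-i*m)
        if e in baby:
            return (i * m + baby[e]) % order
        e = e * giant % p
    return None                 # h is not in the subgroup generated by g

# 2 generates the multiplicative group mod 101 (order 100)
print(bsgs_dlog(2, pow(2, 57, 101), 101, 100))  # -> 57
```

The contrast is the point of the record computation: a generic method needs ~2^303 steps for F_{2^607}, which is why index-calculus methods such as Coppersmith's are the only feasible route at these sizes.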
Computing the Rank of Large Sparse Matrices over Finite Fields
Abstract

Cited by 7 (2 self)
We want to achieve efficient exact computations, such as the rank, of sparse matrices over finite fields. We therefore compare the practical behaviors, on a wide range of sparse matrices, of the deterministic Gaussian elimination technique, using reordering heuristics, with the probabilistic, black-box Wiedemann algorithm. Indeed, we prove here that the latter is the fastest iterative variant of the Krylov methods to compute the minimal polynomial or the rank of a sparse matrix.
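The heart of Wiedemann's method: project the Krylov sequence to scalars s_i = u·A^i·v and recover its minimal linear recurrence with Berlekamp–Massey; with high probability (for random u, v) this recurrence is the minimal polynomial of A. A scalar Python sketch over Z/pZ — the unblocked textbook version, without the preconditioning needed for rank:

```python
def berlekamp_massey(s, p):
    """Shortest linear recurrence for sequence s over Z/pZ. Returns (C, L)
    with C[0] = 1 and sum_{i=0..L} C[i] * s[n-i] = 0 for L <= n < len(s)."""
    C, B = [1], [1]
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        d = s[n] % p                       # discrepancy
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p
        T = C[:]
        C = C + [0] * (len(B) + m - len(C))
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % p
        if 2 * L <= n:                     # recurrence length must grow
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

def wiedemann_minpoly(A, u, v, p):
    """Candidate minimal polynomial of A from the projected Krylov sequence;
    x^L + C[1]x^(L-1) + ... + C[L], encoded by the returned (C, L)."""
    n = len(A)
    s, w = [], v[:]
    for _ in range(2 * n):  # s_i = u . A^i v; 2n terms suffice
        s.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
        w = [sum(A[i][j] * w[j] for j in range(n)) % p for i in range(n)]
    return berlekamp_massey(s, p)

# A = diag(2, 3): minimal polynomial x^2 - 5x + 6, i.e. s_n = 5 s_{n-1} - 6 s_{n-2}
C, L = wiedemann_minpoly([[2, 0], [0, 3]], [1, 1], [1, 1], 10007)
print(L, C[:3])  # -> 2 [1, 10002, 6]
```

This is why the black-box comparison in the abstract makes sense: the only access to A is the matrix-vector product inside wiedemann_minpoly, so a sparse A costs O(n) per iteration with no fill-in, unlike elimination.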