Results 1–10 of 45
A Gröbner free alternative for polynomial system solving
 Journal of Complexity
, 2001
Abstract

Cited by 80 (16 self)
Given a system of polynomial equations and inequations with coefficients in the field of rational numbers, we show how to compute a geometric resolution of the set of common roots of the system over the field of complex numbers. A geometric resolution consists of a primitive element of the algebraic extension defined by the set of roots, its minimal polynomial and the parametrizations of the coordinates. Such a representation of the solutions has a long history which goes back to Leopold Kronecker and has been revisited many times in computer algebra. We introduce a new generation of probabilistic algorithms where all the computations use only univariate or bivariate polynomials. We give a new codification of the set of solutions of a positive dimensional algebraic variety relying on a new global version of Newton’s iterator. Roughly speaking, the complexity of our algorithm is polynomial in some kind of degree of the system, in its height, and linear in the complexity of evaluation.
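To make the representation concrete, here is a toy illustration (my own example, not the paper's probabilistic algorithm) of the encoding the abstract describes: for the zero-dimensional system {x + y = 3, xy = 2}, with roots (1, 2) and (2, 1), the primitive element u = x separates the roots, its minimal polynomial is q(u) = u² − 3u + 2, and the coordinates are parametrized by x = u, y = 3 − u.

```python
# Toy sketch of a geometric resolution for {x + y = 3, x*y = 2}:
# one univariate minimal polynomial plus parametrizations of the coordinates.
import numpy as np

q = [1, -3, 2]                 # minimal polynomial of the primitive element u
roots_u = np.roots(q)          # the values of u, one per root of the system

for u in sorted(roots_u.real):
    x, y = u, 3 - u            # parametrization of the coordinates in terms of u
    assert abs(x + y - 3) < 1e-9 and abs(x * y - 2) < 1e-9
```

The point of the encoding is that the whole solution set is carried by univariate data (q and the parametrizations), which is what lets the paper's algorithms work with only univariate or bivariate polynomials.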
Nearly Optimal Algorithms For Canonical Matrix Forms
, 1993
Abstract

Cited by 54 (11 self)
A Las Vegas type probabilistic algorithm is presented for finding the Frobenius canonical form of an n × n matrix T over any field K. The algorithm requires O~(MM(n)) = MM(n)·(log n)^O(1) operations in K, where O(MM(n)) operations in K are sufficient to multiply two n × n matrices over K. This nearly matches the lower bound of Ω(MM(n)) operations in K for this problem, and improves on the O(n^4) operations in K required by the previously best known algorithms. We also demonstrate a fast parallel implementation of our algorithm for the Frobenius form, which is processor-efficient on a PRAM. As an application we give an algorithm to evaluate a polynomial g(x) in K[x] at T which requires only O~(MM(n)) operations in K when deg g < n^2. Other applications include sequential and parallel algorithms for computing the minimal and characteristic polynomials of a matrix, the rational Jordan form of a matrix, for testing whether two matrices are similar, and for matrix powering, all of which are substantially faster than those previously known.
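For context on the g(T) application, the naive baseline the paper improves on is Horner's rule, which costs one matrix product per coefficient of g. A minimal sketch (my own baseline code, not the paper's O~(MM(n)) method):

```python
# Baseline evaluation of g(T) by Horner's rule: deg(g) matrix multiplications.
import numpy as np

def poly_at_matrix(coeffs, T):
    """Evaluate g(T) where coeffs = [c_k, ..., c_1, c_0], highest degree first."""
    n = T.shape[0]
    result = np.zeros((n, n))
    for c in coeffs:
        result = result @ T + c * np.eye(n)   # Horner step: result <- result*T + c*I
    return result

T = np.array([[2.0, 1.0], [0.0, 3.0]])
# g(x) = x^2 - 5x + 6 = (x - 2)(x - 3) is the characteristic polynomial of T,
# so g(T) must vanish by the Cayley-Hamilton theorem.
assert np.allclose(poly_at_matrix([1.0, -5.0, 6.0], T), 0)
```

Getting the same result in only O~(MM(n)) operations for deg g up to n² is exactly the speedup the abstract claims.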
Greatest Common Divisors of Polynomials Given by Straight-Line Programs
 J. ACM
, 1988
Abstract

Cited by 51 (17 self)
Algorithms on multivariate polynomials represented by straight-line programs are developed. First it is shown that most algebraic algorithms can be probabilistically applied to data that is given by a straight-line computation. Testing such rational numeric data for zero, for instance, is facilitated by random evaluations modulo random prime numbers. Then auxiliary algorithms are constructed that determine the coefficients of a multivariate polynomial in a single variable. The first main result is an algorithm that produces the greatest common divisor of the input polynomials, all in straight-line representation. The second result shows how to find a straight-line program for the reduced numerator and denominator from one for the corresponding rational function. Both the algorithm for that construction and the greatest common divisor algorithm are in random polynomial-time for the usual coefficient fields and output a straight-line program, which with controllably high probab...
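The zero test mentioned here rests on the Schwartz–Zippel lemma: a nonzero polynomial of degree d vanishes at a point drawn from a size-S set with probability at most d/S. A minimal sketch of that idea (my own code; for simplicity it evaluates modulo one fixed prime rather than random primes as in the paper):

```python
# Probabilistic zero test for a polynomial given only as a black box
# (straight-line program): evaluate at random points modulo a prime.
import random

def probably_zero(slp, nvars, trials=20, prime=10**9 + 7):
    """Return True if the black-box polynomial evaluates to 0 at `trials` random points."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(nvars)]
        if slp(point) % prime != 0:
            return False      # one nonzero evaluation certifies the polynomial is nonzero
    return True               # "zero" with error probability <= (degree/prime)**trials

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; (x + y)^2 - x^2 is not.
assert probably_zero(lambda p: (p[0] + p[1])**2 - (p[0]**2 + 2*p[0]*p[1] + p[1]**2), 2)
assert not probably_zero(lambda p: (p[0] + p[1])**2 - p[0]**2, 2)
```

Note the asymmetry: a "nonzero" answer is a certificate, while a "zero" answer is only correct with high probability, which is why the resulting algorithms are randomized.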
On the complexity of polynomial matrix computations
 Proceedings of the 2003 International Symposium on Symbolic and Algebraic Computation
, 2003
Random Butterfly Transformations with Applications in Computational Linear Algebra
, 1995
Abstract

Cited by 19 (7 self)
Theory and practice of computational linear algebra differ over the issue of degeneracy. Block matrix decompositions are used heavily in theory, but less in practice, since even when a matrix is nondegenerate (has full rank) its block submatrices can be degenerate. The potential degeneracy of block submatrices can completely prevent practical use of block matrix algorithms. Gaussian elimination is an important example of an algorithm affected by the possibility of degeneracy. While the basic elimination procedure is simple to state and implement, it becomes more complicated with the addition of a pivoting procedure, which handles degenerate matrices having zeros on the diagonal. Pivoting can significantly complicate the algorithm, increase data movement, and reduce speed, particularly on high-performance computers. We propose a randomization scheme that preconditions an input matrix by multiplying it with random matrices, where this multiplication can be performed efficiently. At the e...
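A minimal sketch of the preconditioning idea (using dense Gaussian random matrices as a stand-in for the paper's structured butterfly transforms, which achieve the same effect more cheaply): elimination without pivoting fails on a matrix with a zero leading pivot, but generically succeeds on U·A·V for random U, V.

```python
# Pivot-free Gaussian elimination, repaired by random preconditioning.
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination with no pivoting; raises on a (near-)zero pivot."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n):
        if abs(A[k, k]) < 1e-12:
            raise ZeroDivisionError(f"zero pivot at step {k}")
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A                                  # packed LU factors

A = np.array([[0.0, 1.0], [1.0, 0.0]])        # nonsingular, yet pivot (0,0) is zero
try:
    lu_no_pivot(A)
except ZeroDivisionError as e:
    print("without preconditioning:", e)

rng = np.random.default_rng(0)
U, V = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
LU = lu_no_pivot(U @ A @ V)                   # generically succeeds on U A V
```

The paper's contribution is choosing U and V with special (butterfly) structure so that the preconditioning multiplication itself is cheap.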
Maximum matchings in planar graphs via Gaussian elimination
 ALGORITHMICA
, 2004
Abstract

Cited by 16 (2 self)
We present a randomized algorithm for finding maximum matchings in planar graphs in time O(n^{ω/2}), where ω is the exponent of the best known matrix multiplication algorithm. Since ω < 2.38, this algorithm breaks through the O(n^{1.5}) barrier for the matching problem. This is the first result of this kind for general planar graphs. We also present an algorithm for generating perfect matchings in planar graphs uniformly at random using O(n^{ω/2}) arithmetic operations. Our algorithms are based on the Gaussian elimination approach to maximum matchings introduced in [1].
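The algebraic starting point of this line of work is the Tutte matrix: a graph has a perfect matching iff its Tutte matrix is nonsingular, and substituting random values detects this with high probability (Lovász). A minimal sketch of that test (my own code; not the paper's O(n^{ω/2}) planar algorithm):

```python
# Randomized Tutte-matrix test for the existence of a perfect matching.
import numpy as np

def has_perfect_matching(n, edges, seed=0):
    """True (w.h.p.) iff the n-vertex graph with the given edges has a perfect matching."""
    rng = np.random.default_rng(seed)
    T = np.zeros((n, n))
    for u, v in edges:
        x = rng.uniform(1.0, 2.0)       # random value for the edge indeterminate x_{uv}
        T[u, v], T[v, u] = x, -x        # the Tutte matrix is skew-symmetric
    return abs(np.linalg.det(T)) > 1e-9

assert has_perfect_matching(4, [(0, 1), (2, 3)])              # two disjoint edges
assert not has_perfect_matching(4, [(0, 1), (0, 2), (0, 3)])  # a star has none
```

The Gaussian-elimination approach of [1] refines this determinant test so that an actual maximum matching can be extracted, which the paper then makes fast for planar graphs.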
Essentially optimal computation of the inverse of generic polynomial matrices
 J. Complexity
, 2004
Abstract

Cited by 13 (3 self)
We present an inversion algorithm for nonsingular n × n matrices whose entries are degree-d polynomials over a field. The algorithm is deterministic and, when n is a power of two, requires O~(n^3 d) field operations for a generic input; the soft-O notation O~ indicates some missing log(nd) factors. Up to such logarithmic factors, this asymptotic complexity is of the same order as the number of distinct field elements necessary to represent the inverse matrix.
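The last sentence can be made concrete: by the adjugate formula A^{-1} = adj(A)/det(A), the n² adjugate entries generically have degree (n−1)d over a common denominator of degree nd, so merely writing the inverse down takes on the order of n³d field elements. A 2 × 2 sketch (generic adjugate construction, not the paper's algorithm):

```python
# Size of a polynomial matrix inverse, illustrated on A = [[x+1, x], [x, x+2]].
import numpy as np

# Entries as coefficient lists, highest degree first (d = 1).
a, b, c, d = [1.0, 1.0], [1.0, 0.0], [1.0, 0.0], [1.0, 2.0]

det = np.polysub(np.polymul(a, d), np.polymul(b, c))   # (x+1)(x+2) - x^2 = 3x + 2
adj = [[d, [-t for t in b]], [[-t for t in c], a]]     # adjugate: entries of degree (n-1)d

# Sanity check at the point x = 1: A(1)^{-1} must equal adj(1) / det(1).
x0 = 1.0
A_at = np.array([[np.polyval(p, x0) for p in row] for row in [[a, b], [c, d]]])
adj_at = np.array([[np.polyval(p, x0) for p in row] for row in adj])
assert np.allclose(np.linalg.inv(A_at), adj_at / np.polyval(det, x0))
```

Since the output alone has size ~ n³d, an O~(n³d) algorithm is essentially optimal, which is the sense of the title.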
Computing the sign or the value of the determinant of an integer matrix, a complexity survey
 JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS
, 2004
Abstract

Cited by 13 (3 self)
Computation of the sign of the determinant of a matrix and the determinant itself is a challenge for both numerical and exact methods. We survey the complexity of existing methods to solve these problems when the input is an n × n matrix A with integer entries. We study the bit complexities of the algorithms asymptotically in n and the norm of A. Existing approaches rely on numerical approximate computations, on exact computations, or on both types of arithmetic in combination. © 2003 Elsevier B.V. All rights reserved. Keywords: Determinant; Bit complexity; Integer matrix; Approximate computation; Exact computation; Randomized algorithm. 1. Introduction. Computing the sign or the value of the determinant of an n × n matrix A is a classical problem. Numerical methods are usually focused on computing the sign via an accurate approximation of the determinant. Among the applications are important problems of computational geometry that can be reduced to the determinant question; the reader may refer to [11,12,9,10,46,45] and to the bibliography therein. In symbolic computation the problem of computing the exact value of the ... This material is based on work supported in part by the National Science Foundation under grants Nrs. DMS-9977392, CCR-9988177, and CCR-0113121 (Kaltofen) and by the Centre National de la Recherche Scientifique, Actions Incitatives No. 5929 et STIC LINBOX 2001 (Villard).
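One representative of the exact methods the survey covers is fraction-free elimination: Bareiss's algorithm computes an integer determinant using only exact integer arithmetic, so the sign can never be corrupted by rounding. A minimal sketch (my own implementation of the classical algorithm):

```python
# Exact integer determinant via Bareiss fraction-free elimination.
def bareiss_det(M):
    """Determinant of an integer matrix, using exact integer arithmetic only."""
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                          # zero pivot: swap in a usable row
            swap = next((i for i in range(k + 1, n) if A[i][k] != 0), None)
            if swap is None:
                return 0
            A[k], A[swap], sign = A[swap], A[k], -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss guarantees this division is exact: intermediates stay integers
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

assert bareiss_det([[2, 3], [1, 4]]) == 5
assert bareiss_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```

The bit-complexity question the survey studies is visible even here: the intermediate integers grow with n and the norm of A, which is what the numerical-approximate and hybrid approaches try to avoid.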
Determinant sums for undirected hamiltonicity
 in Proc. of FOCS’10, 2010
Abstract

Cited by 13 (0 self)
We present a Monte Carlo algorithm for Hamiltonicity detection in an n-vertex undirected graph running in O*(1.657^n) time. To the best of our knowledge, this is the first superpolynomial improvement on the worst case runtime for the problem since the O*(2^n) bound established for TSP almost fifty years ago (Bellman 1962, Held and Karp 1962). It answers in part the first open problem in Woeginger’s 2003 survey on exact algorithms for NP-hard problems. For bipartite graphs, we improve the bound to O*(1.414^n) time. Both the bipartite and the general algorithm can be implemented to use space polynomial in n. We combine several recently resurrected ideas to get the results. Our main technical contribution is a new reduction inspired by the algebraic sieving method for k-Path (Koutis ICALP 2008, Williams IPL 2009). We introduce the Labeled Cycle Cover Sum in which we are set to count weighted arc-labeled cycle covers over a finite field of characteristic two. We reduce Hamiltonicity to Labeled Cycle Cover Sum and apply the determinant summation technique for Exact Set Covers (Björklund STACS 2010) to evaluate it.
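For reference, the O*(2^n) baseline cited here is the Bellman–Held–Karp dynamic program over vertex subsets. A minimal sketch of that classical DP (my own code, not the paper's determinant-sum method; note it also uses exponential space, whereas the paper's algorithms run in polynomial space):

```python
# Classical O*(2^n) Hamiltonicity test by dynamic programming over subsets.
def hamiltonian(n, edges):
    """dp[S][v]: there is a path from vertex 0 covering exactly set S, ending at v."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    dp = [[False] * n for _ in range(1 << n)]
    dp[1][0] = True                                   # the path starts at vertex 0
    for S in range(1 << n):
        for v in range(n):
            if dp[S][v]:
                for w in range(n):
                    if adj[v][w] and not S & (1 << w):
                        dp[S | 1 << w][w] = True      # extend the path to w
    full = (1 << n) - 1
    return any(dp[full][v] and adj[v][0] for v in range(n))  # close the cycle at 0

cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert hamiltonian(4, cycle)                          # C_4 is Hamiltonian
assert not hamiltonian(4, [(0, 1), (1, 2), (2, 3)])   # a path is not
```

Beating the 2^n subset enumeration is exactly what the paper's algebraic reduction to Labeled Cycle Cover Sums achieves.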
Faster Algorithms for Integer Lattice Basis Reduction
, 1996
Abstract

Cited by 11 (0 self)
The well known L³-reduction algorithm of Lovász transforms a given integer lattice basis b1, b2, ..., bn ∈ Z^n into a reduced basis. The cost of L³ reduction is O(n^4 log B0) arithmetic operations with integers bounded in length by O(n log B0) bits. Here, B0 bounds the Euclidean length of the input vectors, that is, B0 ≥ |b1|_2, |b2|_2, ..., |bn|_2. We present a simple modification of the L³-reduction algorithm that requires only O(n^3 log B0) arithmetic operations with integers of the same length. We gain a further speedup by combining our new approach with Schönhage's modification of the L³-reduction algorithm and incorporating fast matrix multiplication techniques. The result is an algorithm for semireduction that requires O(n^{2.381} log B0) arithmetic operations with integers of the same length.
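To fix ideas, here is a minimal sketch of the textbook L³ algorithm being sped up (δ = 3/4, exact rational arithmetic; my own unoptimized implementation, not the paper's faster variant): repeatedly size-reduce b_k against earlier vectors, then either advance or swap depending on the Lovász condition.

```python
# Textbook LLL (L^3) lattice basis reduction with delta = 3/4.
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization in exact rational arithmetic."""
    n = len(B)
    Bs = [[Fraction(x) for x in b] for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    B = [list(b) for b in B]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                mu[k][j] -= q
                for jj in range(j):
                    mu[k][jj] -= q * mu[j][jj]    # keep mu consistent
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                                # Lovasz condition holds: advance
        else:
            B[k - 1], B[k] = B[k], B[k - 1]       # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# A standard small example: LLL finds a much shorter first basis vector.
reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
assert sum(x * x for x in reduced[0]) <= 5        # guaranteed short for delta = 3/4
```

The paper's O(n³ log B0) variant and the O(n^{2.381} log B0) semireduction keep this overall loop structure but reorganize the arithmetic, which is where the asymptotic savings come from.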