Results 1–10 of 62
A Gröbner free alternative for polynomial system solving
Journal of Complexity, 2001
Cited by 82 (17 self)
Given a system of polynomial equations and inequations with coefficients in the field of rational numbers, we show how to compute a geometric resolution of the set of common roots of the system over the field of complex numbers. A geometric resolution consists of a primitive element of the algebraic extension defined by the set of roots, its minimal polynomial, and the parametrizations of the coordinates. Such a representation of the solutions has a long history which goes back to Leopold Kronecker and has been revisited many times in computer algebra. We introduce a new generation of probabilistic algorithms where all the computations use only univariate or bivariate polynomials. We give a new codification of the set of solutions of a positive dimensional algebraic variety relying on a new global version of Newton's iterator. Roughly speaking, the complexity of our algorithm is polynomial in some kind of degree of the system, in its height, and linear in the complexity of evaluation.
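A geometric resolution can be made concrete on a toy system (my own illustration, not an example from the paper): for {x² + y² = 5, x·y = 2}, choosing the primitive element t = x gives the minimal polynomial q(t) = t⁴ − 5t² + 4 and the parametrization x = t, y = (5t − t³)/2. A quick check with exact rationals:

```python
# Toy geometric resolution (illustration only, not from the paper):
# system {x^2 + y^2 = 5, x*y = 2}, primitive element t = x,
# minimal polynomial q(t) = t^4 - 5 t^2 + 4 = (t-1)(t+1)(t-2)(t+2),
# parametrization x = t, y = (5 t - t^3)/2.
from fractions import Fraction

roots = [1, -1, 2, -2]              # the roots of q over Q
for t in roots:
    x = Fraction(t)
    y = Fraction(5 * t - t**3, 2)
    # each root of the minimal polynomial yields one common solution
    assert x * x + y * y == 5
    assert x * y == 2
```

Each root of q corresponds to exactly one common solution, which is the sense in which the (q, parametrizations) pair encodes the whole solution set.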
On The Complexity Of Computing Determinants
Computational Complexity, 2001
Cited by 47 (17 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^3.2 log‖A‖)^(1+o(1)) and (n^2.697263 log‖A‖)^(1+o(1)) bit operations; here ‖A‖ denotes the largest entry in absolute value, and the exponent adjustment by "+o(1)" captures additional factors C1 (log n)^C2 (loglog ‖A‖)^C3 for positive real constants C1, C2, C3. The bit complexity (n^3.2 log‖A‖)^(1+o(1)) results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^(3.2+o(1)) and O(n^2.697263) ring additions, subtractions and multiplications.
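For contrast with the asymptotically fast methods above, the classical fraction-free (Bareiss) elimination computes an integer determinant with O(n³) ring operations plus exact divisions; this is a textbook baseline, not the paper's algorithm:

```python
def bareiss_det(M):
    """Determinant of an integer matrix by Bareiss fraction-free elimination.

    All intermediate divisions are exact, so every value stays in Z.
    Classical O(n^3)-operation baseline for comparison, not the baby
    steps/giant steps method of the paper.
    """
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                       # pivot if necessary
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0                       # zero column: singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # the division by the previous pivot is always exact
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]
```

For example, `bareiss_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]])` evaluates to -3.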
Linear recurrences with polynomial coefficients and computation of the Cartier–Manin operator on hyperelliptic curves
In International Conference on Finite Fields and Applications (Toulouse), 2004
Cited by 22 (8 self)
We study the complexity of computing one or several terms (not necessarily consecutive) in a recurrence with polynomial coefficients. As applications, we improve the best currently known upper bounds for factoring integers deterministically and for computing the Cartier–Manin operator of hyperelliptic curves.
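The factoring application builds on an idea of Pollard and Strassen: the smallest prime factor p of N satisfies p ≤ √N, so it divides one of roughly N^(1/4) block products of consecutive integers; evaluating all blocks by fast multipoint evaluation gives the deterministic N^(1/4+o(1))-type bound the paper improves. A naive-evaluation sketch of the block idea (my own illustration):

```python
from math import gcd, isqrt

def smallest_prime_factor(N):
    """Smallest prime factor of N >= 2 via block products.

    Naive-evaluation sketch of the Pollard-Strassen idea: with
    b ~ N^(1/4), the blocks (jb+1)...(jb+b) for j = 0..b-1 cover
    1..b^2 >= sqrt(N), so the smallest prime factor divides some
    block product f_j mod N. Evaluating the f_j by fast multipoint
    evaluation (as in the paper's setting) would cost ~N^(1/4);
    this double loop is the O(sqrt(N)) naive version.
    """
    b = isqrt(isqrt(N)) + 1            # b > N^(1/4), hence b*b > sqrt(N)
    for j in range(b):
        f = 1
        for i in range(j * b + 1, (j + 1) * b + 1):
            f = f * i % N
        if gcd(f, N) > 1:
            # the first block that hits contains the smallest prime factor
            for i in range(j * b + 1, (j + 1) * b + 1):
                if gcd(i, N) > 1:
                    return gcd(i, N)
    return N                           # no factor <= sqrt(N): N is prime
```

The first index i with gcd(i, N) > 1 is necessarily the smallest prime factor itself, since any smaller such i would contain a smaller prime factor of N.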
Solving the Pell Equation
, 2008
Cited by 21 (0 self)
We illustrate recent developments in computational number theory by studying their implications for solving the Pell equation. We shall see that, if the solutions to the Pell equation are properly represented, the traditional continued fraction method for solving the equation can be significantly accelerated. The most promising method depends on the use of smooth numbers. As with many algorithms depending on smooth numbers, its running time can at present be established only conjecturally; giving a rigorous analysis is one of the many open problems surrounding the Pell equation.
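The traditional continued fraction method referred to here fits in a few lines (a naive version; the paper's point is that the fundamental solution can have exponentially many digits, which is why better representations of the solutions matter):

```python
from math import isqrt

def pell(D):
    """Fundamental solution (x, y) of x^2 - D*y^2 = 1 for nonsquare D.

    Naive continued fraction expansion of sqrt(D): iterate the standard
    (m, d, a) recurrence and accumulate convergents p/q until one solves
    the equation. The output can be exponentially larger than D, which
    motivates the compact representations discussed in the paper.
    """
    a0 = isqrt(D)
    if a0 * a0 == D:
        raise ValueError("D must not be a perfect square")
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0          # convergents p/q of sqrt(D)
    q_prev, q = 0, 1
    while p * p - D * q * q != 1:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
    return p, q
```

For instance, `pell(61)` returns the famously large fundamental solution (1766319049, 226153980) of Fermat's challenge equation x² − 61y² = 1.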
Efficient Rational Number Reconstruction
Journal of Symbolic Computation, 1994
Cited by 19 (0 self)
In this paper we describe how a variant of the algorithm in Jebelean [6] can be so adapted. In Section 2 we review the problem of rational reconstruction and the solution proposed by Wang, while fixing some notation and terminology along the way. We also discuss certain errors that have appeared in the literature. Section 3 describes a multiprecision Euclidean algorithm for computing gcds that will be the basis of our algorithm. In Section 4 we discuss our algorithm and various details that are essential for an efficient implementation.
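Wang's solution, which this paper builds on, runs the extended Euclidean algorithm on (m, u) and stops at the first remainder below √(m/2); a standard textbook sketch, not the paper's optimized variant:

```python
from math import gcd, isqrt

def rational_reconstruct(u, m):
    """Recover n/d from u = n/d mod m, assuming m > 2*max(|n|, d)^2.

    Classical Wang-style reconstruction: run the extended Euclidean
    algorithm on (m, u), tracking the cofactors t with r = s*m + t*u,
    and stop at the first remainder r < sqrt(m/2). Returns (n, d) with
    d > 0, or None on failure. Textbook sketch, not the fast variant
    developed in the paper.
    """
    bound = isqrt(m // 2)
    r0, t0 = m, 0
    r1, t1 = u % m, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    n, d = r1, t1
    if d < 0:
        n, d = -n, -d
    if d == 0 or gcd(n, d) != 1:
        return None
    return n, d
```

For example, 3/7 reduces to 87 modulo 101 (since 7⁻¹ ≡ 29 and 3·29 ≡ 87), and `rational_reconstruct(87, 101)` recovers (3, 7).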
Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix
, 2005
Cited by 14 (3 self)
We reduce the problem of computing the rank and a nullspace basis of a univariate polynomial matrix to polynomial matrix multiplication. For an input n×n matrix of degree d over a field K we give a rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d. If the latter multiplication is done in MM(n, d) = O˜(n^ω d) operations, with ω the exponent of matrix multiplication over K, then the algorithm uses O˜(MM(n, d)) operations in K. For m×n matrices of rank r and degree d, the cost expression is O˜(n m r^(ω−2) d). The soft-O notation O˜ indicates some missing logarithmic factors. The method is randomized with Las Vegas certification. We achieve our results in part through a combination of matrix Hensel high-order lifting and matrix minimal fraction reconstruction, and through the computation of minimal or small degree vectors in the nullspace seen as a K[x]-module.
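As a minimal concrete illustration of a K[x]-nullspace element (my own toy example, with polynomials represented as coefficient lists): the 2×2 matrix M(x) = [[x, x²], [1, x]] has rank 1, and v(x) = (x, −1) is a minimal-degree vector generating its right nullspace as a K[x]-module:

```python
def pmul(a, b):
    """Multiply two polynomials given as coefficient lists (low degree first)."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return res

def padd(a, b):
    """Add two coefficient lists."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

# Toy example (not from the paper): M(x) = [[x, x^2], [1, x]] has rank 1,
# and v(x) = (x, -1) spans its right nullspace; any polynomial nullspace
# vector is a K[x]-multiple of v, and no constant vector is in the nullspace.
x, x2 = [0, 1], [0, 0, 1]
M = [[x, x2],
     [[1], x]]
v = [x, [-1]]
for row in M:
    entry = padd(pmul(row[0], v[0]), pmul(row[1], v[1]))
    assert all(c == 0 for c in entry)     # M(x) * v(x) = 0 identically
```

The algorithms in the paper compute such minimal-degree nullspace vectors without ever forming the (potentially large-degree) rational-function nullspace directly.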
Maximal quotient rational reconstruction: an almost optimal algorithm for rational reconstruction
Proceedings of ISSAC '04, ACM Press, 2004
Cited by 13 (3 self)
Let n/d ∈ Q, let m be a positive integer, and let u = n/d mod m. Thus u is the image of a rational number modulo m. The rational reconstruction problem is: given u and m, find n/d. A solution was first given by Wang in 1981. Wang's algorithm outputs n/d when m > 2M² where M = max(n, d). Because of the wide application of this algorithm in computer algebra, several authors have investigated its practical efficiency and asymptotic time complexity. In this paper we present a new solution which is almost optimal in the following sense: with controllable high probability, our algorithm will output n/d when m is a modest number of bits longer than 2nd. This means that in a modular algorithm where m is a product of primes, the modular algorithm will need one or two primes more than the minimum necessary to reconstruct n/d; thus if n ≪ d or d ≪ n, the new algorithm saves up to half the number of primes. Further, our algorithm will fail with high probability when m < 2nd.
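The maximal quotient idea admits a compact sketch (my simplified reading of the approach, not Monagan's exact algorithm, which differs in details such as the failure conditions): in the extended Euclidean algorithm on (m, u), when m is well beyond 2nd the quotient right after the correct convergent (n, d) is of order m/(2nd), so the answer is the divisor pair at the largest-quotient step:

```python
from math import gcd

def mqrr(u, m, T):
    """Maximal quotient rational reconstruction of u modulo m (sketch).

    Simplified reading of the approach, not Monagan's exact algorithm:
    run the extended Euclidean algorithm on (m, u) and record the
    divisor pair (r, t) at the step whose quotient is largest (and
    exceeds the threshold T); when m >> 2*n*d that quotient is of
    order m/(2*n*d). Returns (n, d) or None on failure.
    """
    if u == 0:
        return (0, 1) if m > T else None
    n, d = 0, 0
    r0, t0 = m, 0
    r1, t1 = u % m, 1
    while r1 != 0 and r0 > T:
        q = r0 // r1
        if q > T:
            n, d, T = r1, t1, q        # new maximal quotient: record divisor
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if d < 0:
        n, d = -n, -d
    if d == 0 or gcd(abs(n), d) != 1:
        return None
    return n, d
```

Unlike Wang's method, no a priori bound on n and d is needed; the large quotient itself certifies the candidate, at the price of a small failure probability when m is too close to 2nd.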
A GMP-based implementation of Schönhage–Strassen's large integer multiplication algorithm
In Proceedings of ISSAC '07, 2007
Cited by 13 (5 self)
Schönhage–Strassen's algorithm is one of the best known algorithms for multiplying large integers. Implementing it efficiently is of utmost importance, since many other algorithms rely on it as a subroutine. We present here an improved implementation, based on the one distributed within the GMP library. The following ideas and techniques were used or tried: faster arithmetic modulo 2^n + 1, improved cache locality, Mersenne transforms, Chinese Remainder Reconstruction, the √2 trick, Harley's and Granlund's tricks, and improved tuning. We also discuss some ideas we plan to try in the future.
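The "faster arithmetic modulo 2^n + 1" exploits 2^n ≡ −1: reduction needs no division, only shifts and an alternating sum of n-bit limbs. A sketch of the idea (illustration only, not GMP's actual limb-array code):

```python
def mod_fermat(x, n):
    """Reduce a non-negative integer x modulo F = 2^n + 1 without division.

    Since 2^n = -1 (mod F), splitting x into n-bit limbs and summing
    them with alternating signs gives a small value congruent to x.
    Illustration of the idea; GMP's implementation works directly on
    machine-word limb arrays.
    """
    F = (1 << n) + 1
    mask = (1 << n) - 1
    r, sign = 0, 1
    while x:
        r += sign * (x & mask)      # take the low n bits
        x >>= n                     # each 2^n factor flips the sign
        sign = -sign
    return r % F
```

The same sign-flip property is what makes negacyclic (Fermat) transforms mesh with arithmetic modulo 2^n + 1 inside the algorithm.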
An LLL-reduction algorithm with quasi-linear time complexity
, 2010
Cited by 12 (5 self)
We devise an algorithm, L̃¹, with the following specifications: it takes as input an arbitrary basis B = (b_i)_i ∈ Z^(d×d) of a Euclidean lattice L; it computes a basis of L which is reduced for a mild modification of the Lenstra–Lenstra–Lovász reduction; and it terminates in time O(d^(5+ε) β + d^(ω+1+ε) β^(1+ε)), where β = log max ‖b_i‖, ε > 0 is arbitrary, and ω is a valid exponent for matrix multiplication. This is the first LLL-reducing algorithm with a time complexity that is quasi-linear in β and polynomial in d. The backbone structure of L̃¹ is able to mimic the Knuth–Schönhage fast gcd algorithm thanks to a combination of cutting-edge ingredients. First, the bit-size of our lattice bases can be decreased via truncations whose validity is backed by recent numerical stability results on the QR matrix factorization. We also establish a new framework for analyzing unimodular transformation matrices which reduce shifts of reduced bases; this includes bit-size control and new perturbation tools. We illustrate the power of this framework by generating a family of reduction algorithms.
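For reference, the classical LLL reduction that L̃¹ accelerates fits in a few dozen lines with exact rational arithmetic (textbook version with δ = 3/4, assuming linearly independent input rows; its cost is polynomial but far from quasi-linear in the bit-size β, which is exactly the gap the paper closes):

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(b):
    """Exact Gram-Schmidt orthogonalization: returns (b*, mu)."""
    n = len(b)
    bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in b[i]]
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook Lenstra-Lenstra-Lovasz reduction (illustration only).

    Recomputes Gram-Schmidt from scratch after every change, so it is
    simple and exact but slow; the paper's contribution is doing far
    better via truncations and a Knuth-Schonhage-style recursion.
    """
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)
    bstar, mu = gram_schmidt(b)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size-reduce b_k
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt(b)
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                              # Lovasz condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]     # swap and step back
            bstar, mu = gram_schmidt(b)
            k = max(k - 1, 1)
    return b
```

A reduced basis preserves the lattice (same determinant up to sign) and its first vector is provably short: with δ = 3/4, ‖b₁‖ ≤ 2^((d−1)/2) λ₁(L).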