Results 1–10 of 12
Subquadratic-time factoring of polynomials over finite fields
Math. Comp.
, 1998
Cited by 68 (11 self)
Abstract. New probabilistic algorithms are presented for factoring univariate polynomials over finite fields. The algorithms factor a polynomial of degree n over a finite field of constant cardinality in time O(n^1.815). Previous algorithms required time Θ(n^(2+o(1))). The new algorithms rely on fast matrix multiplication techniques. More generally, to factor a polynomial of degree n over the finite field Fq with q elements, the algorithms use O(n^1.815 log q) arithmetic operations in Fq. The new “baby step/giant step” techniques used in our algorithms also yield new fast practical algorithms at superquadratic asymptotic running time, and subquadratic-time methods for manipulating normal bases of finite fields.
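These factoring algorithms build on the classical fact that gcd(f, x^(p^d) − x) is the product of the irreducible factors of f whose degree divides d. A minimal plain-Python sketch of the d = 1 case over F_5 (naive arithmetic throughout, with none of the baby-step/giant-step or fast-matrix-multiplication speedups of the paper):

```python
# Extract the product of the linear factors of f over F_5 as gcd(f, x^p - x).
# Coefficient lists are lowest-degree first.
p = 5

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def polymod(a, f):
    """Remainder of a modulo f over F_p."""
    a = trim(a[:])
    linv = pow(f[-1], -1, p)
    while len(a) >= len(f) and a != [0]:
        c = a[-1] * linv % p
        s = len(a) - len(f)
        for i, fi in enumerate(f):
            a[s + i] = (a[s + i] - c * fi) % p
        a = trim(a)
    return a

def mulmod(a, b, f):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return polymod(r, f)

def polygcd(a, b):
    while b != [0]:
        a, b = b, polymod(a, b)
    linv = pow(a[-1], -1, p)          # normalize to a monic gcd
    return [c * linv % p for c in a]

def xp_mod(f):
    """x^p mod f by square-and-multiply (one Frobenius power)."""
    result, base, e = [1], [0, 1], p
    while e:
        if e & 1:
            result = mulmod(result, base, f)
        base = mulmod(base, base, f)
        e >>= 1
    return result

f = [4, 4, 4, 2, 1]                   # (x - 1)(x - 2)(x^2 + 2) over F_5
h = xp_mod(f)
hx = h[:] + [0] * max(0, 2 - len(h))
hx[1] = (hx[1] - 1) % p               # h(x) - x
g = polygcd(f, trim(hx))
print(g)                              # [2, 2, 1], i.e. (x - 1)(x - 2)
```

Iterating with x^(p^d) − x for d = 2, 3, … yields the full distinct-degree decomposition; the paper's contribution is computing the required Frobenius powers far faster than this repeated-squaring baseline.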
Construction of secure random curves of genus 2 over prime fields
Advances in Cryptology – EUROCRYPT 2004, volume 3027 of Lecture Notes in Comput. Sci.
, 2004
Cited by 37 (12 self)
Abstract. For counting points of Jacobians of genus 2 curves defined over large prime fields, the best known method is a variant of Schoof’s algorithm. We present several improvements on the algorithms described by Gaudry and Harley in 2000. In particular we rebuild the symmetry that had been broken by the use of Cantor’s division polynomials and design a faster division by 2 and a division by 3. Combined with the algorithm by Matsuo, Chao and Tsujii, our implementation can count the points on a Jacobian of size 164 bits within about one week on a PC.
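Schoof-type point counting computes the group order modulo many small primes and then recombines; the recombination step is just the Chinese Remainder Theorem. A sketch with hypothetical residues (the moduli and residues below are illustrative, not from the paper):

```python
from math import prod

def crt(residues, moduli):
    """Combine x ≡ r_i (mod m_i) for pairwise coprime moduli m_i
    into the unique x modulo the product of the m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(·, -1, m) is the modular inverse
    return x % M

# Hypothetical residues of a group order N modulo small primes:
print(crt([2, 3, 2], [3, 5, 7]))       # 23
```

Once the residues pin N down to a short interval, a baby-step/giant-step search (the Matsuo–Chao–Tsujii stage mentioned above) finishes the job.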
Fast polynomial factorization and modular composition
, 2008
Cited by 6 (0 self)
We obtain randomized algorithms for factoring degree n univariate polynomials over Fq requiring O(n^(1.5+o(1)) log^(1+o(1)) q + n^(1+o(1)) log^(2+o(1)) q) bit operations. When log q < n, this is asymptotically faster than the best previous algorithms (von zur Gathen & Shoup (1992) and Kaltofen & Shoup (1998)); for log q ≥ n, it matches the asymptotic running time of the best known algorithms. The improvements come from new algorithms for modular composition of degree n univariate polynomials, which is the asymptotic bottleneck in fast algorithms for factoring polynomials over finite fields. The best previous algorithms for modular composition use O(n^((ω+1)/2)) field operations, where ω is the exponent of matrix multiplication (Brent & Kung (1978)), with a slight improvement in the exponent achieved by employing fast rectangular matrix multiplication (Huang & Pan (1997)). We show that modular composition and multipoint evaluation of multivariate polynomials are essentially equivalent, in the sense that an algorithm for one achieving exponent α implies an algorithm for the other with exponent α + o(1), and vice versa. We then give two new algorithms that solve the problem optimally (up to lower order terms): an algebraic algorithm for fields of characteristic at most n^(o(1)), and a nonalgebraic algorithm that works in arbitrary characteristic. The latter algorithm works by
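Modular composition asks for f(g) mod h given three degree-n polynomials. The quadratic baseline that Brent–Kung and this abstract's algorithms improve on is plain Horner evaluation of f at g, reducing modulo h after each step; a minimal sketch over a small prime field (not the paper's algorithm):

```python
p = 101  # a small prime field for illustration

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def polymod(a, h):
    """Remainder of a modulo h over F_p (coefficients lowest-degree first)."""
    a = trim(a[:])
    linv = pow(h[-1], -1, p)
    while len(a) >= len(h) and a != [0]:
        c = a[-1] * linv % p
        s = len(a) - len(h)
        for i, hi in enumerate(h):
            a[s + i] = (a[s + i] - c * hi) % p
        a = trim(a)
    return a

def mulmod(a, b, h):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return polymod(r, h)

def modcomp(f, g, h):
    """f(g) mod h by Horner's rule: deg(f) multiplications modulo h."""
    acc = [0]
    for c in reversed(f):
        acc = mulmod(acc, g, h)
        acc[0] = (acc[0] + c) % p
        trim(acc)
    return acc

# f(x) = x^2, g(x) = x + 1, h(x) = x^3:  f(g) = x^2 + 2x + 1
print(modcomp([0, 0, 1], [1, 1], [0, 0, 0, 1]))   # [1, 2, 1]
```

Brent–Kung replaces this loop with a baby-step/giant-step split of f and a matrix product, which is where the O(n^((ω+1)/2)) exponent comes from.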
Algebraic algorithms
Cited by 1 (0 self)
This article, along with [Elkadi and Mourrain 1996], explains the correlation between residue theory and the Dixon matrix, which yields an alternative method for studying and approximating all common solutions. In 1916, Macaulay [1916] constructed a matrix whose determinant is a multiple of the classical resultant for n homogeneous polynomials in n variables. The Macaulay matrix simultaneously generalizes the Sylvester matrix and the coefficient matrix of a system of linear equations [Kapur and Lakshman Y. N. 1992]. As with the Dixon formulation, the Macaulay determinant is a multiple of the resultant. Macaulay, however, proved that a certain minor of his matrix divides the matrix determinant so as to yield the exact resultant in the case of generic homogeneous polynomials. Canny [1990] has invented a general method that perturbs any polynomial system and extracts a nontrivial projection operator. Using recent results pertaining to sparse polynomial systems [Gelfand et al. 1994, Sturmfels 1991], a matrix formula for computing the sparse resultant of n + 1 polynomials in n variables was given by Canny and Emiris [1993] and subsequently improved in [Canny and Pedersen 1993, Emiris and Canny 1995]. The determinant of the sparse resultant matrix, like those of the Macaulay and Dixon matrices, only yields a projection operator, not the exact resultant. Here, sparsity means that only certain monomials in each of the n + 1 polynomials have nonzero coefficients. Sparsity is measured in geometric terms, namely, by the Newton polytope
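The Sylvester matrix that Macaulay's construction generalizes is easy to make concrete: for two univariate polynomials its determinant is exactly the resultant, which vanishes precisely when they share a root. A small exact-arithmetic sketch:

```python
from fractions import Fraction

def det(M):
    """Determinant by exact Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n, sign, prod = len(M), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        prod *= M[col][col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]
    return sign * prod

def sylvester_resultant(f, g):
    """Resultant of f and g (coefficient lists, highest degree first)
    as the determinant of the (m+n) x (m+n) Sylvester matrix:
    n shifted copies of f stacked over m shifted copies of g."""
    m, n = len(f) - 1, len(g) - 1
    N = m + n
    rows = [[0] * i + f + [0] * (N - m - i - 1) for i in range(n)]
    rows += [[0] * i + g + [0] * (N - n - i - 1) for i in range(m)]
    return det([[Fraction(x) for x in row] for row in rows])

# f = x^2 - 1 and g = x - 1 share the root x = 1, so the resultant is 0:
print(sylvester_resultant([1, 0, -1], [1, -1]))   # 0
# f = x^2 + 1 and g = x - 2 share no root:
print(sylvester_resultant([1, 0, 1], [1, -2]))    # 5
```

The Macaulay, Dixon, and sparse-resultant matrices discussed above play the same role for systems of n + 1 polynomials in n variables, at the cost of yielding only a multiple (a projection operator) rather than the exact resultant.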
Taking Roots over High Extensions of Finite Fields
Cited by 1 (0 self)
We present a new algorithm for computing m-th roots over the finite field Fq, where q = p^n, with p a prime, and m any positive integer. In the particular case m = 2, the cost of the new algorithm is an expected O(M(n) log(p) + C(n) log(n)) operations in Fp, where M(n) and C(n) are bounds for the cost of polynomial multiplication and modular polynomial composition. Known results give M(n) = O(n log(n) log log(n)) and C(n) = O(n^1.67), so our algorithm is subquadratic in n.
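For the base case n = 1 and m = 2, square roots in Fp are computed by the classical Tonelli–Shanks algorithm; a sketch of that prime-field case (not the paper's extension-field algorithm, which generalizes this to Fq):

```python
def sqrt_mod_p(a, p):
    """Tonelli-Shanks: return x with x^2 = a (mod p) for an odd prime p,
    or None if a is a quadratic non-residue."""
    a %= p
    if a == 0:
        return 0
    if pow(a, (p - 1) // 2, p) != 1:      # Euler's criterion
        return None
    # write p - 1 = q * 2^s with q odd
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    # find a quadratic non-residue z to seed the 2-power subgroup
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        # least i with t^(2^i) = 1
        i, t2 = 0, t
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return r

x = sqrt_mod_p(10, 13)
print(x, x * x % 13)   # a square root of 10 modulo 13
```

This costs O(log p) multiplications in Fp; the abstract's M(n) and C(n) terms account for lifting the same idea to arithmetic modulo a degree-n polynomial.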
Key management: Towards the design of efficient, lightweight schemes for secure group communications in large Mobile Ad Hoc Networks
, 2006
Securing group communications in resource constrained, infrastructureless environments such as Mobile Ad Hoc Networks (MANETs) is a very challenging research direction in the area of wireless networking and security. This is true as MANETs are emerging as the desired environment for an increasing number of civilian, commercial and military applications, addressing an increasing number of users. Most of these applications are sensitive and require specific security guarantees. The inherent limitations of MANETs impose major difficulties in establishing a suitable secure group communications framework. Key Management (KM) is the operation that enables and supports the secure exchange of data and ensures the capability of members’ secure cooperation as a group. KM protocols provide a common symmetric group key to all group members, and ensure that only legitimate members have access to a valid group key at any instance. Our work focuses on the design of efficient, robust, novel or improved group KM schemes, capable of distributed operation where key infrastructure components are absent or inaccessible, that accomplish the following: (a) better performance than this
Quartz, an asymmetric signature scheme for short signatures on PC – Primitive specification and supporting documentation
, 2001
This document specifies the updated final version of the Quartz signature scheme, slightly modified as allowed in the second stage of the Nessie evaluation process, in order to improve the speed and the security. In some papers that refer to the old version, it is sometimes called Quartz; this is the new version. This is therefore the only official version of Quartz. We note that the key generation has not changed, the signature computation has changed, and the signature verification has changed slightly. In the Appendix of the present document we summarize all the changes to Quartz, for readers and developers that are acquainted with the previous version. It also includes an explanation of why these changes have been made
Languages, Algorithms
We present here transalpyne, a scripting language, to be executed on top of a computer algebra system, that is specifically conceived for automatic transposition of linear functions. Its type system is able to automatically infer all the possible linear functions realized by a computer program. The key feature of transalpyne is its ability to transform a computer program computing a linear function into another computer program computing the transposed linear function. The time and space complexity of the resulting program are similar to those of the original one.
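The transposition principle behind this states that a program computing v ↦ Mv yields a program of essentially the same cost computing v ↦ Mᵀv. A brute-force way to see what the transposed program must compute (not how transalpyne works — it rewrites the program syntactically — but it recovers the same map) is to probe the function on basis vectors and transpose the resulting matrix:

```python
def matrix_of(f, n):
    """Recover the n x n matrix of a linear map f on integer vectors
    by probing the standard basis vectors."""
    cols = [f([1 if i == j else 0 for i in range(n)]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def transpose_map(f, n):
    """Return a function computing the transposed linear map v -> M^T v."""
    M = matrix_of(f, n)
    return lambda v: [sum(M[i][j] * v[i] for i in range(n)) for j in range(n)]

# Example: prefix sums are linear; their transpose computes suffix sums.
def prefix_sums(v):
    out, acc = [], 0
    for x in v:
        acc += x
        out.append(acc)
    return out

ft = transpose_map(prefix_sums, 3)
print(ft([1, 2, 3]))   # [6, 5, 3] -- the suffix sums
```

Probing costs n evaluations of f, whereas program transposition produces the transposed code directly, preserving the original time and space complexity.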