Results 1–10 of 14
An analytic approach to smooth polynomials over finite fields
in Algorithmic Number Theory: Third International Symposium (ANTS-III), 1998
Cited by 13 (2 self)
Abstract. We consider the largest degrees that occur in the decomposition of polynomials over finite fields into irreducible factors. We expand the range of applicability of the Dickman function as an approximation for the number of smooth polynomials, which provides precise estimates for the discrete logarithm problem. In addition, we characterize the distribution of the two largest degrees of irreducible factors, a problem relevant to polynomial factorization. As opposed to most earlier treatments, our methods are based on a combination of exact descriptions by generating functions and a specific complex asymptotic method.
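The approximation in question is classical: the number of m-smooth monic polynomials of degree n over F_q is roughly q^n · ρ(n/m), where ρ is the Dickman function, defined by ρ(u) = 1 for 0 ≤ u ≤ 1 and u·ρ'(u) = -ρ(u-1) for u > 1. A minimal numerical sketch of ρ (the function name and discretization are my own; the paper's methods are analytic, not numerical):

```python
def dickman_rho(u, steps_per_unit=1000):
    """Approximate the Dickman function rho(u) by integrating the
    delay differential equation
        u * rho'(u) = -rho(u - 1),   rho(u) = 1 for 0 <= u <= 1,
    with the trapezoidal rule on a uniform grid of step 1/steps_per_unit."""
    if u <= 1:
        return 1.0
    h = 1.0 / steps_per_unit
    n = int(u * steps_per_unit)
    rho = [1.0] * (steps_per_unit + 1)          # rho on [0, 1]
    for i in range(steps_per_unit, n):
        t0, t1 = i * h, (i + 1) * h
        # trapezoidal estimate of the integral of rho(t - 1) / t on [t0, t1]
        f0 = rho[i - steps_per_unit] / t0
        f1 = rho[i + 1 - steps_per_unit] / t1
        rho.append(rho[i] - 0.5 * h * (f0 + f1))
    return rho[n]
```

For example, the fraction of monic polynomials of degree n over F_q that are n/2-smooth is then roughly dickman_rho(2.0) ≈ 0.307, i.e. 1 - ln 2.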
Computing in groups of Lie type
in Math. Comp., 2001
Cited by 8 (2 self)
Abstract. We describe two methods for computing with the elements of untwisted groups of Lie type: using the Steinberg presentation and using highest weight representations. We give algorithms for element arithmetic within the Steinberg presentation. Conversion between this presentation and linear representations is achieved using a new generalisation of row and column reduction.
A Multilevel Blocking Distinct-degree Factorization Algorithm
in Contemporary Mathematics, 2008
Cited by 6 (5 self)
We give a new algorithm for performing the distinct-degree factorization of a polynomial P(x) over GF(2), using a multilevel blocking strategy. The coarsest level of blocking replaces GCD computations by multiplications, as suggested by Pollard (1975), von zur Gathen and Shoup (1992), and others. The novelty of our approach is that a finer level of blocking replaces multiplications by squarings, which speeds up the computation in GF(2)[x]/P(x) of certain interval polynomials when P(x) is sparse. As an application we give a fast algorithm to search for all irreducible trinomials x^r + x^s + 1 of degree r over GF(2), while producing a certificate that can be checked in less time than the full search. Naive algorithms cost O(r^2) per trinomial, thus O(r^3) to search over all trinomials of given degree r. Under a plausible assumption about the distribution of factors of trinomials, the new algorithm has complexity O(r^2 (log r)^(3/2) (log log r)^(1/2)) for the search over all trinomials of degree r. Our implementation achieves a speedup of greater than a factor of 560 over the naive algorithm in the case r = 24036583 (a Mersenne exponent). Using our program, we have found two new primitive trinomials of degree 24036583 over GF(2) (the previous record degree was 6972593).
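For orientation, here is a compact sketch of the naive gcd-based distinct-degree factorization that the blocking strategy accelerates, assuming a squarefree input, with polynomials over GF(2) encoded as Python ints (bit i is the coefficient of x^i); all names are illustrative:

```python
def _deg(a):
    return a.bit_length() - 1

def pmod(a, b):
    """Remainder of a modulo b in GF(2)[x] (polynomials as bit-vector ints)."""
    while a and _deg(a) >= _deg(b):
        a ^= b << (_deg(a) - _deg(b))
    return a

def pdiv(a, b):
    """Quotient of a by b in GF(2)[x]."""
    q = 0
    while a and _deg(a) >= _deg(b):
        s = _deg(a) - _deg(b)
        q ^= 1 << s
        a ^= b << s
    return q

def pmulmod(a, b, m):
    """a * b mod m in GF(2)[x]."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return pmod(r, m)

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def ddf(p):
    """Naive distinct-degree factorization of a squarefree p over GF(2):
    one gcd with x^(2^d) + x per degree d -- the baseline that the
    multilevel blocking strategy speeds up."""
    factors = []
    h, d = 2, 0                     # h encodes the polynomial x
    while _deg(p) >= 2 * (d + 1):
        d += 1
        h = pmulmod(h, h, p)        # h = x^(2^d) mod p, by repeated squaring
        g = pgcd(h ^ 2, p)          # gcd(x^(2^d) + x, p); '+' is '-' in char 2
        if g != 1:
            factors.append((d, g))  # product of all degree-d irreducible factors
            p = pdiv(p, g)
            h = pmod(h, p)
    if p != 1:
        factors.append((_deg(p), p))
    return factors
```

For instance, x^3 + 1 = (x + 1)(x^2 + x + 1) gives ddf(0b1001) == [(1, 0b11), (2, 0b111)], while the irreducible trinomial x^3 + x + 1 is returned whole.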
The average lengths of the factors of the standard factorization of Lyndon words
Cited by 2 (1 self)
A nonempty word w of {a, b}* is a Lyndon word if and only if it is strictly smaller, in the lexicographic order, than any of its proper suffixes. Such a word w is either a letter or admits a standard factorization uv, where v is its smallest proper suffix. For any Lyndon word v, we show that the set of Lyndon words having v as the right factor of their standard factorization is rational, and we compute the associated generating function explicitly. Next we establish that, under the uniform distribution over the Lyndon words of length n, the average length of the right factor v of the standard factorization is asymptotically 3n/4.
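The two definitions above translate directly into code; a short sketch (function names are my own):

```python
def is_lyndon(w):
    """A nonempty word is a Lyndon word iff it is strictly smaller than
    every one of its proper suffixes in lexicographic order."""
    return len(w) > 0 and all(w < w[i:] for i in range(1, len(w)))

def standard_factorization(w):
    """For a Lyndon word w of length >= 2, return (u, v) with w = u + v,
    where v is the smallest proper suffix of w; both factors are then
    themselves Lyndon words with u < v."""
    v = min(w[i:] for i in range(1, len(w)))
    return w[: len(w) - len(v)], v
```

For instance, standard_factorization("aabab") gives ("aab", "ab"); averaging len(v) over all Lyndon words of a fixed length n lets one observe empirically the 3n/4 asymptotics established in the paper.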
A Rigorous Proof Of The Waterloo Algorithm For The Discrete Logarithm Problem
Cited by 1 (0 self)
In this paper we are concerned with the Waterloo variant of the index calculus method for the discrete logarithm problem in F_{2^n}. We provide a rigorous proof of the heuristic arguments for the running time of the Waterloo algorithm. This amounts to studying the behavior of pairs of coprime smooth polynomials over finite fields. Our proof involves a double saddle point method, and it is similar in nature to Odlyzko's rigorous analysis of the basic index calculus.
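The pairs in question arise from a half-stopped extended Euclidean algorithm: instead of testing one degree-(n-1) polynomial for smoothness, the Waterloo variant writes it as a quotient of two polynomials of degree about n/2 and tests both. A self-contained sketch over GF(2), with polynomials encoded as ints (bit i is the coefficient of x^i; all names are illustrative):

```python
def deg(a):
    return a.bit_length() - 1

def pdivmod(a, b):
    """Quotient and remainder in GF(2)[x], polynomials as bit-vector ints."""
    q = 0
    while a and deg(a) >= deg(b):
        s = deg(a) - deg(b)
        q ^= 1 << s
        a ^= b << s
    return q, a

def pmul(a, b):
    """Product in GF(2)[x]."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def half_extended_gcd(h, f):
    """Run the extended Euclidean algorithm on (f, h), stopping at the
    first remainder of degree < deg(f)/2.  Returns (u, v) satisfying
    v*h = u (mod f), with both degrees about deg(f)/2 -- the pair of
    (hopefully coprime and smooth) polynomials the Waterloo variant
    analyzes."""
    r0, r1 = f, h
    v0, v1 = 0, 1                  # invariant: r_i = s_i*f + v_i*h
    half = (deg(f) + 1) // 2
    while r1 and deg(r1) >= half:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        v0, v1 = v1, v0 ^ pmul(q, v1)
    return r1, v1
```

For example, with f = x^4 + x + 1 the element x^3 is rewritten as (x + 1)/x, since half_extended_gcd(0b1000, 0b10011) returns (0b11, 0b10).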
Abstract. We count the number of irreducible polynomials in several variables of a given degree over a finite field. The results are expressed in terms of a generating series, an exact formula and an asymptotic approximation. We also consider the case of the multidegree and the case of indecomposable polynomials.
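The univariate case, which the paper generalizes to several variables, already illustrates the exact-formula flavor: the number of monic irreducible polynomials of degree n over GF(q) is (1/n) Σ_{d|n} μ(d) q^(n/d) (Gauss's formula). A sketch (helper names are my own):

```python
def _mobius(n):
    """Moebius function mu(n) by trial division."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0           # squared prime factor => mu = 0
            result = -result
        p += 1
    if n > 1:                      # leftover prime factor
        result = -result
    return result

def count_irreducible(q, n):
    """Number of monic irreducible polynomials of degree n over GF(q):
    (1/n) * sum over d | n of mu(d) * q^(n/d)."""
    total = sum(_mobius(d) * q ** (n // d)
                for d in range(1, n + 1) if n % d == 0)
    return total // n
```

For instance, count_irreducible(2, 2) == 1 (only x^2 + x + 1) and count_irreducible(2, 4) == 3.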
Analytic Combinatorics — A Calculus of Discrete Structures
The efficiency of many discrete algorithms crucially depends on quantifying properties of large structured combinatorial configurations. We survey methods of analytic combinatorics that are simply based on the idea of associating numbers to atomic elements that compose combinatorial structures, then examining the geometry of the resulting functions. In this way, an operational calculus of discrete structures emerges. Applications to basic algorithms, data structures, and the theory of random discrete structures are outlined.

1 Algorithms and Random Structures

A prime factor in choosing the best algorithm for a given computational task is efficiency with respect to the resources consumed, for instance, auxiliary storage, execution time, or the amount of communication needed. For a given algorithm A, such a complexity measure being fixed, what is of interest is the relation

Size of the problem instance (n) → Complexity of the algorithm (C),

which serves to define the complexity function C(n) ≡ C_A(n) of algorithm A. Precisely, this complexity function can be specified in several ways.

(i) Worst-case analysis takes C(n) to be the maximum of C over all inputs of size n. This corresponds to a pessimistic scenario, one which is of relevance in critical systems and real-time computing.

(ii) Average-case analysis takes C(n) to be the expected value (average) of C over inputs of size n. The aim is to capture the "typical" cost of a computational task observed when the algorithm is repeatedly applied to various kinds of data.

(iii) Probabilistic analysis takes C(n) to be an indicator of the most likely values of C. Its more general aim is to obtain fine estimates on the probability distribution of C, beyond average-case analysis.
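As a toy illustration of definitions (i) and (ii), both measures can be computed exhaustively for successful linear search over n equally likely key positions (a hypothetical mini-example of mine, not taken from the survey):

```python
def linear_search_cost(seq, key):
    """Number of comparisons linear search performs before finding key."""
    for i, x in enumerate(seq):
        if x == key:
            return i + 1
    return len(seq)

def worst_and_average(n):
    """Worst-case and average-case comparison counts C(n) of successful
    linear search, taking the n key positions as equally likely inputs."""
    costs = [linear_search_cost(list(range(n)), k) for k in range(n)]
    return max(costs), sum(costs) / len(costs)
```

Here the worst case is n comparisons while the average is (n + 1)/2, so worst_and_average(10) returns (10, 5.5): the two notions of C(n) differ by a factor of about two on the same algorithm.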