Results 1-10 of 16
Euclidean algorithms are Gaussian, 2003
Abstract

Cited by 28 (12 self)
We prove a Central Limit Theorem for a general class of cost parameters associated to the three standard Euclidean algorithms, with optimal speed of convergence, and error terms for the mean and variance. For the most basic parameter of the algorithms, the number of steps, we go further and prove a Local Limit Theorem (LLT), with speed of convergence O((log N)^(−1/4+ε)). This extends and improves the LLT obtained by Hensley [27] in the case of the standard Euclidean algorithm. We use a "dynamical analysis" methodology, viewing an algorithm as a dynamical system (restricted to rational inputs), and combining tools imported from dynamics, such as the crucial transfer operators, with various other techniques: Dirichlet series, Perron's formula, quasi-powers theorems, the saddle point method. Dynamical analysis had previously been used to perform average-case analysis of algorithms. For the present (dynamical) analysis in distribution, we require precise estimates on the transfer operators when a parameter varies along vertical lines in the complex plane. Such estimates build on results obtained only recently by Dolgopyat in the context of continuous-time dynamics [20].
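The "number of steps" parameter is easy to experiment with. The following Python sketch (our own illustration, not the paper's machinery; all names are ours) counts the division steps of the plain Euclid algorithm and samples their empirical mean and variance, the two quantities for which the abstract announces error terms:

```python
import random

def euclid_steps(p, q):
    """Number of division steps performed by the plain Euclid algorithm."""
    steps = 0
    while q != 0:
        p, q = q, p % q
        steps += 1
    return steps

# Empirical mean and variance of the step count on uniform random inputs:
# the statistic the paper proves to be asymptotically Gaussian.
rng = random.Random(1)
N = 10**6
sample = [euclid_steps(rng.randint(1, N), rng.randint(1, N)) for _ in range(20000)]
mean = sum(sample) / len(sample)
var = sum((s - mean) ** 2 for s in sample) / len(sample)
```

For inputs up to N = 10^6, the empirical mean should come out close to the classical average (12 ln 2 / π²) ln N ≈ 11.6, with fluctuations of Gaussian shape around it.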
Euclidean dynamics
 DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS, 2006
Abstract

Cited by 4 (2 self)
We study a general class of Euclidean algorithms which compute the greatest common divisor (gcd), and we perform probabilistic analyses of their main parameters. We view an algorithm as a dynamical system restricted to rational inputs, and combine tools imported from dynamics, such as transfer operators, with various tools of analytic combinatorics: generating functions, Dirichlet series, Tauberian theorems, Perron's formula and quasi-powers theorems. Such dynamical analyses can be used to perform the average-case analysis of algorithms, but also (dynamical) analysis in distribution.
A LOCAL LIMIT THEOREM WITH SPEED OF CONVERGENCE FOR EUCLIDEAN ALGORITHMS AND DIOPHANTINE COSTS, 2007
Abstract

Cited by 3 (2 self)
For large N, we consider the ordinary continued fraction of x = p/q with 1 ≤ p ≤ q ≤ N, or, equivalently, Euclid's gcd algorithm for two integers 1 ≤ p ≤ q ≤ N, putting the uniform distribution on the set of pairs (p, q). We study the distribution of the total cost of execution of the algorithm for an additive cost function c on the set Z₊* of possible digits, asymptotically for N → ∞. If c is non-lattice and satisfies mild growth conditions, the local limit theorem was proved previously by the second named author. Introducing Diophantine conditions on the cost, we are able to control the speed of convergence in the local limit theorem. We use previous estimates of the first author and Vallée, and we adapt to our setting bounds of Dolgopyat and Melbourne on transfer operators. Our Diophantine condition is generic. For smooth enough observables (depending on the Diophantine condition) we attain the optimal speed.
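Concretely, the "total cost" in this abstract is a sum of a cost function c over the continued fraction digits, i.e. the quotients of Euclid's algorithm. A minimal Python sketch (names are our own choosing):

```python
def cf_digits(p, q):
    """Continued fraction digits of p/q (the Euclid quotients), 1 <= p <= q."""
    digits = []
    while p != 0:
        digits.append(q // p)
        p, q = q % p, p
    return digits

def total_cost(p, q, c):
    """Additive cost: sum of c over the digits produced on input (p, q)."""
    return sum(c(m) for m in cf_digits(p, q))

# 13/29 = [0; 2, 4, 3]; with the constant cost c = 1, the total cost is
# just the number of steps.
digits = cf_digits(13, 29)                  # [2, 4, 3]
steps = total_cost(13, 29, lambda m: 1)     # 3
```

Taking c(m) = 1 recovers the step count; the paper's local limit theorem requires c to satisfy the growth and Diophantine conditions it describes.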
Statistical properties of Markov dynamical sources: applications to information theory
 Discrete Math. Theor. Comput. Sci.
Abstract

Cited by 1 (1 self)
In (V1), the author studies statistical properties of words generated by dynamical sources, using generalized Ruelle operators. The aim of this article is to generalize the class of sources for which those results hold. First, we avoid the use of Grothendieck theory and Fredholm determinants, which allows dynamical sources that cannot be extended to a complex disk or that are not analytic. Second, we consider Markov sources: the language generated by the source over an alphabet M is not necessarily all of M*.
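To illustrate the Markov restriction, here is a toy Python source (our own construction, not the article's formalism) in which the distribution of the next symbol depends on the current one, so the generated language is a proper subset of M*:

```python
import random

def markov_source(initial, transitions, n, rng):
    """Emit a word of length n; each next symbol is drawn from a
    distribution that depends on the current symbol (a Markov source)."""
    word = [rng.choices(list(initial), weights=list(initial.values()))[0]]
    while len(word) < n:
        dist = transitions[word[-1]]
        word.append(rng.choices(list(dist), weights=list(dist.values()))[0])
    return "".join(word)

# Over M = {a, b}, forbid the factor "bb": after a 'b' the source emits 'a'.
transitions = {"a": {"a": 0.5, "b": 0.5}, "b": {"a": 1.0}}
word = markov_source({"a": 0.5, "b": 0.5}, transitions, 20, random.Random(0))
```

Every word this source generates avoids the factor "bb", so its language is strictly smaller than M*: exactly the situation the article's generalization is meant to cover.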
Analysis of fast versions of the Euclid Algorithm
 Proceedings of ANALCO'07, January 2007
Abstract

Cited by 1 (1 self)
There exist fast variants of the gcd algorithm which are all based on principles due to Knuth and Schönhage. On inputs of size n, these algorithms use a divide-and-conquer approach, perform FFT multiplications, and stop the recursion at a depth slightly smaller than lg n. A rough estimate of the worst-case complexity of these fast versions provides the bound O(n (log n)^2 log log n). However, this estimate is based on some heuristics and is not actually proven. Here, we provide a precise probabilistic analysis of some of these fast variants, and we prove that their average bit-complexity on random inputs of size n is Θ(n (log n)^2 log log n), with a precise remainder term. We view such a fast algorithm as a sequence of what we call interrupted algorithms, and we obtain three results about the (plain) Euclid Algorithm which may be of independent interest. We precisely describe the evolution of the distribution during the execution of the (plain) Euclid Algorithm; we obtain a sharp estimate for the probability that all the quotients produced by the (plain) Euclid Algorithm are small enough; and we exhibit a strong regularity phenomenon, which proves that these interrupted algorithms are locally "similar" to the total algorithm. This finally leads to the precise evaluation of the average bit-complexity of these fast algorithms. This work uses various tools, and is based on a precise study of generalised transfer operators related to the dynamical system underlying the Euclid Algorithm.
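The "interrupted algorithm" viewpoint can be sketched directly on the plain Euclid algorithm (a toy Python illustration with names of our own; the actual fast variants replace each run by 2×2 matrix products computed with FFT multiplication):

```python
def euclid_interrupted(p, q, k):
    """Perform at most k division steps of the plain Euclid algorithm;
    return the remaining pair and the quotients produced so far."""
    quotients = []
    while q != 0 and len(quotients) < k:
        quotients.append(p // q)
        p, q = q, p % q
    return (p, q), quotients

# Composing interrupted runs reproduces the full execution.
state, qs1 = euclid_interrupted(377, 233, 3)   # first 3 steps
state, qs2 = euclid_interrupted(*state, 100)   # run to completion
gcd = state[0]                                  # gcd(377, 233) = 1
```

The regularity phenomenon announced in the abstract says, roughly, that each such interrupted run behaves statistically like a scaled copy of the whole algorithm.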
FINE COSTS FOR THE EUCLID ALGORITHM ON POLYNOMIALS AND FAREY MAPS
Abstract

Cited by 1 (1 self)
This paper studies digit-cost functions for the Euclid algorithm on polynomials with coefficients in a finite field, in terms of the number of operations performed on the finite field Fq. The usual bit-complexity is defined with respect to the degree of the quotients; we focus here on a notion of 'fine' complexity (and on associated costs) which relies on the number of their nonzero coefficients. The paper also considers and compares the ergodic behavior of the corresponding costs for truncated trajectories under the action of the Gauss map acting on the set of formal power series with coefficients in a finite field. The present paper is thus mainly interested in the study of the probabilistic behavior of the corresponding random variables: average estimates (expectation and variance) are obtained in a purely combinatorial way thanks to classical methods in combinatorial analysis (more precisely, bivariate generating functions); some of our costs are even proved to satisfy an asymptotic Gaussian law. We also relate this study to a Farey algorithm which is a refinement of the continued fraction algorithm for the set of formal power series with coefficients in a finite field: this algorithm discovers 'step by step' each nonzero monomial of the quotient, so its number of steps is closely related to the number of nonzero coefficients. In particular, this map is shown to admit a finite invariant measure, in contrast with the real case. This version of the Farey map also produces mediant convergents in the continued fraction expansion of formal power series with coefficients in a finite field.
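For intuition, here is a small Python sketch of the 'fine' cost (our own illustration, not the paper's combinatorial machinery; all names assumed): run Euclid on polynomials over F_q, with coefficient lists stored lowest degree first, and count the nonzero quotient coefficients:

```python
def poly_divmod(a, b, q):
    """Quotient and remainder of polynomials over F_q (q prime);
    coefficient lists, lowest degree first, leading coefficient nonzero."""
    a = a[:]
    inv = pow(b[-1], q - 2, q)                 # inverse of b's leading coefficient
    quot = [0] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        c = a[-1] * inv % q
        quot[shift] = c
        for i, bi in enumerate(b):             # subtract c * x^shift * b
            a[shift + i] = (a[shift + i] - c * bi) % q
        while len(a) > 1 and a[-1] == 0:       # renormalize the remainder
            a.pop()
    return quot, a

def fine_cost(a, b, q):
    """Total number of nonzero quotient coefficients over the whole Euclid
    run: the 'fine' digit-cost, as opposed to the degree-based one."""
    cost = 0
    while any(b):
        quot, r = poly_divmod(a, b, q)
        cost += sum(1 for c in quot if c != 0)
        a, b = b, r
    return cost

# Over F_2: x^2 + 1 = (x + 1)^2, so the single quotient x + 1 contributes
# 2 nonzero coefficients and the remainder is 0.
cost = fine_cost([1, 0, 1], [1, 1], 2)
```

The degree-based cost would charge this division only deg(quotient) = 1; the fine cost charges one unit per nonzero monomial the Farey-style algorithm must discover.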
Analysis of Algorithms (AofA): Part II: 1998-2000 ("Princeton-Barcelona-Gdansk"), 2003
Abstract
This article is a continuation of our previous Algorithmic Column [54] (EATCS, 77, 2002) dedicated to activities of the Analysis of Algorithms group during the "Dagstuhl Period" (1993-1997). The first three meetings took place in Schloss Dagstuhl, Germany.
HAUSDORFF DIMENSION OF REAL NUMBERS WITH BOUNDED DIGIT AVERAGES
Abstract
This paper considers numeration schemes, defined in terms of dynamical systems, and studies the set of reals which obey some constraints on their digits. In this general setting, (almost) all such sets have zero Lebesgue measure, even though the nature of the constraints and the numeration schemes can be very different. Sets of zero measure appear in many areas of science, and Hausdorff dimension has been shown to be an appropriate tool for studying their nature. Classically, the constraints studied involve each digit in an independent way. Here, more general conditions are studied, which only impose (additive) constraints on each digit prefix. The main example of interest deals with reals for which every digit-prefix average in the continued fraction expansion is bounded by M. More generally, a weight function is defined on the digits, and the weighted average of each prefix has to be bounded by M. This setting can be translated in terms of random walks where each step performed depends on the present digit, and the walks under study are constrained to stay below a line of slope M. We first provide a characterization of the Hausdorff dimension s_M, in terms of the dominant eigenvalue of the weighted transfer operator relative to the dynamical system, in a quite general setting. We then come back to our main example; with the previous characterization at hand and use of the Mellin transform, we exhibit the behaviour of s_M − 1 when the bound M becomes large. Even if this study seems closely related to previous works in multifractal analysis, it is in a sense complementary, because it uses weights on digits which grow faster and deals with different methods.
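The constraint in the main example is easy to state operationally. A Python sketch (names are ours) that extracts the continued fraction digits of a rational and checks whether every prefix average stays below the bound M:

```python
from fractions import Fraction

def cf_digits(x, depth):
    """First `depth` continued fraction digits of a rational x in (0, 1)."""
    digits = []
    for _ in range(depth):
        if x == 0:
            break
        x = 1 / x
        digits.append(int(x))
        x -= int(x)
    return digits

def prefix_averages_bounded(digits, M):
    """True iff every prefix average (m1 + ... + mk)/k is at most M,
    i.e. the associated random walk stays below the line of slope M."""
    total = 0
    for k, m in enumerate(digits, start=1):
        total += m
        if total > M * k:
            return False
    return True

# 13/29 = [0; 2, 4, 3]: prefix averages 2, 3, 3 are bounded by M = 3
# but not by M = 2.
ok = prefix_averages_bounded(cf_digits(Fraction(13, 29), 10), 3)
```

The set studied in the paper is the reals whose digit sequence passes this check at every depth; its Hausdorff dimension s_M is what the transfer-operator characterization computes.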
Generalized Pattern Matching Statistics
Abstract
Keywords: Average-case analysis of algorithms, algorithms on words, dynamical systems and dynamical analysis.

Introduction. Various pattern matching problems. String matching is the basic pattern matching problem. Here, a string w is a sequence of symbols w = w1 w2 … ws (of length s), and one searches for occurrences of w (as a block of consecutive symbols) in a text T. However, there are several useful generalizations of this basic problem. Set of patterns: in the classical string matching problem, the pattern w should appear exactly (and consecutively) in the text, while in the approximate case a few mismatches are considered acceptable. Approximate string matching is then expressed as matching against a set L of words that contains all the valid approximations of the initial string.
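The approximate variant described above, matching against the set L of d-approximations of w, can be sketched in a few lines of Python (a naive quadratic scan, for illustration only):

```python
def occurrences_with_mismatches(text, w, d):
    """Starting positions where w occurs in text with at most d mismatches,
    i.e. positions where text matches some word of the set L of
    d-approximations of w."""
    s = len(w)
    hits = []
    for i in range(len(text) - s + 1):
        mismatches = sum(1 for a, b in zip(text[i:i + s], w) if a != b)
        if mismatches <= d:
            hits.append(i)
    return hits

# Exact matching (d = 0) versus one allowed mismatch (d = 1).
exact = occurrences_with_mismatches("abcabd", "abd", 0)    # [3]
approx = occurrences_with_mismatches("abcabd", "abd", 1)   # [0, 3]
```

Setting d = 0 recovers classical string matching; larger d enlarges the set L and hence the occurrence statistics that the average-case analysis studies.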