Results 1–10 of 15
Euclidean algorithms are Gaussian
, 2003
Cited by 23 (10 self)
Abstract. We prove a Central Limit Theorem for a general class of cost parameters associated to the three standard Euclidean algorithms, with optimal speed of convergence, and error terms for the mean and variance. For the most basic parameter of the algorithms, the number of steps, we go further and prove a Local Limit Theorem (LLT), with speed of convergence O((log N)^(−1/4+ε)). This extends and improves the LLT obtained by Hensley [27] in the case of the standard Euclidean algorithm. We use a “dynamical analysis” methodology, viewing an algorithm as a dynamical system (restricted to rational inputs), and combining tools imported from dynamics, such as the crucial transfer operators, with various other techniques: Dirichlet series, Perron’s formula, quasi-powers theorems, and the saddle point method. Dynamical analysis had previously been used to perform average-case analysis of algorithms. For the present (dynamical) analysis in distribution, we require precise estimates on the transfer operators when a parameter varies along vertical lines in the complex plane. Such estimates build on results obtained only recently by Dolgopyat in the context of continuous-time dynamics [20].
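The Gaussian behaviour of the step count described above is easy to observe empirically. The sketch below is our own illustration, not code from the paper, and all helper names are ours: it samples the number of division steps of the standard Euclidean algorithm on uniform random inputs.

```python
import random
import statistics

def euclid_steps(p, q):
    """Number of division steps of the standard Euclidean algorithm."""
    steps = 0
    while q:
        p, q = q, p % q
        steps += 1
    return steps

def sample_steps(N, trials=10_000, seed=1):
    """Step counts for `trials` uniform random pairs drawn from [1, N]^2."""
    rng = random.Random(seed)
    return [euclid_steps(rng.randint(1, N), rng.randint(1, N))
            for _ in range(trials)]

counts = sample_steps(10**6)
mu, sigma = statistics.mean(counts), statistics.stdev(counts)
# A histogram of `counts` is close to a normal curve centred at `mu`,
# which is the Central Limit Theorem behaviour the abstract refers to.
```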
A LOCAL LIMIT THEOREM WITH SPEED OF CONVERGENCE FOR EUCLIDEAN ALGORITHMS AND DIOPHANTINE COSTS
, 2007
Cited by 3 (2 self)
Abstract. For large N, we consider the ordinary continued fraction of x = p/q with 1 ≤ p ≤ q ≤ N, or, equivalently, Euclid’s gcd algorithm for two integers 1 ≤ p ≤ q ≤ N, putting the uniform distribution on the set of such p and q. We study the distribution of the total cost of execution of the algorithm for an additive cost function c on the set Z₊* of possible digits, asymptotically as N → ∞. If c is non-lattice and satisfies mild growth conditions, the local limit theorem was proved previously by the second-named author. Introducing Diophantine conditions on the cost, we are able to control the speed of convergence in the local limit theorem. We use previous estimates of the first author and Vallée, and we adapt to our setting bounds of Dolgopyat and Melbourne on transfer operators. Our Diophantine condition is generic. For smooth enough observables (depending on the Diophantine condition) we attain the optimal speed.
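The additive costs in question can be made concrete with a short sketch (ours, with hypothetical names): the continued fraction digits of p/q are exactly the quotients produced by Euclid's algorithm, and an additive cost sums c(m) over those digits.

```python
def cf_digits(p, q):
    """Continued fraction digits (Euclid quotients) of p/q, 1 <= p <= q."""
    digits = []
    while p:
        digits.append(q // p)
        p, q = q % p, p
    return digits

def additive_cost(p, q, c):
    """Total cost: the sum of c(m) over the digits m of p/q."""
    return sum(c(m) for m in cf_digits(p, q))

# c = 1 counts the number of steps; c(m) = bit length of m approximates
# the bit cost of one division step.
assert cf_digits(7, 16) == [2, 3, 2]   # 7/16 = 1/(2 + 1/(3 + 1/2))
```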
Euclidean dynamics
 Discrete and Continuous Dynamical Systems
Cited by 2 (1 self)
Abstract. We study a general class of Euclidean algorithms which compute the greatest common divisor [gcd], and we perform probabilistic analyses of their main parameters. We view an algorithm as a dynamical system restricted to rational inputs, and combine tools imported from dynamics, such as transfer operators, with various tools of analytic combinatorics: generating functions, Dirichlet series, Tauberian theorems, Perron’s formula and quasi-powers theorems. Such dynamical analyses can be used to perform the average-case analysis of algorithms, but also (dynamical) analysis in distribution.
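As a minimal illustration of such a class (our sketch, not the paper's), here are two gcd algorithms driven by different division rules — standard floor division and centered (nearest-integer) division — each corresponding to a different underlying dynamical system on rational inputs.

```python
def gcd_standard(p, q):
    """Gcd by standard division: remainder in [0, q)."""
    while q:
        p, q = q, p % q
    return p

def gcd_centered(p, q):
    """Gcd by centered division: remainder taken in (-q/2, q/2]."""
    while q:
        r = p % q
        if 2 * r > q:      # fold the remainder into the centered interval
            r -= q
        p, q = q, abs(r)
    return p
```

The centered variant typically performs fewer steps, which is one reason the constants in the probabilistic analyses differ between algorithms of the class.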
Statistical properties of Markov dynamical sources: applications to information theory
 Discrete Math. Theor. Comput. Sci
Cited by 1 (1 self)
In (V1), the author studies statistical properties of words generated by dynamical sources. This is done using generalized Ruelle operators. The aim of this article is to generalize the notion of sources for which the results hold. First, we avoid the use of Grothendieck theory and Fredholm determinants; this allows dynamical sources that cannot be extended to a complex disk or that are not analytic. Second, we consider Markov sources: the language generated by the source over an alphabet M is not necessarily M*.
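A Markov source of the kind described can be sketched as follows (entirely our example; the alphabet and transition table are hypothetical). Because the admissible next symbols depend on the current one, the generated language is a strict subset of M*.

```python
import random

# Markov constraint over M = {'a', 'b'}: after 'b', only 'a' may follow,
# so no generated word ever contains the factor 'bb'.
TRANSITIONS = {None: ['a', 'b'], 'a': ['a', 'b'], 'b': ['a']}

def emit(n, seed=0):
    """Emit a word of length n from the Markov source."""
    rng = random.Random(seed)
    word, last = [], None
    for _ in range(n):
        last = rng.choice(TRANSITIONS[last])
        word.append(last)
    return ''.join(word)

word = emit(50)
assert 'bb' not in word    # the forbidden factor never occurs
```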
Analysis of fast versions of the Euclid Algorithm
 Proceedings of ANALCO’07, Janvier 2007
Cited by 1 (1 self)
There exist fast variants of the gcd algorithm which are all based on principles due to Knuth and Schönhage. On inputs of size n, these algorithms use a Divide and Conquer approach, perform FFT multiplications, and stop the recursion at a depth slightly smaller than lg n. A rough estimate of the worst-case complexity of these fast versions provides the bound O(n (log n)^2 log log n). However, this estimate is based on some heuristics and is not actually proven. Here, we provide a precise probabilistic analysis of some of these fast variants, and we prove that their average bit-complexity on random inputs of size n is Θ(n (log n)^2 log log n), with a precise remainder term. We view such a fast algorithm as a sequence of what we call interrupted algorithms, and we obtain three results about the (plain) Euclid Algorithm which may be of independent interest. We precisely describe the evolution of the distribution during the execution of the (plain) Euclid Algorithm; we obtain a sharp estimate for the probability that all the quotients produced by the (plain) Euclid Algorithm are small enough; and we exhibit a strong regularity phenomenon, which proves that these interrupted algorithms are locally “similar” to the total algorithm. This finally leads to the precise evaluation of the average bit-complexity of these fast algorithms. This work uses various tools, and is based on a precise study of generalised transfer operators related to the dynamical system underlying the Euclid Algorithm.
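The notion of an interrupted algorithm can be illustrated with a small sketch (ours, loosely following the abstract's terminology): run Euclid's division steps only until the remainder falls below a target size, which is how the fast Divide and Conquer variants invoke the plain algorithm at each recursion level. The function name and interface are our own.

```python
def euclid_until(p, q, bound):
    """Run Euclid's division steps until the remainder drops below `bound`;
    return the pair reached and the quotients produced so far."""
    quotients = []
    while q >= bound:
        quotients.append(p // q)
        p, q = q, p % q
    return (p, q), quotients

# With bound = 1 the interrupted run is the total algorithm and ends at
# (gcd, 0); with a larger bound it stops partway through the quotient
# sequence, leaving the rest to the recursive calls.
```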
Preface
The main justification for this book is that there have been significant advances in continued fractions over the past decade, but these remain for the most part scattered across the literature, under headings ranging from algebraic number theory to theoretical plasma physics. We now have a better understanding of the rate at which assorted continued fraction or greatest common divisor (gcd) algorithms complete their tasks. The number of steps required to complete a gcd calculation, for instance, has a Gaussian normal distribution. We know a lot more about badly approximable numbers. There are several related threads here. A badly approximable number is a number x such that {q|qx − p| : p, q ∈ Z and q ≠ 0} is bounded below by a positive constant; badly approximable numbers have continued fraction expansions with bounded partial quotients, and so we are led to consider a kind of Cantor set E_M consisting of all x ∈ [0, 1] such that the partial quotients of x are bounded above by M. The notion of a badly approximable rational number has the ring of crank mathematics, but it is quite natural to study the set of rationals r with partial quotients bounded by M. The number of such rationals with denominators up to n, say, turns out to be closely related to the Hausdorff dimension of E_M (it is comparable to n^(2 dim E_M)), which is in turn related to the spectral radius of linear operators L_{M,s}, acting on some suitably chosen space of functions f, and given by L_{M,s} f(t) = Σ_{k=1}^{M} (k + t)^(−s) f(1/(k + t)). Similar operators have been studied by, among others, David Ruelle, in connection with theoretical one-dimensional plasmas, and they are related to entropy. Alongside these developments there has been a dramatic increase in the computational power available to investigators. This has been helpful on the theoretical side, as one is more likely to seek a proof for a result when …
DOI: 10.1214/07-AIHP140 © Association des Publications de l’Institut Henri Poincaré, 2008
, 2007
HAUSDORFF DIMENSION OF REAL NUMBERS WITH BOUNDED DIGIT AVERAGES
Abstract. This paper considers numeration schemes, defined in terms of dynamical systems, and studies the set of reals which obey some constraints on their digits. In this general setting, (almost) all such sets have zero Lebesgue measure, even though the nature of the constraints and the numeration schemes can be very different. Sets of zero measure appear in many areas of science, and Hausdorff dimension has proved to be an appropriate tool for studying their nature. Classically, the constraints studied involve each digit in an independent way. Here, more general conditions are studied, which only impose (additive) constraints on each digit prefix. The main example of interest deals with reals all of whose digit-prefix averages in their continued fraction expansion are bounded by M. More generally, a weight function is defined on the digits, and the weighted average of each prefix has to be bounded by M. This setting can be translated into terms of random walks where each step performed depends on the present digit, and the walks under study are constrained to stay always under a line of slope M. We first provide a characterization of the Hausdorff dimension s_M, in terms of the dominant eigenvalue of the weighted transfer operator relative to the dynamical system, in a quite general setting. We then come back to our main example; with the previous characterization at hand and use of the Mellin transform, we exhibit the behaviour of s_M − 1 when the bound M becomes large. Even if this study seems closely related to previous works in multifractal analysis, it is in a sense complementary, because it uses weights on digits which grow faster and deals with different methods.
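The prefix-average constraint, i.e. the digit random walk staying under the line of slope M, is simple to state in code (a sketch of ours; the function name is hypothetical):

```python
def prefix_averages_bounded(digits, M):
    """True iff every prefix average of `digits` is at most M, i.e. the
    cumulative digit sum never crosses the line of slope M."""
    total = 0
    for i, d in enumerate(digits, start=1):
        total += d
        if total > M * i:       # walk crosses the line of slope M
            return False
    return True

assert prefix_averages_bounded([1, 3, 2], 2)       # averages 1, 2, 2
assert not prefix_averages_bounded([3, 1, 1], 2)   # first average is 3
```

Note that this is weaker than bounding each digit by M: a single large digit is allowed as long as earlier small digits have left enough slack.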
Generalized Pattern Matching Statistics
Keywords: Average-case analysis of algorithms, algorithms on words, dynamical systems and dynamical analysis. Introduction: various pattern matching problems. String matching is the basic pattern matching problem. Here, a string w is a sequence of symbols w = w_1 w_2 … w_s (of length s), and one searches for occurrences of w (as a block of consecutive symbols) in a text T. However, there are several useful generalizations of this basic problem. Set of patterns: in the classical string matching problem, the pattern w should appear exactly (and consecutively) in the text, while, in the approximate case, a few mismatches are considered acceptable. Approximate string matching is then expressed as matching against a set L of words that contains all the valid approximations of the initial string.
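The set-of-patterns view of approximate matching can be sketched directly (our illustrative code, not from the paper): a window of the text matches when it lies in the set L of words within k mismatches of the pattern.

```python
def approx_occurrences(text, pattern, k):
    """Positions where `pattern` occurs in `text` with at most k mismatches
    (Hamming distance), i.e. where the window belongs to the set L of
    valid approximations of the pattern."""
    s = len(pattern)
    hits = []
    for i in range(len(text) - s + 1):
        mismatches = sum(a != b for a, b in zip(text[i:i + s], pattern))
        if mismatches <= k:
            hits.append(i)
    return hits

assert approx_occurrences("abracadabra", "abra", 0) == [0, 7]   # exact
assert approx_occurrences("abracadabra", "abca", 1) == [0, 7]   # 1 mismatch
```

With k = 0 this reduces to exact string matching, so the exact problem is the boundary case of the set-of-patterns formulation.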