Results 1–10 of 18
Euclidean algorithms are Gaussian
2003
"... Abstract. We prove a Central Limit Theorem for a general class of costparameters associated to the three standard Euclidean algorithms, with optimal speed of convergence, and error terms for the mean and variance. For the most basic parameter of the algorithms, the number of steps, we go further an ..."
Abstract

Cited by 28 (12 self)
 Add to MetaCart
(Show Context)
Abstract. We prove a Central Limit Theorem for a general class of cost parameters associated to the three standard Euclidean algorithms, with optimal speed of convergence, and error terms for the mean and variance. For the most basic parameter of the algorithms, the number of steps, we go further and prove a Local Limit Theorem (LLT), with speed of convergence O((log N)^{−1/4+ε}). This extends and improves the LLT obtained by Hensley [27] in the case of the standard Euclidean algorithm. We use a “dynamical analysis” methodology, viewing an algorithm as a dynamical system (restricted to rational inputs), and combining tools imported from dynamics, such as the crucial transfer operators, with various other techniques: Dirichlet series, Perron’s formula, quasi-powers theorems, the saddle point method. Dynamical analysis had previously been used to perform average-case analysis of algorithms. For the present (dynamical) analysis in distribution, we require precise estimates on the transfer operators, when a parameter varies along vertical lines in the complex plane. Such estimates build on results obtained only recently by Dolgopyat in the context of continuous-time dynamics [20].
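The number of division steps, the abstract's most basic cost parameter, is easy to simulate. The sketch below is our own illustration, not the paper's method (function names are ours): it counts division steps of the standard Euclid algorithm for uniformly random pairs and reports the empirical mean and standard deviation, whose asymptotically Gaussian behaviour is what the Central Limit Theorem describes.

```python
import random
import statistics

def euclid_steps(p: int, q: int) -> int:
    """Number of division steps of the standard Euclid algorithm on (p, q)."""
    steps = 0
    while q:
        p, q = q, p % q
        steps += 1
    return steps

# Empirical distribution of the step count over uniform pairs 1 <= p <= q <= N.
random.seed(1)
N = 1000
samples = []
for _ in range(20_000):
    q = random.randint(1, N)
    p = random.randint(1, q)
    samples.append(euclid_steps(p, q))

mean = statistics.mean(samples)
std = statistics.stdev(samples)
# The CLT says (steps - mean)/std is asymptotically standard normal.
```

Consecutive Fibonacci numbers realise the worst case of the step count, which is why they appear in the deterministic checks below.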
Dynamical analysis of α-Euclidean algorithms
 J. Algorithms
"... Abstract We study a class of Euclidean algorithms related to divisions where the remainder is constrained to belong to [α −1, α], for some α ∈ [0, 1]. The paper is devoted to the averagecase analysis of these algorithms, in terms of number of steps or bitcomplexity. This is a new instance of the s ..."
Abstract

Cited by 9 (3 self)
 Add to MetaCart
(Show Context)
Abstract We study a class of Euclidean algorithms related to divisions where the remainder is constrained to belong to [α − 1, α], for some α ∈ [0, 1]. The paper is devoted to the average-case analysis of these algorithms, in terms of number of steps or bit-complexity. This is a new instance of the so-called “dynamical analysis” method, which makes deep use of dynamical systems. Here, the dynamical systems of interest have an infinite number of branches and they are not Markovian, so the general framework of dynamical analysis is harder to adapt to this case than previously.
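The constrained division can be made concrete with a small sketch of our own (not the paper's implementation; α is kept as an exact fraction to avoid rounding at the interval boundary): each step picks the unique quotient m with p − mq ∈ [(α − 1)q, αq], and the algorithm recurses on (q, |r|). For α ∈ (0, 1) the scaled remainder satisfies |r|/q ≤ max(α, 1 − α) < 1, so the iteration terminates; α = 1/2 gives the centered (nearest-integer) division.

```python
import math
from fractions import Fraction

def alpha_step(p: int, q: int, alpha: Fraction):
    """One alpha-Euclidean division: r = p - m*q with r/q in [alpha - 1, alpha]."""
    m = math.ceil(Fraction(p, q) - alpha)   # exact arithmetic at the boundary
    return m, p - m * q

def alpha_gcd(p: int, q: int, alpha: Fraction = Fraction(1, 2)) -> int:
    """gcd computed by iterating the constrained division on (q, |r|)."""
    while q:
        _, r = alpha_step(p, q, alpha)
        p, q = q, abs(r)
    return p
```

For example, on (13, 5) with α = 1/2 the first step takes m = 3 and r = −2, since −2/5 lies in [−1/2, 1/2]; the standard algorithm would take m = 2.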
Euclidean dynamics
 DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS, 2006
"... We study a general class of Euclidean algorithms which compute the greatest common divisor [gcd], and we perform probabilistic analyses of their main parameters. We view an algorithm as a dynamical system restricted to rational inputs, and combine tools imported from dynamics, such as transfer ope ..."
Abstract

Cited by 4 (2 self)
 Add to MetaCart
(Show Context)
We study a general class of Euclidean algorithms which compute the greatest common divisor (gcd), and we perform probabilistic analyses of their main parameters. We view an algorithm as a dynamical system restricted to rational inputs, and combine tools imported from dynamics, such as transfer operators, with various tools of analytic combinatorics: generating functions, Dirichlet series, Tauberian theorems, Perron’s formula and quasi-powers theorems. Such dynamical analyses can be used to perform the average-case analysis of algorithms, but also (dynamical) analysis in distribution.
Exponential inequalities and functional estimations for weak dependent data; applications to dynamical systems
 Stochastics and Dynamics
"... Abstract. We estimate density and regression functions for weak dependant datas. Using an exponential inequality obtained in [DeP] and in [Mau2], we control the deviation between the estimator and the function itself. These results are applied to a large class of dynamical systems and lead to esti ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
(Show Context)
Abstract. We estimate density and regression functions for weakly dependent data. Using an exponential inequality obtained in [DeP] and in [Mau2], we control the deviation between the estimator and the function itself. These results are applied to a large class of dynamical systems and lead to estimates of invariant densities and of the mapping itself. Dynamical systems are widely used by scientists to model complex systems ([ABST]). Therefore, estimating functions related to dynamical systems is crucial. Of particular interest are: the invariant density, the mapping itself, the pressure function. We shall see that many dynamical systems have the same behavior as weakly dependent processes (as defined in [DoLo]). We obtain deviation results for regression functions and densities for weakly dependent processes and apply these results to dynamical systems. In [Mau2] we gave an estimate (with control of the deviation) of the pressure function for some expanding maps of the interval. Results on the estimation of the invariant density for the same kind of maps were obtained in [P] and stated in [Mae]. In this latter article, results on the estimation of the mapping were also stated. These last two papers dealt with convergence in quadratic mean. Our goal here is, on the one hand, to consider more general dynamical systems: non-uniformly hyperbolic maps on the interval, dynamics in higher dimension, etc. On the other hand, we obtain bounds on the deviation between the estimator and the regression function as well as almost sure convergence. Related results on the estimation of the regression function may also be found in [FV] and [Mas], where strongly mixing processes are considered and almost everywhere convergence and asymptotic normality are proved. Our aim is to provide an estimate of the deviation between the estimator and the regression function for a larger class of mixing processes and for regression functions that may have singularities.
Before giving the precise definitions and results, let us state our main results informally. We consider a weakly dependent stationary process X_0, ..., X_i, ... taking values in Σ ⊂ R^d. Our condition of weak dependence is with respect to a Banach space C of bounded functions on Σ (see Definition 1 below). Let (Y_i)_{i∈N} be a stationary process taking values in R and satisfying a condition of weak dependence with respect to the process X_0, ..., X_i, ...
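As a toy version of the density estimation discussed above (our own illustration, not the paper's estimator), one can estimate the invariant density of a chaotic interval map from a single orbit with a plain histogram. For the logistic map x ↦ 4x(1 − x) the invariant density is known in closed form, 1/(π√(x(1 − x))), which gives a sanity check.

```python
import math

def logistic_orbit(x0: float, n: int):
    """Iterate the logistic map x -> 4x(1-x) and collect the orbit."""
    xs = []
    x = x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

def histogram_density(xs, bins: int = 50):
    """Histogram estimator of a density on [0, 1] from the sample xs."""
    counts = [0] * bins
    for x in xs:
        counts[min(int(x * bins), bins - 1)] += 1
    width = 1.0 / bins
    return [c / (len(xs) * width) for c in counts]

orbit = logistic_orbit(0.123456789, 200_000)
density = histogram_density(orbit)
# The bin containing x = 0.5 should be close to 1/(pi*sqrt(0.25)) = 2/pi.
```

The orbit points are strongly dependent, which is exactly the situation the abstract addresses; the histogram still converges because the map mixes fast.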
A LOCAL LIMIT THEOREM WITH SPEED OF CONVERGENCE FOR EUCLIDEAN ALGORITHMS AND DIOPHANTINE COSTS
2007
"... Abstract. For large N, we consider the ordinary continued fraction of x = p/q with 1 ≤ p ≤ q ≤ N, or, equivalently, Euclid’s gcd algorithm for two integers 1 ≤ p ≤ q ≤ N, putting the uniform distribution on the set of p and qs. We study the distribution of the total cost of execution of the algorith ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
(Show Context)
Abstract. For large N, we consider the ordinary continued fraction of x = p/q with 1 ≤ p ≤ q ≤ N, or, equivalently, Euclid’s gcd algorithm for two integers 1 ≤ p ≤ q ≤ N, putting the uniform distribution on the set of pairs (p, q). We study the distribution of the total cost of execution of the algorithm for an additive cost function c on the set Z₊∗ of possible digits, asymptotically for N → ∞. If c is non-lattice and satisfies mild growth conditions, the local limit theorem was proved previously by the second named author. Introducing Diophantine conditions on the cost, we are able to control the speed of convergence in the local limit theorem. We use previous estimates of the first author and Vallée, and we adapt to our setting bounds of Dolgopyat and Melbourne on transfer operators. Our Diophantine condition is generic. For smooth enough observables (depending on the Diophantine condition) we attain the optimal speed.
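An additive cost in the sense of the abstract just sums a digit cost c over the continued fraction digits, i.e. the quotients produced by Euclid's algorithm. The sketch below is our own illustration (names are ours): c ≡ 1 recovers the number of steps, and c(m) = binary length of m is related to the bit-complexity.

```python
def cf_digits(p: int, q: int):
    """Continued fraction digits of p/q (1 <= p <= q): the Euclid quotients."""
    digits = []
    while p:
        digits.append(q // p)
        q, p = p, q % p
    return digits

def total_cost(p: int, q: int, c) -> int:
    """Additive cost: sum of the digit cost c over the digits of p/q."""
    return sum(c(m) for m in cf_digits(p, q))
```

For example, 5/13 = [0; 2, 1, 1, 2]: the unit cost gives 4 (the number of steps) and the bit-length cost gives 2 + 1 + 1 + 2 = 6.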
Analysis of fast versions of the Euclid Algorithm
 Proceedings of ANALCO’07, January 2007
"... There exist fast variants of the gcd algorithm which are all based on principles due to Knuth and Schönhage. On inputs of size n, these algorithms use a Divide and Conquer approach, perform FFT multiplications and stop the recursion at a depth slightly smaller than lg n. A rough estimate of the wors ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
There exist fast variants of the gcd algorithm which are all based on principles due to Knuth and Schönhage. On inputs of size n, these algorithms use a Divide and Conquer approach, perform FFT multiplications and stop the recursion at a depth slightly smaller than lg n. A rough estimate of the worst-case complexity of these fast versions provides the bound O(n(log n)^2 log log n). However, this estimate is based on some heuristics and is not actually proven. Here, we provide a precise probabilistic analysis of some of these fast variants, and we prove that their average bit-complexity on random inputs of size n is Θ(n(log n)^2 log log n), with a precise remainder term. We view such a fast algorithm as a sequence of what we call interrupted algorithms, and we obtain three results about the (plain) Euclid Algorithm which may be of independent interest. We precisely describe the evolution of the distribution during the execution of the (plain) Euclid Algorithm; we obtain a sharp estimate for the probability that all the quotients produced by the (plain) Euclid Algorithm are small enough; we also exhibit a strong regularity phenomenon, which proves that these interrupted algorithms are locally “similar” to the total algorithm. This finally leads to the precise evaluation of the average bit-complexity of these fast algorithms. This work uses various tools, and is based on a precise study of generalised transfer operators related to the dynamical system underlying the Euclid Algorithm.
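The notion of an interrupted algorithm can be illustrated naively; this is our own sketch of the idea, not the paper's Divide and Conquer implementation (which replaces the slow divisions by fast recursive computations). Each stage runs plain Euclid divisions only until the second argument has shrunk below a prescribed bit size, then hands the shortened pair to the next stage; chaining the stages reproduces the plain gcd.

```python
from math import gcd

def interrupted_euclid(p: int, q: int, target_bits: int):
    """Run Euclid division steps on (p, q) until q fits in target_bits bits."""
    while q and q.bit_length() > target_bits:
        p, q = q, p % q
    return p, q

def staged_gcd(p: int, q: int) -> int:
    """Chain interrupted runs, halving the allowed size of q at each stage."""
    bits = max(p, q).bit_length()
    while q:
        bits //= 2
        p, q = interrupted_euclid(p, q, bits)
    return p
```

Each stage is "locally similar" to the total algorithm in the sense of the abstract: it performs ordinary Euclid steps, just on a truncated portion of the execution.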
Statistical properties of Markov dynamical sources: applications to information theory
 Discrete Math. Theor. Comput. Sci
"... In (V1), the author studies statistical properties of words generated by dynamical sources. This is done using generalized Ruelle operators. The aim of this article is to generalize the notion of sources for which the results hold. First, we avoid the use of Grothendieck theory and Fredholm determin ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
In (V1), the author studies statistical properties of words generated by dynamical sources. This is done using generalized Ruelle operators. The aim of this article is to generalize the notion of sources for which the results hold. First, we avoid the use of Grothendieck theory and Fredholm determinants; this allows dynamical sources that cannot be extended to a complex disk or that are not analytic. Second, we consider Markov sources: the language generated by the source over an alphabet M is not necessarily M*.
FINE COSTS FOR THE EUCLID ALGORITHM ON POLYNOMIALS AND FAREY MAPS
"... Abstract. This paper studies digitcost functions for the Euclid algorithm on polynomials with coefficients in a finite field, in terms of the number of operations performed on the finite field Fq. The usual bitcomplexity is defined with respect to the degree of the quotients; we focus here on a no ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Abstract. This paper studies digit-cost functions for the Euclid algorithm on polynomials with coefficients in a finite field, in terms of the number of operations performed on the finite field F_q. The usual bit-complexity is defined with respect to the degree of the quotients; we focus here on a notion of ‘fine’ complexity (and on associated costs) which relies on the number of their nonzero coefficients. The paper also considers and compares the ergodic behavior of the corresponding costs for truncated trajectories under the action of the Gauss map acting on the set of formal power series with coefficients in a finite field. The present paper is thus mainly interested in the study of the probabilistic behavior of the corresponding random variables: average estimates (expectation and variance) are obtained in a purely combinatorial way thanks to classical methods in combinatorial analysis (more precisely, bivariate generating functions); some of our costs are even proved to satisfy an asymptotic Gaussian law. We also relate this study to a Farey algorithm which is a refinement of the continued fraction algorithm for the set of formal power series with coefficients in a finite field: this algorithm discovers ‘step by step’ each nonzero monomial of the quotient, so its number of steps is closely related to the number of nonzero coefficients. In particular, this map is shown to admit a finite invariant measure, in contrast with the real case. This version of the Farey map also produces mediant convergents in the continued fraction expansion of formal power series with coefficients in a finite field.
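Over F_2 the contrast between the two costs can be demonstrated directly. This is a hypothetical sketch of ours, specialising F_q to q = 2 and encoding a polynomial as a Python integer whose bits are its coefficients: the usual cost of a division step is the quotient's degree, while the fine cost counts its nonzero coefficients.

```python
def polydivmod2(a: int, b: int):
    """Divide polynomials over GF(2) (bits = coefficients); return (quot, rem)."""
    q = 0
    db = b.bit_length() - 1              # degree of the divisor
    while a and a.bit_length() - 1 >= db:
        shift = (a.bit_length() - 1) - db
        q ^= 1 << shift                  # add x^shift to the quotient
        a ^= b << shift                  # subtract (= XOR over GF(2)) the divisor
    return q, a

def poly_gcd2(a: int, b: int):
    """gcd in GF(2)[x], with the usual (degree) and fine (popcount) total costs."""
    degree_cost = fine_cost = 0
    while b:
        quot, rem = polydivmod2(a, b)
        if quot:
            degree_cost += quot.bit_length() - 1    # degree of the quotient
            fine_cost += bin(quot).count("1")       # nonzero coefficients
        a, b = b, rem
    return a, degree_cost, fine_cost
```

For instance, gcd(x^2 + 1, x + 1) = x + 1 over F_2, via the single quotient x + 1: degree cost 1, fine cost 2.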