Results 1–10 of 24
Analysis of the binary Euclidean algorithm
In Directions and Recent Results in Algorithms and Complexity, 1976
Cited by 31 (2 self)

The binary Euclidean algorithm is a variant of the classical Euclidean algorithm. It avoids multiplications and divisions, except by powers of two, so is potentially faster than the classical algorithm on a binary machine. We describe the binary algorithm and consider its average-case behaviour. In particular, we correct some errors in the literature, discuss some recent results of Vallée, and describe a numerical computation which supports a conjecture of Vallée.
Euclidean algorithms are Gaussian
2003
Cited by 28 (12 self)

Abstract. We prove a Central Limit Theorem for a general class of cost parameters associated to the three standard Euclidean algorithms, with optimal speed of convergence, and error terms for the mean and variance. For the most basic parameter of the algorithms, the number of steps, we go further and prove a Local Limit Theorem (LLT), with speed of convergence O((log N)^(−1/4+ε)). This extends and improves the LLT obtained by Hensley [27] in the case of the standard Euclidean algorithm. We use a "dynamical analysis" methodology, viewing an algorithm as a dynamical system (restricted to rational inputs), and combining tools imported from dynamics, such as the crucial transfer operators, with various other techniques: Dirichlet series, Perron's formula, quasi-powers theorems, the saddle point method. Dynamical analysis had previously been used to perform average-case analysis of algorithms. For the present (dynamical) analysis in distribution, we require precise estimates on the transfer operators when a parameter varies along vertical lines in the complex plane. Such estimates build on results obtained only recently by Dolgopyat in the context of continuous-time dynamics [20].
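The step-count parameter in question is easy to sample empirically. The following small simulation (our illustration only; the paper proves its results, it does not simulate them) shows how one would observe the concentration the CLT describes:

```python
import random

def euclid_steps(u, v):
    """Number of division steps taken by the standard Euclidean algorithm."""
    steps = 0
    while v:
        u, v = v, u % v
        steps += 1
    return steps

# Sample the step count on random inputs below N; the CLT of the paper
# says its distribution is asymptotically Gaussian, with mean and
# variance both of order log N.
random.seed(1)
N = 10**6
samples = [euclid_steps(random.randrange(1, N), random.randrange(1, N))
           for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```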
Speeding up XTR
In Boyd [29]
Cited by 28 (3 self)

Abstract. This paper describes several speedups and simplifications for XTR. The most important results are new XTR double and single exponentiation methods, where the latter requires a cheap precomputation. Both methods are on average more than 60% faster than the old methods, thus more than doubling the speed of the already fast XTR signature applications. An additional advantage of the new double exponentiation method is that it no longer requires matrices, thereby making XTR easier to implement. Another XTR single exponentiation method is presented that does not require precomputation and that is on average more than 35% faster than the old method. Existing applications of similar methods to LUC and elliptic curve cryptosystems are reviewed.
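XTR's exponentiation operates on traces of field elements, so the paper's methods cannot be reproduced generically here; but the underlying trade (a cheap precomputed table bought in exchange for a faster single exponentiation) can be illustrated with an ordinary windowed modular exponentiation. This is an analogy of ours, not XTR arithmetic:

```python
def powmod_window2(g, e, m):
    """Left-to-right exponentiation processing two exponent bits per step,
    using a precomputed table of g^0..g^3.  Roughly a quarter of the
    multiplications of plain square-and-multiply are saved -- the same
    precomputation-for-speed trade the paper's XTR methods exploit in
    their own trace-based setting."""
    table = [1 % m, g % m, g * g % m, g * g * g % m]  # cheap precomputation
    result = 1 % m
    nbits = e.bit_length()
    if nbits % 2:                # pad to an even number of bits
        nbits += 1
    for i in range(nbits - 2, -1, -2):
        result = result * result % m
        result = result * result % m               # two squarings per window
        result = result * table[(e >> i) & 3] % m  # one table multiplication
    return result
```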
Dynamical Analysis of a Class of Euclidean Algorithms
Cited by 21 (6 self)

We develop a general framework for the analysis of algorithms of a broad Euclidean type. The average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithm. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory. As a consequence, we obtain precise average-case analyses of algorithms for evaluating the Jacobi symbol of computational number theory fame, thereby solving conjectures of Bach and Shallit. These methods also provide a unifying framework for the analysis of an entire class of gcd-like algorithms, together with new results regarding the probable behaviour of their cost functions.
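The Jacobi-symbol algorithms analysed there follow the familiar gcd-like pattern of stripping factors of two and applying quadratic reciprocity. A standard formulation (not the paper's own code):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed by the classical
    Euclidean-type algorithm: strip twos, apply reciprocity, reduce."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):       # (2/n) = -1 iff n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                   # quadratic reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0    # gcd(a, n) > 1 gives symbol 0
```

For prime n the result agrees with the Legendre symbol, so it can be checked against Euler's criterion a^((n−1)/2) mod n.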
Average Bit-Complexity of Euclidean Algorithms
In Proceedings ICALP'00, Lecture Notes in Computer Science 1853, 373–387, 2000
Cited by 18 (7 self)

We obtain new results regarding the precise average bit-complexity of five algorithms of a broad Euclidean type. We develop a general framework for analysis of algorithms, where the average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithms. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory and provide a unifying framework for the analysis of an entire class of gcd-like algorithms. Keywords: Average-case Analysis of Algorithms, Bit-Complexity, Euclidean Algorithms, Dynamical Systems, Ruelle Operators, Generating Functions, Dirichlet Series, Tauberian Theorems.

1 Introduction. Motivations. Euclid's algorithm was analysed first in the worst case in 1733 by de Lagny, then in the average case around 1969 independently by Heilbronn [12] and Dixon [6], and finally in distribution by Hensley [13] who proved in 1994 that the Eu...
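The notion of bit-complexity can be made concrete with a crude cost model: charge each division step the product of the bit sizes of the quotient and the divisor, roughly the schoolbook cost of producing the quotient. This model is an assumption of ours for illustration, not the paper's exact cost functional:

```python
def euclid_bit_cost(u, v):
    """Total cost of Euclid's algorithm under a naive bit-cost model:
    each division u = q*v + r is charged bits(q) * bits(v)."""
    cost = 0
    while v:
        q, r = divmod(u, v)
        cost += max(1, q.bit_length()) * v.bit_length()
        u, v = v, r
    return cost
```

For (48, 18) the three division steps cost 10 + 4 + 6 = 20 in this model.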
Euclidean dynamics
In Discrete and Continuous Dynamical Systems, 2006
Cited by 4 (2 self)

We study a general class of Euclidean algorithms which compute the greatest common divisor [gcd], and we perform probabilistic analyses of their main parameters. We view an algorithm as a dynamical system restricted to rational inputs, and combine tools imported from dynamics, such as transfer operators, with various tools of analytic combinatorics: generating functions, Dirichlet series, Tauberian theorems, Perron's formula and quasi-powers theorems. Such dynamical analyses can be used to perform the average-case analysis of algorithms, but also (dynamical) analysis in distribution.
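The "algorithm as dynamical system" viewpoint is concrete for the standard algorithm: running Euclid on a pair (p, q) is the same as iterating the Gauss map T(x) = 1/x − ⌊1/x⌋ on the rational x = p/q, and the digits ⌊1/x⌋ along the orbit are exactly Euclid's quotients. A small sketch of ours, using exact rational arithmetic:

```python
from fractions import Fraction

def gauss_orbit(x):
    """Iterate the Gauss map T(x) = 1/x - floor(1/x) on a rational
    x in (0, 1) until it hits 0, returning the digits floor(1/x):
    the quotients of Euclid's algorithm on numerator and denominator."""
    digits = []
    while x:
        inv = 1 / x                           # exact, since x is a Fraction
        q = inv.numerator // inv.denominator  # floor(1/x)
        digits.append(q)
        x = inv - q                           # one iteration of the map
    return digits
```

For example, gauss_orbit(Fraction(18, 48)) yields [2, 1, 2], the quotients Euclid's algorithm produces on (48, 18).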
Tunstall Code, Khodak Variations, and Random Walks
2008
Cited by 4 (2 self)

A variable-to-fixed length encoder partitions the source string into variable-length phrases that belong to a given, fixed dictionary. Tunstall, and independently Khodak, designed variable-to-fixed length codes for memoryless sources that are optimal under certain constraints. In this paper, we study the Tunstall and Khodak codes using analytic information theory, i.e., the machinery from the analysis-of-algorithms literature. After proposing an algebraic characterization of the Tunstall and Khodak codes, we present new results on the variance and a central limit theorem for dictionary phrase lengths. This analysis also provides a new argument for obtaining asymptotic results about the mean dictionary phrase length and average redundancy rates.
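The Tunstall construction itself is short: grow a parsing dictionary by always splitting the most probable phrase into its one-symbol extensions. A sketch of ours for a memoryless source, where `probs` maps symbols to probabilities:

```python
import heapq

def tunstall(probs, size):
    """Tunstall dictionary of `size` phrases for a memoryless source:
    repeatedly expand the most probable leaf phrase.  Each expansion
    replaces one leaf with len(probs) leaves."""
    symbols = sorted(probs)
    heap = [(-probs[s], s) for s in symbols]  # max-heap via negated probs
    heapq.heapify(heap)
    while len(heap) + len(symbols) - 1 <= size:
        negp, phrase = heapq.heappop(heap)    # most probable phrase
        for s in symbols:
            heapq.heappush(heap, (negp * probs[s], phrase + s))
    return sorted(phrase for _, phrase in heap)
```

With probs = {'a': 0.7, 'b': 0.3} and size 4 this yields the dictionary ['aaa', 'aab', 'ab', 'b'].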
Average redundancy for known sources: ubiquitous trees in source coding
In Proceedings, Fifth Colloquium on Mathematics and Computer Science (Blaubeuren, 2008), Discrete Math. Theor. Comput. Sci. Proc. AI, 2008
Cited by 3 (0 self)

Analytic information theory aims at studying problems of information theory using analytic techniques of computer science and combinatorics. Following Hadamard's precept, these problems are tackled by complex-analysis methods such as generating functions, the Mellin transform, Fourier series, the saddle point method, analytic poissonization and depoissonization, and singularity analysis. This approach lies at the crossroads of computer science and information theory. In this survey we concentrate on one facet of information theory (i.e., source coding, better known as data compression), namely the redundancy rate problem, which asks by how much the actual code length exceeds the optimal code length. We further restrict our interest to the average redundancy for known sources, that is, when the statistics of the information sources are known. We present precise analyses of three types of lossless data compression schemes, namely fixed-to-variable (FV) length codes, variable-to-fixed (VF) length codes, and variable-to-variable (VV) length codes. In particular, we investigate the average redundancy of Huffman, Tunstall, and Khodak codes. These codes have succinct representations as trees, either coding or parsing trees, and we analyze here some of their parameters (e.g., the average path from the root to a leaf).
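For a known source, the average-redundancy quantity for an FV code such as Huffman's is directly computable: build the code, then subtract the entropy from the average code length. A minimal sketch of ours using the standard Huffman construction:

```python
import heapq
import math

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code: repeatedly merge the
    two least probable subtrees, deepening every leaf below them."""
    heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)                    # unique tie-breaker for equal probs
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

def average_redundancy(probs):
    """Average Huffman code length minus the source entropy, in bits."""
    lengths = huffman_lengths(probs)
    avg_len = sum(probs[s] * lengths[s] for s in probs)
    entropy = -sum(p * math.log2(p) for p in probs.values())
    return avg_len - entropy
```

A dyadic source such as {0.5, 0.25, 0.25} has redundancy exactly 0, while a skewed source such as {0.9, 0.1} has redundancy about 0.53 bits per symbol.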
Existence of a Limiting Distribution for the Binary GCD Algorithm
2005
Cited by 1 (0 self)

In this article, we prove the existence and uniqueness of a certain distribution function on the unit interval. This distribution appears in Brent’s model of the analysis of the binary gcd algorithm. The existence and uniqueness of such a function has been conjectured by Richard Brent in his original paper [1]. Donald Knuth also supposes its existence in [5] where developments of its properties lead to very good estimates in relation with the algorithm. We settle here the question of existence, giving a basis to these results, and study the relationship between this limiting function and the binary Euclidean operator B2, proving rigorously that its derivative is a fixed point of B2.
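The distribution in question can at least be sampled. Below is a simplified odd-operand subtract-and-shift variant of the binary algorithm that records the ratio of the smaller to the larger operand after each step; this is an illustrative simulation of ours, not a proof device from the paper:

```python
import random

def ratio_trace(u, v):
    """Run a binary-gcd-style iteration on odd u != v, recording
    x = min/max after each subtract-and-shift step -- the quantity
    whose limiting distribution Brent's model concerns."""
    xs = []
    while u != v:
        if u > v:
            u, v = v, u
        t = v - u                 # even, since u and v are both odd
        while t % 2 == 0:
            t //= 2
        v = t                     # replace the larger operand
        if u > v:
            u, v = v, u
        xs.append(u / v)
    return xs

# Pool ratios from many random odd starting pairs; a histogram of
# `pool` is an empirical stand-in for the conjectured limiting law.
random.seed(7)
pool = []
for _ in range(500):
    u = 2 * random.randrange(1, 10**6) + 1
    v = 2 * random.randrange(1, 10**6) + 1
    if u != v:
        pool.extend(ratio_trace(u, v))
```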