Results 1–10 of 31
Dynamical Sources in Information Theory: A General Analysis of Trie Structures
ALGORITHMICA, 1999
Cited by 50 (7 self)
Digital trees, also known as tries, are a general-purpose, flexible data structure that implements dictionaries built on sets of words. An analysis is given of three major representations of tries: array-tries, list tries, and bst-tries ("ternary search tries"). The size and the search costs of the corresponding representations are analysed precisely in the average case, and a complete distributional analysis of the height of tries is given. The unifying data model used is that of dynamical sources; it encompasses classical models such as memoryless sources with independent symbols, finite Markov chains, and nonuniform densities. The probabilistic behaviour of the main parameters, namely size, path length, and height, appears to be determined by two intrinsic characteristics of the source: the entropy and the probability of letter coincidence. These characteristics are themselves related in a natural way to spectral properties of specific transfer operators of the Ruelle type.
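The third representation, the ternary search trie (bst-trie), can be sketched in a few lines; the class and function names below are illustrative, not from the paper:

```python
class TSTNode:
    """Node of a ternary search trie: one character, three children."""
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None  # less-than / equal / greater-than subtrees
        self.is_word = False

def tst_insert(node, word, i=0):
    """Insert `word` (non-empty string) starting at character index i."""
    ch = word[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, word, i)
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, word, i)
    elif i + 1 < len(word):
        node.eq = tst_insert(node.eq, word, i + 1)
    else:
        node.is_word = True
    return node

def tst_search(node, word, i=0):
    """Return True iff `word` was inserted into the trie rooted at `node`."""
    if node is None:
        return False
    ch = word[i]
    if ch < node.ch:
        return tst_search(node.lo, word, i)
    if ch > node.ch:
        return tst_search(node.hi, word, i)
    if i + 1 == len(word):
        return node.is_word
    return tst_search(node.eq, word, i + 1)

root = None
for w in ("cat", "car", "dog"):
    root = tst_insert(root, w)
print(tst_search(root, "car"), tst_search(root, "ca"))
```

Each node discriminates on a single character, so search cost is governed by the source characteristics (entropy, letter coincidence) that the analysis quantifies.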
Euler Sums and Contour Integral Representations
1998
Cited by 28 (1 self)
This paper develops an approach to the evaluation of Euler sums that involve harmonic numbers, either linearly or nonlinearly. We give explicit formulæ for several classes of Euler sums in terms of Riemann zeta values. The approach is based on simple contour integral representations and residue computations.
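As a quick numerical illustration of the kind of identity involved, one can check Euler's classical evaluation of the linear Euler sum sum_{n>=1} H_n / n^2 = 2 zeta(3); this is a well-known closed form, not one reproduced from the paper's own formulae:

```python
# Partial-sum check of the Euler sum  sum_{n>=1} H_n / n^2 = 2 * zeta(3).
N = 200_000
h = 0.0   # running harmonic number H_n
s = 0.0   # partial Euler sum
for n in range(1, N + 1):
    h += 1.0 / n
    s += h / (n * n)

zeta3 = 1.2020569031595943   # Apery's constant, zeta(3)
# the tail of the sum is O(log(N) / N), so agreement to ~1e-4 is expected
print(s, 2 * zeta3)
```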
Continued Fraction Algorithms, Functional Operators, and Structure Constants
1996
Cited by 28 (4 self)
Continued fractions lie at the heart of a number of classical algorithms, like Euclid's greatest common divisor algorithm or the lattice reduction algorithm of Gauss that constitutes a two-dimensional generalization. This paper surveys the main properties of functional operators (transfer operators) due to Ruelle and Mayer (also following Lévy, Kuzmin, Wirsing, Hensley, and others) that describe precisely the dynamics of the continued fraction transformation. Spectral characteristics of transfer operators are shown to have many consequences, like the normal law for logarithms of continuants associated with the basic continued fraction algorithm and a purely analytic estimation of the average number of steps of the Euclidean algorithm. Transfer operators also lead to a complete analysis of the "Hakmem" algorithm for comparing two rational numbers via partial continued fraction expansions and of the "digital tree" algorithm for completely sorting n real numbers by means of ...
Dynamical Sources in Information Theory: Fundamental intervals and Word Prefixes.
1998
Cited by 28 (7 self)
A quite general model of source that comes from dynamical systems theory is introduced. Within this model, some important problems about prefixes that intervene in algorithmic information theory contexts are analysed. The main tool is a new object, the generalized Ruelle operator, which can be viewed as a "generating" operator. Its dominant spectral objects are linked with important parameters of the source such as the entropy, and play a central role in all the results. 1 Introduction. In information theory contexts, data items are (infinite) words that are produced by a common mechanism, called a source. Realistic sources are often complex objects. We work here inside a quite general framework of sources related to dynamical systems theory which goes beyond the cases of memoryless and Markov sources. This model can describe nonmarkovian processes, where the dependency on past history is unbounded, and as such, they attain a high level of generality. A probabilistic dynamical source ...
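A toy instance of the "fundamental interval" notion, assuming the simplest dynamical source (the binary memoryless source given by the doubling map T(x) = 2x mod 1); the function name is illustrative:

```python
from fractions import Fraction

def fundamental_interval(word):
    """Fundamental interval of the 0/1-string prefix `word` under the doubling
    map T(x) = 2x mod 1: the set of reals in [0, 1) whose symbol sequence
    starts with `word`, returned as a half-open interval [lo, hi)."""
    k = len(word)
    lo = Fraction(int(word, 2), 2 ** k) if word else Fraction(0)
    return lo, lo + Fraction(1, 2 ** k)

lo, hi = fundamental_interval("101")
# the interval's measure 2^-k is the probability a word starts with "101";
# for general dynamical sources these measures drive entropy and coincidence
print(lo, hi, hi - lo)
```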
Dynamical Analysis of a Class of Euclidean Algorithms
Cited by 17 (4 self)
We develop a general framework for the analysis of algorithms of a broad Euclidean type. The average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithm. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory. As a consequence, we obtain precise average-case analyses of algorithms for evaluating the Jacobi symbol of computational number theory fame, thereby solving conjectures of Bach and Shallit. These methods also provide a unifying framework for the analysis of an entire class of gcd-like algorithms, together with new results regarding the probable behaviour of their cost functions.
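The Jacobi symbol algorithms in question can be illustrated by one standard binary Euclidean-type variant (a sketch, not necessarily the exact family analysed in the paper):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via a gcd-like algorithm using
    quadratic reciprocity and the supplementary law for 2."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:            # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 iff n = +-3 mod 8
                result = -result
        a, n = n, a                  # reciprocity step (both operands odd)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # 0 when gcd(a, n) > 1

print(jacobi(2, 7), jacobi(3, 7))
```

For prime n the Jacobi symbol coincides with the Legendre symbol, which gives an easy correctness check against Euler's criterion.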
Average BitComplexity of Euclidean Algorithms
Proceedings ICALP’00, Lecture Notes Comp. Science 1853, 373–387, 2000
Cited by 15 (6 self)
We obtain new results regarding the precise average bit-complexity of five algorithms of a broad Euclidean type. We develop a general framework for the analysis of algorithms, where the average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithms. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory and provide a unifying framework for the analysis of an entire class of gcd-like algorithms. Keywords: average-case analysis of algorithms, bit-complexity, Euclidean algorithms, dynamical systems, Ruelle operators, generating functions, Dirichlet series, Tauberian theorems. 1 Introduction. Motivations. Euclid's algorithm was analysed first in the worst case in 1733 by de Lagny, then in the average case around 1969 independently by Heilbronn [12] and Dixon [6], and finally in distribution by Hensley [13], who proved in 1994 that the Eu...
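One crude way to see how bit-complexity differs from a bare step count is to charge each division the binary length of its quotient; this is a hypothetical toy cost model for illustration, not the paper's:

```python
def euclid_bit_cost(a, b):
    """Toy bit-cost of Euclid's algorithm on (a, b): each division step is
    charged the binary length of the quotient it produces."""
    cost = 0
    while b:
        q, r = divmod(a, b)
        cost += q.bit_length()
        a, b = b, r
    return cost

# consecutive Fibonacci numbers: the worst case for the step count,
# yet every quotient but the last equals 1, so each step is cheap in bits
print(euclid_bit_cost(144, 89))
```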
Digits and Continuants in Euclidean Algorithms. Ergodic versus Tauberian Theorems
2000
Cited by 14 (5 self)
We obtain new results regarding the precise average-case analysis of the main quantities that arise in algorithms of a broad Euclidean type. We develop a general framework for the analysis of such algorithms, where the average-case complexity of an algorithm is related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithms. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory and provide a unifying framework for the analysis of the main parameters, digits and continuants, that arise in an entire class of gcd-like algorithms. We carry out a general transfer from the continuous case (continued fraction algorithms) to the discrete case (Euclidean algorithms), where ergodic theorems are replaced by Tauberian theorems.
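The digit statistics can be sampled empirically: over random rationals, the frequency of the partial quotient 1 should approach the Gauss-Kuzmin value log2(4/3) ~ 0.415. A sketch with an ad-hoc sampling scheme:

```python
import math
import random

def cf_digits(p, q):
    """Partial quotients (digits) of the continued fraction of p/q, 0 < p < q."""
    digits = []
    while p:
        d, r = divmod(q, p)
        digits.append(d)
        q, p = p, r
    return digits

random.seed(1)
ones = total = 0
for _ in range(2000):
    q = random.randrange(2, 10 ** 6)
    p = random.randrange(1, q)
    for d in cf_digits(p, q):
        ones += (d == 1)
        total += 1

freq = ones / total
print(freq, math.log2(4 / 3))
```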
Multiple Zeta Values at Non-Positive Integers
1999
Cited by 11 (0 self)
Values of the Euler-Zagier multiple zeta function at non-positive integers are studied, especially at (0, 0, …, −n) and (−n, 0, …, 0). Further, we prove a symmetric formula among values at non-positive integers.
Continued Fractions, Comparison Algorithms, and Fine Structure Constants
2000
Cited by 10 (2 self)
There are known algorithms based on continued fractions for comparing fractions and for determining the sign of 2×2 determinants. The analysis of such extremely simple algorithms leads to an incursion into a surprising variety of domains. We take the reader on a light tour of dynamical systems (symbolic dynamics), number theory (continued fractions), special functions (multiple zeta values), functional analysis (transfer operators), numerical analysis (series acceleration), and complex analysis (the Riemann hypothesis). These domains all eventually contribute to a detailed characterization of the complexity of comparison and sorting algorithms, either on average or in probability.
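The comparison algorithm alluded to can be sketched as a lazy comparison of continued fraction expansions, stopping at the first differing partial quotient (function name illustrative):

```python
def cf_compare(a, b, c, d):
    """Compare a/b and c/d (positive integers) by lazily comparing their
    continued fraction expansions: return -1, 0, or 1."""
    q1, r1 = divmod(a, b)
    q2, r2 = divmod(c, d)
    if q1 != q2:                       # integer parts already differ
        return -1 if q1 < q2 else 1
    if r1 == 0 and r2 == 0:
        return 0
    if r1 == 0:                        # a/b terminated first, so it is smaller
        return -1
    if r2 == 0:
        return 1
    return -cf_compare(b, r1, d, r2)   # compare reciprocals; the order flips

print(cf_compare(22, 7, 333, 106))     # both approximate pi; 22/7 is larger
```

Only a few partial quotients are inspected on typical inputs, which is exactly the expected-cost phenomenon whose fine analysis the abstract describes.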
Analysis of the Gallant-Lambert-Vanstone Method Based on Efficient Endomorphisms: Elliptic and Hyperelliptic Curves
2002
Cited by 9 (3 self)
In this work we analyse the GLV method of Gallant, Lambert and Vanstone (CRYPTO 2001), which uses a fast endomorphism Φ with minimal polynomial X^2 + rX + s to compute any multiple kP of a point P of order n lying on an elliptic curve. First we ...
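After a GLV decomposition k = k1 + k2·λ (mod n), the two half-length scalars are combined by a joint double-and-add (the Straus/Shamir trick). A sketch over plain integers standing in for curve points; the function name and callback interface are illustrative:

```python
def double_and_add_joint(k1, P, k2, Q, add, dbl, zero):
    """Straus/Shamir joint double-and-add: computes k1*P + k2*Q with a single
    shared pass of doublings. Group operations are passed in so that plain
    integers can stand in for elliptic curve points."""
    R = zero
    for i in range(max(k1.bit_length(), k2.bit_length()) - 1, -1, -1):
        R = dbl(R)                 # one doubling per bit position, shared
        if (k1 >> i) & 1:
            R = add(R, P)
        if (k2 >> i) & 1:
            R = add(R, Q)
    return R

# sanity check in the additive group of integers standing in for points
r = double_and_add_joint(13, 5, 7, 3,
                         add=lambda x, y: x + y,
                         dbl=lambda x: 2 * x,
                         zero=0)
print(r)   # 13*5 + 7*3 = 86
```

Since k1 and k2 each have roughly half the bits of k, the shared doubling pass is what yields the GLV speed-up over a plain double-and-add on k.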