Results 1–10 of 42
Dynamical Sources in Information Theory: A General Analysis of Trie Structures
ALGORITHMICA, 1999
"... Digital trees, also known as tries, are a general purpose flexible data structure that implements dictionaries built on sets of words. An analysis is given of three major representations of tries in the form of arraytries, list tries, and bsttries ("ternary search tries"). The size and the sear ..."
Abstract

Cited by 50 (7 self)
Digital trees, also known as tries, are a general-purpose, flexible data structure that implements dictionaries built on sets of words. An analysis is given of three major representations of tries: array-tries, list-tries, and bst-tries ("ternary search tries"). The size and the search costs of the corresponding representations are analysed precisely in the average case, and a complete distributional analysis of the height of tries is given. The unifying data model used is that of dynamical sources; it encompasses classical models like memoryless sources with independent symbols, finite Markov chains, and nonuniform densities. The probabilistic behaviour of the main parameters, namely size, path length, and height, appears to be determined by two intrinsic characteristics of the source: the entropy and the probability of letter coincidence. These characteristics are themselves related in a natural way to spectral properties of specific transfer operators of the Ruelle type.
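The bst-trie ("ternary search trie") representation analysed in this paper can be sketched in a few lines. This is an illustrative implementation, not the authors' code; the names `TST` and `TSTNode` are invented for the example. Each node holds one character and three links: lower, equal (advance to the next character), and higher.

```python
# Minimal ternary search trie (bst-trie) sketch: each node stores one
# character plus lower/equal/higher children; the "equal" child advances
# to the next character of the key.

class TSTNode:
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None
        self.is_word = False  # marks the end of an inserted word

class TST:
    def __init__(self):
        self.root = None

    def insert(self, word):
        self.root = self._insert(self.root, word, 0)

    def _insert(self, node, word, i):
        ch = word[i]
        if node is None:
            node = TSTNode(ch)
        if ch < node.ch:
            node.lo = self._insert(node.lo, word, i)
        elif ch > node.ch:
            node.hi = self._insert(node.hi, word, i)
        elif i + 1 < len(word):
            node.eq = self._insert(node.eq, word, i + 1)
        else:
            node.is_word = True
        return node

    def search(self, word):
        node, i = self.root, 0
        while node is not None:
            ch = word[i]
            if ch < node.ch:
                node = node.lo
            elif ch > node.ch:
                node = node.hi
            elif i + 1 < len(word):
                node, i = node.eq, i + 1
            else:
                return node.is_word
        return False
```

The "size" and "path length" parameters analysed in the paper correspond, in this sketch, to the number of `TSTNode` objects and to the total number of comparisons performed by `search` over a set of keys.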
Continued Fraction Algorithms, Functional Operators, and Structure Constants
, 1996
"... Continued fractions lie at the heart of a number of classical algorithms like Euclid's greatest common divisor algorithm or the lattice reduction algorithm of Gauss that constitutes a 2dimensional generalization. This paper surveys the main properties of functional operators,  transfer operat ..."
Abstract

Cited by 28 (4 self)
Continued fractions lie at the heart of a number of classical algorithms, like Euclid's greatest common divisor algorithm or the lattice reduction algorithm of Gauss that constitutes its 2-dimensional generalization. This paper surveys the main properties of functional operators (transfer operators) due to Ruelle and Mayer (also following Lévy, Kuzmin, Wirsing, Hensley, and others) that describe precisely the dynamics of the continued fraction transformation. Spectral characteristics of transfer operators are shown to have many consequences, like the normal law for logarithms of continuants associated to the basic continued fraction algorithm and a purely analytic estimation of the average number of steps of the Euclidean algorithm. Transfer operators also lead to a complete analysis of the "Hakmem" algorithm for comparing two rational numbers via partial continued fraction expansions and of the "digital tree" algorithm for completely sorting n real numbers by means of ...
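The connection between Euclid's algorithm and the continued fraction transformation that the abstract refers to is concrete: each division step of Euclid's algorithm emits one partial quotient, and on the reals the same step is one iteration of the Gauss map x ↦ 1/x − ⌊1/x⌋. A small illustrative sketch (function names invented for the example):

```python
from math import floor

def cf_expansion(p, q):
    """Partial quotients of p/q: each division step of Euclid's
    algorithm produces one continued-fraction digit."""
    digits = []
    while q != 0:
        digits.append(p // q)
        p, q = q, p % q
    return digits

def gauss_map(x):
    """One step of the continued fraction (Gauss) map on (0, 1):
    invert, then keep the fractional part."""
    y = 1.0 / x
    return y - floor(y)
```

For example, `cf_expansion(29, 13)` yields the partial quotients of 29/13, while iterating `gauss_map` on 13/29 visits the successive "tails" of the same expansion.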
Dynamical Sources in Information Theory: Fundamental Intervals and Word Prefixes
, 1998
"... A quite general model of source that comes from dynamical systems theory is introduced. Within this model, some important problems about prefixes that intervene in algorithmic information theory contexts are analysed. The main tool is a new object, the generalized Ruelle operator, which can be viewe ..."
Abstract

Cited by 28 (7 self)
A quite general model of source that comes from dynamical systems theory is introduced. Within this model, some important problems about prefixes that intervene in algorithmic information theory contexts are analysed. The main tool is a new object, the generalized Ruelle operator, which can be viewed as a "generating" operator. Its dominant spectral objects are linked with important parameters of the source, such as the entropy, and play a central role in all the results. In information theory contexts, data items are (infinite) words that are produced by a common mechanism, called a source. Realistic sources are often complex objects. We work here inside a quite general framework of sources related to dynamical systems theory, which goes beyond the cases of memoryless and Markov sources. This model can describe non-Markovian processes, where the dependency on past history is unbounded, and as such it attains a high level of generality. A probabilistic dynamical source ...
Isometries, shifts, Cuntz algebras and multiresolution wavelet analysis of scale N
, 1996
"... In this paper we show how wavelets originating from multiresolution analysis of scale N give rise to certain representations of the Cuntz algebras ON, and conversely how the wavelets can be recovered from these representations. The representations are given on the Hilbert space L 2 (T) by (Siξ)(z) ..."
Abstract

Cited by 26 (13 self)
In this paper we show how wavelets originating from multiresolution analysis of scale N give rise to certain representations of the Cuntz algebras O_N, and conversely how the wavelets can be recovered from these representations. The representations are given on the Hilbert space L²(T) by (S_i ξ)(z) = m_i(z) ξ(z^N). We characterize the Wold decomposition of such operators. If the operators come from wavelets they are shifts, and this can be used to realize the representation on a certain Hardy space over L²(T). This is used to compare the usual scale-2 theory of wavelets with the scale-N theory. Also some other representations of O_N of the above form, called diagonal representations, are characterized and classified up to unitary equivalence by a homological invariant.
Euclidean algorithms are Gaussian
, 2003
"... Abstract. We prove a Central Limit Theorem for a general class of costparameters associated to the three standard Euclidean algorithms, with optimal speed of convergence, and error terms for the mean and variance. For the most basic parameter of the algorithms, the number of steps, we go further an ..."
Abstract

Cited by 22 (10 self)
Abstract. We prove a Central Limit Theorem for a general class of cost parameters associated to the three standard Euclidean algorithms, with optimal speed of convergence, and error terms for the mean and variance. For the most basic parameter of the algorithms, the number of steps, we go further and prove a Local Limit Theorem (LLT), with speed of convergence O((log N)^(−1/4+ε)). This extends and improves the LLT obtained by Hensley [27] in the case of the standard Euclidean algorithm. We use a "dynamical analysis" methodology, viewing an algorithm as a dynamical system (restricted to rational inputs), and combining tools imported from dynamics, such as the crucial transfer operators, with various other techniques: Dirichlet series, Perron's formula, quasi-powers theorems, the saddle point method. Dynamical analysis had previously been used to perform average-case analysis of algorithms. For the present (dynamical) analysis in distribution, we require precise estimates on the transfer operators when a parameter varies along vertical lines in the complex plane. Such estimates build on results obtained only recently by Dolgopyat in the context of continuous-time dynamics [20].
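The basic cost parameter here, the number of division steps of the standard Euclidean algorithm, is easy to sample empirically; the approximately Gaussian spread the theorem describes can be glimpsed from the mean and standard deviation over random inputs. This is an illustrative experiment, not the paper's method; `step_stats` and its parameters are invented for the sketch.

```python
import random
from statistics import mean, stdev

def euclid_steps(u, v):
    """Number of division steps of the standard Euclidean algorithm."""
    steps = 0
    while v:
        u, v = v, u % v
        steps += 1
    return steps

def step_stats(n, samples=2000, seed=1):
    """Empirical mean and standard deviation of the step count
    on uniformly random input pairs bounded by n."""
    rng = random.Random(seed)
    data = [euclid_steps(rng.randint(1, n), rng.randint(1, n))
            for _ in range(samples)]
    return mean(data), stdev(data)
```

Consecutive Fibonacci numbers give the worst case, while the mean over random pairs grows only logarithmically in n, with fluctuations of the Gaussian shape quantified by the theorem.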
Dynamical Analysis of a Class of Euclidean Algorithms
"... We develop a general framework for the analysis of algorithms of a broad Euclidean type. The averagecase complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithm. The methods rely on properti ..."
Abstract

Cited by 17 (4 self)
We develop a general framework for the analysis of algorithms of a broad Euclidean type. The average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithm. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory. As a consequence, we obtain precise average-case analyses of algorithms for evaluating the Jacobi symbol of computational number theory fame, thereby solving conjectures of Bach and Shallit. These methods also provide a unifying framework for the analysis of an entire class of gcd-like algorithms, together with new results regarding the probable behaviour of their cost functions.
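One member of the gcd-like class analysed here evaluates the Jacobi symbol. As a concrete point of reference, the textbook reciprocity-based, Euclid-like evaluation looks as follows; this is the standard algorithm, not necessarily the exact variant the paper analyses.

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed by the standard
    Euclid-like algorithm: strip factors of 2 (second supplement),
    then swap the arguments (quadratic reciprocity) and reduce."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # (2/n) = -1 iff n ≡ 3, 5 (mod 8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # reciprocity: sign flips iff both ≡ 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # gcd(a, n) > 1 gives symbol 0
```

For odd prime n the Jacobi symbol coincides with the Legendre symbol, which can be cross-checked against Euler's criterion a^((p−1)/2) mod p.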
Average BitComplexity of Euclidean Algorithms
Proceedings ICALP'00, Lecture Notes in Computer Science 1853, 373–387
, 2000
"... We obtain new results regarding the precise average bitcomplexity of five algorithms of a broad Euclidean type. We develop a general framework for analysis of algorithms, where the averagecase complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set ..."
Abstract

Cited by 15 (6 self)
We obtain new results regarding the precise average bit-complexity of five algorithms of a broad Euclidean type. We develop a general framework for the analysis of algorithms, where the average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithms. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory and provide a unifying framework for the analysis of an entire class of gcd-like algorithms. Keywords: Average-case Analysis of Algorithms, Bit-Complexity, Euclidean Algorithms, Dynamical Systems, Ruelle operators, Generating Functions, Dirichlet Series, Tauberian Theorems. Motivations. Euclid's algorithm was analysed first in the worst case in 1733 by de Lagny, then in the average case around 1969 independently by Heilbronn [12] and Dixon [6], and finally in distribution by Hensley [13], who proved in 1994 that the Eu...
Digits and Continuants in Euclidean Algorithms. Ergodic versus Tauberian Theorems
, 2000
"... We obtain new results regarding the precise average case analysis of the main quantities that intervene in algorithms of a broad Euclidean type. We develop a general framework for the analysis of such algorithms, where the averagecase complexity of an algorithm is related to the analytic behaviou ..."
Abstract

Cited by 14 (5 self)
We obtain new results regarding the precise average-case analysis of the main quantities that intervene in algorithms of a broad Euclidean type. We develop a general framework for the analysis of such algorithms, where the average-case complexity of an algorithm is related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithms. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory and provide a unifying framework for the analysis of the main parameters, digits and continuants, that intervene in an entire class of gcd-like algorithms. We carry out a general transfer from the continuous case (continued fraction algorithms) to the discrete case (Euclidean algorithms), where Ergodic Theorems are replaced by Tauberian Theorems.
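The continuants mentioned in the title are the numerators and denominators of continued-fraction convergents, built from the digits (partial quotients) by the classical recurrence q_k = m_k·q_{k−1} + q_{k−2}. A small illustrative sketch (the function name `convergents` is invented for the example):

```python
from fractions import Fraction

def convergents(digits):
    """Convergents p_k/q_k of the continued fraction [m0; m1, m2, ...],
    via the continuant recurrence p_k = m_k*p_{k-1} + p_{k-2}
    and q_k = m_k*q_{k-1} + q_{k-2}."""
    p_prev, p = 1, digits[0]   # p_{-1} = 1, p_0 = m0
    q_prev, q = 0, 1           # q_{-1} = 0, q_0 = 1
    out = [Fraction(p, q)]
    for m in digits[1:]:
        p_prev, p = p, m * p + p_prev
        q_prev, q = q, m * q + q_prev
        out.append(Fraction(p, q))
    return out
```

The last convergent reconstructs the rational input exactly, and the growth of the denominators q_k (the continuants) is precisely the quantity whose average behaviour is analysed in the paper.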
On the susceptibility function of piecewise expanding interval maps
Comm. Math. Phys.
"... n=0 n X(y)ρ0(y) ∂ ∂y ϕ(fn (y)) dy associated to the perturbation ft = f + tX of a piecewise expanding interval map f, and to an observable ϕ. The analysis is based on a spectral description of transfer operators. It gives in particular sufficient conditions on f, X, and ϕ which guarantee that Ψ(z) i ..."
Abstract

Cited by 13 (4 self)
We study the susceptibility function Ψ(z) = Σ_{n≥0} z^n ∫ X(y) ρ_0(y) (∂/∂y) ϕ(f^n(y)) dy associated to the perturbation f_t = f + tX of a piecewise expanding interval map f, and to an observable ϕ. The analysis is based on a spectral description of transfer operators. It gives in particular sufficient conditions on f, X, and ϕ which guarantee that Ψ(z) is holomorphic in a disc of radius larger than one. Although Ψ(1) is the formal derivative (at t = 0) of the average R(t) = ∫ ϕ ρ_t dx of ϕ with respect to the SRB measure of f_t, we present examples of f, X, and ϕ satisfying our conditions for which R(t) is not Lipschitz at 0.
Measures in wavelet decompositions
 Advances in Applied Mathematics 34
, 2005
"... In applications, choices of orthonormal bases in Hilbert space H may come about from the simultaneous diagonalization of some specific abelian algebra of operators. This is the approach of quantum theory as suggested by John von Neumann; but as it turns out, much more recent constructions of bases i ..."
Abstract

Cited by 11 (9 self)
In applications, choices of orthonormal bases in a Hilbert space H may come about from the simultaneous diagonalization of some specific abelian algebra of operators. This is the approach of quantum theory as suggested by John von Neumann; but as it turns out, much more recent constructions of bases in wavelet theory, and in dynamical systems, also fit into this scheme. However, in these modern applications, the basis typically comes first, and the abelian algebra might not even be made explicit. It was noticed recently that there is a certain finite set of non-commuting operators F_i, first introduced by engineers in signal processing, which helps to clarify this connection, and at the same time throws light on decomposition possibilities for wavelet packets used in pyramid algorithms. There are three interrelated components to this: an orthonormal basis, an abelian algebra, and a projection-valued measure. While the operators F_i were originally intended for quadrature mirror filters of signals, recent papers have shown that they are ubiquitous in a variety of modern wavelet constructions, and in particular in the selection of wavelet packets