Results 1–10 of 12
Size and Path length of Patricia Tries: Dynamical Sources Context.
, 2001
Abstract
Cited by 10 (1 self)
Digital trees, also known as tries, and Patricia tries are flexible data structures that occur in a variety of computer and communication algorithms, including dynamic hashing, partial match retrieval, searching and sorting, conflict resolution algorithms for broadcast communication, data compression, and so forth. We consider here tries and Patricia tries built from $n$ words emitted by a probabilistic dynamical source. Such sources encompass classical models, such as memoryless sources and finite Markov chains, and many more. The probabilistic behavior of the main parameters, namely the size and path length, appears to be determined by intrinsic characteristics of the source: the entropy and two other constants, themselves related in a natural way to spectral properties of specific transfer operators of Ruelle type. Keywords: average-case analysis of data structures, information theory, trie, Mellin analysis, dynamical systems, Ruelle operator, functional analysis.
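As a concrete illustration of the two parameters the abstract names, the sketch below (our own naming and example, not from the paper) builds a binary trie over a small set of words and reports its size (number of internal nodes) and external path length (sum of leaf depths).

```python
# Illustrative sketch (our own naming, not the paper's): a binary trie
# over distinct binary words, reporting the two parameters analysed in
# the paper: size (internal nodes) and external path length.

def trie_parameters(words, depth=0):
    """Return (size, path_length) of the trie over `words`.

    Words are split on their bit at position `depth`; a subset of at
    most one word is stored in an external leaf node.
    """
    if len(words) <= 1:
        return 0, depth * len(words)
    zeros = [w for w in words if w[depth] == '0']
    ones = [w for w in words if w[depth] == '1']
    s0, p0 = trie_parameters(zeros, depth + 1)
    s1, p1 = trie_parameters(ones, depth + 1)
    return 1 + s0 + s1, p0 + p1

size, path_length = trie_parameters(['0010', '0111', '1100', '1010'])
```

A Patricia trie additionally collapses every one-way branch, so its size is always n − 1 for n words; in this particular example the plain trie happens to have no one-way branches, so the two structures coincide.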
Digital Trees and Memoryless Sources: from Arithmetics to Analysis
 21st International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods in the Analysis of Algorithms (AofA’10), Discrete Math. Theor. Comput. Sci. Proc
, 2010
Abstract
Cited by 9 (1 self)
Digital trees, also known as “tries”, are fundamental to a number of algorithmic schemes, including radix-based searching and sorting, lossless text compression, dynamic hashing algorithms, communication protocols of the tree or stack type, distributed leader election, and so on. This extended abstract develops the asymptotic form of the expectations of the main parameters of interest, such as tree size and path length. The analysis is conducted under the simplest of all probabilistic models, namely the memoryless source, under which the letters that data items are composed of are drawn independently from a fixed (finite) probability distribution. The precise asymptotic structure of the parameters’ expectations is shown to depend on fine singular properties in the complex plane of a ubiquitous Dirichlet series. Consequences include the characterization of a broad range of asymptotic regimes for error terms associated with trie parameters, as well as a classification that depends on specific arithmetic properties, especially irrationality measures, of the sources under consideration.
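The memoryless model is easy to simulate. The sketch below (our own, with arbitrarily chosen parameters) builds a trie over words drawn bit by bit from a Bernoulli source and compares its size with the first-order prediction n/h, where h is the source entropy; the paper's subject is precisely the finer structure beyond this leading term.

```python
import math
import random

# Our own simulation sketch: trie size under a memoryless source with
# P('1') = p, compared with the leading-order prediction n / h(p),
# where h(p) is the source entropy in nats.

def trie_size(words, depth=0):
    if len(words) <= 1:
        return 0
    zeros = [w for w in words if w[depth] == '0']
    ones = [w for w in words if w[depth] == '1']
    return 1 + trie_size(zeros, depth + 1) + trie_size(ones, depth + 1)

def random_word(p, length=64):
    # each bit drawn independently: the memoryless source
    return ''.join('1' if random.random() < p else '0' for _ in range(length))

random.seed(1)
n, p = 2000, 0.3
h = -(p * math.log(p) + (1 - p) * math.log(1 - p))
observed = trie_size([random_word(p) for _ in range(n)])
leading_term = n / h   # first-order prediction only
```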
On Differences of Zeta Values
Journal of Computational and Applied Mathematics
, 2008
Abstract
Cited by 4 (1 self)
Finite differences of values of the Riemann zeta function at the integers are explored. Such quantities, which occur as coefficients in Newton series representations, have surfaced in works of Bombieri–Lagarias, Maślanka, Coffey, Báez-Duarte, Voros and others. We apply the theory of Nörlund–Rice integrals in conjunction with the saddle-point method and derive precise asymptotic estimates. The method extends to Dirichlet L-functions, and our estimates appear to be partly related to earlier investigations surrounding Li’s criterion for the Riemann hypothesis.
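The quantities studied here can be computed naively, which gives a feel for the smallness the paper quantifies. The sketch below (our own, with a crude tail-corrected partial sum rather than the paper's Nörlund–Rice machinery) evaluates the forward differences of zeta values at the integers and watches them shrink.

```python
import math

# Our own quick numerical look at the quantities studied: forward
# differences of zeta values at integers. The paper derives their
# precise asymptotics; here we only compute them naively.

def zeta(m, terms=2000):
    """zeta(m) for integer m >= 2: partial sum plus a simple tail estimate."""
    partial = sum(j ** -m for j in range(1, terms + 1))
    tail = terms ** (1 - m) / (m - 1) + terms ** -m / 2
    return partial + tail

def difference(n, s=2):
    """n-th forward difference sum_{k=0}^{n} (-1)^k C(n,k) zeta(s+k)."""
    return sum((-1) ** k * math.comb(n, k) * zeta(s + k) for k in range(n + 1))

diffs = [difference(n) for n in range(1, 11)]
```

Since ζ(s + k) tends to 1 rapidly as k grows, and differences of a constant vanish, the values decay quickly; the paper's contribution is the precise rate and oscillatory structure of that decay.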
Lattice reduction in two dimensions: analyses under realistic probabilistic models, DMTCS’07
, 2007
Abstract
Cited by 4 (1 self)
The Gaussian algorithm for lattice reduction in dimension 2 is precisely analysed under a class of realistic probabilistic models, which are of interest when applying the Gauss algorithm “inside” the LLL algorithm. The proofs deal with the underlying dynamical systems and transfer operators. All the main parameters are studied: execution parameters, which describe the behaviour of the algorithm itself, as well as output parameters, which describe the geometry of reduced bases.
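For reference, the algorithm being analysed is short. The sketch below (our own formulation) is the classical Gauss reduction loop on an integer lattice basis: repeatedly translate the longer vector by the nearest-integer multiple of the shorter one, then swap, until no translation shortens it.

```python
# The Gauss reduction loop in its simplest integer form (our own sketch):
# translate the longer vector by an integer multiple of the shorter one,
# swap, and repeat until no translation shortens it.

def gauss_reduce(u, v):
    """Reduce a basis (u, v) of a 2-dimensional integer lattice."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        m = round(dot(u, v) / dot(u, u))   # nearest-integer translation
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u

u, v = gauss_reduce((5, 3), (8, 5))
```

On the skewed basis (5, 3), (8, 5) of the integer lattice Z² (determinant 1), the output basis consists of two vectors of squared length 1, and the determinant is preserved.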
Euclidean dynamics
 Discrete and Continuous Dynamical Systems
Abstract
Cited by 2 (1 self)
We study a general class of Euclidean algorithms which compute the greatest common divisor [gcd], and we perform probabilistic analyses of their main parameters. We view an algorithm as a dynamical system restricted to rational inputs, and combine tools imported from dynamics, such as transfer operators, with various tools of analytic combinatorics: generating functions, Dirichlet series, Tauberian theorems, Perron’s formula and quasi-powers theorems. Such dynamical analyses can be used to perform the average-case analysis of algorithms, but also (dynamical) analysis in distribution.
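The simplest member of the class being analysed is the standard Euclid algorithm by division, whose average step count has the classical leading term (12 ln 2 / π²) ln N on pairs below N. The sketch below (our own, with arbitrary sample sizes) compares that leading term with an empirical average.

```python
import math
import random

# The object of study in its plainest form (our own sketch): the standard
# Euclid algorithm with its division-step count, compared with the
# classical leading term (12 ln 2 / pi^2) ln N of its average.

def euclid_steps(a, b):
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

random.seed(0)
N = 10 ** 6
samples = [euclid_steps(random.randrange(1, N), random.randrange(1, N))
           for _ in range(2000)]
average = sum(samples) / len(samples)
leading_term = 12 * math.log(2) / math.pi ** 2 * math.log(N)
```

The empirical average lands close to the leading term; the constant-order correction and the distributional results are what the dynamical analysis delivers beyond this.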
Analytic Combinatorics—A Calculus of Discrete Structures
Abstract
The efficiency of many discrete algorithms crucially depends on quantifying properties of large structured combinatorial configurations. We survey methods of analytic combinatorics that are based on the simple idea of associating numbers to the atomic elements that compose combinatorial structures, then examining the geometry of the resulting functions. In this way, an operational calculus of discrete structures emerges. Applications to basic algorithms, data structures, and the theory of random discrete structures are outlined.
1 Algorithms and Random Structures
A prime factor in choosing the best algorithm for a given computational task is efficiency with respect to the resources consumed, for instance auxiliary storage, execution time, or the amount of communication needed. For a given algorithm A, such a complexity measure being fixed, what is of interest is the relation
Size of the problem instance (n) −→ Complexity of the algorithm (C),
which serves to define the complexity function C(n) ≡ CA(n) of algorithm A. This complexity function can be specified in several ways.
(i) Worst-case analysis takes C(n) to be the maximum of C over all inputs of size n. This corresponds to a pessimistic scenario, one which is of relevance in critical systems and real-time computing.
(ii) Average-case analysis takes C(n) to be the expected value (average) of C over inputs of size n. The aim is to capture the “typical” cost of a computational task observed when the algorithm is repeatedly applied to various kinds of data.
(iii) Probabilistic analysis takes C(n) to be an indicator of the most likely values of C. Its more general aim is to obtain fine estimates of the probability distribution of C, beyond average-case analysis.
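The three notions of cost can be made concrete on the simplest possible algorithm. The sketch below (our own choice of example and parameters) instantiates them for sequential search in a list of n keys, where the worst case is n comparisons and the average over uniformly random present targets is (n + 1)/2.

```python
import random

# Our own sketch: the three notions of cost from the survey, instantiated
# on sequential search in a list of n distinct keys.

def comparisons(xs, target):
    """Number of equality comparisons sequential search performs."""
    for count, x in enumerate(xs, start=1):
        if x == target:
            return count
    return len(xs)

n = 100
xs = list(range(n))

# (i) worst case over all inputs of size n: the last key costs n comparisons
worst_case = max(comparisons(xs, t) for t in xs)

# (ii) average case over uniformly random present targets: (n + 1) / 2
random.seed(0)
trials = [comparisons(xs, random.choice(xs)) for _ in range(20000)]
average_case = sum(trials) / len(trials)

# (iii) a probabilistic analysis would go further and describe the whole
# distribution of `trials` (here uniform on {1, ..., n}), not just its mean
```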
Computation of a Class of Continued Fraction Constants
Abstract
We describe a class of algorithms which compute in polynomial time important constants related to the Euclidean dynamical system. Our algorithms are based on a method previously introduced by Daudé, Flajolet and Vallée in [10] and further used in [13, 32]. However, those authors did not prove the correctness of the algorithm and did not provide any complexity bound. Here, we describe a general framework where the DFV method leads to a proven polynomial-time algorithm that computes “spectral constants” relative to a class of dynamical systems. These constants are closely related to eigenvalues of the transfer operator. Since it acts on an infinite-dimensional space, exact spectral computations are almost always impossible, and are replaced by (proven) numerical approximations. The transfer operator can be viewed as an infinite matrix M = (M_{i,j}), 1 ≤ i, j < ∞, which is the limit (in a precise sense) of the sequence of truncated matrices M_n := (M_{i,j}), 1 ≤ i, j < n, of order n, for which exact computations are possible. Using results of [1], we prove that each isolated eigenvalue λ of M is the limit of a sequence λ_n ∈ Sp M_n, with exponential speed. Then, coming back to the Euclidean dynamical system, we compute (in polynomial time) three important constants which play a central role in the Euclidean algorithm: (i) the Gauss–Kuzmin–Wirsing constant, related to the speed of convergence of the continued fraction algorithm to its limit density; (ii) the Hensley constant, which occurs in the leading term of the variance of the number of steps of the Euclid algorithm; (iii) the Hausdorff dimension of the Cantor sets relative to constrained continued fraction expansions.
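Far cruder than the certified truncated-matrix method described above, but in the same spirit, the sketch below (all numerical choices ours: grid resolution, number of series terms, tail estimate) iterates the transfer operator of the Gauss map on a grid. Its dominant eigenvalue is 1, with eigenfunction proportional to the Gauss density 1/((1 + x) ln 2), so the iterates should settle into a function with f(0)/f(1) ≈ 2.

```python
# Our own rough sketch, not the paper's proven method: iterate the
# transfer operator of the Gauss map,
#     (Lf)(x) = sum_{k>=1} f(1/(k+x)) / (k+x)^2,
# on a uniform grid. The dominant eigenvalue is 1, with eigenfunction
# proportional to the Gauss density 1/((1+x) ln 2).

def gauss_density(n_grid=200, n_terms=60, n_iter=40):
    xs = [i / (n_grid - 1) for i in range(n_grid)]
    f = [1.0] * n_grid

    def interp(t):
        # linear interpolation of the current iterate f at t in [0, 1]
        j = min(int(t * (n_grid - 1)), n_grid - 2)
        frac = t * (n_grid - 1) - j
        return f[j] * (1 - frac) + f[j + 1] * frac

    for _ in range(n_iter):
        g = []
        for x in xs:
            total = sum(interp(1.0 / (k + x)) / (k + x) ** 2
                        for k in range(1, n_terms + 1))
            # crude tail estimate for k > n_terms: arguments are near 0
            total += f[0] / (n_terms + 0.5 + x)
            g.append(total)
        f = g
    return xs, f

xs, f = gauss_density()
ratio = f[0] / f[-1]   # the Gauss density predicts f(0)/f(1) = 2
```

Extracting the subdominant eigenvalues this way, the Gauss–Kuzmin–Wirsing constant among them, is exactly where such naive discretizations stop being trustworthy; that is the gap the paper's proven approximations close.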
THE GAUSS–KUZMIN–WIRSING OPERATOR
Abstract
This paper presents a review of the Gauss–Kuzmin–Wirsing (GKW) operator. The GKW operator is the transfer operator of the Gauss map, and thus has connections to the theory of continued fractions; specifically, it is the shift operator for continued fractions. The operator appears to have a reasonably smooth, well-behaved structure; however, no closed-form analytic solutions are known, and these are not easy to obtain. Eigenvalues and eigenfunctions can be obtained numerically, but little else is known in the mathematical literature. While this paper does attempt to be a review, it is incomplete; it is more of a diary of research results. Connections to the Minkowski question mark function are probed. In particular, the question mark is used to define a transfer operator which is conjugate to the GKW. This conjugate operator is solvable, and can be shown to have fractal eigenfunctions. However, the spectrum of this operator is not at all the same as that of the GKW. This is because the Jacobian of the transformation relating the two is given by (?′ ∘ ?⁻¹)(x), which is well-known as the prototypical “multifractal measure”. Nonetheless, conjugacy allows the eigenfunctions of the one to be used to construct eigenfunctions of the other; in this sense,
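For concreteness, the conjugating function itself is easy to evaluate exactly on rationals. The sketch below (our own) computes Minkowski's ?(x) from the continued fraction expansion via the standard formula ?([0; a₁, a₂, …]) = 2 Σₖ (−1)^(k+1) 2^(−(a₁ + ⋯ + aₖ)).

```python
from fractions import Fraction

# Our own sketch: Minkowski's question mark function on rationals in
# [0, 1], via its continued-fraction formula
#     ?([0; a1, a2, ...]) = 2 * sum_k (-1)^(k+1) * 2^-(a1 + ... + ak).

def question_mark(x):
    x = Fraction(x)
    result = Fraction(0)
    exponent, sign = 0, 1
    while x:
        inv = 1 / x
        a = inv.numerator // inv.denominator   # next partial quotient
        x = inv - a                            # fractional remainder
        exponent += a
        result += sign * Fraction(2) / 2 ** exponent
        sign = -sign
    return result
```

Well-known sample values such as ?(1/2) = 1/2, ?(1/3) = 1/4 and ?(2/5) = 3/8 show how the function maps the Stern–Brocot (Farey) structure of continued fractions onto dyadic rationals, which is what makes the conjugation to the GKW operator work.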