Results 11-20 of 47
CryptoComputing with rationals
2002
Cited by 11 (0 self)
Abstract: In this paper we describe a method to compute with encrypted rational numbers.
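The key decoding step behind computing with encrypted rationals is recovering a small fraction p/q from a single modular residue. A minimal sketch of that classical step (rational reconstruction via extended Euclid; the toy modulus and values below are illustrative, not taken from the paper):

```python
from fractions import Fraction
from math import isqrt

def rational_reconstruct(t, M):
    """Recover a rational p/q from its image t = p * q^(-1) mod M,
    assuming |p|, q <= sqrt(M/2).  Classical extended-Euclid (continued
    fraction) rational reconstruction."""
    bound = isqrt(M // 2)
    r0, s0 = M, 0
    r1, s1 = t % M, 1
    # Invariant: r_i = s_i * t  (mod M)
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    p, q = r1, s1
    if q < 0:
        p, q = -p, -q
    if q == 0 or abs(p) > bound or q > bound:
        raise ValueError("no small rational matches this residue")
    return Fraction(p, q)

# Toy demo: encode 355/113 as one residue mod a prime, then decode it.
M = 1_000_003
t = (355 * pow(113, -1, M)) % M
assert rational_reconstruct(t, M) == Fraction(355, 113)
```

With an additively homomorphic cipher, sums of such encoded residues can be decrypted and then decoded back to a fraction by this routine, which is the general flavour of the method the abstract describes.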
Analysis of the Gallant-Lambert-Vanstone Method based on Efficient Endomorphisms: Elliptic and Hyperelliptic Curves
2002
Cited by 10 (3 self)
Abstract: In this work we analyse the GLV method of Gallant, Lambert and Vanstone (CRYPTO 2001), which uses a fast endomorphism φ with minimal polynomial X² + rX + s to compute any multiple kP of a point P of order n lying on an elliptic curve. First we ...
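The decomposition step at the heart of the GLV method can be sketched with toy parameters (the prime n, the eigenvalue lam, and the size bound below are illustrative assumptions, not the paper's data; real curves use, e.g., secp256k1's lambda):

```python
import math
from fractions import Fraction

def glv_split(k, lam, n):
    """Write k = k1 + k2*lam (mod n) with k1, k2 = O(sqrt(n)), via the
    extended-Euclid lattice-basis construction used by GLV."""
    sq = math.isqrt(n)
    r0, t0, r1, t1 = n, 0, lam, 1         # invariant: r_i = t_i*lam (mod n)
    while r1 > sq:                        # stop at first remainder < sqrt(n)
        q = r0 // r1
        r0, r1, t0, t1 = r1, r0 - q * r1, t1, t0 - q * t1
    v1 = (r1, -t1)
    # second basis vector: the shorter of the two neighbouring Euclid steps
    q = r0 // r1
    r2, t2 = r0 - q * r1, t0 - q * t1
    v2 = (r0, -t0) if r0 * r0 + t0 * t0 <= r2 * r2 + t2 * t2 else (r2, -t2)
    # both v1, v2 lie in the lattice {(a, b) : a + b*lam = 0 (mod n)}
    D = v1[0] * v2[1] - v1[1] * v2[0]
    b1 = round(Fraction(k * v2[1], D))    # (k, 0) = beta1*v1 + beta2*v2 over Q
    b2 = round(Fraction(-k * v1[1], D))
    k1 = k - b1 * v1[0] - b2 * v2[0]
    k2 = -b1 * v1[1] - b2 * v2[1]
    return k1, k2

# Toy setup: lam is a primitive cube root of unity mod n, so its minimal
# polynomial is X^2 + X + 1 (the case r = s = 1 of the abstract).
n = 1_000_003
lam = next(l for g in range(2, 100) if (l := pow(g, (n - 1) // 3, n)) != 1)
assert (lam * lam + lam + 1) % n == 0
k1, k2 = glv_split(123456, lam, n)
assert (k1 + k2 * lam - 123456) % n == 0
```

With such a split, kP is evaluated as k1·P + k2·φ(P) by a double multi-exponentiation, with k1 and k2 roughly half the bit-length of k.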
The Euclid algorithm in dimension n
1996
Cited by 10 (0 self)
Abstract: We present in this paper an algorithm which is a natural extension in dimension n of the Euclid algorithm computing the greatest common divisor of two integers. Let H be a subgroup of Z^n, given by a set of generators...
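For intuition, the one-dimensional instance that such an algorithm generalizes, Euclid's algorithm folded over a whole list of generators of a subgroup H of Z, can be sketched as:

```python
def ext_gcd(a, b):
    """Classical extended Euclid: returns (g, x, y) with x*a + y*b = g."""
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b:
        q = a // b
        a, b = b, a - q * b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

def gcd_basis(gens):
    """Given generators of a subgroup H of Z, return (g, coeffs) with
    sum(c * a) = g, so that H = g*Z.  The n-dimensional analogue returns
    a basis of H as a subgroup of Z^n instead of a single generator."""
    g, coeffs = gens[0], [1] + [0] * (len(gens) - 1)
    for i in range(1, len(gens)):
        g, x, y = ext_gcd(g, gens[i])
        coeffs = [x * c for c in coeffs]
        coeffs[i] = y
    return g, coeffs

g, c = gcd_basis([12, 18, 30])
assert g == 6 and sum(ci * ai for ci, ai in zip(c, [12, 18, 30])) == 6
```

This is only the scalar shadow of the paper's algorithm: in dimension n the same remainder-and-combine mechanism is applied to vectors rather than integers.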
Worst-case complexity of the optimal LLL algorithm
In Proceedings of LATIN'2000, Punta del Este. LNCS 1776
Cited by 8 (2 self)
Abstract: In this paper, we consider the open problem of the complexity of the LLL algorithm in the case when the approximation parameter t of the algorithm has its extreme value 1. This case is of interest because the output is then the strongest Lovász-reduced basis. Experiments reported by Lagarias and Odlyzko [LO83] seem to show that the algorithm remains polynomial on average. However, no bound better than a naive exponential one is established for the worst-case complexity of the optimal LLL algorithm, even for fixed small dimension (higher than 2). Here we prove that, for any fixed dimension n, the number of iterations of the LLL algorithm is linear with respect to the size of the input. It is easy to deduce from [Val91] that the linear order is optimal. Moreover, in 3 dimensions, we give a tight bound for the maximum number of iterations and we characterize precisely the output basis. Our bound also improves the known one for the usual (non-optimal) LLL algorithm.
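A transparent (and deliberately inefficient) rendering of the algorithm under discussion, with the approximation parameter t exposed so that t = 1 gives the optimal variant. This is a textbook sketch in exact arithmetic, not the paper's own implementation:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Exact Gram-Schmidt orthogonalisation over Q: returns (B*, mu)."""
    Bs, mu = [], []
    for i, b in enumerate(B):
        v = list(b)
        row = []
        for j in range(i):
            m = dot(b, Bs[j]) / dot(Bs[j], Bs[j])
            row.append(m)
            v = [a - m * c for a, c in zip(v, Bs[j])]
        mu.append(row)
        Bs.append(v)
    return Bs, mu

def lll(B, t=Fraction(1)):
    """Textbook LLL with Lovasz parameter t; t = 1 is the optimal variant.
    Gram-Schmidt is recomputed at every step for clarity, not speed."""
    B = [[Fraction(x) for x in b] for b in B]
    k = 1
    while k < len(B):
        for j in range(k - 1, -1, -1):        # size-reduce b_k against b_j
            _, mu = gram_schmidt(B)
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * c for a, c in zip(B[k], B[j])]
        Bs, mu = gram_schmidt(B)
        # Lovasz condition with parameter t
        if dot(Bs[k], Bs[k]) >= (t - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            k = max(k - 1, 1)
    return [[int(x) for x in b] for b in B]

# Small classical example: the reduced basis starts with a shortest vector.
red = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
assert dot(red[0], red[0]) == 1
```

The paper's result is about exactly this t = 1 loop: for fixed dimension the number of swap iterations is linear in the input size, even though no polynomial bound was previously known.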
Lattice reduction in two dimensions: analyses under realistic probabilistic models
2003
Cited by 7 (1 self)
Abstract: The Gaussian algorithm for lattice reduction in dimension 2 is precisely analysed under a class of realistic probabilistic models, which are of interest when applying the Gauss algorithm “inside” the LLL algorithm. The proofs deal with the underlying dynamical systems and transfer operators. All the main parameters are studied: execution parameters, which describe the behaviour of the algorithm itself, as well as output parameters, which describe the geometry of reduced bases.
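The Gauss algorithm itself is short; a sketch that also reports the number of swap iterations, the kind of execution parameter analysed here (toy input, not from the paper):

```python
from fractions import Fraction

def n2(u):
    """Squared Euclidean norm of a 2-vector."""
    return u[0] * u[0] + u[1] * u[1]

def gauss_reduce(u, v):
    """Gauss' lattice reduction in dimension 2: repeatedly subtract the
    nearest-integer multiple and swap, until no swap shortens the basis.
    Returns the reduced basis and the iteration count."""
    steps = 0
    if n2(u) < n2(v):
        u, v = v, u
    while n2(v) < n2(u):
        q = round(Fraction(u[0] * v[0] + u[1] * v[1], n2(v)))
        u = [u[0] - q * v[0], u[1] - q * v[1]]
        u, v = v, u
        steps += 1
    return u, v, steps

u, v, steps = gauss_reduce([17, 5], [7, 2])
# This input generates Z^2, so reduction reaches two unit vectors.
assert n2(u) == 1 and n2(v) == 1
```

In dimension 2 this loop provably outputs a basis realizing the successive minima, which is why its probabilistic behaviour "inside" LLL is worth a precise analysis.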
A Unifying Framework for the Analysis of a Class of Euclidean Algorithms
In the proceedings of LATIN'2000, LNCS
2000
Cited by 6 (2 self)
Abstract: We develop a general framework for the analysis of algorithms of a broad Euclidean type. The average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithms. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory. As a consequence, we obtain precise average-case analyses of four algorithms for evaluating the Jacobi symbol of computational number theory fame, thereby solving conjectures of Bach and Shallit. These methods provide a unifying framework for the analysis of an entire class of gcd-like algorithms, together with new results regarding the probable behaviour of their cost functions. Euclid's algorithm, discovered as early as 300 BC, was analysed first in the worst case in 1733 by de Lagny, then in the average case around 1969 independently by Heilbronn [8] and Dixon [5], and finally in distribut...
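One Euclid-type algorithm of the kind in question is the standard binary Jacobi-symbol algorithm; a textbook version (not necessarily one of the four variants the paper analyses) looks like this:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, by a binary Euclid-type loop:
    strip factors of 2 using the value of (2/n), then swap the arguments
    by quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    t = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):       # (2/n) = -1 exactly when n = +-3 (mod 8)
                t = -t
        a, n = n, a                   # reciprocity: flip iff both are 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

# Cross-check against Euler's criterion modulo the prime 101.
for a in range(1, 101):
    assert jacobi(a, 101) == (1 if pow(a, 50, 101) == 1 else -1)
```

The division structure of this loop is what the transfer-operator machinery captures: each reciprocity-plus-reduction step is one of the "elementary transformations" whose analytic behaviour drives the average-case cost.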
On the Stack-Size of General Tries
2000
Cited by 5 (1 self)
Abstract: Digital trees, or tries, are a general-purpose flexible data structure that implements dictionaries built on words. The present paper is focussed on the average-case analysis of an important parameter of this tree structure, the stack-size. The stack-size of a tree is the memory needed by a storage-optimal preorder traversal. The analysis is carried out under a general model in which words are produced by a source (in the information-theoretic sense) that emits symbols. Under some natural assumptions that encompass all commonly used data models (and more), we obtain a precise average-case and probabilistic analysis of the stack-size. Furthermore, we study the dependency between the stack-size and the ordering on symbols in the alphabet: we establish that, when the source emits independent symbols, the optimal ordering arises when the most probable symbol is the last one in this order.
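For intuition, the stack usage of a plain preorder traversal can be measured with an explicit stack. Note this toy sketch does not implement the storage-optimal traversal the paper analyses, which may order children so as to lower the maximum:

```python
def trie_insert(root, word):
    """Insert a word into a dict-of-dicts trie."""
    node = root
    for ch in word:
        node = node.setdefault(ch, {})

def preorder_stack_size(root):
    """Maximum size of the explicit stack during a plain preorder traversal
    (children pushed in insertion order)."""
    best, stack = 1, [root]
    while stack:
        best = max(best, len(stack))
        node = stack.pop()
        stack.extend(node.values())
    return best

trie = {}
for w in ["abc", "abd", "ae"]:
    trie_insert(trie, w)
assert preorder_stack_size(trie) == 2
```

The paper's parameter is the minimum of this quantity over all traversal orders, and its last result says that for memoryless sources the minimizing alphabet order puts the most probable symbol last.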
Euclidean dynamics
In DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS
2006
Cited by 4 (2 self)
Abstract: We study a general class of Euclidean algorithms which compute the greatest common divisor [gcd], and we perform probabilistic analyses of their main parameters. We view an algorithm as a dynamical system restricted to rational inputs, and combine tools imported from dynamics, such as transfer operators, with various tools of analytic combinatorics: generating functions, Dirichlet series, Tauberian theorems, Perron's formula and quasi-powers theorems. Such dynamical analyses can be used to perform the average-case analysis of algorithms, but also (dynamical) analysis in distribution.
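The viewpoint of an algorithm as a dynamical system restricted to rational inputs can be illustrated directly: Euclid's quotients are exactly the digits produced by iterating the Gauss map T(x) = 1/x - floor(1/x) on the rational input (a small sketch):

```python
from fractions import Fraction
from math import floor

def euclid_quotients(q, p):
    """Quotients produced by Euclid's algorithm on (q, p) with q > p > 0."""
    out = []
    while p:
        out.append(q // p)
        q, p = p, q % p
    return out

def gauss_map_digits(p, q):
    """Continued-fraction digits of p/q obtained by iterating the Gauss map
    T(x) = 1/x - floor(1/x) on the rational x = p/q, 0 < p < q."""
    x, out = Fraction(p, q), []
    while x:
        out.append(floor(1 / x))
        x = 1 / x - floor(1 / x)
    return out

# The Gauss-map trajectory of p/q reproduces Euclid's quotient sequence:
assert gauss_map_digits(13, 29) == euclid_quotients(29, 13) == [2, 4, 3]
```

A rational input reaches 0 after finitely many steps, which is exactly the termination of Euclid's algorithm; the transfer-operator analysis quantifies how mass is distributed along such trajectories.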
Computation of a Class of Continued Fraction Constants
Cited by 4 (0 self)
Abstract: We describe a class of algorithms which compute in polynomial time important constants related to the Euclidean dynamical system. Our algorithms are based on a method previously introduced by Daudé, Flajolet and Vallée in [10] and further used in [13, 32]. However, the authors did not prove the correctness of the algorithm and did not provide any complexity bound. Here, we describe a general framework where the DFV method leads to a proven polynomial-time algorithm that computes “spectral constants” relative to a class of dynamical systems. These constants are closely related to eigenvalues of the transfer operator. Since it acts on an infinite-dimensional space, exact spectral computations are almost always impossible, and are replaced by (proven) numerical approximations. The transfer operator can be viewed as an infinite matrix M = (M_{i,j}), 1 ≤ i, j < ∞, which is the limit (in some precise sense) of the sequence of truncated matrices M_n := (M_{i,j}), 1 ≤ i, j < n, of order n, where exact computations are possible. Using results of [1], we prove that each isolated eigenvalue λ of M is the limit of a sequence λ_n ∈ Sp M_n, with exponential speed. Then, coming back to the Euclidean dynamical system, we compute (in polynomial time) three important constants which play a central rôle in the Euclid algorithm: (i) the Gauss-Kuzmin-Wirsing constant, related to the speed of convergence of the continued fraction algorithm to its limit density; (ii) the Hensley constant, which occurs in the leading term of the variance of the number of steps of the Euclid algorithm; (iii) the Hausdorff dimension of the Cantor sets relative to constrained continued fraction expansions.
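The truncated-matrix idea can be illustrated for the Gauss-map transfer operator. In the monomial basis a classical expansion gives entries M[m][j] = (-1)^m C(j+m+1, m) ζ(j+m+2), and eigenvalues of small truncations already approximate the second (Gauss-Kuzmin-Wirsing) eigenvalue, about -0.30366. The double-precision sketch below is illustrative only: unlike the paper's algorithm, it comes with no proven error bound, and the truncation size and shift are ad-hoc choices:

```python
from math import comb

def zeta(s, terms=10_000):
    """Riemann zeta at integer s >= 2: direct sum plus an integral-tail
    correction (accurate to roughly 1e-12 here)."""
    head = sum(n ** -s for n in range(1, terms))
    return head + terms ** (1 - s) / (s - 1) + 0.5 * terms ** -s

def solve(A, b):
    """Gaussian elimination with partial pivoting; A is n x n (copied)."""
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for cc in range(c, n + 1):
                A[r][cc] -= f * A[c][cc]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][cc] * x[cc] for cc in range(r + 1, n))) / A[r][r]
    return x

# Truncated transfer-operator matrix of the Gauss map in the monomial basis.
d = 16
M = [[(-1) ** m * comb(j + m + 1, m) * zeta(j + m + 2) for j in range(d)]
     for m in range(d)]

# Inverse iteration with shift -0.3 converges to the eigenvalue nearest -0.3,
# i.e. the subdominant (Gauss-Kuzmin-Wirsing) eigenvalue of the operator.
shift = -0.3
S = [[M[i][j] - (shift if i == j else 0.0) for j in range(d)] for i in range(d)]
v = [1.0] * d
for _ in range(30):
    v = solve(S, v)
    nrm = max(abs(x) for x in v)
    v = [x / nrm for x in v]
Mv = [sum(M[i][j] * v[j] for j in range(d)) for i in range(d)]
lam = sum(a * b for a, b in zip(v, Mv)) / sum(x * x for x in v)
# |lam| should approximate the Gauss-Kuzmin-Wirsing constant 0.30366...
```

The abstract's "exponential speed" claim is what justifies using such small truncations: the eigenvalues of M_n converge to those of M exponentially fast in n.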