Results 1–10 of 33
Matrix Transformation is Complete for the Average Case
 SIAM JOURNAL ON COMPUTING
, 1995
"... In the theory of worst case complexity, NP completeness is used to establish that, for all practical purposes, the given NP problem is not decidable in polynomial time. In the theory of average case complexity, average case completeness is supposed to play the role of NP completeness. However, the a ..."
Abstract

Cited by 21 (1 self)
In the theory of worst-case complexity, NP-completeness is used to establish that, for all practical purposes, the given NP problem is not decidable in polynomial time. In the theory of average-case complexity, average-case completeness is supposed to play the role of NP-completeness. However, the average-case reduction theory is still at an early stage, and only a few average-case complete problems are known. We present the first algebraic problem complete for the average case under a natural probability distribution. The problem is this: given a unimodular matrix X of integers, a set S of linear transformations of such unimodular matrices, and a natural number n, decide if there is a product of n (not necessarily different) members of S that takes X to the identity matrix.
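As a toy illustration of the decision problem (not an efficient algorithm — the paper's point is precisely its hardness), the sketch below assumes each member of S acts by left matrix multiplication and brute-forces all |S|^n products:

```python
from itertools import product

def matmul(A, B):
    """Multiply two square integer matrices given as tuples of tuples."""
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def reaches_identity(X, S, n):
    """Decide whether some product of n (repetition allowed) members of S
    takes X to the identity.  Brute force over |S|**n products; assumes
    each member of S acts by left multiplication."""
    size = len(X)
    I = tuple(tuple(int(i == j) for j in range(size)) for i in range(size))
    for choice in product(S, repeat=n):
        M = X
        for T in choice:
            M = matmul(T, M)
        if M == I:
            return True
    return False
```

For instance, with X the shear ((1,1),(0,1)) and S containing only its inverse shear, a single application reaches the identity but two applications overshoot it.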
Dynamical Analysis of a Class of Euclidean Algorithms
"... We develop a general framework for the analysis of algorithms of a broad Euclidean type. The averagecase complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithm. The methods rely on properti ..."
Abstract

Cited by 19 (5 self)
We develop a general framework for the analysis of algorithms of a broad Euclidean type. The average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithm. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory. As a consequence, we obtain precise average-case analyses of algorithms for evaluating the Jacobi symbol, of computational number theory fame, thereby solving conjectures of Bach and Shallit. These methods also provide a unifying framework for the analysis of an entire class of gcd-like algorithms, together with new results regarding the probable behaviour of their cost functions.
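For readers unfamiliar with the object being analysed: a Jacobi-symbol computation is itself a Euclid-type loop of exchange-and-reduce steps with sign flips driven by quadratic reciprocity. The textbook version below (a standard algorithm, not the specific variants the paper studies) makes the "set of elementary transformations" concrete:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via Euclid-like reduction steps."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:          # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):    # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                # exchange (quadratic reciprocity step)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n                     # reduce, as in Euclid's algorithm
    return result if n == 1 else 0
```

Each pass through the loop is one "elementary transformation" of the pair (a, n), which is exactly the kind of step whose cost distribution the transfer-operator method captures.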
Average Bit-Complexity of Euclidean Algorithms
 Proceedings ICALP’00, Lecture Notes Comp. Science 1853, 373–387
, 2000
"... We obtain new results regarding the precise average bitcomplexity of five algorithms of a broad Euclidean type. We develop a general framework for analysis of algorithms, where the averagecase complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set ..."
Abstract

Cited by 18 (7 self)
We obtain new results regarding the precise average bit-complexity of five algorithms of a broad Euclidean type. We develop a general framework for analysis of algorithms, where the average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithms. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory and provide a unifying framework for the analysis of an entire class of gcd-like algorithms. Keywords: average-case analysis of algorithms, bit-complexity, Euclidean algorithms, dynamical systems, Ruelle operators, generating functions, Dirichlet series, Tauberian theorems.
1 Introduction
Motivations. Euclid's algorithm was analysed first in the worst case in 1733 by de Lagny, then in the average case around 1969 independently by Heilbronn [12] and Dixon [6], and finally in distribution by Hensley [13], who proved in 1994 that the Eu...
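To see empirically what such an average-case statement measures, the sketch below (my illustration, not the paper's machinery) counts division steps of the classical Euclidean algorithm and averages them over all pairs up to n; the Heilbronn–Dixon result cited above says this average grows like (12 ln 2 / pi^2) ln n:

```python
def euclid_steps(u, v):
    """Number of division steps the classical Euclidean algorithm takes."""
    steps = 0
    while v:
        u, v, steps = v, u % v, steps + 1
    return steps

def average_steps(n):
    """Average division-step count over all pairs 1 <= u < v <= n.
    Compare with the Heilbronn-Dixon estimate (12 ln 2 / pi**2) * ln n."""
    total = count = 0
    for v in range(2, n + 1):
        for u in range(1, v):
            total += euclid_steps(u, v)
            count += 1
    return total / count
```

The step count is the coarsest cost measure; the bit-complexity studied in the paper additionally weights each division by the sizes of its operands.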
Digits and Continuants in Euclidean Algorithms. Ergodic versus Tauberian Theorems
, 2000
"... We obtain new results regarding the precise average case analysis of the main quantities that intervene in algorithms of a broad Euclidean type. We develop a general framework for the analysis of such algorithms, where the averagecase complexity of an algorithm is related to the analytic behaviou ..."
Abstract

Cited by 16 (6 self)
We obtain new results regarding the precise average-case analysis of the main quantities that intervene in algorithms of a broad Euclidean type. We develop a general framework for the analysis of such algorithms, where the average-case complexity of an algorithm is related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithms. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory and provide a unifying framework for the analysis of the main parameters, digits and continuants, that intervene in an entire class of gcd-like algorithms. We carry out a general transfer from the continuous case (continued fraction algorithms) to the discrete case (Euclidean algorithms), where ergodic theorems are replaced by Tauberian theorems.
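For orientation, the "digits" are the partial quotients produced by the division steps, and the "continuants" are the denominators of the successive convergents. A minimal sketch using the standard definitions (not the paper's operator machinery):

```python
def cf_digits(p, q):
    """Continued-fraction digits (partial quotients) of p/q,
    read off from the quotients of Euclid's algorithm."""
    digits = []
    while q:
        digits.append(p // q)
        p, q = q, p % q
    return digits

def continuants(digits):
    """Denominators q_i of the convergents of [a0; a1, a2, ...]:
    q_0 = 1, q_1 = a1, q_k = a_k * q_{k-1} + q_{k-2}.
    digits[0] is the integer part and does not enter the denominators."""
    prev, cur = 0, 1
    out = [cur]
    for a in digits[1:]:
        prev, cur = cur, a * cur + prev
        out.append(cur)
    return out
```

For 355/113 this gives digits [3, 7, 16] and continuants [1, 7, 113]; the last continuant recovers the input denominator, which is the link between digit sizes and the bit cost of the underlying gcd computation.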
Towards Practical Deterministic Write-All Algorithms
Proc. 13th ACM Symposium on Parallel Algorithms and Architectures
, 2001
"... The problem of performing t tasks on n asynchronous or undependable processors is a basic problem in parallel and distributed computing. We consider an abstraction of this problem called the WriteAl l problemusing n processors write 1's into all locations of an array of size t. The most e# ..."
Abstract

Cited by 12 (7 self)
The problem of performing t tasks on n asynchronous or undependable processors is a basic problem in parallel and distributed computing. We consider an abstraction of this problem called the Write-All problem: using n processors, write 1's into all locations of an array of size t. The most efficient known deterministic asynchronous algorithms for this problem are due to Anderson and Woll. The first class of algorithms has work complexity of O(t^{1+ε}), for n ≤ t and any ε > 0, and they are the best known for the full range of processors (n = t). To schedule the work of the processors, the algorithms use lists of q permutations on [q] (q ≤ n) that have certain combinatorial properties. Instantiating such an algorithm for a specific ε either requires substantial preprocessing (exponential in 1/ε) to find the requisite permutations, or imposes a prohibitive constant (exponential in 1/ε) hidden by the asymptotic analysis. The second class deals with the specific case of t = n^2, and these algorithms have work complexity of O(t log t). They also use lists of permutations with the same combinatorial properties. However, instantiating these algorithms requires preprocessing exponential in n to find the permutations. To alleviate this costly instantiation, Kanellakis and Shvartsman proposed a simple way of computing the permutations. They conjectured that their construction has the desired properties, but they provided no analysis. In this paper...
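To make the Write-All abstraction concrete, here is a toy simulation (my sketch, not the Anderson–Woll or Kanellakis–Shvartsman constructions): each processor walks its own permutation of the cells, a random interleaving stands in for asynchrony, and every cell probe counts as work. The global completion check is a simplification; detecting termination is itself part of the real problem.

```python
import random

def write_all(t, schedules, rng):
    """Toy Write-All: each processor follows its own permutation of the t
    cells, writing 1 into any cell still 0.  A random interleaving stands
    in for asynchrony; 'work' counts every cell probe."""
    cells = [0] * t
    pos = [0] * len(schedules)          # next index in each processor's schedule
    live = list(range(len(schedules)))
    work = 0
    while live and not all(cells):      # completion oracle (a simplification)
        p = rng.choice(live)            # stand-in for the async scheduler
        cell = schedules[p][pos[p]]
        work += 1
        if cells[cell] == 0:
            cells[cell] = 1
        pos[p] += 1
        if pos[p] == t:
            live.remove(p)
    return cells, work
```

With schedules that overlap little (say, one processor scanning left-to-right and another right-to-left), few probes are wasted on already-written cells; the combinatorial properties of the permutation lists in the paper are what bound this contention against a worst-case adversary rather than a random one.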
New point addition formulae for ECC applications
 WAIFI 2007. LNCS
, 2007
"... Abstract. In this paper we propose a new approach to point scalar multiplication on elliptic curves defined over fields of characteristic greater than 3. It is based on new point addition formulae that suit very well to exponentiation algorithms based on Euclidean addition chains. However finding s ..."
Abstract

Cited by 9 (2 self)
Abstract. In this paper we propose a new approach to point scalar multiplication on elliptic curves defined over fields of characteristic greater than 3. It is based on new point addition formulae that are very well suited to exponentiation algorithms based on Euclidean addition chains. However, finding small chains remains a very difficult problem, so we also develop a specific exponentiation algorithm, based on the Zeckendorf representation (i.e. representing the scalar k using Fibonacci numbers instead of powers of 2), which takes advantage of our formulae.
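The Zeckendorf representation mentioned here can be computed by a simple greedy algorithm; a small sketch of the representation itself (not of the paper's addition formulae):

```python
def zeckendorf(k):
    """Zeckendorf representation of k >= 1: the unique list of
    non-consecutive Fibonacci numbers summing to k, found greedily."""
    fibs = [1, 2]
    while fibs[-1] < k:
        fibs.append(fibs[-1] + fibs[-2])
    rep = []
    for f in reversed(fibs):            # greedy: largest Fibonacci first
        if f <= k:
            rep.append(f)
            k -= f
    return rep
```

A Fibonacci-and-add scalar multiplication then processes these terms using curve additions in place of the doublings of square-and-multiply, which is why addition formulae tuned for such chains pay off.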
Fast and Secure Elliptic Curve Scalar Multiplication over Prime Fields Using Special Addition Chains
, 2006
"... In this paper, we propose a new fast and secure point multiplication algorithm. It is based on a particular kind of addition chains involving only additions (no doubling), providing a natural protection against side channel attacks. Moreover, we propose new addition formulae that take into account t ..."
Abstract

Cited by 9 (1 self)
In this paper, we propose a new fast and secure point multiplication algorithm. It is based on a particular kind of addition chains involving only additions (no doubling), providing a natural protection against side channel attacks. Moreover, we propose new addition formulae that take into account the specific structure of those chains, making point multiplication very efficient.
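One common way to write down such doubling-free chains is as Euclidean addition chains: a bit string choosing, at each step, which element of a running pair to keep. The sketch below evaluates a chain over the integers; on a curve the same recurrence runs on points with point addition. The exact convention used (start pair (1, 2), "big"/"small" steps) is an assumption on my part, as notations vary between papers.

```python
def eac_scalar(chain):
    """Evaluate a Euclidean addition chain, given as a sequence of bits
    (1 = 'big' step, 0 = 'small' step), starting from the pair (1, 2).
    Returns the scalar reached; every step costs exactly one addition."""
    a, b = 1, 2
    for bit in chain:
        if bit:
            a, b = b, a + b     # big step: keep the larger element
        else:
            a, b = a, a + b     # small step: keep the smaller element
    return a + b
```

Because every step is a single addition regardless of the bit, the operation sequence reveals nothing through an add-versus-double side channel, which is the natural protection the abstract refers to.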
Continued fractions from Euclid to the present day
, 2000
"... this paper to indicate how continued fractions are relevant to ..."
Abstract

Cited by 8 (0 self)
... this paper to indicate how continued fractions are relevant to ...
Mean Values of Dedekind Sums
, 1996
"... this paper, we are concerned with large values of Dedekind sums. We measure these by considering the 2mth moments of Dedekind sums averaged over reduced fractions in [0; 1] with denominator k. For ..."
Abstract

Cited by 7 (0 self)
... this paper, we are concerned with large values of Dedekind sums. We measure these by considering the 2m-th moments of Dedekind sums averaged over reduced fractions in [0, 1] with denominator k. For ...
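For reference, the Dedekind sum and the averaged moments described here can be computed exactly with rationals. A direct O(k)-per-sum sketch from the classical definition (fine for small k; not the asymptotic analysis of the paper):

```python
from fractions import Fraction
from math import gcd

def sawtooth(p, q):
    """((p/q)): 0 at integers, otherwise the fractional part minus 1/2."""
    if p % q == 0:
        return Fraction(0)
    return Fraction(p % q, q) - Fraction(1, 2)

def dedekind_sum(h, k):
    """Dedekind sum s(h, k) = sum_{a=1}^{k-1} ((a/k)) ((h a / k))."""
    return sum(sawtooth(a, k) * sawtooth(h * a, k) for a in range(1, k))

def moment(k, m):
    """2m-th moment of s(h, k), averaged over reduced fractions h/k
    with 1 <= h < k and gcd(h, k) = 1."""
    hs = [h for h in range((1), k) if gcd(h, k) == 1]
    return sum(dedekind_sum(h, k) ** (2 * m) for h in hs) / len(hs)
```

As a sanity check, the values satisfy Dedekind's reciprocity law s(h, k) + s(k, h) = -1/4 + (h/k + k/h + 1/(hk))/12, which also underlies the fast Euclid-style evaluation used when k is large.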