Results 21–30 of 112
Some baby-step giant-step algorithms for the low Hamming weight discrete logarithm problem
 Mathematics of Computation
Cited by 23 (3 self)
Abstract. In this paper, we present several baby-step giant-step algorithms for the low Hamming weight discrete logarithm problem. In this version of the discrete log problem, we are required to find a discrete logarithm in a finite group of order approximately 2^m, given that the unknown logarithm has a specified number of 1's, say t, in its binary representation. Heiman and Odlyzko presented the first algorithms for this problem. Unpublished improvements by Coppersmith include a deterministic algorithm with complexity O(C(m/2, t/2) m), and a Las Vegas algorithm with complexity O(C(m/2, t/2) sqrt(t)), where C(a, b) denotes a binomial coefficient. We perform an average-case analysis of Coppersmith's deterministic algorithm. The average-case complexity achieves only a constant factor speedup.
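For readers unfamiliar with the technique the paper specializes, here is a minimal sketch of the classic Shanks baby-step giant-step algorithm for a generic discrete logarithm, not the paper's low-Hamming-weight variants (those restrict the enumeration to exponents of weight t). The function name `bsgs` and the small example group are illustrative only.

```python
from math import isqrt

def bsgs(g, h, p, order):
    """Solve g^x = h (mod p) for x in [0, order) in O(sqrt(order)) group
    operations by baby-step giant-step."""
    m = isqrt(order) + 1
    # Baby steps: tabulate g^j for all j in [0, m).
    baby = {pow(g, j, p): j for j in range(m)}
    # Giant steps: walk h, h*g^-m, h*g^-2m, ... until we hit a baby step.
    g_inv_m = pow(g, -m, p)  # modular inverse of g^m (Python 3.8+)
    cur = h % p
    for i in range(m):
        if cur in baby:
            return i * m + baby[cur]
        cur = (cur * g_inv_m) % p
    return None

# Example in the multiplicative group mod 101 (2 generates it, order 100):
assert bsgs(2, pow(2, 37, 101), 101, 100) == 37
```

The space/time trade-off (a table of sqrt(order) baby steps against sqrt(order) giant steps) is the template that the low-weight algorithms adapt by splitting the bit positions of the exponent instead of its value range.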
A Unified Approach to Dynamic Point Location, Ray Shooting, and Shortest Paths in Planar Maps
, 1992
Cited by 21 (6 self)
We describe a new technique for dynamically maintaining the trapezoidal decomposition of a connected planar map M with n vertices, and apply it to the development of a unified dynamic data structure that supports point-location, ray-shooting, and shortest-path queries in M. The space requirement is O(n log n). Point-location queries take time O(log n). Ray-shooting and shortest-path queries take time O(log^3 n) (plus O(k) time if the k edges of the shortest path are reported in addition to its length). Updates consist of insertions and deletions of vertices and edges, and take O(log^3 n) time (amortized for vertex updates).
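As background for the dynamic structure above, a hedged sketch of the classic static slab method for planar point location: binary search on x to find a vertical slab, then binary search among the non-crossing segments spanning it. This toy version uses O(n^2) space and supports no updates, which is exactly what trapezoid-based dynamic structures improve on; the names `build_slabs` and `locate` are illustrative, and segments are assumed non-vertical, given as ((x1, y1), (x2, y2)) with x1 < x2.

```python
import bisect

def y_at(seg, x):
    """y-coordinate of segment seg at abscissa x."""
    (x1, y1), (x2, y2) = seg
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def build_slabs(segments):
    """Cut the plane into vertical slabs at segment endpoints; in each slab,
    store the segments spanning it, sorted bottom to top. O(n^2) space."""
    xs = sorted({x for seg in segments for (x, _) in seg})
    slabs = []
    for xl, xr in zip(xs, xs[1:]):
        xm = (xl + xr) / 2.0
        spanning = [s for s in segments if s[0][0] <= xl and s[1][0] >= xr]
        spanning.sort(key=lambda s: y_at(s, xm))  # non-crossing: one order per slab
        slabs.append(spanning)
    return xs, slabs

def locate(xs, slabs, px, py):
    """Segment directly below (px, py): binary search for the slab,
    then binary search within it. O(log n) per query."""
    i = bisect.bisect_right(xs, px) - 1
    if i < 0 or i >= len(slabs):
        return None
    spanning = slabs[i]
    ys = [y_at(s, px) for s in spanning]
    j = bisect.bisect_right(ys, py) - 1
    return spanning[j] if j >= 0 else None
```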
Exponential structures for efficient cache-oblivious algorithms
 In Proceedings of the 29th International Colloquium on Automata, Languages and Programming
, 2002
Cited by 19 (3 self)
Abstract. We present cache-oblivious data structures based upon exponential structures. These data structures perform well on a hierarchical memory but do not depend on any parameters of the hierarchy, including the block sizes and number of blocks at each level. The problems we consider are searching, partial persistence and planar point location. On a hierarchical memory where data is transferred in blocks of size B, some of the results we achieve are:
– We give a linear-space data structure for dynamic searching that supports searches and updates in optimal O(log_B N) worst-case I/Os, eliminating amortization from the result of Bender, Demaine, and Farach-Colton (FOCS '00). We also consider finger searches and updates and batched searches.
– We support partially-persistent operations on an ordered set, namely, we allow searches in any previous version of the set and updates to the latest version of the set (an update creates a new version of the set). All operations take an optimal O(log_B (m + N)) amortized I/Os, where N is the size of the version being searched/updated, and m is the number of versions.
– We solve the planar point location problem in linear space, taking optimal O(log_B N) I/Os for point location queries, where N is the number of line segments specifying the partition of the plane. The preprocessing requires O((N/B) log_{M/B} N) I/Os, where M is the size of the 'inner' memory.
General balanced trees
 Journal of Algorithms
, 1999
Cited by 19 (0 self)
We show that, in order to achieve efficient maintenance of a balanced binary search tree, no shape restriction other than a logarithmic height is required. The obtained class of trees, general balanced trees, may be maintained at a logarithmic amortized cost with no balance information stored in the nodes. Thus, in the case when amortized bounds are sufficient, there is no need for sophisticated balance criteria. The maintenance algorithms use partial rebuilding. This is important for certain applications and has previously been used with weight-balanced trees. We show that the amortized cost incurred by general balanced trees is lower than what has been shown for weight-balanced trees. © 1999 Academic Press
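The rebuilding idea can be illustrated with a hedged Python sketch: a plain BST that stores no balance information and, whenever an insertion lands deeper than the log_{1/alpha}(n) height bound, rebuilds to perfect balance. For brevity this toy rebuilds the entire tree, whereas the paper's amortized analysis rebuilds only a subtree on the insertion path (partial rebuilding); class and method names are illustrative.

```python
import math

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

class GBTree:
    """BST with no per-node balance data: rebuild when it gets too tall."""
    def __init__(self, alpha=0.7):  # height bound log_{1/alpha}(n), 0.5 < alpha < 1
        self.root, self.n, self.alpha = None, 0, alpha

    def insert(self, key):
        self.n += 1
        depth, parent, node = 0, None, self.root
        while node is not None:
            parent = node
            node = node.left if key < node.key else node.right
            depth += 1
        new = Node(key)
        if parent is None:
            self.root = new
        elif key < parent.key:
            parent.left = new
        else:
            parent.right = new
        # Logarithmic height violated: rebuild to perfect balance.
        # (The paper instead rebuilds only the topmost too-deep subtree,
        # which is what yields the amortized O(log n) bound.)
        if depth > math.log(self.n, 1 / self.alpha):
            self.root = self._rebuild(self._inorder(self.root))

    def _inorder(self, node):
        if node is None:
            return []
        return self._inorder(node.left) + [node.key] + self._inorder(node.right)

    def _rebuild(self, keys):
        if not keys:
            return None
        mid = len(keys) // 2          # median key becomes the subtree root
        node = Node(keys[mid])
        node.left = self._rebuild(keys[:mid])
        node.right = self._rebuild(keys[mid + 1:])
        return node
```

Even under fully sorted insertions, the tree's height stays within the logarithmic bound, with no colors, weights, or heights stored anywhere.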
Sweep as a Generic Pruning Technique Applied to the Non-Overlapping Rectangles Constraint
 Seventh International Conference on Principles and Practice of Constraint Programming, LNCS 2239
, 2001
Cited by 16 (4 self)
We first present a generic pruning technique which aggregates several constraints sharing some variables. The method is derived from an idea called sweep which is extensively used in computational geometry. A first benefit of this technique comes from the fact that it can be applied to several families of global constraints. A second main advantage is that it does not lead to any memory consumption problem, since it only requires temporary memory that can be reclaimed after each invocation of the method.
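To illustrate the geometric sweep idea itself (not the paper's constraint-propagation algorithm), here is a hedged sketch of a sweep line detecting a pair of overlapping axis-aligned rectangles: events open and close rectangles in x-order, and only simultaneously active rectangles are compared on their y-extents.

```python
def find_overlap(rects):
    """Return indices (j, i) of two overlapping rectangles, or None.
    rects[i] = (x1, y1, x2, y2), treated as open: shared edges don't count.
    A vertical sweep line moves left to right; only rectangles whose
    x-intervals are currently open are compared, and only on y-extents."""
    events = []
    for i, (x1, y1, x2, y2) in enumerate(rects):
        events.append((x1, 1, i))  # open
        events.append((x2, 0, i))  # close; sorts before an open at the same x
    events.sort()
    active = set()  # an interval tree keyed on y would give O(n log n) overall
    for _, kind, i in events:
        if kind == 0:
            active.discard(i)
            continue
        y1, y2 = rects[i][1], rects[i][3]
        for j in active:  # linear scan keeps the sketch short
            if y1 < rects[j][3] and rects[j][1] < y2:
                return (j, i)
        active.add(i)
    return None
```

The pruning method in the paper runs the same kind of sweep over the domains of the constrained variables, accumulating forbidden regions instead of reporting a collision.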
New Constructions for Perfect Hash Families and Related Structures using Combinatorial Designs
 J. COMBIN. DESIGNS
, 1999
Cited by 15 (7 self)
In this paper, we consider explicit constructions of perfect hash families using combinatorial methods. We provide several direct constructions from combinatorial structures related to orthogonal arrays. We also simplify and generalize a recursive construction due to Atici, Magliveras, Stinson and Wei [3]. Using similar methods, we also obtain efficient constructions for separating hash families, which result in improved existence results for structures such as separating systems, key distribution patterns, group testing algorithms, cover-free families and secure frameproof codes.
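The defining property of a perfect hash family can be checked by brute force. Below is a hedged sketch of such a checker, together with a small strength-3 family built from the four parallel classes of lines of AG(2, 3); this is an orthogonal-array-style example in the spirit of, but not taken from, the paper, and the function names are illustrative.

```python
from itertools import combinations

def is_phf(family, N, w):
    """`family` is a list of functions on range(N). It is a perfect hash
    family of strength w if every w-subset of range(N) is mapped
    injectively by at least one member function."""
    for subset in combinations(range(N), w):
        if not any(len({f(x) for x in subset}) == w for f in family):
            return False
    return True

# Identify x in range(9) with the point (x // 3, x % 3) of AG(2, 3); each
# linear form (a, b) below corresponds to one parallel class of lines.
forms = [(1, 0), (0, 1), (1, 1), (1, 2)]
family = [lambda x, a=a, b=b: (a * (x // 3) + b * (x % 3)) % 3
          for (a, b) in forms]

assert is_phf(family, 9, 3)          # all four classes form a PHF(4; 9, 3, 3)
assert not is_phf(family[:3], 9, 3)  # dropping one class breaks the property
```

Any three points of AG(2, 3) determine at most three difference directions, and each form "kills" exactly one direction, so four forms always leave a separating one; with only three forms a bad triple exists, which the checker finds.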
String Editing and Longest Common Subsequences
 In Handbook of Formal Languages
, 1996
Cited by 15 (2 self)
this paper, in view of the particularly rich variety of algorithmic solutions that have been devised for this problem over the past two decades or so, which made it susceptible to some degree of unification and systematization, of independent and general interest. Our discussion starts with the exposition of two basic approaches to LCS computation, due respectively to Hirschberg [1978] and Hunt and Szymanski [1977]. We then discuss faster implementations of this second paradigm, and the data structures that support them. In Section 5 we discuss algorithms that use only linear space to compute an LCS and yet do not necessarily take Θ(nm) time. One final such algorithm is presented in Section 6, where many of the ideas and tools accumulated in the course of our discussion find employment together. In Section 7 we return to string editing in its general formulation and discuss some of its efficient solutions within a parallel model of computation.
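The two basic approaches can be sketched briefly. The Hunt–Szymanski paradigm reduces LCS to a longest-increasing-subsequence computation over matching positions, while the standard dynamic program, kept in two rows, already computes the LCS length in linear space (Hirschberg's divide and conquer additionally recovers the subsequence itself in linear space, which this sketch does not attempt). A hedged sketch of both:

```python
import bisect

def lcs_length(a, b):
    """LCS length via the standard dynamic program, kept in two rows:
    O(len(a) * len(b)) time, O(min(len(a), len(b))) space."""
    if len(b) > len(a):
        a, b = b, a
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def lcs_hs(a, b):
    """Hunt-Szymanski flavor: scan a, and for each symbol process its match
    positions in b in decreasing order, maintaining the smallest b-position
    that ends a common subsequence of each length (as in patience sorting)."""
    occ = {}
    for j, y in enumerate(b):
        occ.setdefault(y, []).append(j)
    thresh = []  # thresh[k] = least b-position ending a length-(k+1) match
    for x in a:
        for j in reversed(occ.get(x, [])):
            k = bisect.bisect_left(thresh, j)
            if k == len(thresh):
                thresh.append(j)
            else:
                thresh[k] = j
    return len(thresh)

assert lcs_length("ABCBDAB", "BDCABA") == lcs_hs("ABCBDAB", "BDCABA") == 4
```

The second routine runs in O((r + n) log n) time, where r is the number of matching position pairs, which is the regime the faster implementations discussed in the paper refine.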
Simple and space-efficient minimal perfect hash functions
 In Proc. of the 10th Intl. Workshop on Data Structures and Algorithms
, 2007
Cited by 14 (7 self)
Abstract. A perfect hash function (PHF) h: U → [0, m − 1] for a key set S is a function that maps the keys of S to unique values. The minimum amount of space to represent a PHF for a given set S is known to be approximately 1.44n^2/m bits, where n = |S|. In this paper we present new algorithms for construction and evaluation of PHFs of a given set (for m = n and m = 1.23n), with the following properties:
1. Evaluation of a PHF requires constant time.
2. The algorithms are simple to describe and implement, and run in linear time.
3. The amount of space needed to represent the PHFs is within a factor of 2 of the information-theoretic minimum.
No previously known algorithm has these properties. To our knowledge, any algorithm in the literature with the third property either:
– Requires exponential time for construction and evaluation, or
– Uses near-optimal space only asymptotically, for extremely large n.
Dynamization of the Trapezoid Method for Planar Point Location
, 1991
Cited by 14 (4 self)
We present a fully dynamic data structure for point location in a monotone subdivision, based on the trapezoid method. The operations supported are insertion and deletion of vertices and edges, and horizontal translation of vertices. Let n be the current number of vertices of the subdivision. Point location queries take O(log n) time, while updates take O(log^2 n) time. The space requirement is O(n log n). This is the first fully dynamic point location data structure for monotone subdivisions that achieves optimal query time.
On parallel integer sorting
 Acta Informatica
, 1992
Cited by 13 (5 self)
Abstract. We present an optimal algorithm for sorting n integers in the range [1, n^c] (for any constant c) for the EREW PRAM model where the word length is n^ε, for any ε > 0. Using this algorithm, the best known upper bound for integer sorting on the (O(log n) word length) EREW PRAM model is improved. In addition, a novel parallel range reduction algorithm which results in a near-optimal randomized integer sorting algorithm is presented. For the case when the keys are uniformly distributed integers in an arbitrary range, we give an algorithm whose expected running time is optimal.
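As a sequential point of comparison for the range [1, n^c]: a least-significant-digit radix sort with base n sorts such keys in O(c·n) time, since each key has at most c base-n digits. The sketch below (which assumes keys in [0, n^c)) illustrates only this digit decomposition; the paper's contribution is achieving comparable work bounds on an EREW PRAM, which the sketch does not attempt.

```python
def int_sort(a, c=2):
    """Sort n integers drawn from [0, n^c) in O(c * n) time:
    LSD radix sort with base n, so each key has at most c digits."""
    n = len(a)
    if n <= 1:
        return list(a)
    for d in range(c):
        buckets = [[] for _ in range(n)]
        for x in a:
            buckets[(x // n ** d) % n].append(x)  # stable per-digit pass
        a = [x for bucket in buckets for x in bucket]
    return a

assert int_sort([37, 2, 99, 5, 0, 64], c=3) == [0, 2, 5, 37, 64, 99]
```

Stability of each bucket pass is what makes processing digits from least to most significant correct, the same invariant a parallel range-reduction scheme must preserve.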