Results 1–9 of 9
CONTENTION RESOLUTION IN HASHING BASED SHARED MEMORY SIMULATIONS
, 2000
Abstract

Cited by 11 (3 self)
In this paper we study the problem of simulating shared memory on the distributed memory machine (DMM). Our approach uses multiple copies of shared memory cells, distributed among the memory modules of the DMM via universal hashing. The main aim is to design strategies that resolve contention at the memory modules. Extending results and methods from random graphs and very fast randomized algorithms, we present new simulation techniques that enable us to improve the previously best results exponentially. In particular, we show that an n-processor CRCW PRAM can be simulated by an n-processor DMM with delay O(log log log n · log* n), with high probability. Next we describe a general technique that can be used to turn these simulations into time-processor optimal ones when EREW PRAMs are simulated. We obtain a time-processor optimal simulation of an (n log log log n · log* n)-processor EREW PRAM on an n-processor DMM with delay O(log log log n · log* n), with high probability. When an (n log log log n · log* n)-processor CRCW PRAM is simulated, the delay is larger only by a log* n factor. We further demonstrate that the simulations presented cannot be significantly improved using our techniques. We show an Ω(log log log n / log log log log n) lower bound on the expected delay for a class of PRAM simulations, called topological simulations, that covers all previously known simulations as well as the simulations presented in the paper.
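The multi-copy idea the abstract describes can be sketched sequentially as follows. This is a minimal illustration, not the paper's protocol: the copy count, module count, clock, and majority rule are all illustrative assumptions. Each cell is replicated at modules chosen by independent hash functions; a write stamps a majority of the copies, and a read returns the freshest copy it sees, so the two majorities always intersect.

```python
import random

# Illustrative sketch (not the paper's algorithm): each shared cell x is
# kept in C copies, located at modules h_1(x), ..., h_C(x) chosen by
# independent hash functions. A write stamps a majority of the copies
# with a logical clock; a read inspects all copies and returns the
# freshest value, so every read sees the latest completed write.

C = 3            # copies per cell (illustrative; majority = 2)
N_MODULES = 8    # memory modules of the simulated DMM
UNIVERSE = 64    # size of the shared address space

random.seed(1)
# model the C independent hash functions as random tables
hashes = [[random.randrange(N_MODULES) for _ in range(UNIVERSE)]
          for _ in range(C)]

modules = [dict() for _ in range(N_MODULES)]  # cell -> (timestamp, value)
clock = 0

def write(cell, value):
    global clock
    clock += 1
    for h in hashes[: C // 2 + 1]:      # update a majority of the copies
        modules[h[cell]][cell] = (clock, value)

def read(cell):
    copies = [modules[h[cell]].get(cell, (0, None)) for h in hashes]
    return max(copies, key=lambda t: t[0])[1]   # freshest copy wins

write(5, "a")
write(5, "b")
assert read(5) == "b"
```

The contention-resolution question the paper studies is precisely what this sketch ignores: when many processors hash to the same module in the same step, the module serves only some of them, and bounding the resulting delay is the hard part.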
6.897: Advanced data structures (Spring 2005), Lecture 3, February 8
, 2005
Abstract

Cited by 4 (0 self)
Recall from last lecture that we are looking at the document-retrieval problem. The problem can be stated as follows: Given a set of texts T1, T2, ..., Tk and a pattern P, determine the distinct texts in which the pattern occurs. In particular, we are allowed to preprocess the texts in order to be able to answer the query faster. Our preprocessing choice was the use of a single suffix tree, in which all the suffixes of all the texts appear, each suffix ending with a distinct symbol that determines the text in which the suffix appears. In order to answer the query we reduced the problem to range-min queries, which in turn were reduced to the least common ancestor (LCA) problem on the Cartesian tree of an array of numbers. The Cartesian tree is constructed recursively by setting its root to be the minimum element of the array and recursively constructing its two subtrees using the left and right partitions of the array. The range-min query of an interval [i, j] is then equivalent to finding the LCA of the two nodes of the Cartesian tree that correspond to i and j. In this lecture we continue to see how we can solve the LCA problem on any static tree. This will involve a reduction of the LCA problem back to the range-min query problem (!) and then a ...
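The Cartesian-tree reduction described above can be sketched as follows. This is a naive sequential illustration, not the lecture's constant-time scheme: the tree is built by repeated minimum-finding, and a query simply descends from the root until it hits the first node whose index falls inside the interval, which is exactly the LCA of the interval's endpoints.

```python
# Naive sketch of the reduction (not the O(1)-query solution from the
# lecture): build the Cartesian tree recursively on the minimum, then
# answer range-min(i, j) by finding the LCA of positions i and j.

def build(A, lo, hi):
    # root = index of the minimum of A[lo:hi]; children built recursively
    if lo >= hi:
        return None
    m = min(range(lo, hi), key=lambda i: A[i])
    return (m, build(A, lo, m), build(A, m + 1, hi))

def lca_min(node, i, j):
    # descend from the root: the first node whose index lies in [i, j]
    # is the LCA of positions i and j, and its index is the range minimum
    root, left, right = node
    if root < i:
        return lca_min(right, i, j)
    if root > j:
        return lca_min(left, i, j)
    return root

A = [3, 1, 4, 1, 5, 9, 2, 6]
root = build(A, 0, len(A))
assert A[lca_min(root, 2, 5)] == min(A[2:6])
```

The descent works because every node's index separates its subtrees by position: if the root's index is left of [i, j], both endpoints live in the right subtree, and symmetrically on the other side.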
NONEXPANSIVE HASHING
Combinatorica 18 (1) (1998) 121–132, Bolyai Society – Springer-Verlag
, 1996
Abstract
In a nonexpansive hashing scheme, similar inputs are stored in memory locations which are close. We develop a nonexpansive hashing scheme wherein any set of size O(R^(1−ε)) from a large universe may be stored in a memory of size R (for any ε > 0 and R > R_0(ε)), and where retrieval takes O(1) operations. We explain how to use nonexpansive hashing schemes for efficient storage and retrieval of noisy data. A dynamic version of this hashing scheme is presented as well.
A reliable randomized algorithm for the . . .
, 1997
Abstract
The following two computational problems are studied: Duplicate grouping: Assume that n items are given, each of which is labeled by an integer key from the set {0, ..., U − 1}. Store the items in an array of size n such that items with the same key occupy a contiguous segment of the array. Closest pair: Assume that a multiset of n points in the d-dimensional Euclidean space is given, where d ≥ 1 is a fixed integer. Each point is represented as a d-tuple of integers in the range {0, ..., U − 1} (or of arbitrary real numbers). Find a closest pair, i.e., a pair of points whose distance is minimal over all such pairs.
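The duplicate-grouping task itself is easy to state in code. The sketch below is a plain sequential analogue (via a hash table), not the paper's reliable randomized algorithm: bucketing by key and concatenating the buckets makes equal keys occupy contiguous segments of the output array.

```python
from collections import defaultdict

# Sequential sketch of duplicate grouping (not the paper's randomized
# algorithm): bucket items by key, then concatenate the buckets so that
# items with equal keys end up in one contiguous segment.

def duplicate_grouping(items, key=lambda x: x):
    groups = defaultdict(list)
    for it in items:               # one pass: bucket items by key
        groups[key(it)].append(it)
    out = []
    for g in groups.values():      # concatenation keeps each key contiguous
        out.extend(g)
    return out

data = [(2, "a"), (1, "b"), (2, "c"), (1, "d")]
grouped = duplicate_grouping(data, key=lambda t: t[0])
assert [t[0] for t in grouped] == [2, 2, 1, 1]
```

Note that grouping is weaker than sorting: the segments may appear in any order, which is what lets the problem be solved faster than comparison sorting when the key range U is large.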
SIMPLE FAST PARALLEL HASHING BY OBLIVIOUS EXECUTION
Abstract
Abstract. A hash table is a representation of a set in a linear-size data structure that supports constant-time membership queries. We show how to construct a hash table for any given set of n keys in O(lg lg n) parallel time with high probability, using n processors on a weak version of a concurrent-read concurrent-write parallel random access machine (CRCW PRAM). Our algorithm uses a novel approach of hashing by "oblivious execution" based on probabilistic analysis. The algorithm is simple and has the following structure:
1. Partition the input set into buckets by a random polynomial of constant degree.
2. For t := 1 to O(lg lg n) do:
(a) Allocate Mt memory blocks, each of size Kt.
(b) Let each bucket select a block at random, and try to injectively map its keys into the block using a random linear function. Buckets that fail carry on to the next iteration.
The crux of the algorithm is a careful a priori selection of the parameters Mt and Kt. The algorithm uses only O(lg lg n) random words and can be implemented in a work-efficient manner.
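The iterative loop in step 2 can be simulated sequentially as below. The schedule for the block size Kt here is an illustrative guess, not the paper's careful a priori choice, and the block-selection step is collapsed away; the sketch only shows the retry structure: each still-unplaced bucket draws a random linear function and succeeds once the map into its block is injective.

```python
import random

# Sequential simulation of the retry loop (the growing block-size
# schedule is an illustrative assumption, not the paper's choice of
# Mt, Kt): a bucket is placed once a random linear function maps its
# keys injectively into a block of the current size.

random.seed(7)
P = 10007  # prime exceeding the key universe

def try_place(bucket, K):
    a, b = random.randrange(1, P), random.randrange(P)
    # random linear function k -> ((a*k + b) mod P) mod K
    return len({(a * k + b) % P % K for k in bucket}) == len(bucket)

def oblivious_hash(buckets):
    pending, t = list(buckets), 0
    while pending:
        t += 1
        K = 2 ** t + 4                   # illustrative growing block size
        pending = [b for b in pending if not try_place(b, K)]
    return t                             # rounds until every bucket is placed

buckets = [random.sample(range(P), 4) for _ in range(20)]
rounds = oblivious_hash(buckets)
assert rounds >= 1
```

The "oblivious" aspect is that the schedule of allocations is fixed in advance from the probabilistic analysis, independent of which buckets actually fail, which is what makes the parallel implementation simple.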
Deamplification of DoS Attacks via Puzzles
, 2004
Abstract
Abstract — Puzzles have been proposed as a mechanism to deamplify denial of service attacks against a server's memory and processing resources. For example, HIP implements a cookie puzzle mechanism to protect the server from wasting resources performing Diffie-Hellman exponentiation in response to spurious requests. We examine cookie puzzle mechanisms of this type. We find that careful attention is needed in server implementation to ensure that an attacker does not retain opportunities to amplify the attack despite the puzzle mechanism, and present a design which addresses these issues. We compare vulnerability to bandwidth and processing attacks, determining when one dominates the other. Finally, we quantify the deamplification of DoS attacks provided by a cookie puzzle mechanism and determine the best setting for puzzle difficulty under a steady-state attack.
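The asymmetry a puzzle creates can be illustrated with a generic hash-based client puzzle (this is a common construction, not HIP's specific cookie format): the server issues a random nonce and a difficulty d, and the client must find x such that the hash of nonce and x ends in d zero bits. Solving costs the client about 2^d hash evaluations on average, while verification costs the server a single hash.

```python
import hashlib
import itertools
import os

# Generic hash-based client puzzle (illustrative; not HIP's cookie
# mechanism): find x so that SHA-256(nonce || x) has its low d bits zero.

def solve(nonce: bytes, d: int) -> int:
    mask = (1 << d) - 1
    for x in itertools.count():          # ~2**d attempts on average
        h = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") & mask == 0:
            return x

def verify(nonce: bytes, d: int, x: int) -> bool:
    h = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") & ((1 << d) - 1) == 0

nonce = os.urandom(16)
x = solve(nonce, 8)          # client-side work: ~256 hashes on average
assert verify(nonce, 8, x)   # server-side check: one hash
```

The paper's point is that this asymmetry alone is not enough: the server implementation around the puzzle (state kept per request, behavior on bad solutions) must also avoid giving the attacker new amplification opportunities.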
Linear-time algorithms to color topological graphs
, 2005
Abstract
We describe a linear-time algorithm for 4-coloring planar graphs. We indeed give an O(V + E + |χ| + 1)-time algorithm to C-color V-vertex E-edge graphs embeddable on a 2-manifold M of Euler characteristic χ, where C(M) is given by Heawood's (minimax optimal) formula. Also we show how, in O(V + E) time, to find the exact chromatic number of a maximal planar graph (one with E = 3V − 6) and a coloring achieving it. Finally, there is a linear-time algorithm to 5-color a graph embedded on any fixed surface M, except that an M-dependent constant number of vertices are left uncolored. All the algorithms are simple and practical and run on a deterministic pointer machine, except for planar graph 4-coloring, which involves enormous constant factors and requires an integer RAM with a random number generator. All of the algorithms mentioned so far are in the ultra-parallelizable deterministic computational complexity class "NC." We also have more practical planar 4-coloring algorithms that can run on pointer machines in O(V log V) randomized time and O(V) space, and a very simple deterministic O(V)-time coloring algorithm for planar graphs which conjecturally uses 4 colors.
Efficient Polynomial-Time Algorithms for the Constrained LCS Problem with Strings Exclusion
, 2012
Abstract
Key words: design of algorithms, longest common subsequence, constrained LCS, NP-hard, finite automata. In this paper, we revisit a recent variant of the longest common subsequence (LCS) problem, the string-excluding constrained LCS (STR-EC-LCS) problem, which was first addressed by Chen and Chao [8]. Given two sequences X and Y of lengths m and n, respectively, and a constraint string P of length r, we are to find a common subsequence Z of X and Y which excludes P as a substring and whose length is maximized. In fact, this problem cannot be correctly solved by the previously proposed algorithm. Thus, we give a correct algorithm with O(mnr) time to solve it. Then, we revisit the STR-EC-LCS problem with multiple constraints {P1, P2, ..., Pk}. We propose a polynomial-time algorithm which runs in O(mnR) time, where R = |P1| + |P2| + ... + |Pk|, and thus it overthrows the previous claim of NP-hardness.
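One standard way to realize the O(mnr) bound for the single-constraint case is a dynamic program layered over the KMP automaton of P; the sketch below is our illustration of that formulation, not necessarily the paper's exact algorithm. State (i, j, k) stores the longest common subsequence of X[:i] and Y[:j] whose automaton state with respect to P is k; keeping k < r guarantees P never occurs as a substring.

```python
# O(m*n*r) dynamic program for single-constraint STR-EC-LCS (a standard
# automaton-based formulation, offered as an illustration): dp[i][j][k]
# is the longest common subsequence of X[:i], Y[:j] whose KMP state for
# P is k; transitions that would reach state r (a full match of P) are
# forbidden.

def kmp_failure(P):
    fail, k = [0] * len(P), 0
    for i in range(1, len(P)):
        while k and P[i] != P[k]:
            k = fail[k - 1]
        if P[i] == P[k]:
            k += 1
        fail[i] = k
    return fail

def str_ec_lcs(X, Y, P):
    m, n, r = len(X), len(Y), len(P)
    fail = kmp_failure(P)

    def step(k, c):                # automaton transition on character c
        while k and P[k] != c:
            k = fail[k - 1]
        return k + 1 if P[k] == c else k

    dp = [[[-1] * r for _ in range(n + 1)] for _ in range(m + 1)]
    dp[0][0][0] = 0
    for i in range(m + 1):
        for j in range(n + 1):
            for k in range(r):
                cur = dp[i][j][k]
                if cur < 0:
                    continue
                if i < m:          # skip X[i]
                    dp[i + 1][j][k] = max(dp[i + 1][j][k], cur)
                if j < n:          # skip Y[j]
                    dp[i][j + 1][k] = max(dp[i][j + 1][k], cur)
                if i < m and j < n and X[i] == Y[j]:
                    k2 = step(k, X[i])
                    if k2 < r:     # taking the match must not complete P
                        dp[i + 1][j + 1][k2] = max(dp[i + 1][j + 1][k2],
                                                   cur + 1)
    return max(dp[m][n])

assert str_ec_lcs("abc", "abc", "b") == 2   # best is "ac", avoiding "b"
```

There are m·n·r states and O(1) transitions per state (the automaton step amortizes to constant time after the failure table is built), matching the O(mnr) bound quoted in the abstract.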
UNWEIGHTED AND WEIGHTED HYPERMINIMIZATION
, 2012
Abstract
Hyperminimization of deterministic finite automata (dfa) is a recently introduced state reduction technique that allows a finite change in the recognized language. A generalization of this lossy compression method to the weighted setting over semifields is presented, which allows the recognized weighted language to differ for finitely many input strings. First, the structure of hyperminimal deterministic weighted finite automata is characterized in a similar way as in classical weighted minimization and unweighted hyperminimization. Second, an efficient hyperminimization algorithm, which runs in time O(n log n), is derived from this characterization. Third, the closure properties of canonical regular languages, which are languages recognized by hyperminimal dfa, are investigated. Finally, some recent results in the area of hyperminimization are recalled.