Results 1 - 5 of 5
On Universal Classes of Extremely Random Constant-Time Hash Functions and Their Time-Space Tradeoff
Abstract

Cited by 26 (0 self)
A family of functions F that map [0, n] → [0, n] is said to be h-wise independent if any h points in [0, n] have an image, for randomly selected f ∈ F, that is uniformly distributed. This paper gives both probabilistic and explicit randomized constructions of n^ε-wise independent functions, ε < 1, that can be evaluated in constant time for the standard random access model of computation. Simple extensions give comparable behavior for larger domains. As a consequence, many probabilistic algorithms can for the first time be shown to achieve their expected asymptotic performance for a feasible model of computation. This paper also establishes a tight tradeoff in the number of random seeds that must be precomputed for a random function that runs in time T and is h-wise independent. Categories and Subject Descriptors: E.2 [Data Storage Representations]: Hash-table representations; F.1.2 [Modes of Computation]: Probabilistic Computation; F.2.3 [Tradeoffs among Computational Measures]...
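For readers unfamiliar with h-wise independence, the textbook construction, a random polynomial of degree h-1 over a prime field, yields an h-wise independent family. (Evaluating it takes O(h) time, so it lacks the constant-time property this paper achieves; this is a minimal illustrative sketch, not the paper's construction, and all names are hypothetical.)

```python
import random

def make_hwise_hash(h, p):
    """Sample one function from the classic h-wise independent family:
    a uniformly random polynomial of degree h-1 over GF(p), p prime.
    Any h distinct inputs then get uniform, mutually independent outputs."""
    coeffs = [random.randrange(p) for _ in range(h)]

    def f(x):
        # Horner evaluation of the polynomial mod p
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc

    return f

f = make_hwise_hash(h=4, p=101)       # a 4-wise independent map on [0, 100]
values = [f(x) for x in range(10)]
assert all(0 <= v < 101 for v in values)
```

Note the gap the paper addresses: evaluating this polynomial costs O(h) per call, which is why constant-time n^ε-wise independent constructions require different techniques.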
An Optical Simulation of Shared Memory
 In Proceedings of the 6th Annual ACM Symposium on Parallel Algorithms and Architectures
, 1994
Abstract
We present a work-optimal randomized algorithm for simulating a shared memory machine (PRAM) on an optical communication parallel computer (OCPC). The OCPC model is motivated by the potential of optical communication for parallel computation. The memory of an OCPC is divided into modules, one module per processor. Each memory module only services a request on a time step if it receives exactly one memory request. Our algorithm simulates each step of an (n lg lg n)-processor EREW PRAM on an n-processor OCPC in O(lg lg n) expected delay. (The probability that the delay is longer than this is at most n^{-α} for any constant α.) The best previous simulation, due to Valiant, required Θ(lg n) expected delay. 1 Introduction The huge bandwidth of the optical medium makes it possible to use optics to build communication networks of very high degree. Eshaghian [8, 9] first studied the computational aspects of parallel architectures with complete optical interconnection networks. The ...
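The OCPC contention rule described above (a module services a request only when exactly one request targets it) can be sketched in a few lines. This is an illustrative toy of the communication model only, not the paper's work-optimal simulation algorithm, and all names here are hypothetical.

```python
import random

def ocpc_step(requests):
    """One OCPC communication step: group requests by target module;
    a module with exactly one incoming request services it, while
    modules hit by two or more requests service none (all must retry)."""
    hits = {}
    for proc, module in requests:
        hits.setdefault(module, []).append(proc)
    serviced = [procs[0] for procs in hits.values() if len(procs) == 1]
    blocked = [p for procs in hits.values() if len(procs) > 1 for p in procs]
    return serviced, blocked

random.seed(1)
n = 8
# each of n processors aims a request at a uniformly random module
reqs = [(p, random.randrange(n)) for p in range(n)]
done, retry = ocpc_step(reqs)
assert len(done) + len(retry) == n
```

Repeating such steps until every processor succeeds is exactly where the expected-delay analysis (O(lg lg n) in this paper, versus Valiant's Θ(lg n)) comes in.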
Efficient Interconnection Schemes for VLSI and Parallel Computation
, 1989
Abstract
This thesis is primarily concerned with two problems of interconnecting components in VLSI technologies. In the first case, the goal is to construct efficient interconnection networks for general-purpose parallel computers. The second problem is a more specialized problem in the design of VLSI chips, namely multilayer channel routing. In addition, a final part of this thesis provides lower bounds on the area required for VLSI implementations of finite-state machines. This thesis shows that networks based on Leiserson's fat-tree architecture are nearly as good as any network built in a comparable amount of physical space. It shows that these "universal" networks can efficiently simulate competing networks by means of an appropriate correspondence between network components and efficient algorithms for routing messages on the universal network. In particular, a universal network of area A can simulate competing networks with O(lg^3 A) slowdown (in bit-times), using a very simple rando...
Deterministic PRAM Simulation with Constant Redundancy * (Preliminary Version)
Abstract
In this paper, we show that distributing the memory of a parallel computer and, thereby, decreasing its granularity allows a reduction in the redundancy required to achieve polylog simulation time for each PRAM step. Previously, realistic models of parallel computation assigned one memory module to each processor and, as a result, insisted on relatively coarse-grain memory. We propose, on the other hand, a more flexible, but equally valid model of computation, the distributed-memory, bounded-degree network (DMBDN) model. This model allows the use of fine-grain memory while maintaining the realism of a bounded-degree interconnection network. We describe a PRAM simulation scheme, which is admitted under the DMBDN model, that exploits the increased memory bandwidth provided by a two-dimensional mesh of trees (2D-MOT) network to achieve an overhead in memory redundancy lower than that required by other fast, deterministic PRAM simulations. Specifically, for a deterministic simulation of an n-processor PRAM on a bounded-degree network, we are able to reduce the number of copies of each variable from O(log n / log log n) to Θ(1) and still simulate each PRAM step in polylog time.
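The copies-per-variable idea above can be illustrated with a generic majority-quorum scheme for replicated memory: any two majorities of the c copies intersect, so a reader always sees the freshest write. This is a hedged sketch of the general replication technique only, not the paper's Θ(1)-redundancy construction on the mesh of trees; all names and addresses are hypothetical.

```python
def write_var(memory, locs, value, ts):
    """Write a timestamped value to one quorum (majority) of the copies."""
    for loc in locs:
        memory[loc] = (ts, value)

def read_var(memory, locs):
    """Read a quorum of copies; the highest timestamp wins, because any
    two majorities of the c copies share at least one up-to-date copy."""
    return max(memory[loc] for loc in locs)[1]

c = 5                          # copies per variable (the redundancy)
quorum = c // 2 + 1            # any two quorums of size 3 intersect
copies = [10, 20, 30, 40, 50]  # hypothetical module addresses
memory = {loc: (0, None) for loc in copies}

write_var(memory, copies[:quorum], "hello", ts=1)   # one majority
result = read_var(memory, copies[-quorum:])         # a different majority
assert result == "hello"
```

Quorum schemes like this tolerate slow or congested modules at the cost of c-fold memory overhead, which is exactly the redundancy the paper drives down to a constant.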
Interactive Animation of Fault Tolerant Parallel Algorithms, by Scott W. Apgar
, 1992
Abstract
Animation of algorithms makes understanding them intuitively easier. This paper describes the software tool Raft (Robust Animator of Fault Tolerant Algorithms). The Raft system allows the user to animate a number of parallel algorithms which achieve fault-tolerant execution. In particular, we use it to illustrate the key Write-All problem. It has an extensive user interface which allows a choice of the number of processors, the number of elements in the Write-All array, and the adversary used to control the processor failures. The novelty of the system is that the interface allows the user to create new on-line adversaries as the algorithm executes. Submitted in partial fulfillment of the requirements for the ...