Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web
In Proc. 29th ACM Symposium on Theory of Computing (STOC), 1997
Cited by 699 (10 self)

Abstract:
We describe a family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and where it is not feasible for every server to have complete information about the current state of the entire network. The protocols are easy to implement using existing network protocols such as TCP/IP, and require very little overhead. The protocols work with local control, make efficient use of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes minimally as the range of the function changes. Through the development of good consistent hash functions, we are able to develop caching protocols which do not require users to have a current or even consistent view of the network. We believe that consistent hash functions may eventually prove to be useful in other applications such as distributed name servers and/or quorum systems.
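The ring structure behind consistent hashing can be sketched in a few lines. The following is a minimal illustration of the idea described above, not the paper's protocol; the class name, the choice of MD5, and the 32-bit ring size are arbitrary assumptions made for the sketch.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Map a key (URL or cache name) to a point on a 32-bit circular space.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

class ConsistentHashRing:
    """Minimal consistent-hash ring: each cache owns a point on the circle;
    a key is served by the first cache clockwise from the key's point."""

    def __init__(self):
        self._points = []   # sorted ring positions
        self._caches = {}   # position -> cache name

    def add_cache(self, name: str) -> None:
        p = _hash(name)
        bisect.insort(self._points, p)
        self._caches[p] = name

    def remove_cache(self, name: str) -> None:
        p = _hash(name)
        self._points.remove(p)
        del self._caches[p]

    def lookup(self, key: str) -> str:
        # First ring point at or after the key's position, wrapping around.
        i = bisect.bisect(self._points, _hash(key)) % len(self._points)
        return self._caches[self._points[i]]
```

The "changes minimally" property is visible directly: removing a cache reassigns only the keys that pointed at it, and every other key keeps its assignment.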
Randomness is Linear in Space
Journal of Computer and System Sciences, 1993
Cited by 242 (20 self)

Abstract:
We show that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T·poly(S). A deterministic simulation in space S follows. Of independent interest is our main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.
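The flavor of "extract randomness using a small number of truly random bits" can be illustrated in the leftover-hash-lemma style: hash the defective sample with a short truly random seed drawn from a 2-universal family. This toy sketch is only an illustration of the interface, not the paper's construction; the function names and the specific hash family are assumptions.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime; x -> (a*x + b) % P is a 2-universal family

def extract(x: int, seed: tuple, m: int) -> int:
    """Hash a sample x from a defective source down to m output bits using a
    short truly random seed (a, b).  (Leftover-hash-lemma flavor only; the
    paper's extractor is a different, more efficient construction.)"""
    a, b = seed
    return ((a * x + b) % P) % (1 << m)

def fresh_seed() -> tuple:
    # The "small additional number of truly random bits".
    return random.randrange(1, P), random.randrange(P)
```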
Simple Efficient Load Balancing Algorithms for Peer-to-Peer Systems
In SPAA '04, 2004
Cited by 204 (1 self)

Abstract:
Load balancing is a critical issue for the efficient operation of peer-to-peer networks. We give two new load-balancing protocols whose provable performance guarantees are within a constant factor of optimal. Our protocols refine the consistent hashing data structure that underlies the Chord (and Koorde) P2P network. Both preserve Chord's logarithmic query time and near-optimal data migration cost. Consistent hashing is an instance of the distributed hash table (DHT) paradigm for assigning items to nodes in a peer-to-peer system: items and nodes are mapped to a common address space, and nodes have to store all items residing close by in the address space. Our first protocol balances the distribution of the key address space to nodes, which yields a load-balanced system when the DHT maps items "randomly" into the address space. To our knowledge, this yields the first P2P scheme simultaneously achieving O(log n) degree, O(log n) lookup cost, and constant-factor load balance (previous schemes settled for any two of the three). Our second protocol aims to directly balance the distribution of items among the nodes. This is useful when the distribution of items in the address space cannot be randomized. We give a simple protocol that balances load by moving nodes to arbitrary locations "where they are needed." As an application, we use the last protocol to give an optimal implementation of a distributed data structure for range searches on ordered data.
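The address-space-balancing idea can be simulated concretely. In the toy sketch below (assumed names; SHA-256 as the hash; not the paper's protocol), each node claims several "virtual" ring positions; giving each node many copies tightens the load spread, which is the intuition behind using O(log n) positions per node.

```python
import bisect
import hashlib

def h(s: str) -> int:
    # Hash a name to a point in a 32-bit circular address space.
    return int(hashlib.sha256(s.encode()).hexdigest(), 16) % (2**32)

def assign(nodes, items, copies):
    """Consistent hashing where each node claims `copies` ring positions
    (virtual nodes); each item is stored at the owner of the next position
    clockwise.  Returns the per-node item counts."""
    ring = sorted((h(f"{n}#{c}"), n) for n in nodes for c in range(copies))
    points = [p for p, _ in ring]
    load = {n: 0 for n in nodes}
    for it in items:
        i = bisect.bisect(points, h(it)) % len(points)
        load[ring[i][1]] += 1
    return load
```

With a single position per node, arc lengths (and hence loads) vary widely; with many positions per node, each node's load is a sum of many small arcs and concentrates near the average.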
Improved Approximation Algorithms for Shop Scheduling Problems
1994
Cited by 90 (7 self)

Abstract:
In the job shop scheduling problem we are given m machines and n jobs; a job consists of a sequence of operations, each of which must be processed on a specified machine; the objective is to complete all jobs as quickly as possible. This problem is strongly NP-hard even for very restrictive special cases. We give the first randomized and deterministic polynomial-time algorithms that yield polylogarithmic approximations to the optimal length schedule. Our algorithms also extend to the more general case where a job is given not by a linear ordering of the machines on which it must be processed but by an arbitrary partial order. Comparable bounds can also be obtained when there are m′ types of machines, a specified number of machines of each type, and each operation must be processed on one of the machines of a specified type, as well as for the problem of scheduling unrelated parallel machines subject to chain precedence constraints. Key Words: scheduling, approximation algorithms AM...
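For contrast with the polylogarithmic approximations above, a feasible job-shop schedule is easy to compute greedily. The sketch below (hypothetical names; not the paper's algorithm) respects only machine availability and the operation order within each job, and reports the resulting makespan.

```python
def greedy_makespan(jobs):
    """jobs: list of jobs, each a list of (machine, duration) operations in
    processing order.  Schedules jobs one after another: each operation
    starts as soon as both its machine and the job's previous operation are
    free.  Feasible, but in general far from optimal."""
    machine_free = {}            # machine -> time it becomes idle
    job_free = [0] * len(jobs)   # job -> completion time of its last op
    for j, ops in enumerate(jobs):
        for machine, dur in ops:
            start = max(job_free[j], machine_free.get(machine, 0))
            job_free[j] = machine_free[machine] = start + dur
    return max(job_free) if jobs else 0
```

On the two-job example in the test, the greedy order gives makespan 11 while an interleaved schedule achieves 7, illustrating why nontrivial approximation algorithms are needed.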
Scheduling Split Intervals
2002
Cited by 63 (5 self)

Abstract:
We consider the problem of scheduling jobs that are given as groups of non-intersecting segments on the real line. Each job Jj is associated with a t-interval, Ij, which consists of up to t segments, for some t ≥ 1. Two jobs conflict if any of their segments intersect. Such jobs show up in a wide range of applications, including the transmission of continuous-media data and the allocation of linear resources (e.g., bandwidth in linear processor arrays).
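The conflict structure of t-intervals is easy to make concrete. This sketch (illustrative names only; the paper's algorithms come with approximation guarantees that this naive greedy does not) checks segment intersection and accepts jobs first-come-first-served.

```python
def conflicts(job_a, job_b):
    """Each job is a t-interval: a list of disjoint (start, end) open
    segments on the real line.  Two jobs conflict if any segment of one
    intersects any segment of the other."""
    return any(a_start < b_end and b_start < a_end
               for a_start, a_end in job_a
               for b_start, b_end in job_b)

def greedy_schedule(jobs):
    """Toy greedy: accept jobs in the given order, skipping any job that
    conflicts with an already accepted one."""
    accepted = []
    for job in jobs:
        if not any(conflicts(job, other) for other in accepted):
            accepted.append(job)
    return accepted
```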
The Distance-2 Matching Problem and Its Relationship to the MAC-Layer Capacity of Ad Hoc Wireless Networks
IEEE Journal on Selected Areas in Communications, 2004
Cited by 60 (6 self)

Abstract:
We consider the problem of determining the maximum capacity of the media access (MAC) layer in wireless ad hoc networks. Due to spatial contention for the shared wireless medium, not all nodes can concurrently transmit packets to each other in these networks. The maximum number of possible concurrent transmissions is, therefore, an estimate of the maximum network capacity, and depends on the MAC protocol being used. We show that for a large class of MAC protocols based on virtual carrier sensing using RTS/CTS messages, which includes the popular IEEE 802.11 standard, this problem may be modeled as a maximum distance-2 matching (D2EMIS) in the underlying wireless network: given a graph G = (V, E), find a maximum set of edges E′ ⊆ E such that no two edges in E′ are connected by another edge in E. D2EMIS is NP-complete. Our primary goal is to show that it ...
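The distance-2 matching condition is straightforward to verify directly. The checker below (assumed names, plain Python) tests that no two chosen edges share an endpoint or are joined by a third edge of the graph, which is exactly the D2EMIS constraint described above.

```python
def is_distance2_matching(edges, chosen):
    """True iff `chosen` is a distance-2 matching of the graph given by
    `edges`: no two chosen edges share an endpoint, and no graph edge
    connects an endpoint of one chosen edge to an endpoint of another."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for i, (a, b) in enumerate(chosen):
        for c, d in chosen[i + 1:]:
            ends1, ends2 = {a, b}, {c, d}
            if ends1 & ends2:
                return False  # share an endpoint
            if any(y in adj.get(x, set()) for x in ends1 for y in ends2):
                return False  # linked by another edge of the graph
    return True
```

On a 5-vertex path, edges (1,2) and (3,4) are ruled out because edge (2,3) connects them, while (1,2) and (4,5) are a valid distance-2 matching.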
A Jamming-Resistant MAC Protocol for Single-Hop Wireless Networks
2008
Cited by 54 (11 self)

Abstract:
In this paper we consider the problem of designing a medium access control (MAC) protocol for single-hop wireless networks that is provably robust against adaptive adversarial jamming. The wireless network consists of a set of honest and reliable nodes that are within the transmission range of each other. In addition to these nodes there is an adversary. The adversary may know the protocol and its entire history and use this knowledge to jam the wireless channel at will at any time. It is allowed to jam a (1 − ɛ)-fraction of the time steps, for an arbitrary constant ɛ > 0, but it has to make a jamming decision before it knows the actions of the nodes at the current step. The nodes cannot distinguish between adversarial jamming and a collision of two or more messages that are sent at the same time. We demonstrate, for the first time, that there is a local-control MAC protocol requiring only very limited knowledge about the adversary and the network that achieves a constant throughput for the non-jammed time steps under any adversarial strategy above. We also show that our protocol is very energy efficient and that it can be extended to obtain a robust and efficient protocol for leader election and the fair use of the wireless channel.
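As a back-of-the-envelope illustration of "throughput on the non-jammed steps," here is a toy slotted-time simulation. It is emphatically not the paper's protocol: nodes here just send with a fixed probability, and the jammer is oblivious rather than adaptive; all names and parameters are assumptions for the sketch.

```python
import random

def simulate(n_nodes, steps, eps, p, seed=0):
    """Toy slotted-time experiment: an oblivious adversary jams a
    (1 - eps)-fraction of steps; in each step every node sends with
    probability p.  A step succeeds if it is not jammed and exactly one
    node sends.  Returns the success fraction among *non-jammed* steps."""
    rng = random.Random(seed)
    successes = open_steps = 0
    for _ in range(steps):
        if rng.random() < 1 - eps:   # this step is jammed
            continue
        open_steps += 1
        senders = sum(rng.random() < p for _ in range(n_nodes))
        successes += (senders == 1)
    return successes / max(open_steps, 1)
```

With p near 1/n the expected per-step success rate is about n·p·(1−p)^(n−1) ≈ 1/e, i.e., constant throughput on the open steps; the hard part the paper solves is achieving this without knowing n or ɛ, against an adaptive jammer.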
Parallel repetition in projection games and a concentration bound
In Proc. 40th STOC, 2008
Cited by 42 (8 self)

Abstract:
In a two-player game, a referee asks two cooperating players (who are not allowed to communicate) questions sampled from some distribution and decides whether they win or not based on some predicate of the questions and their answers. The parallel repetition of the game is the game in which the referee samples n independent pairs of questions and sends corresponding questions to the players simultaneously. If the players cannot win the original game with probability better than (1 − ǫ), what's the best they can do in the repeated game? We improve earlier results [Raz98, Hol07], which showed that the players cannot win all copies in the repeated game with probability better than (1 − ǫ^3)^{Ω(n/c)} (here c is the length of the answers in the game), in the following ways:

• We prove the bound (1 − ǫ^2)^{Ω(n)} as long as the game is a "projection game", the type of game most commonly used in hardness of approximation results. Our bound is independent of the answer length and has a better dependence on ǫ. By the recent work of Raz [Raz08], this bound is tight. A consequence of this bound is that the Unique Games Conjecture of Khot [Kho02] is equivalent to: Unique Games Conjecture: There is an unbounded increasing function f: R^+ → R^+ such that for every ǫ > 0, there exists an alphabet size M(ǫ) for which it is NP-hard to distinguish a Unique Game with alphabet size M in which a 1 − ǫ^2 fraction of the constraints can be satisfied from one in which a 1 − ǫ·f(1/ǫ) fraction of the constraints can be satisfied.

• We prove a concentration bound for parallel repetition (of general games) showing that for any constant 0 < δ < ǫ, the probability that the players win a (1 − ǫ + δ) fraction of the games in the parallel repetition is at most exp(−Ω(δ^4 n/c)). An application of this is in testing Bell Inequalities. Our result implies that the parallel repetition of the CHSH game can be used to get an experiment that has a very large classical versus quantum gap.
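The 2-fold parallel repetition of the CHSH game mentioned above is small enough to brute-force, which makes the "what's the best they can do in the repeated game?" question concrete. This sketch (illustrative names) enumerates all deterministic strategies, where each player's answer pair may depend on both of their questions.

```python
from itertools import product

def chsh_win(x, y, a, b):
    # CHSH predicate: the answers must satisfy a XOR b == x AND y.
    return (a ^ b) == (x & y)

def best_two_fold_value():
    """Classical value of the 2-fold parallel repetition of CHSH: maximize,
    over all deterministic strategies mapping a player's question pair to an
    answer pair, the probability (questions uniform) of winning BOTH
    coordinates."""
    questions = list(product([0, 1], repeat=2))
    strategies = list(product(product([0, 1], repeat=2), repeat=4))
    best = 0.0
    for alice in strategies:
        amap = dict(zip(questions, alice))
        for bob in strategies:
            bmap = dict(zip(questions, bob))
            wins = sum(
                chsh_win(x1, y1, amap[(x1, x2)][0], bmap[(y1, y2)][0]) and
                chsh_win(x2, y2, amap[(x1, x2)][1], bmap[(y1, y2)][1])
                for (x1, x2) in questions for (y1, y2) in questions)
            best = max(best, wins / 16)
    return best
```

Playing each copy independently with the optimal single-game strategy wins with probability (3/4)^2 = 9/16, and winning both copies is never easier than winning one, so the brute-forced value must land in [9/16, 3/4]; the interesting fact parallel repetition theorems quantify is how far below 3/4 it falls as n grows.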
Non-malleable codes
In ICS, 2010
Cited by 42 (5 self)

Abstract:
We introduce the notion of "non-malleable codes", which relaxes the notions of error-correction and error-detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message or a completely unrelated value. In contrast to error-correction and error-detection, non-malleability can be achieved for very rich classes of modifications. We construct an efficient code that is non-malleable with respect to modifications that affect each bit of the codeword arbitrarily (i.e., leave it untouched, flip it, or set it to either 0 or 1), but independently of the value of the other bits of the codeword. Using the probabilistic method, we also show a very strong and general statement: there exists a non-malleable code for every "small enough" family F of functions via which codewords can be modified. Although this probabilistic method argument does not directly yield efficient constructions, it gives us efficient non-malleable codes in the random-oracle model for very general classes of tampering functions — e.g., functions where every bit in the tampered codeword can depend arbitrarily on any 99% of the bits in the original codeword. As an application of non-malleable codes, we show that they provide an elegant algorithmic solution to the task of protecting functionalities implemented in hardware (e.g., signature cards) against "tampering attacks". In such attacks, the secret state of a physical system is tampered with, in the hope that future interaction with the modified system will reveal some secret information. This problem was previously studied in the work of Gennaro et al. in 2004 under the name "algorithmic tamper-proof security" (ATP). We show that non-malleable codes can be used to achieve important improvements over the prior work. In particular, we show that any functionality can be made secure against a large class of tampering attacks, simply by encoding the secret state with a non-malleable code while it is stored in memory.
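The failure mode that non-malleable codes rule out is easy to demonstrate on a classical code. The sketch below (illustrative names; a 3x repetition code standing in for any naive encoding) shows bitwise tampering producing a *related* decoded message — precisely the outcome non-malleability excludes.

```python
def encode(msg_bits):
    # Naive 3x repetition code: good for error-correction, highly malleable.
    return [b for b in msg_bits for _ in range(3)]

def decode(code_bits):
    # Majority vote within each 3-bit block.
    return [int(sum(code_bits[i:i + 3]) >= 2)
            for i in range(0, len(code_bits), 3)]

def tamper_flip_block(code_bits, i):
    """Bitwise tampering within the allowed class (each codeword bit flipped
    independently of the others): flip the three copies of message bit i.
    Against a repetition code this deterministically flips that message bit,
    yielding a related -- not unrelated -- message."""
    out = list(code_bits)
    for j in range(3 * i, 3 * i + 3):
        out[j] ^= 1
    return out
```

A non-malleable code would force the same tampering function to decode either to the original message or to something statistically independent of it.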
An Extension of the Lovász Local Lemma, and its Applications to Integer Programming
In Proceedings of the 7th Annual ACM-SIAM Symposium on Discrete Algorithms, 1996
Cited by 38 (7 self)

Abstract:
The Lovász Local Lemma (LLL) is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. We consider three classes of NP-hard integer programs: minimax, packing, and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan & Thompson to derive good approximation algorithms for such problems. We use our extended LLL to prove that randomized rounding produces, with nonzero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are sparse (e.g., VLSI routing using short paths, problems on hypergraphs with small dimension/degree). We also generalize the method of pessimistic estimators due to Raghavan, to constructivize our packing and covering results.
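Randomized rounding, the Raghavan-Thompson technique the extension strengthens, fits in a few lines. Here is a hedged sketch for covering problems; the names, the toy set-cover instance, and the fixed fractional solution are assumptions, and the scale factor plays the role of the O(log m) inflation used to make all constraints hold with high probability.

```python
import random

def randomized_round(x_frac, scale, rng):
    """Round a fractional covering solution: include item i independently
    with probability min(1, scale * x_i).  With scale = O(log m), all m
    covering constraints hold with high probability."""
    return [1 if rng.random() < min(1.0, scale * xi) else 0 for xi in x_frac]

def covers(sets, universe, chosen):
    """Check the covering constraints for a set-cover instance."""
    covered = set()
    for i, picked in enumerate(chosen):
        if picked:
            covered |= sets[i]
    return covered >= universe
```

The paper's point is that when the constraint matrix is sparse, the extended LLL shows the rounded solution can be much better than the generic high-probability bound suggests.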