Results 1–10 of 82
Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web
In Proc. 29th ACM Symposium on Theory of Computing (STOC), 1997
Abstract

Cited by 529 (12 self)
We describe a family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and where it is not feasible for every server to have complete information about the current state of the entire network. The protocols are easy to implement using existing network protocols such as TCP/IP, and require very little overhead. The protocols work with local control, make efficient use of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes minimally as the range of the function changes. Through the development of good consistent hash functions, we are able to develop caching protocols which do not require users to have a current or even consistent view of the network. We believe that consistent hash functions may eventually prove to be useful in other applications such as distributed name servers and/or quorum systems.
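The core idea of the abstract — hash both caches and items onto a ring, and serve each item from the first cache clockwise of its hash, so that adding or removing a cache remaps only nearby items — can be sketched as follows. This is a minimal illustration, not the paper's protocols; the class name `ConsistentHashRing` and the choice of MD5 are invented for the example.

```python
import hashlib
from bisect import bisect_right, insort

def _hash(key: str) -> int:
    # Map any string to a point on a 2**32 ring (illustrative hash choice).
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

class ConsistentHashRing:
    """Minimal consistent-hashing ring: caches sit at hashed points;
    each item is served by the first cache clockwise of its hash."""

    def __init__(self):
        self._points = []   # sorted list of cache hash points
        self._cache = {}    # hash point -> cache name

    def add_cache(self, name: str) -> None:
        p = _hash(name)
        insort(self._points, p)
        self._cache[p] = name

    def remove_cache(self, name: str) -> None:
        p = _hash(name)
        self._points.remove(p)
        del self._cache[p]

    def lookup(self, item: str) -> str:
        # First cache point at or after the item's point, wrapping around.
        p = _hash(item)
        i = bisect_right(self._points, p) % len(self._points)
        return self._cache[self._points[i]]
```

The "changes minimally" property shows up directly: removing a cache reassigns only the items that were mapped to it, and every other item keeps its cache.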
Randomness is Linear in Space
Journal of Computer and System Sciences, 1993
Abstract

Cited by 233 (19 self)
We show that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits, in space S and time T · poly(S). A deterministic simulation in space S follows. Of independent interest is our main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.
Simple Efficient Load Balancing Algorithms for Peer-to-Peer Systems
SPAA'04, 2004
Abstract

Cited by 140 (0 self)
Load balancing is a critical issue for the efficient operation of peer-to-peer networks. We give two new load-balancing protocols whose provable performance guarantees are within a constant factor of optimal. Our protocols refine the consistent hashing data structure that underlies the Chord (and Koorde) P2P network. Both preserve Chord’s logarithmic query time and near-optimal data migration cost. Consistent hashing is an instance of the distributed hash table (DHT) paradigm for assigning items to nodes in a peer-to-peer system: items and nodes are mapped to a common address space, and nodes have to store all items residing close by in the address space. Our first protocol balances the distribution of the key address space to nodes, which yields a load-balanced system when the DHT maps items “randomly” into the address space. To our knowledge, this yields the first P2P scheme simultaneously achieving O(log n) degree, O(log n) lookup cost, and constant-factor load balance (previous schemes settled for any two of the three). Our second protocol aims to directly balance the distribution of items among the nodes. This is useful when the distribution of items in the address space cannot be randomized. We give a simple protocol that balances load by moving nodes to arbitrary locations “where they are needed.” As an application, we use the last protocol to give an optimal implementation of a distributed data structure for range searches on ordered data.
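The address-space-balancing idea the abstract describes can be illustrated with the generic virtual-nodes heuristic: hash each node to several points so its share of the address space concentrates near the average. To be clear, this is a standard textbook heuristic sketched under assumed names (`point`, `assign_items`), not either of the paper's two protocols.

```python
import hashlib
from bisect import bisect_right

def point(s: str) -> float:
    # Hash a string to a point in the unit-interval address space
    # (SHA-1 gives 160 bits, so dividing by 16**40 lands in [0, 1)).
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) / 16**40

def assign_items(items, nodes, vnodes=1):
    """Map each item to the node owning the first virtual point
    clockwise of the item's hash, wrapping around the ring."""
    ring = sorted((point(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
    keys = [p for p, _ in ring]
    return {it: ring[bisect_right(keys, point(it)) % len(ring)][1]
            for it in items}
```

With Θ(log n) virtual points per node, the maximum arc owned by any node is within a constant factor of the average with high probability; the price is Θ(log n) ring entries per node, which is what the paper's first protocol avoids while keeping O(log n) degree.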
Improved Approximation Algorithms for Shop Scheduling Problems
1994
Abstract

Cited by 82 (7 self)
In the job shop scheduling problem we are given m machines and n jobs; a job consists of a sequence of operations, each of which must be processed on a specified machine; the objective is to complete all jobs as quickly as possible. This problem is strongly NP-hard even for very restrictive special cases. We give the first randomized and deterministic polynomial-time algorithms that yield polylogarithmic approximations to the optimal length schedule. Our algorithms also extend to the more general case where a job is given not by a linear ordering of the machines on which it must be processed but by an arbitrary partial order. Comparable bounds can also be obtained when there are m′ types of machines, a specified number of machines of each type, and each operation must be processed on one of the machines of a specified type, as well as for the problem of scheduling unrelated parallel machines subject to chain precedence constraints. Key words: scheduling, approximation algorithms. AM...
The Distance-2 Matching Problem and Its Relationship to the MAC-Layer Capacity of Ad Hoc Wireless Networks
IEEE Journal on Selected Areas in Communications, 2004
Abstract

Cited by 42 (5 self)
We consider the problem of determining the maximum capacity of the media access (MAC) layer in wireless ad hoc networks. Due to spatial contention for the shared wireless medium, not all nodes can concurrently transmit packets to each other in these networks. The maximum number of possible concurrent transmissions is, therefore, an estimate of the maximum network capacity, and depends on the MAC protocol being used. We show that for a large class of MAC protocols based on virtual carrier sensing using RTS/CTS messages, which includes the popular IEEE 802.11 standard, this problem may be modeled as a maximum distance-2 matching (D2EMIS) in the underlying wireless network: given a graph G = (V, E), find a set of edges E′ ⊆ E such that no two edges in E′ are connected by another edge in E. D2EMIS is NP-complete. Our primary goal is to show that it
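The distance-2 matching constraint is easy to state as a checker: a candidate edge set is valid only if no two of its edges share an endpoint or are joined by a third edge of the graph. The sketch below is an illustrative verifier under assumed names (`is_distance2_matching`), not an algorithm from the paper (the optimization problem itself is NP-complete, as the abstract notes).

```python
def is_distance2_matching(edges, matching):
    """Return True iff `matching` is a distance-2 (induced) matching of the
    graph given by `edges`: every chosen edge is a graph edge, no two chosen
    edges share an endpoint, and no graph edge connects two chosen edges."""
    eset = {frozenset(e) for e in edges}
    matching = [frozenset(e) for e in matching]
    if any(e not in eset for e in matching):
        return False
    for i in range(len(matching)):
        for j in range(i + 1, len(matching)):
            a, b = matching[i], matching[j]
            if a & b:  # the two chosen edges share an endpoint
                return False
            # a graph edge between an endpoint of one and of the other
            if any(frozenset((u, v)) in eset for u in a for v in b):
                return False
    return True
```

On the path 1–2–3–4–5, for instance, {(1,2), (4,5)} is a valid distance-2 matching, while {(1,2), (3,4)} is not, because the edge (2,3) connects the two.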
An Extension of the Lovász Local Lemma, and its Applications to Integer Programming
In Proceedings of the 7th Annual ACM-SIAM Symposium on Discrete Algorithms, 1996
Abstract

Cited by 31 (6 self)
The Lovász Local Lemma (LLL) is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. We consider three classes of NP-hard integer programs: minimax, packing, and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan & Thompson to derive good approximation algorithms for such problems. We use our extended LLL to prove that randomized rounding produces, with nonzero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are sparse (e.g., VLSI routing using short paths, problems on hypergraphs with small dimension/degree). We also generalize the method of pessimistic estimators due to Raghavan, to constructivize our packing and covering results.
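For reference, the symmetric form of the LLL that the abstract's extension generalizes can be stated as follows (this is the standard textbook statement, not the paper's extended version): if each "bad" event has probability at most p and is mutually independent of all but at most d of the others, then

```latex
% Symmetric Lovász Local Lemma (standard form).
% Events A_1, ..., A_n; Pr[A_i] <= p for all i, and each A_i is mutually
% independent of all but at most d of the other events.
\[
  e \, p \, (d + 1) \le 1
  \quad\Longrightarrow\quad
  \Pr\!\Bigl[\,\bigcap_{i=1}^{n} \overline{A_i}\,\Bigr] > 0 ,
\]
```

i.e., with positive probability no bad event occurs, even though that probability may be exponentially small — which is why constructive versions are nontrivial.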
New Algorithmic Aspects Of The Local Lemma With Applications To Routing And Partitioning
Abstract

Cited by 30 (5 self)
The Lovász Local Lemma (LLL) is a powerful tool that is increasingly playing a valuable role in computer science. The original lemma was nonconstructive; a breakthrough of Beck and its generalizations (due to Alon and Molloy & Reed) have led to constructive versions. However, these methods do not capture some classes of applications of the LLL. We make progress on this, by providing algorithmic approaches to two families of applications of the LLL. The first provides constructive versions of certain applications of an extension of the LLL (modeling, e.g., hypergraph-partitioning and low-congestion routing problems); the second provides new algorithmic results on constructing disjoint paths in graphs. Our results can also be seen as constructive upper bounds on the integrality gap of certain packing problems. One common theme of our work is a "gradual rounding" approach.
Better approximation guarantees for job-shop scheduling
SIAM Journal on Discrete Mathematics, 1997
Abstract

Cited by 30 (2 self)
Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein, and Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case, where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.
Short Shop Schedules
1995
Abstract

Cited by 29 (0 self)
We consider the open shop, job shop, and flow shop scheduling problems with integral processing times. We give polynomial-time algorithms to determine if an instance has a schedule of length at most 3, and show that deciding if there is a schedule of length at most 4 is NP-complete. The latter result implies that, unless P = NP, there does not exist a polynomial-time approximation algorithm for any of these problems that constructs a schedule with length guaranteed to be strictly less than 5/4 times the optimal length. This work constitutes the first nontrivial theoretical evidence that shop scheduling problems are hard to solve even approximately.
Uniform Generation of NP-witnesses using an NP-oracle
Information and Computation, 1997
Abstract

Cited by 29 (1 self)
A uniform generation procedure for NP is an algorithm which, given any input in a fixed NP-language, outputs a uniformly distributed NP-witness for membership of the input in the language. We present a uniform generation procedure for NP that runs in probabilistic polynomial time with an NP-oracle. This improves upon results of Jerrum, Valiant and Vazirani, which either require a Σ₂^P oracle or obtain only almost uniform generation. Our procedure utilizes ideas originating in the works of Sipser, Stockmeyer, and Jerrum, Valiant and Vazirani.