Results 1–10 of 43
The Power of Two Choices in Randomized Load Balancing
 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
, 1996
Abstract

Cited by 201 (23 self)
Suppose that n balls are placed into n bins, each ball being placed into a bin chosen independently and uniformly at random. Then, with high probability, the maximum load in any bin is approximately log n / log log n. Suppose instead that each ball is placed sequentially into the least full of d bins chosen independently and uniformly at random. It has recently been shown that the maximum load is then only log log n / log d + O(1) with high probability. Thus giving each ball two choices instead of just one leads to an exponential improvement in the maximum load. This result demonstrates the power of two choices, and it has several applications to load balancing in distributed systems. In this thesis, we expand upon this result by examining related models and by developing techniques for stu...
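The contrast described in this abstract is easy to observe empirically. Below is a minimal simulation sketch (not from the thesis; the function name is my own) that throws n balls into n bins with d uniform choices per ball and reports the maximum load:

```python
import random

def max_load(n, d, seed=0):
    """Throw n balls into n bins; each ball goes to the least full of
    d bins sampled independently and uniformly at random."""
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        choices = [rng.randrange(n) for _ in range(d)]
        bins[min(choices, key=lambda i: bins[i])] += 1
    return max(bins)

# d = 1 behaves like log n / log log n; d = 2 drops to about log log n / log 2.
print(max_load(100_000, d=1), max_load(100_000, d=2))
```

With d = 2 the observed maximum load stays in the single digits even for large n, consistent with the log log n / log d + O(1) bound.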
The Power of Two Random Choices: A Survey of Techniques and Results
 in Handbook of Randomized Computing
, 2000
Abstract

Cited by 99 (2 self)
To motivate this survey, we begin with a simple problem that demonstrates a powerful fundamental idea. Suppose that n balls are thrown into n bins, with each ball choosing a bin independently and uniformly at random. Then the maximum load, or the largest number of balls in any bin, is approximately log n / log log n with high probability. Now suppose instead that the balls are placed sequentially, and each ball is placed in the least loaded of d ≥ 2 bins chosen independently and uniformly at random. Azar, Broder, Karlin, and Upfal showed that in this case, the maximum load is log log n / log d + Θ(1) with high probability [ABKU99]. The important implication of this result is that even a small amount of choice can lead to drastically different results in load balancing. Indeed, having just two random choices (i.e.,...
Push-to-peer video-on-demand system: Design and evaluation
 In UMass Computer Science Technical Report 2006–59
, 2006
"... Number: CRPRL2006110001 ..."
The natural work-stealing algorithm is stable
 In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS)
, 2001
Abstract

Cited by 24 (1 self)
In this paper we analyse a very simple dynamic work-stealing algorithm. In the work-generation model, there are n (work) generators. A generator-allocation function is simply a function from the n generators to the n processors. We consider a fixed, but arbitrary, distribution D over generator-allocation functions. During each time-step of our process, a generator-allocation function h is chosen from D, and the generators are allocated to the processors according to h. Each generator may then generate a unit-time task which it inserts into the queue of its host processor. It generates such a task independently with probability λ. After the new tasks are generated, each processor removes one task from its queue and services it. For many choices of D, the work-generation model allows the load to become arbitrarily imbalanced, even when λ < 1. For example, D could be the point distribution containing a single function h which allocates all of the generators to just one processor. For this choice of D, the chosen processor receives around λn units of work at each step and services one. The natural work-stealing algorithm that we analyse is widely used in practical applications and works as follows. During each time step, each empty
Efficient gossip-based aggregate computation
 in Proc. ACM SIGACT-SIGMOD Symp. on Principles of Database Systems
Abstract

Cited by 22 (0 self)
Recently, there has been a growing interest in gossip-based protocols that employ randomized communication to ensure robust information dissemination. In this paper, we present a novel gossip-based scheme with which all the nodes in an n-node overlay network can compute the common aggregates of MIN, MAX, SUM, AVERAGE, and RANK of their values using O(n log log n) messages within O(log n log log n) rounds of communication. To the best of our knowledge, ours is the first result that shows how to compute these aggregates with high probability using only O(n log log n) messages. In contrast, the best known gossip-based algorithm for computing these aggregates requires O(n log n) messages and O(log n) rounds. Thus, our algorithm allows system designers to trade off a small increase in round complexity with a significant reduction in message complexity. This can lead to dramatically lower network congestion and longer node lifetimes in wireless and sensor networks, where channel bandwidth and battery life are severely constrained.
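As a point of comparison for the AVERAGE aggregate, the classic push-sum style of gossip can be sketched as below. This is a generic illustration of gossip averaging, not the message-optimal scheme of this paper:

```python
import random

def gossip_average(values, rounds, seed=0):
    """Push-sum sketch: each node keeps a (sum, weight) pair; every
    round it keeps half of both and sends the other half to one
    uniformly random peer.  Each node's estimate sum/weight converges
    to the global average, since total sum and weight are conserved."""
    rng = random.Random(seed)
    n = len(values)
    s = [float(v) for v in values]
    w = [1.0] * n
    for _ in range(rounds):
        deliveries = []
        for i in range(n):
            j = rng.randrange(n)
            deliveries.append((j, s[i] / 2, w[i] / 2))
            s[i] /= 2
            w[i] /= 2
        for j, ds, dw in deliveries:
            s[j] += ds
            w[j] += dw
    return [s[i] / w[i] for i in range(n)]
```

Each round sends exactly n messages, so running O(log n) rounds uses the O(n log n) messages cited above as the prior state of the art.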
On Balls and Bins with Deletions
 In Proc. of RANDOM'98
, 1998
"... Microsystems. The views and conclusions contained here are those of the authors and should not be interpreted as necessarily representing the official policies or ..."
Abstract

Cited by 19 (1 self)
Parallel Balanced Allocations
 IN PROCEEDINGS OF THE 8TH ANNUAL ACM SYMPOSIUM ON PARALLEL ALGORITHMS AND ARCHITECTURES
, 1996
Abstract

Cited by 19 (1 self)
We study the well known problem of throwing m balls into n bins. If each ball in the sequential game is allowed to select more than one bin, the maximum load of the bins can be exponentially reduced compared to the 'classical balls into bins' game. We consider a static and a dynamic variant of a randomized parallel allocation where each ball can choose a constant number of bins. All results hold with high probability. In the static case all m balls arrive at the same time. We analyze for m = n a very simple optimal class of protocols achieving maximum load O((log n / log log n)^{1/r}) if r rounds of communication are allowed. This matches the lower bound of [ACMR95]. Furthermore, we generalize the protocols to the case of m > n balls. An optimal load of O(m/n) can be achieved using log log n / log(m/n) rounds of communication. Hence, for m = n log log n / log log log n balls this slackness allows us to hide the amount of communication. In the 'classical balls into bins' game this op...
Performance Availability for Networks of Workstations
, 1999
Abstract

Cited by 17 (5 self)
Performance Availability for Networks of Workstations by Remzi H. Arpaci-Dusseau Software systems for large-scale distributed and parallel machines are difficult to build. When run in dynamic, production environments, not only must such systems perform correctly, but they must also operate with high performance. Much of the previous work in distributed computing has addressed the design of large-scale systems that function correctly, in spite of correctness faults of individual components [18, 49, 82, 86]. However, there has been little development of techniques to tolerate performance faults: unexpected performance fluctuations from the components that comprise the system. Due to this shortcoming, many systems are overly sensitive to performance variations, in that global performance is high if and only if all system components perform exactly as expected. In this dissertation, we address this deficiency by formalizing the concept of performance availability. Our hypothesis is ...
Efficient hashing with lookups in two memory accesses
 in 16th SODA, ACM-SIAM
Abstract

Cited by 16 (3 self)
The study of hashing is closely related to the analysis of balls and bins. Azar et al. [1] showed that instead of using a single hash function, if we randomly hash a ball into two bins and place it in the smaller of the two, then this dramatically lowers the maximum load on bins. This leads to the concept of two-way hashing, where the largest bucket contains O(log log n) balls with high probability. The hash lookup will now search in both the buckets an item hashes to. Since an item may be placed in one of two buckets, we could potentially move an item after it has been initially placed to reduce maximum load. Using this fact, we present a simple, practical hashing scheme that maintains a maximum load of 2, with high probability, while achieving high memory utilization. In fact, with n buckets, even if the space for two items is preallocated per bucket, as may be desirable in hardware implementations, more than n items can be stored, giving a high memory utilization. Assuming truly random hash functions, we prove the following properties for our hashing scheme. • Each lookup takes two random memory accesses, and reads at most two items per access. • Each insert takes O(log n) time and up to log log n + O(1) moves, with high probability, and constant time in expectation. • Maintains 83.75% memory utilization, without requiring dynamic allocation during inserts. We also analyze the tradeoff between the number of moves performed during inserts and the maximum load on a bucket. By performing at most h moves, we can maintain a maximum load of O(log log n / (h log(log log n / h))). So, even by performing one move, we achieve a better bound than by performing no moves at all.
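The flavor of move-based insertion can be illustrated with the following toy two-choice scheme, which attempts a single relocation when both candidate buckets are full. This is a hypothetical simplification for illustration, not the paper's actual algorithm (which may perform up to log log n + O(1) moves):

```python
def insert(table, key, hash1, hash2, cap=2):
    """Toy two-way hashing with one relocation attempt: place `key` in
    the emptier of its two buckets; if both hold `cap` items, try to
    move one resident item to that item's alternate bucket."""
    b1, b2 = hash1(key), hash2(key)
    target = b1 if len(table[b1]) <= len(table[b2]) else b2
    if len(table[target]) < cap:
        table[target].append(key)
        return True
    # Both buckets full: look for a resident whose other bucket has room.
    for b in (b1, b2):
        for item in table[b]:
            alt = hash2(item) if hash1(item) == b else hash1(item)
            if len(table[alt]) < cap:
                table[b].remove(item)
                table[alt].append(item)
                table[b].append(key)
                return True
    return False  # a real scheme would keep moving items
```

A quick check with 64 integer keys, 64 buckets, and two slots per bucket places essentially every key without any bucket ever exceeding the load cap of 2.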
Recovery time of dynamic allocation processes
 IN PROCEEDINGS OF THE 10TH ANNUAL ACM SYMPOSIUM ON PARALLEL ALGORITHMS AND ARCHITECTURES, PUERTO VALLARTA, MEXICO, 28 JUNE–2
, 1998
Abstract

Cited by 13 (3 self)
Many distributed protocols arising in applications in online load balancing and dynamic resource allocation can be modeled by dynamic allocation processes related to the "balls into bins" problems. Traditionally the main focus of the research on dynamic allocation processes is on verifying whether a given process is stable, and if so, on analyzing its behavior in the limit (i.e., after sufficiently many steps). Once we know that the process is stable and we know its behavior in the limit, it is natural to analyze its recovery time, which is the time needed by the process to recover from any arbitrarily bad situation and to arrive very close to a stable (i.e., a typical) state. This investigation is important to provide assurance that even if at some stage the process has reached a highly undesirable state, we can predict with high confidence its behavior after the estimated recovery time. In this paper we present a general framework to study the recovery time of discrete-time dynamic allocation processes. We model allocation processes by suitably chosen ergodic Markov chains. For a given Markov chain we apply path coupling arguments to bound its convergence rates to the stationary distribution, which directly yields the estimation of the recovery time of the corresponding allocation process. Our coupling approach provides in a relatively simple way an accurate prediction of the recovery time. In particular, we show that our method can be applied to significantly improve estimations of the recovery time for various allocation processes related to allocations of balls into bins, and for the edge orientation problem studied before by Ajtai et al.
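The notion of recovery time can be made concrete with a toy experiment (my own construction, not the paper's Markov-chain framework): start a balls-into-bins process in an arbitrarily bad state and count how many steps a simple two-choice rebalancing rule needs before the maximum load looks typical again:

```python
import random

def recovery_steps(n, target, seed=0, limit=10**6):
    """Start with all n balls in bin 0 (the worst state); each step, a
    uniformly random ball moves to the less loaded of two uniformly
    random bins.  Return the number of steps until the maximum load
    first drops to `target` (or `limit` if that never happens)."""
    rng = random.Random(seed)
    balls = [0] * n          # ball index -> bin index
    bins = [0] * n
    bins[0] = n
    steps = 0
    while max(bins) > target and steps < limit:
        b = rng.randrange(n)                      # pick a random ball
        i, j = rng.randrange(n), rng.randrange(n)  # two random bins
        dest = i if bins[i] <= bins[j] else j
        bins[balls[b]] -= 1
        bins[dest] += 1
        balls[b] = dest
        steps += 1
    return steps
```

Even from the fully concentrated state, the measured step count is modest, which is the kind of guarantee a recovery-time bound formalizes.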