Results 1–10 of 16
Hundreds of Impossibility Results for Distributed Computing
 Distributed Computing
, 2003
Abstract

Cited by 52 (5 self)
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
A Theory of Competitive Analysis for Distributed Algorithms
, 1994
Abstract

Cited by 32 (5 self)
We introduce a theory of competitive analysis for distributed algorithms. The first steps in this direction were made in the seminal papers of Bartal, Fiat, and Rabani [17], and of Awerbuch, Kutten, and Peleg [15], in the context of data management and job scheduling. In these papers, as well as in other subsequent work [14, 4, 18], the cost of a distributed algorithm is compared to the cost of an optimal global-control algorithm. Here we introduce a more refined notion of competitiveness for distributed algorithms, one that reflects the performance of distributed algorithms more accurately. In particular, our theory allows one to compare the cost of a distributed online algorithm to the cost of an optimal distributed algorithm. We demonstrate our method by studying the
Modular Competitiveness for Distributed Algorithms
 In Proc. 28th ACM Symp. on Theory of Computing (STOC)
, 2000
Abstract

Cited by 14 (2 self)
We define a novel measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al. [3], which measures how quickly an algorithm can finish tasks that start at specified times. An important property of the throughput measure is that it is modular: we define a notion of relative competitiveness with the property that a k-relatively competitive implementation of an object T using a subroutine U, combined with an ℓ-competitive implementation of U, gives a kℓ-competitive algorithm for ...
The Do-All Problem in Broadcast Networks
, 2001
Abstract

Cited by 6 (4 self)
The problem of performing t tasks in a distributed system on p failure-prone processors is one of the fundamental problems in distributed computing. If the tasks are similar and independent and the processors communicate by sending messages, then the problem is called Do-All. In our work the communication is over a multiple-access channel, and the attached stations may fail by crashing. The measure of performance is work, defined as the number of available processor steps. Algorithms are required to be reliable in that they perform all the tasks as long as at least one station remains operational. We show that each reliable algorithm always needs to perform at least the minimum amount Ω(t + p√t) of work. We develop an optimal deterministic algorithm for the channel with collision detection performing only the minimum work O(t + p√t). Another algorithm is given for the channel without collision detection; it performs work O(t + p√t + p·min{f, t}), where f < p is the number of failures. It is proved to be optimal if the number of faults is the only restriction on the adversary. Finally, we consider the question of whether randomization helps for the channel without collision detection against weaker adversaries. We develop a randomized algorithm which needs to perform only the expected minimum work if the adversary may fail a constant fraction of stations, but it has to select the failure-prone stations prior to the start of the algorithm.
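To make the work measure concrete, here is an illustrative Python sketch, not an algorithm from the paper: all names and the crash model are hypothetical. It counts available processor steps while crash-prone stations perform t tasks with no coordination at all.

```python
import random

def do_all_work(t, p, crash_prob=0.1, seed=0):
    """Simulate a naive Do-All run: every live station attempts the same
    lowest-numbered unfinished task each round (worst-case redundancy).
    Work is the total number of live-station steps consumed."""
    rng = random.Random(seed)
    alive = p
    done = 0
    work = 0
    while done < t and alive > 0:
        work += alive   # every live station spends one step this round
        done += 1       # redundant effort: only one task finishes
        # Adversary may crash stations, but must leave at least one alive.
        for _ in range(alive):
            if alive > 1 and rng.random() < crash_prob:
                alive -= 1
    return work

print(do_all_work(t=100, p=10))
```

With `crash_prob=0` this uncoordinated scheme uses exactly t·p steps, which illustrates why nontrivial algorithms aim for the much smaller Ω(t + p√t) minimum.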
Collective asynchronous reading with polylogarithmic worst-case overhead
 in Proceedings, 36th ACM Symposium on Theory of Computing (STOC), 2004
Abstract

Cited by 5 (3 self)
The Collect problem for an asynchronous shared-memory system has the objective for the processors to learn all values of a collection of shared registers, while minimizing the total number of read and write operations. First abstracted by Saks, Shavit, and Woll [37], Collect is among the standard problems in distributed computing. The model consists of n asynchronous processes, each with a single-writer multi-reader register of polynomial capacity. The best previously known deterministic solution performs O(n^{3/2} log n) reads and writes, and it is due to Ajtai, Aspnes, Dwork, and Waarts [3]. This paper presents a new deterministic algorithm that performs O(n log^7 n) read/write operations, thus substantially improving the best previous upper bound. Using an approach based on epidemic rumor-spreading, the novelty of the new algorithm is in using a family of expander graphs and ensuring
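As a baseline for the bounds quoted above, a trivial collect simply reads every register directly, spending n read operations per collect. This hedged Python sketch (all names hypothetical) counts those operations:

```python
def naive_collect(registers, ops_counter):
    """A collect in the trivial scheme: read every register directly.
    Costs exactly len(registers) read operations."""
    view = []
    for i in range(len(registers)):
        ops_counter[0] += 1          # one read operation
        view.append(registers[i])    # value in process i's register
    return view

# n processes, each owning a single-writer multi-reader register.
n = 8
registers = [f"v{i}" for i in range(n)]
ops = [0]
snapshot = naive_collect(registers, ops)
print(ops[0])  # n reads for a single collect
```

The sophisticated algorithms in this line of work reduce the aggregate read/write cost well below what this direct scheme spends.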
Analysis of the Information Propagation Time among Mobile Hosts
 In the Proceedings of the 3rd International Conference on Ad-Hoc Networks & Wireless (ADHOC-NOW)
, 2004
Abstract

Cited by 3 (1 self)
Consider k particles, 1 red and k − 1 white, chasing each other on the nodes of a graph G. If the red one catches one of the white, it “infects” it with its color. The newly red particles are now available to infect more white ones. When is it the case that all white particles will become red? It turns out that this simple question is an instance of information propagation between random walks and has important applications to mobile computing, where a set of mobile hosts acts as an intermediary for the spread of information. In this paper we model this problem by k concurrent random walks, one corresponding to the red particle and k − 1 to the white ones. The infection time Tk of infecting all the white particles with red color is then a random variable that depends on k, the initial position of the particles, the number of nodes and edges of the graph, as well as on the structure of the graph. In this work we develop a set of probabilistic tools that we use to obtain upper bounds on the (worst case w.r.t. initial positions of particles) expected value of Tk for general graphs and important special cases. We easily get that an upper bound on the expected value of Tk is the worst case (over all initial positions) expected meeting time m* of two random walks multiplied by Θ(log k). We demonstrate that this is, indeed, a tight bound; i.e. there is a graph G (a special case of the “lollipop” graph), a range of values k < n (such that √n − k = Θ(√n)) and an initial position of particles achieving this bound. When G is a clique or has nice expansion properties, we prove much smaller bounds for Tk. We have evaluated and validated all our results by large-scale experiments which we also present and discuss here. In particular, the experiments demonstrate that our analytical results for these expander graphs are tight. Due to lack of space, an Appendix is added, to be read at the discretion of the Program Committee members.
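The infection process described can be simulated directly. The following Python sketch is a hypothetical setup, not the paper's analysis; it uses lazy walks (a particle may stay put) so that particles on a cycle are not trapped in opposite parities, and it reports the step at which the last white particle turns red:

```python
import random

def infection_time(adj, k, seed=0, max_steps=100000):
    """Run k synchronous random walks on a graph given as an adjacency
    list. Walk 0 starts red; a white walk turns red when it shares a
    node with a red one. Returns the number of steps taken (capped at
    max_steps as a safety bound)."""
    rng = random.Random(seed)
    n = len(adj)
    pos = [rng.randrange(n) for _ in range(k)]
    red = [False] * k
    red[0] = True
    steps = 0
    while steps < max_steps:
        # Any white particle co-located with a red one is infected.
        occupied_by_red = {pos[i] for i in range(k) if red[i]}
        for i in range(k):
            if pos[i] in occupied_by_red:
                red[i] = True
        if all(red):
            break
        # Every particle takes one lazy random-walk step.
        pos = [rng.choice(adj[v]) for v in pos]
        steps += 1
    return steps

# Lazy walk on a 20-node cycle: each node's neighborhood includes itself.
lazy_cycle = [[v, (v - 1) % 20, (v + 1) % 20] for v in range(20)]
print(infection_time(lazy_cycle, k=5))
```

Averaging this quantity over seeds and starting positions approximates the expected infection time that the paper bounds analytically.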
System Level Fault Diagnosis under Static, Dynamic, and Distributed Models
, 1996
Abstract

Cited by 2 (0 self)
Consider a set of n processors that can communicate with each other. Assume that each processor can be either "good" or "faulty". We wish to diagnose the system. That is, we use tests between the processors to determine the status of each processor. We suppose that good processors are accurate, but that faulty processors may be in error. We develop fast parallel diagnosis algorithms, and also use adversary arguments to prove that our algorithms are near optimal. Our models are based upon the system diagnosis model proposed by Preparata, Metze and Chien [46]. We consider three different models of diagnosis. First we have a static model in which each processor has a fixed status, there is an upper bound t on the number of faulty processors, and we wish to minimize the number of rounds of testing used to perform diagnosis. We prove that 4 rounds are necessary and sufficient when (8/3)√n ≤ t ≤ 0.03n (for n sufficiently large). Furthermore, at least 5 rounds are necessary when t ≥ 0.42n (for n sufficiently large), and 10 rounds are sufficient when t < 0.5n (for all n). It is well known that no general solution is possible when t ≥ 0.5n. Second we consider a dynamic model in which a processor may change status during the diagnosis. In each round up to t processors may break down, and we may direct that up to t processors are repaired. We show that it is possible to limit the number of faulty processors to O(t log t), even if the system is run indefinitely. We present an adversary which shows that this bound is optimal.
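The test semantics underlying this model can be sketched in a few lines of Python (an illustrative fragment with hypothetical names, not an algorithm from the paper): each processor tests its ring successor; a good tester reports the tested processor's true status, while a faulty tester's report is arbitrary.

```python
import random

def pmc_syndrome(status, rng):
    """One round of ring testing under PMC-style semantics: processor i
    tests processor (i + 1) mod n and reports "good" or "faulty"."""
    n = len(status)
    syndrome = []
    for i in range(n):
        j = (i + 1) % n
        if status[i] == "good":
            syndrome.append(status[j])  # a good tester is accurate
        else:
            # A faulty tester's outcome carries no information.
            syndrome.append(rng.choice(["good", "faulty"]))
    return syndrome

rng = random.Random(1)
status = ["good", "good", "faulty", "good"]
print(pmc_syndrome(status, rng))
```

Diagnosis algorithms in this line of work interpret such syndromes, across one or more rounds of testing, to pin down the true status vector.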
When agents communicate hypotheses in critical situations
 In DALT–2006
, 2006
Abstract

Cited by 2 (2 self)
Abstract. This paper discusses the problem of efficient propagation of uncertain information in dynamic environments and critical situations. When a number of (distributed) agents have only partial access to information, the explanation(s) and conclusion(s) they can draw from their observations are inevitably uncertain. In this context, the efficient propagation of information is concerned with two interrelated aspects: spreading the information as quickly as possible, and refining the hypotheses at the same time. We describe a formal framework designed to investigate this class of problems, and we report on preliminary results and experiments using the described theory.
Compositional Competitiveness for Distributed Algorithms
, 2004
Abstract

Cited by 1 (0 self)
We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al. [3], which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an ℓ-competitive member of that class, gives a combined algorithm that is kℓ-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction. An earlier version of this work appeared as “Modular competitiveness for distributed
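The compositional bound multiplies: if A is k-competitive relative to a class containing subroutine B, and B itself is ℓ-competitive, the combined algorithm is kℓ-competitive. A tiny numeric check of this inequality chain, with entirely hypothetical work figures:

```python
# Hypothetical work figures consistent with the two competitive bounds:
opt = 100          # work of the optimal distributed algorithm
best_with_B = 180  # work of the best algorithm that must use subroutine B
a_with_B = 500     # work of algorithm A running on top of B

k, l = 3, 2        # A is k-competitive relative to the class; B is l-competitive
assert a_with_B <= k * best_with_B  # relative competitiveness of A (500 <= 540)
assert best_with_B <= l * opt       # competitiveness of B        (180 <= 200)
assert a_with_B <= k * l * opt      # composed kl-bound            (500 <= 600)
print("composed bound holds:", a_with_B, "<=", k * l * opt)
```

The middle inequality is where compositionality earns its keep: the bound on A never needs to inspect B's internals, only B's own competitive ratio.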