Gossip-based aggregation in large dynamic networks
ACM Trans. Comput. Syst., 2005
Cited by 189 (36 self)
Abstract:
As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable, for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure: all nodes receive the aggregate value continuously, thus being able to track ...
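The fully decentralized averaging that this abstract describes can be illustrated with a minimal push-pull gossip sketch (an illustrative simulation under simplifying assumptions — synchronous rounds and uniform random peer selection — not the paper's protocol; `gossip_average` is a hypothetical name):

```python
import random

def gossip_average(values, rounds=50, seed=0):
    """Simulate synchronous push-pull averaging gossip: in each
    round every node exchanges values with one uniformly random
    peer and both adopt the pair's average.  Each exchange
    conserves the global sum, so all values converge to the mean."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)  # random peer (may equal i; a no-op then)
            avg = (vals[i] + vals[j]) / 2.0
            vals[i] = vals[j] = avg
    return vals
```

After enough rounds every node holds (approximately) the network-wide average, which matches the abstract's point that aggregation gives each node local access to global information.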
Hundreds of Impossibility Results for Distributed Computing
Distributed Computing, 2003
Cited by 41 (5 self)
Abstract:
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space, and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
Are Wait-Free Algorithms Fast?
1991
Cited by 40 (11 self)
Abstract:
The time complexity of wait-free algorithms in "normal" executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of log n on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an Ω(log n) time separation between the wait-free and non-wait-free computation models. On the positive side, we present an O(log n) time wait-free approximate agreement algorithm; the complexity of this algorithm is within a small constant of the lower bound.
A combinatorial characterization of the distributed 1-solvable tasks
Journal of Algorithms, 1990
Cited by 24 (1 self)
Abstract:
Fischer, Lynch and Paterson showed in a fundamental paper that achieving a distributed agreement is impossible in the presence of one faulty processor. This result was later extended by Moran and Wolfstahl, who showed that it holds for any task with a connected input graph and a disconnected decision graph. In this paper we extend that latter result, and in fact we set an exact borderline between solvable and unsolvable tasks, by giving a necessary and sufficient condition for a task to be 1-solvable (that is: solvable in the presence of one faulty processor). Our characterization is purely combinatorial, and involves only relations between the input graph and the output graph, defined by the given task. It provides easy proofs for the non-solvability of tasks, and also provides a universal protocol which solves any task which is found to be solvable by our condition. Using the above characterization, we also derive a novel technique to prove lower bounds on the number of messages that must be sent due to processor failure; specifically, we provide a simple proof that for each fixed N > 2 there exist distributed tasks for N processors that can be solved in the presence of a faulty processor, but any protocol that solves them must send arbitrarily many messages in the worst case.
Computability and Complexity Results for Agreement Problems in Shared Memory Distributed Systems
1996
Cited by 9 (0 self)
Abstract:
Agreement problems are central to the study of wait-free protocols for shared memory distributed systems. We examine two specific issues arising out of this study. We consider the complexity of the wait-free approximate agreement problem in an asynchronous shared memory comprised of only single-bit multi-writer multi-reader registers. For real-valued inputs of magnitude at most s and a real-valued accuracy requirement ε > 0, we show matching upper and lower bounds of Θ(log(s/ε)) steps and shared registers. For inputs drawn from any fixed finite range this is significantly better than the best possible algorithm for single-writer multi-reader registers, which, for n processes, requires Ω(n) steps. These results are used to show a separation between the wait-free single-writer mult...
Tight bounds on the round complexity of distributed 1-solvable tasks
In Proc. 4th WDAG, 1995
Cited by 7 (1 self)
Abstract:
A distributed task T is 1-solvable if there exists a protocol that solves it in the presence of (at most) one crash failure. A precise characterization of the 1-solvable tasks was given in [BMZ]. In this paper we determine the number of rounds of communication that are required, in the worst case, by a protocol which 1-solves a given 1-solvable task T for n processors. We define the radius R(T) of T, and show that if R(T) is finite, then the number of rounds is Θ(log_n R(T)); more precisely, we give a lower bound of log_{(n−1)} R(T), and an upper bound of 2 + ⌈log_{(n−1)} R(T)⌉. The upper bound implies, for example, that each of the following tasks: renaming, order-preserving renaming ([ABDKPR]), and binary monotone consensus ([BMZ]) can be solved in the presence of one fault in 3 rounds of communication. All previous protocols that 1-solved these tasks required Ω(n) rounds. The result is also generalized to tasks whose radii are not bounded, e.g., approximate consensus and its variants ([DLPSW, BMZ]).
Optimal Resilience Asynchronous Approximate Agreement
Cited by 2 (0 self)
Abstract:
Consider an asynchronous system where each process begins with an arbitrary real value. Given some fixed ε > 0, an approximate agreement algorithm must have all non-faulty processes decide on values that are at most ε from each other and are in the range of the initial values of the non-faulty processes. Previous constructions solved asynchronous approximate agreement only when there were at least 5t + 1 processes, t of which may be Byzantine. In this paper we close an open problem raised by Dolev et al. in 1983. We present a deterministic optimal-resilience approximate agreement algorithm that can tolerate any t Byzantine faults while requiring only 3t + 1 processes. The algorithm's rate of convergence and total message complexity are efficiently bounded as a function of the range of the initial values of the non-faulty processes. All previous asynchronous algorithms that are resilient to Byzantine failures may require arbitrarily many messages to be sent.
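The core step shared by many approximate agreement algorithms can be sketched as a single trimmed-mean round (a generic, synchronous illustration of the averaging idea, not the optimal-resilience asynchronous algorithm of this paper; `trimmed_mean_round` is a hypothetical name):

```python
def trimmed_mean_round(received, t):
    """One approximate-agreement round: sort the values received
    from all processes (up to t of which may be Byzantine lies),
    discard the t lowest and t highest values, where any outliers
    would land, and average the rest.  The result always lies
    within the range of the honest processes' values."""
    s = sorted(received)
    core = s[t : len(s) - t]
    return sum(core) / len(core)
```

For example, with received values [0.0, 1.0, 2.0, 100.0] and t = 1, the outlier 100.0 is discarded and the result 1.5 stays inside the honest range [0, 2]; repeating such rounds shrinks the honest range until it falls below ε.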
Approximate Agreement with Mixed Mode Faults: Algorithm and Lower Bound
In Distributed Computing, 12th International Symposium, volume 1499 of LNCS, 1998
Cited by 2 (0 self)
Abstract:
Approximate agreement is a building block for fault-tolerant distributed systems. It is a formalisation of the basic operation of choosing a single real value (representing, say, speed) for use in later computation, reflecting the different approximations to this value reported from a number of possibly-faulty processors or sensors. We study the approximate agreement problem in distributed systems where processor failures are characterised depending on their severity. We develop a new algorithm that can tolerate up to b Byzantine faults, s symmetric ones, and o send-omission faults. We analyse the convergence attained by this algorithm, and also give a universal bound on the convergence available to any algorithm no matter how complicated. 1 Introduction Fault-tolerance is an important property for distributed systems. Distribution makes fault-tolerance possible, because if some processors fail, there are others which can continue with the computation. Distribution also makes faul...