Results 1–10 of 16
Lower Bounds for Randomized Consensus under a Weak Adversary
, 2008
Abstract

Cited by 12 (0 self)
This paper studies the inherent tradeoff between termination probability and total step complexity of randomized consensus algorithms. It shows that for every integer k, the probability that an f-resilient randomized consensus algorithm of n processes does not terminate with agreement within k(n − f) steps is at least 1/c^k, for some constant c. The lower bound holds for asynchronous systems, where processes communicate either by message passing or through shared memory, under a very weak adversary that determines the schedule in advance, without observing the algorithm’s actions. This complements algorithms of Kapron et al. [22], for message-passing systems, and of Aumann et al. [6, 7], for shared-memory systems.
Stabilizing consensus with the power of two choices
 In SPAA
, 2011
Abstract

Cited by 12 (2 self)
In the standard consensus problem there are n processes with possibly different input values and the goal is to eventually reach a point at which all processes commit to exactly one of these values. We study a slight variant of the consensus problem called the stabilizing consensus problem [2]. In this problem, we do not require that each process commits to a final value at some point, but that eventually they arrive at a common, stable value without necessarily being aware of that. This should work irrespective of the states in which the processes are starting. Our main result is a simple randomized algorithm called the median rule that, with high probability, just needs O(log m log log n + log n) time and work per process to arrive at an almost stable consensus for any set of m legal values as long as an adversary can corrupt the states of at most √n processes at any time. Without adversarial involvement, just O(log n) time and work is needed for a stable consensus, with high probability. As a byproduct, we obtain a simple distributed algorithm for approximating the median of n numbers in time O(log m log log n + log n) under adversarial presence.
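The two-choices mechanism behind the median rule is compact enough to sketch: in each round, every process samples two other processes uniformly at random and adopts the median of its own value and the two sampled values. The simulation below is an illustrative fault-free version without the adversary, not the paper's full protocol or analysis:

```python
import random

def median_rule_round(values, rng):
    """One synchronous round of the median rule: each process samples two
    processes uniformly at random and adopts the median of its own value
    and the two sampled values (the "power of two choices")."""
    n = len(values)
    new_values = []
    for i in range(n):
        a = values[rng.randrange(n)]
        b = values[rng.randrange(n)]
        new_values.append(sorted([values[i], a, b])[1])  # median of three
    return new_values

def run_until_stable(values, max_rounds=1000, seed=0):
    """Iterate the median rule until every process holds the same value.
    Returns (consensus value, rounds used), or (None, max_rounds)."""
    rng = random.Random(seed)
    for r in range(max_rounds):
        if len(set(values)) == 1:  # stable consensus reached
            return values[0], r
        values = median_rule_round(values, rng)
    return None, max_rounds
```

Since each update takes a median of values already present in the system, the consensus value is always one of the original legal inputs, mirroring the validity property discussed in the abstract.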
Network Extractor Protocols
Abstract

Cited by 9 (4 self)
We design efficient protocols for processors to extract private randomness over a network with Byzantine faults, when each processor has access to an independent weakly-random n-bit source of sufficient min-entropy. We give several such network extractor protocols in both the information-theoretic and computational settings. For a computationally unbounded adversary, we construct protocols in both the synchronous and asynchronous settings. These network extractors imply efficient protocols for leader election (synchronous setting only) and Byzantine agreement which tolerate a linear fraction of faults, even when the min-entropy is only 2^((log n)^Ω(1)). For larger min-entropy, in the synchronous setting the fraction of tolerable faults approaches the bounds in the perfect-randomness case. Our network extractors for a computationally bounded adversary work in the synchronous setting even when 99% of the parties are faulty, assuming trapdoor permutations exist. Further, assuming a strong variant of the Decisional Diffie-Hellman Assumption, we construct a network extractor in which all parties receive private randomness. This yields an efficient protocol for secure multiparty computation with imperfect randomness, when the number of parties is at least polylog(n) and where the parties only have access to an independent source with min-entropy n^Ω(1).
Distributed agreement with optimal communication complexity
 In Proceedings of the 21st ACM-SIAM Symposium on Discrete Algorithms (SODA)
, 2010
Abstract

Cited by 8 (3 self)
We consider the problem of fault-tolerant agreement in a crash-prone synchronous system. We present a new randomized consensus algorithm that achieves optimal communication efficiency, using only O(n) bits of communication, and terminates in (almost optimal) time O(log n), with high probability. The same protocol, with minor modifications, can also be used in partially synchronous networks, guaranteeing correct behavior even in asynchronous executions, while maintaining efficient performance in synchronous executions. Finally, the same techniques also yield a randomized, fault-tolerant gossip protocol that terminates in O(log* n) rounds using O(n) messages (with bit complexity that depends on the data being gossiped).
Fast Byzantine agreement
 In PODC
, 2013
Abstract

Cited by 7 (1 self)
This paper presents the first probabilistic Byzantine Agreement algorithm whose communication and time complexities are polylogarithmic. So far, the most effective probabilistic Byzantine Agreement algorithm had communication complexity Õ(√n) and time complexity Õ(1). Our algorithm is based on a novel, unbalanced, almost-everywhere-to-everywhere Agreement protocol which is interesting in its own right.
How Efficient Can Gossip Be? (On the Cost of Resilient Information Exchange)
Abstract

Cited by 4 (0 self)
Gossip, also known as epidemic dissemination, is becoming an increasingly popular technique in distributed systems. Yet, it has remained a partially open question: how robust are such protocols? We consider a natural extension of the random phone-call model (introduced by Karp et al. [1]), and we analyze two different notions of robustness: the ability to tolerate adaptive failures, and the ability to tolerate oblivious failures. For adaptive failures, we present a new gossip protocol, TrickleGossip, which achieves near-optimal O(n log^3 n) message complexity. To the best of our knowledge, this is the first epidemic-style protocol that can tolerate adaptive failures. We also show a direct relation between resilience and message complexity, demonstrating that gossip protocols which tolerate a large number of adaptive failures need to use a superlinear number of messages with high probability. For oblivious failures, we present a new gossip protocol, CoordinatedGossip, that achieves optimal O(n) message complexity. This protocol makes novel use of the universe reduction technique to limit the message complexity.
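The random phone-call model underlying both protocols is easy to simulate: in each round every node calls one uniformly random node; informed callers push the rumor, and uninformed callers pull it from an informed callee. The sketch below is the plain failure-free push-pull baseline, not TrickleGossip or CoordinatedGossip:

```python
import random

def push_pull_gossip(n, seed=0):
    """Simulate rumor spreading in the random phone-call model.
    Node 0 starts with the rumor; each round, every node calls one
    uniformly random node and the rumor travels over the call in
    either direction (push or pull). Returns the number of rounds
    until every node is informed."""
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True  # node 0 starts with the rumor
    rounds = 0
    while not all(informed):
        calls = [(caller, rng.randrange(n)) for caller in range(n)]
        for caller, callee in calls:
            if informed[caller]:    # push: caller forwards the rumor
                informed[callee] = True
            elif informed[callee]:  # pull: caller learns the rumor
                informed[caller] = True
        rounds += 1
    return rounds
```

Push-pull dissemination completes in O(log n) rounds with high probability, which is why the superlinear message bound for adaptive failures in the abstract is a meaningful separation from this failure-free baseline.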
The Contest Between Simplicity and Efficiency in Asynchronous Byzantine Agreement
Abstract

Cited by 3 (1 self)
In the wake of the decisive impossibility result of Fischer, Lynch, and Paterson for deterministic consensus protocols in the asynchronous model with just one failure, Ben-Or and Bracha demonstrated that the problem could be solved with randomness, even for Byzantine failures. Both protocols are natural and intuitive to verify, and Bracha’s achieves optimal resilience. However, the expected running time of these protocols is exponential in general. Recently, Kapron, Kempe, King, Saia, and Sanwalani presented the first efficient Byzantine agreement algorithm in the asynchronous, full-information model, running in polylogarithmic time. Their algorithm is Monte Carlo and drastically departs from the simple structure of Ben-Or and Bracha’s Las Vegas algorithms. In this paper, we begin an investigation of the question: to what extent is this departure necessary? Might there be a much simpler and intuitive Las Vegas protocol that runs in expected polynomial time? We will show that the exponential running time of Ben-Or and Bracha’s algorithms is no mere accident of their specific details, but rather an unavoidable consequence of their general symmetry and round structure. We define a natural class of “fully symmetric round protocols” for solving Byzantine agreement in an asynchronous setting and show that any such protocol can be forced to run in expected exponential time by an adversary in the full information model. We assume the adversary controls t Byzantine processors for t = cn, where c is an arbitrary positive constant < 1/3. We view our result as a step toward identifying the level of complexity required for a polynomial-time algorithm in this setting, and also as a guide in the search for new efficient algorithms.
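The round structure targeted by the lower bound can be illustrated with a Ben-Or-style sketch: each round has a report phase, a propose phase keyed to a majority threshold, and a local coin flip when no value dominates. The simulation below is a fault-free synchronous toy (the thresholds and the t + 1 decision rule are illustrative simplifications, not Bracha's optimally resilient protocol); the coin-flip branch is exactly where an adversary can extract exponential delay:

```python
import random

def ben_or_round(values, rng, n, t):
    """One Ben-Or-style round over binary inputs (fault-free sketch;
    t is the tolerated-fault parameter, used only in the thresholds)."""
    # Report phase: tally everyone's current value.
    counts = {0: values.count(0), 1: values.count(1)}
    # Propose v only if a strict majority threshold (> (n + t) / 2) is met.
    if 2 * counts[1] > n + t:
        proposals = [1] * n
    elif 2 * counts[0] > n + t:
        proposals = [0] * n
    else:
        proposals = [None] * n  # no value dominates: propose "?"
    pcount = {v: proposals.count(v) for v in (0, 1)}
    # Decide phase: adopt a value proposed by at least t + 1 processes;
    # otherwise flip an independent local coin.
    new_values, decided = [], None
    for _ in range(n):
        adopted = next((v for v in (0, 1) if pcount[v] >= t + 1), None)
        if adopted is not None:
            new_values.append(adopted)
            decided = adopted  # enough support to decide safely
        else:
            new_values.append(rng.randrange(2))  # local coin flip
    return new_values, decided

def run_ben_or(values, t=0, seed=1, max_rounds=10_000):
    """Iterate rounds until some round decides; returns (value, round)."""
    rng, n = random.Random(seed), len(values)
    for r in range(1, max_rounds + 1):
        values, decided = ben_or_round(values, rng, n, t)
        if decided is not None:
            return decided, r
    return None, max_rounds
```

With unanimous inputs the majority threshold fires immediately (validity); with a split start, termination waits for the independent coin flips to produce a majority, which is the symmetry the paper's adversary exploits to force exponential expected time.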
Algorithmbased fault tolerance applied to P2P computing networks
 In AP2PS 2009, First International Conference on Advances in P2P Systems, 144–149
, 2009
Abstract

Cited by 2 (0 self)
Abstract—P2P computing platforms are subject to a wide range of attacks. In this paper, we propose a generalisation of the previous diskless checkpointing approach for fault-tolerance in High Performance Computing systems. Our contribution is in two directions: first, instead of restricting to 2D checksums that tolerate only a small number of node failures, we propose to base diskless checkpointing on linear codes to tolerate potentially a large number of faults. Then, we compare and analyse the use of Low Density Parity Check (LDPC) codes against classical Reed-Solomon (RS) codes with respect to different fault models that fit P2P systems. Our LDPC diskless checkpointing method is well suited when only node disconnections are considered, but cannot deal with Byzantine peers. Our RS diskless checkpointing method tolerates such Byzantine errors, but is restricted to exact finite field computations.
Keywords—ABFT; P2P; distributed computing; SUMMA; linear coding; fault-tolerance
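Diskless checkpointing with a linear code can be illustrated by its simplest instance: a single parity node holding the bytewise XOR of all compute-node states, which tolerates the loss of any one node. The sketch below is this toy one-fault code, not the paper's LDPC or Reed-Solomon schemes:

```python
from functools import reduce

def xor_checksum(states):
    """Diskless checkpoint with the simplest linear code: a parity node
    stores the bytewise XOR of all (equal-length) compute-node states."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*states))

def recover(surviving_states, checksum):
    """Rebuild the one lost node's state: XOR-ing the checksum with all
    surviving states cancels their contributions, leaving the lost state."""
    return xor_checksum(surviving_states + [checksum])
```

Replacing the single XOR parity with an LDPC or RS code generalises this to many simultaneous faults, at the cost of the decoding constraints the abstract compares.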
PRIVACY-AWARE COLLABORATION AMONG UNTRUSTED RESOURCE-CONSTRAINED DEVICES
, 2012
"... To Elizabeth and my parents. ..."