Results 1 - 2 of 2
Self-organized fault-tolerant routing in peer-to-peer overlays
Cited by 2 (0 self)
Abstract—In sufficiently large heterogeneous overlays, message loss and delays are likely to occur. This has a significant impact on overlay routing, especially on longer paths. Existing solutions to this problem rely on message redundancy to mask loss and delays, which incurs a significant bandwidth cost. We propose the Forward Feedback Protocol (FFP), which routes only a single copy of each message and detects message loss and excessive delays while routing. Failures are signalled along the routing paths. Based only on these simple binary signals, each overlay node locally and independently learns to route around failures. The local node interactions lead to the emergence of fast, reliable overlay routes. This is a continuous process: the system constantly self-organizes in response to changing delay and loss conditions. We evaluate the protocol in an Internet deployment and in simulation. Our system uses 2-5 times less bandwidth than existing overlay routing approaches that rely on high message redundancy for fault tolerance. Despite its marginal bandwidth investment in reliability, FFP achieves up to a 30% higher delivery success rate than existing solutions. The protocol is scalable, with local state of size O(log² N) in the network size, and is universally applicable to all recursively routing overlays.
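The abstract's core idea — each node learning next-hop preferences from binary success/failure feedback — can be sketched as a simple moving-average learner. This is a hypothetical illustration of the learning pattern, not FFP's actual algorithm; the class name, parameters, and update rule are assumptions.

```python
import random


class FeedbackRouter:
    """Hypothetical sketch of FFP-style local route learning.

    Each node keeps a per-neighbor success estimate, updated from
    binary forward-feedback signals (True = delivered in time,
    False = lost or late). Routing greedily prefers the neighbor
    with the highest estimate, with occasional exploration so the
    node keeps adapting to changing loss and delay conditions.
    """

    def __init__(self, neighbors, alpha=0.1, explore=0.05):
        self.alpha = alpha                        # learning rate of the moving average
        self.explore = explore                    # probability of probing a random neighbor
        self.score = {n: 0.5 for n in neighbors}  # neutral prior for every neighbor

    def next_hop(self):
        # Mostly exploit the best-known neighbor; occasionally explore.
        if random.random() < self.explore:
            return random.choice(list(self.score))
        return max(self.score, key=self.score.get)

    def feedback(self, neighbor, success):
        # Binary signal propagated back along the routing path:
        # exponentially weighted moving average toward 1 or 0.
        s = self.score[neighbor]
        target = 1.0 if success else 0.0
        self.score[neighbor] = (1 - self.alpha) * s + self.alpha * target
```

With `explore=0.0` the node deterministically converges on whichever neighbor keeps delivering, which mirrors the paper's claim that simple binary signals suffice for routes to self-organize.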
Handling very large . . .
The principal service of Distributed Hash Tables (DHTs) is route(id, data), which sends data to the peer responsible for id, typically using O(log(number of peers)) overlay hops. Certain applications, such as peer-to-peer information retrieval, generate billions of small messages that are concurrently inserted into a DHT. These applications can generate messages faster than the DHT can process them. To support such demanding applications, a DHT needs a congestion control mechanism to efficiently handle high message loads. In this paper we provide an extended study of congestion control for DHTs: we present a theoretical analysis demonstrating that congestion control is absolutely necessary for applications that generate elastic traffic. We then present a new congestion control algorithm for DHTs. We provide extensive live evaluations in a ModelNet cluster and on the PlanetLab testbed, which show that our algorithm is nearly loss-free, fair, and provides low lookup times and high throughput under cross-load.
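The window-based congestion control the abstract alludes to can be illustrated with the classic AIMD pattern: grow the number of in-flight messages additively on acknowledgements, halve it on loss or timeout. This is a generic sketch of that pattern, not the paper's algorithm; all names and constants here are assumptions.

```python
class AIMDWindow:
    """Sketch of window-based congestion control for DHT message
    insertion: additive increase on acknowledged messages,
    multiplicative decrease on loss or timeout (the standard AIMD
    pattern, shown here as a hypothetical building block)."""

    def __init__(self, cwnd=1.0, max_cwnd=64.0):
        self.cwnd = cwnd          # congestion window, in messages
        self.max_cwnd = max_cwnd  # cap on window growth
        self.in_flight = 0        # messages sent but not yet acknowledged

    def can_send(self):
        # Admit a new message only while the window has room.
        return self.in_flight < int(self.cwnd)

    def on_send(self):
        self.in_flight += 1

    def on_ack(self):
        # Additive increase: roughly +1 message per window's worth of acks.
        self.in_flight -= 1
        self.cwnd = min(self.max_cwnd, self.cwnd + 1.0 / self.cwnd)

    def on_loss(self):
        # Multiplicative decrease: halve the window, but keep at least one slot.
        self.in_flight -= 1
        self.cwnd = max(1.0, self.cwnd / 2.0)
```

An application inserting a burst of small messages would call `can_send()` before each `route(id, data)` and feed acks and timeouts back into the window, which is how such a scheme stays nearly loss-free under sustained load.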