Network coordinates in the wild
In Proceedings of USENIX NSDI’07, 2007
"... Network coordinates provide a mechanism for selecting and placing servers efficiently in a large distributed system. This approach works well as long as the coordinates continue to accurately reflect network topology. We conducted a longterm study of a subset of a millionplus node coordinate syste ..."
Abstract

Cited by 61 (2 self)
 Add to MetaCart
Network coordinates provide a mechanism for selecting and placing servers efficiently in a large distributed system. This approach works well as long as the coordinates continue to accurately reflect network topology. We conducted a long-term study of a subset of a million-plus node coordinate system and found that it exhibited some of the problems for which network coordinates are frequently criticized, for example, inaccuracy and fragility in the presence of violations of the triangle inequality. Fortunately, we show that several simple techniques remedy many of these problems. Using the Azureus BitTorrent network as our testbed, we show that live, large-scale network coordinate systems behave differently than their tame PlanetLab and simulation-based counterparts. We find higher relative errors, more triangle inequality violations, and higher churn. We present and evaluate a number of techniques that, when applied to Azureus, efficiently produce accurate and stable network coordinates.
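The coordinate maintenance this abstract describes can be illustrated with a Vivaldi-style spring update, a standard decentralized network-coordinate algorithm (a representative sketch, not necessarily the exact method the studied system uses; the function name and the `delta` damping parameter are illustrative):

```python
import math

def vivaldi_step(xi, xj, rtt, delta=0.25):
    """One Vivaldi-style spring update: move node i's coordinate xi toward
    or away from neighbor j's coordinate xj so that Euclidean distance
    tracks the measured round-trip time."""
    dist = math.dist(xi, xj)
    if dist == 0.0:
        return list(xi)  # coincident points; real systems add random jitter
    err = rtt - dist  # err > 0: coordinates too close, push apart
    unit = [(a - b) / dist for a, b in zip(xi, xj)]  # direction j -> i
    return [a + delta * err * u for a, u in zip(xi, unit)]
```

Repeated updates against measured RTTs pull the embedding's pairwise distances toward the true latencies; the stability techniques evaluated in the paper concern what happens when live churn and triangle-inequality violations keep perturbing such updates.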
On Unbiased Sampling for Unstructured Peer-to-Peer Networks
In Proc. ACM IMC, 2006
"... This paper addresses the difficult problem of selecting representative samples of peer properties (e.g., degree, link bandwidth, number of files shared) in unstructured peertopeer systems. Due to the large size and dynamic nature of these systems, measuring the quantities of interest on every peer ..."
Abstract

Cited by 47 (6 self)
 Add to MetaCart
This paper addresses the difficult problem of selecting representative samples of peer properties (e.g., degree, link bandwidth, number of files shared) in unstructured peer-to-peer systems. Due to the large size and dynamic nature of these systems, measuring the quantities of interest on every peer is often prohibitively expensive, while sampling provides a natural means for estimating system-wide behavior efficiently. However, commonly used sampling techniques for measuring peer-to-peer systems tend to introduce considerable bias for two reasons. First, the dynamic nature of peers can bias results towards short-lived peers, much as naively sampling flows in a router can lead to bias towards short-lived flows. Second, the heterogeneous nature of the overlay topology can lead to bias towards high-degree peers. We present a detailed examination of the ways that the behavior of peer-to-peer systems can introduce bias and suggest the Metropolized Random Walk with Backtracking (MRWB) as a viable and promising technique for collecting nearly unbiased samples. We conduct an extensive simulation study to demonstrate that the proposed technique works well for a wide variety of common peer-to-peer network conditions. Using the Gnutella network, we empirically show that our implementation of the MRWB technique yields more accurate samples than relying on commonly used sampling techniques. Furthermore, we provide insights into the causes of the observed differences. The tool we have developed, ion-sampler, selects peer addresses uniformly at random using the MRWB technique. These addresses may then be used as input to another measurement tool to collect data on a particular property.
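The core of MRWB can be sketched without its backtracking component (which handles departed peers): a Metropolis-Hastings random walk that corrects the plain walk's degree bias by sometimes refusing to move to a higher-degree neighbor. The graph representation and parameters below are illustrative:

```python
import random

def mh_sample(adj, start, steps, rng):
    """Metropolized random walk: from node u, propose a uniformly chosen
    neighbor v and accept the move with probability min(1, deg(u)/deg(v));
    otherwise stay at u. The stationary distribution is uniform over nodes,
    unlike the plain walk's degree-proportional distribution."""
    u = start
    for _ in range(steps):
        v = rng.choice(adj[u])
        if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v
    return u
```

On a 5-node star graph a plain random walk spends half its time on the hub, while this walk visits all five nodes equally often, which is exactly the degree-bias correction the abstract describes.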
On lifetime-based node failure and stochastic resilience of decentralized peer-to-peer networks
In SIGMETRICS, 2005
"... Abstract—To model P2P networks that are commonly faced with high rates of churn and random departure decisions by endusers, this paper investigates the resilience of random graphs to lifetimebased node failure and derives the expected delay before a user is forcefully isolated from the graph and t ..."
Abstract

Cited by 42 (9 self)
 Add to MetaCart
Abstract—To model P2P networks that are commonly faced with high rates of churn and random departure decisions by end-users, this paper investigates the resilience of random graphs to lifetime-based node failure and derives the expected delay before a user is forcefully isolated from the graph and the probability that this occurs within his/her lifetime. Using these metrics, we show that systems with heavy-tailed lifetime distributions are more resilient than those with light-tailed (e.g., exponential) distributions and that, for a given average degree, k-regular graphs exhibit the highest level of fault tolerance. As a practical illustration of our results, each user in a system with n = 100 billion peers, 30-minute average lifetime, and 1-minute node-replacement delay can stay connected to the graph with probability 1 - 10^-6 using only 9 neighbors. This is in contrast to 37 neighbors required under previous modeling efforts. We finish the paper by observing that many P2P networks are almost surely (i.e., with probability 1 - o(1)) connected if they have no isolated nodes and derive a simple model for the probability that a P2P system partitions under churn. Index Terms—Lifetime-based node failure, network disconnection, peer-to-peer networks, stochastic resilience, user isolation.
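The paper derives its isolation probabilities in closed form; the effect it describes can also be illustrated with a crude Monte Carlo sketch, assuming exponential neighbor lifetimes, a fixed replacement delay, and a fixed observation window (all simplifications relative to the paper's model):

```python
import random

def isolation_prob(k, life_mean, repl_delay, user_life, trials=200, seed=1):
    """Monte Carlo sketch: each of k neighbor slots alternates between an
    Exp(life_mean) alive period and a fixed replacement delay while dead;
    the user counts as isolated if all k slots are dead at the same
    instant during its observation window of length user_life."""
    rng = random.Random(seed)
    isolated = 0
    for _ in range(trials):
        t_next = [rng.expovariate(1.0 / life_mean) for _ in range(k)]
        alive = [True] * k
        while True:
            t = min(t_next)  # time of the next state change
            if t > user_life:
                break
            i = t_next.index(t)
            alive[i] = not alive[i]
            # if slot i just came alive, it dies after a fresh Exp sample;
            # if it just died, it is replaced after repl_delay
            t_next[i] = t + (rng.expovariate(1.0 / life_mean) if alive[i]
                             else repl_delay)
            if not any(alive):
                isolated += 1
                break
    return isolated / trials
```

With the abstract's 30-minute average lifetimes and 1-minute replacement delay, even this simplified model shows isolation going from common with a single neighbor to vanishingly rare with 9, without attempting to reproduce the paper's exact figures.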
Structured and unstructured overlays under the microscope: a measurement-based view of two P2P systems that people use
In Proceedings of the USENIX Annual Technical Conference, 2006
"... measurementbased view of two P2P systems that people use ..."
Abstract

Cited by 24 (0 self)
 Add to MetaCart
measurement-based view of two P2P systems that people use
Modeling heterogeneous user churn and local resilience of unstructured p2p networks
In ICNP, 2006
"... Abstract — Previous analytical results on the resilience of unstructured P2P systems have not explicitly modeled heterogeneity of user churn (i.e., difference in online behavior) or the impact of indegree on system resilience. To overcome these limitations, we introduce a generic model of heterogen ..."
Abstract

Cited by 19 (3 self)
 Add to MetaCart
Abstract — Previous analytical results on the resilience of unstructured P2P systems have not explicitly modeled heterogeneity of user churn (i.e., difference in online behavior) or the impact of in-degree on system resilience. To overcome these limitations, we introduce a generic model of heterogeneous user churn, derive the distribution of the various metrics observed in prior experimental studies (e.g., lifetime distribution of joining users, joint distribution of session time of alive peers, and residual lifetime of a randomly selected user), derive several closed-form results on the transient behavior of in-degree, and eventually obtain the joint in/out-degree isolation probability as a simple extension of the out-degree model in [13].
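The residual-lifetime metric mentioned above rests on the inspection paradox: an observer probing the system at a random instant lands in long sessions disproportionately often. A minimal renewal-process sketch of that effect (an assumed setup for illustration, not the paper's model):

```python
import bisect
import random

def probe_sessions(sample_life, horizon=2e5, probes=2000, seed=1):
    """Inspection-paradox sketch: lay i.i.d. session lengths end-to-end
    along a timeline, probe at uniformly random instants, and report
    (mean residual time until the probed session ends,
     mean total length of the session each probe landed in)."""
    rng = random.Random(seed)
    ends, t = [], 0.0
    while t < horizon:
        t += sample_life(rng)
        ends.append(t)  # cumulative session end times
    resid = length = 0.0
    for _ in range(probes):
        x = rng.uniform(0.0, horizon)
        i = bisect.bisect_right(ends, x)  # index of the session containing x
        resid += ends[i] - x
        length += ends[i] - (ends[i - 1] if i else 0.0)
    return resid / probes, length / probes
```

For memoryless (exponential) lifetimes the mean residual equals the mean lifetime even though the probed session is on average twice the mean length; for heavy-tailed lifetimes with the same mean, the residual grows larger still, which is one reason churn heterogeneity matters for the metrics this paper derives.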
An analytical study of peer-to-peer media streaming systems
ACM Trans. Multimedia Comput. Commun. Appl., 2005
"... Recent research efforts have demonstrated the great potential of building costeffective media streaming systems on top of peertopeer (P2P) networks. A P2P media streaming architecture can reach a large streaming capacity that is difficult to achieve in conventional serverbased streaming services ..."
Abstract

Cited by 16 (3 self)
 Add to MetaCart
Recent research efforts have demonstrated the great potential of building cost-effective media streaming systems on top of peer-to-peer (P2P) networks. A P2P media streaming architecture can reach a large streaming capacity that is difficult to achieve in conventional server-based streaming services. Hybrid streaming systems that combine the use of dedicated streaming servers and P2P networks were proposed to build on the advantages of both paradigms. However, the dynamics of such systems and the impact of various factors on system behavior are not totally clear. In this paper, we present an analytical framework to quantitatively study the features of a hybrid media streaming model. Based on this framework, we derive an equation to describe the capacity growth of a single-file streaming system. We then extend the analysis to multi-file scenarios. We also show how the system achieves optimal allocation of server bandwidth among different media objects. The unpredictable departure/failure of peers is a critical factor that affects the performance of P2P systems. We utilize the concept of peer lifespan to model peer failures. The original capacity growth equation is enhanced with coefficients generated from peer lifespans that follow an exponential distribution. We also propose a failure model under arbitrarily distributed peer lifespan. Results from large-scale simulations support our analysis.
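The paper's capacity-growth equation is not reproduced here, but a toy discrete-round recurrence conveys the shape of the result: peers that finish receiving a stream become servers themselves, so capacity compounds until departures cap it (the round model and all parameters are illustrative, not the paper's formulation):

```python
def capacity(rounds, server_streams, depart_prob=0.0):
    """Toy capacity-growth recurrence: each round, every current peer and
    each of the server's `server_streams` slots admits one new peer, after
    which a fraction depart_prob of all peers fails. Returns the peer
    count after each round."""
    n = 0.0
    history = []
    for _ in range(rounds):
        n = (n + n + server_streams) * (1 - depart_prob)
        history.append(n)
    return history
```

Without departures the recurrence solves to n_t = s(2^t - 1), i.e., exponential growth seeded by the server; with a 50% per-round departure probability the same recurrence degrades to linear growth, illustrating why peer lifespans enter the paper's enhanced equation as damping coefficients.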
Resilient peer-to-peer multicast without the cost
In Proc. of MMCN, 2005
"... We introduce Nemo, a novel peertopeer multicast protocol that achieves high delivery ratio without sacrificing endtoend latency or incurring additional costs. Based on two simple techniques: (1) coleaders to minimize dependencies and, (2) triggered negative acknowledgments (NACKs) to detect lost ..."
Abstract

Cited by 15 (5 self)
 Add to MetaCart
We introduce Nemo, a novel peer-to-peer multicast protocol that achieves high delivery ratio without sacrificing end-to-end latency or incurring additional costs. Based on two simple techniques, (1) co-leaders to minimize dependencies and (2) triggered negative acknowledgments (NACKs) to detect lost packets, Nemo’s design emphasizes conceptual simplicity and minimum dependencies, thus achieving performance characteristics capable of withstanding the natural instability of its target environment. We present an extensive comparative evaluation of our protocol through simulation and wide-area experimentation. We contrast the scalability and performance of Nemo with that of three alternative protocols: Narada, Nice and Nice-PRM. Our results show that Nemo can achieve delivery ratios similar to those of comparable protocols under high failure rates, but at a fraction of their cost in terms of duplicate packets (reductions > 90%) and control-related traffic. Keywords: Resilient Multicast, Peer-to-Peer Multicast, Scalable Multicast
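Triggered NACKs amount to gap detection on sequence numbers. A receiver-side sketch (a simplification: the real protocol also handles retransmission timers, duplicate suppression, and cancelling NACKs for packets that arrive late):

```python
def detect_gaps(received_seqs):
    """Sketch of triggered-NACK loss detection: the receiver tracks the
    next expected sequence number and, whenever an arrival jumps ahead of
    it, emits NACKs for every sequence number that was skipped."""
    expected = 0
    nacks = []
    for seq in received_seqs:
        if seq > expected:
            nacks.extend(range(expected, seq))  # these packets are missing
        expected = max(expected, seq + 1)
    return nacks
```

Note that in the usage below packet 4 eventually arrives after being NACKed; a full implementation would cancel that outstanding NACK rather than leave it recorded.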
On Static and Dynamic Partitioning Behavior of Large-Scale P2P Networks
2008
"... In this paper, we analyze the problem of network disconnection in the context of largescale P2P networks and understand how both static and dynamic patterns of node failure affect the resilience of such graphs. We start by applying classical results from random graph theory to show that a large va ..."
Abstract

Cited by 10 (10 self)
 Add to MetaCart
In this paper, we analyze the problem of network disconnection in the context of large-scale P2P networks and understand how both static and dynamic patterns of node failure affect the resilience of such graphs. We start by applying classical results from random graph theory to show that a large variety of deterministic and random P2P graphs almost surely (i.e., with probability 1 - o(1)) remain connected under random failure if and only if they have no isolated nodes. This simple, yet powerful, result subsequently allows us to derive in closed form the probability that a P2P network develops isolated nodes, and therefore partitions, under both types of node failure. We finish the paper by demonstrating that our models match simulations very well and that dynamic P2P systems are extremely resilient under node churn as long as the neighbor replacement delay is much smaller than the average user lifetime.
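The "no isolated nodes implies connected" heuristic can be checked empirically on Erdős–Rényi graphs near the connectivity threshold (G(n, p) stands in here for the paper's broader graph families; n, p, and the trial count are arbitrary choices for illustration):

```python
import random
from collections import deque

def gnp(n, p, rng):
    """Sample an Erdos-Renyi G(n, p) graph as an adjacency list."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def connected(adj):
    """BFS from node 0; the graph is connected iff every node is reached."""
    seen, queue = {0}, deque([0])
    while queue:
        for v in adj[queue.popleft()]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)

def experiment(n=60, p=0.07, trials=300, seed=1):
    """Count disconnected G(n, p) samples, and among them the ones with
    minimum degree >= 1 -- the rare cases the 'no isolated nodes =>
    connected' approximation misses near the connectivity threshold."""
    rng = random.Random(seed)
    disconnected = no_isolated = 0
    for _ in range(trials):
        adj = gnp(n, p, rng)
        if not connected(adj):
            disconnected += 1
            if all(adj[v] for v in range(n)):
                no_isolated += 1
    return disconnected, no_isolated
```

In runs of this experiment, nearly all disconnected samples contain an isolated node; disconnection via a larger stranded component is rare, which is the finite-size shadow of the probability 1 - o(1) statement in the abstract.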
Dynamic layer management in superpeer architectures
IEEE Transactions on Parallel and Distributed Systems, 2005
"... Abstract—Superpeer unstructured P2P systems have been found to be very effective by dividing the peers into two layers, superlayer and leaflayer, in which message flooding is only conducted among superlayer and all leafpeers are represented by corresponding superpeers. However, current superpeer s ..."
Abstract

Cited by 10 (1 self)
 Add to MetaCart
Abstract—Superpeer unstructured P2P systems have been found to be very effective by dividing the peers into two layers, super-layer and leaf-layer, in which message flooding is conducted only among the super-layer and all leaf-peers are represented by corresponding superpeers. However, current superpeer systems do not employ any effective layer management schemes, so transient and low-capacity peers are allowed to act as superpeers. Moreover, the lack of an appropriate mechanism for maintaining the size ratio of super-layer to leaf-layer leaves the system’s search performance far from optimal. We present a workload model aimed at reducing the weighted overhead of a network. Using our proposed workload model, a network can determine an optimal layer size ratio between leaf-layer and super-layer. We then propose a Dynamic Layer Management algorithm, DLM, which can maintain an optimal layer size ratio and adaptively elect and adjust peers between super-layer and leaf-layer. DLM is completely distributed in the sense that each peer decides to be a superpeer or a leaf-peer independently, without global knowledge. DLM can effectively help a superpeer P2P system maintain the optimal layer size ratio and designate peers with relatively long lifetimes and large capacities as superpeers, and peers with short lifetimes and low capacities as leaf-peers, under highly dynamic network conditions. We demonstrate through comprehensive simulations that the quality of a superpeer system is significantly improved under the DLM scheme. Index Terms—Unstructured peer-to-peer, superpeer architecture, layer management, workload analysis, adaptive algorithms.
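DLM itself is specified in the paper; the sketch below only illustrates the flavor of a fully distributed role decision driven by a target layer-size ratio. The capacity-times-uptime score, the threshold calibration, and all names here are hypothetical, not taken from the paper:

```python
def calibrate_threshold(scores, super_fraction):
    """Pick the score cutoff whose exceedance count matches the target
    share of superpeers (e.g., 0.1 for roughly one superpeer per nine
    leaf-peers). In a real system this would be estimated from sampled
    peer scores rather than a global list."""
    ranked = sorted(scores, reverse=True)
    k = max(1, int(len(ranked) * super_fraction))
    return ranked[k - 1]

def choose_role(capacity, uptime, threshold):
    """Local decision: a peer promotes itself to superpeer when its
    capacity-uptime score clears the threshold; no global knowledge is
    needed beyond the (gossiped or configured) threshold itself."""
    return "super" if capacity * uptime >= threshold else "leaf"
```

The point of the sketch is the division of labor: calibration fixes the layer size ratio, while each peer's promotion decision depends only on its own measured capacity and uptime.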