Results 1 - 10 of 17
Antfarm: Efficient Content Distribution with Managed Swarms
Abstract - Cited by 55 (1 self)
This paper describes Antfarm, a content distribution system based on managed swarms. A managed swarm couples peer-to-peer data exchange with a coordinator that directs bandwidth allocation at each peer. Antfarm achieves high throughput by viewing content distribution as a global optimization problem, where the goal is to minimize download latencies for participants subject to bandwidth constraints and swarm dynamics. The system is based on a wire protocol that enables the Antfarm coordinator to gather information on swarm dynamics, detect misbehaving hosts, and direct the peers' allotment of upload bandwidth among multiple swarms. Antfarm's coordinator grants autonomy and local optimization opportunities to participating nodes while guiding the swarms toward an efficient allocation of resources. Extensive simulations and a PlanetLab deployment show that the system can significantly outperform centralized distribution services as well as swarming systems such as BitTorrent.
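The coordinator's core idea in the abstract above, directing each peer's upload bandwidth among multiple swarms, can be sketched as a simple proportional allocation. This is an illustrative simplification under assumed inputs (per-swarm demand and supply figures), not Antfarm's actual optimization algorithm:

```python
# Hypothetical sketch: a coordinator splits one peer's upload capacity
# among swarms in proportion to each swarm's bandwidth deficit
# (aggregate demand minus aggregate supply). Names and inputs are
# assumptions for illustration, not Antfarm's real protocol.

def allocate_upload(peer_capacity, swarm_demand, swarm_supply):
    """Return {swarm: share of this peer's upload bandwidth}."""
    deficits = {s: max(swarm_demand[s] - swarm_supply.get(s, 0.0), 0.0)
                for s in swarm_demand}
    total = sum(deficits.values())
    if total == 0:  # no swarm is starved: split capacity evenly
        n = len(swarm_demand)
        return {s: peer_capacity / n for s in swarm_demand}
    return {s: peer_capacity * d / total for s, d in deficits.items()}

alloc = allocate_upload(100.0,
                        swarm_demand={"a": 80.0, "b": 40.0},
                        swarm_supply={"a": 60.0, "b": 40.0})
# all of this peer's capacity goes to swarm "a", the only starved swarm
```

A real coordinator would solve this jointly across all peers and track swarm dynamics over time; the proportional rule above only conveys the flavor of steering upload capacity toward under-served swarms.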
Is content publishing in BitTorrent altruistic or profit-driven?
, 2010
Abstract - Cited by 25 (3 self)
BitTorrent is the most popular P2P content delivery application, where individual users share various types of content with tens of thousands of other users. The growing popularity of BitTorrent is primarily due to the availability of valuable content at no cost to consumers. However, apart from the required resources, publishing (sharing) valuable (and often copyrighted) content has serious legal implications for the users who publish the material (publishers). This raises the question of whether (at least the major) content publishers behave altruistically or have other incentives, such as financial ones. In this study, we identify the content publishers of more than 55K torrents in two major BitTorrent portals and examine their behavior. We demonstrate that a small fraction of publishers is responsible for 67% of the published content and 75% of the downloads. Our investigation reveals that these major publishers fit two different profiles. On the one hand, antipiracy agencies and malicious publishers publish a large amount of fake files to protect copyrighted content and spread malware, respectively. On the other hand, content publishing in BitTorrent is largely driven by companies with financial incentives. Therefore, if these companies lose interest or become unable to publish content, BitTorrent traffic/portals may disappear, or at least their associated traffic will be significantly reduced.
Applications
Abstract - Cited by 4 (0 self)
This paper proposes a new delivery-centric abstraction, which extends the existing content-centric interface. Specifically, a delivery-centric interface allows applications to generate content requests agnostic to location or protocol, with the additional ability to stipulate high-level requirements (e.g. performance, encryption support). Fulfilling these requirements, however, is complex, as a provider's ability to satisfy them often varies between consumers and over time. To address this, we also present a middleware, Juno, which is capable of re-configuring between the use of different content protocols and infrastructure. Juno exploits this to make request-time decisions on how best to access content, thus liberating developers from making design-time decisions.
Juno: A Middleware Platform for Supporting Delivery-Centric Applications
Abstract - Cited by 3 (2 self)
This paper proposes a new delivery-centric abstraction, which allows applications to generate content requests agnostic to location or protocol, with the additional ability to stipulate high-level requirements (e.g. performance, security). A delivery-centric system should therefore adapt to fulfil these requirements. This, however, is complex, as a provider's ability to satisfy requirements often varies both between consumers and over time. To address this, we present a middleware called Juno, which implements the proposed abstraction by using a (re-)configurable software architecture to dynamically model, select, and interact with optimal content sources based on the application's requirements.
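The request-time source selection described in the two Juno abstracts above can be sketched as matching provider properties against application requirements. The provider records, field names, and selection rule here are assumptions for illustration, not Juno's actual API:

```python
# Hypothetical sketch of delivery-centric source selection: candidate
# providers advertise properties, the application states minimum
# requirements, and the middleware picks a qualifying provider at
# request time. All names/fields are illustrative, not Juno's real API.

def select_source(providers, requirements):
    """Return the best provider meeting every requirement, or None."""
    def meets(p):
        return all(p.get(k, 0) >= v for k, v in requirements.items())
    candidates = [p for p in providers if meets(p)]
    # among qualifying providers, prefer the highest throughput
    return max(candidates, key=lambda p: p.get("throughput_mbps", 0),
               default=None)

providers = [
    {"name": "http_mirror", "throughput_mbps": 20, "encrypted": 0},
    {"name": "p2p_swarm",   "throughput_mbps": 50, "encrypted": 1},
]
best = select_source(providers, {"throughput_mbps": 10, "encrypted": 1})
# picks "p2p_swarm": the only candidate offering encryption
```

Because provider properties change over time, such a middleware would re-run this selection (and re-configure protocol bindings) per request rather than fixing the choice at design time.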
Characterizing the File Hosting Ecosystem: A View from the Edge
, 2011
Abstract - Cited by 3 (0 self)
We present a comprehensive, longitudinal characterization study of the file hosting ecosystem using HTTP traces collected from a large campus network over a one-year period. We performed detailed multi-level analysis of the usage behaviour, infrastructure properties, content characteristics, and user-perceived performance of the top five services in terms of traffic volume, namely RapidShare, Megaupload, zSHARE, MediaFire, and Hotfile. We carefully devised methods to identify user clickstreams in the HTTP traces, including the identification of free and premium user instances, as well as the identification of content that is split into multiple pieces and downloaded using multiple transactions. Throughout this characterization, we compare and contrast these services with each other as well as with peer-to-peer file sharing and other media sharing services.
An Incentives Model Based on Reputation for P2P Systems
Abstract
Abstract—In this paper, an incentive model to improve collaboration in peer-to-peer networks is introduced. The proposed solution uses an incentives model associated with reputation as a way to improve the performance of a P2P system. The reputation of all peers in the system is based on their donated resources and on their behavior. Supplying peers use these rules to assign their outgoing bandwidth to requesting peers during a content distribution. Each peer can build its best paths by using a best-neighbor policy within its neighborhood. A peer can use its best paths to obtain the best services related to content search or download. The obtained results show that the proposed scheme isolates misbehaving peers and reduces free-riding, so that system performance is maximized. Keywords—content distribution, incentive model, reputation, peer-to-peer networks.
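The abstract's central rule, a supplying peer assigning outgoing bandwidth to requesters according to reputation, can be sketched as a proportional split. Both the split and the reputation-update rule below are illustrative assumptions, not the paper's exact model:

```python
# Hypothetical sketch of reputation-weighted bandwidth assignment:
# a supplying peer divides its outgoing bandwidth among requesters in
# proportion to their reputation, so free-riders (reputation ~ 0)
# receive little or no service. The update rule is illustrative only.

def share_bandwidth(capacity, reputations):
    """Return {peer: bandwidth} proportional to each peer's reputation."""
    total = sum(reputations.values())
    if total == 0:
        return {p: 0.0 for p in reputations}
    return {p: capacity * r / total for p, r in reputations.items()}

def update_reputation(rep, donated, behaved_well, alpha=0.1):
    """Blend in donated resources; halve reputation on misbehavior."""
    rep = (1 - alpha) * rep + alpha * donated
    return rep if behaved_well else rep * 0.5

bw = share_bandwidth(10.0, {"good_peer": 8.0, "free_rider": 0.0})
# the free-rider gets nothing; the good peer receives the full 10.0
```

In the paper's setting, reputation also feeds the best-neighbor path selection; the sketch covers only the bandwidth-assignment side of the incentive.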
A Task-Based Model for the Lifespan of Peer-to-Peer Swarms
Abstract
Abstract. Peer-to-Peer (P2P) techniques are broadly adopted in modern applications such as Xunlei and Private Tracker [1, 2]. To address the problem of service availability, techniques such as bundling and implicit uploading are suggested to increase the swarm lifespan, i.e., the duration between the birth and the death of a swarm, by motivating or even forcing peers to make more contributions. In these systems, it is common for a peer to join a swarm repeatedly, which can introduce substantial bias for lifespan modeling and prediction. In this paper, we present a mathematical model to study the lifespan of a P2P swarming system in the presence of multi-participation. We perform evaluations on three traces and a well-known simulator. The result demonstrates that our model is more accurate than previous ones.
ISP-friendly Peer-assisted On-demand Streaming of Long Duration Content in BBC iPlayer
Abstract
Abstract—In search of scalable solutions, CDNs are exploring P2P support. However, the benefits of peer assistance can be limited by various obstacle factors such as ISP friendliness (requiring peers to be within the same ISP), bitrate stratification (the need to match peers with others needing a similar bitrate), and partial participation (some peers choosing not to redistribute content). This work relates potential gains from peer assistance to the average number of users in a swarm and its capacity, and empirically studies the effects of these obstacle factors at scale, using a month-long trace of over 2 million users in London accessing BBC shows online. Results indicate that even when P2P swarms are localised within ISPs, up to 88% of traffic can be saved. Surprisingly, bitrate stratification results in 2 large sub-swarms and does not significantly affect savings. However, partial participation and the need for a minimum swarm size do affect gains. We investigate improving gains by increasing content availability through two well-studied techniques: content bundling (combining multiple items to increase availability) and historical caching of previously watched items. Bundling proves ineffective, as the increased server traffic from larger bundles outweighs the benefits of availability, but simple caching can considerably boost traffic gains from peer assistance.
A Collaborative P2P Scheme for NAT Traversal Server Discovery based on Topological Information
Abstract
In the current Internet, more than 70% of hosts are located behind Network Address Translators (NATs). This is not a problem in the client/server paradigm. However, the Internet has evolved, and nowadays the major portion of the traffic is due to peer-to-peer (p2p) applications; in such a scenario, two hosts behind NATs (NATed hosts) cannot establish direct communications. The easiest way to solve this problem is by using a third entity, called a Relay, that forwards the traffic between the NATed hosts. Although many efforts have been devoted to avoiding the use of Relays, they are still needed in many situations. Hence, the selection of a suitable Relay becomes critical to many p2p applications. In this paper we propose the Gradual Proximity Algorithm (GPA): a simple algorithm that guarantees the selection of the topologically closest Relay. We present a measurement-based analysis showing that the GPA minimizes both the delay of the relayed communication and the transit traffic generated by the Relay, making it a QoS-aware and ISP-friendly solution. Furthermore, the paper presents the Peer-to-Peer NAT Traversal Architecture
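The notion of picking a "topologically closest" relay can be sketched with a crude proximity proxy: prefer the relay whose IPv4 address shares the longest bit prefix with the NATed host, breaking ties by measured delay. This heuristic is an assumption for illustration, not the paper's actual GPA:

```python
# Illustrative relay selection in the spirit of the GPA abstract above:
# longest shared address prefix as a rough topological-proximity proxy,
# with measured RTT as a tie-breaker. Not the paper's exact algorithm.
import ipaddress

def prefix_len(a, b):
    """Length of the common leading bit prefix of two IPv4 addresses."""
    x = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - x.bit_length()

def pick_relay(host_ip, relays):
    """relays: list of (relay_ip, rtt_ms). Closest prefix wins, then RTT."""
    return min(relays, key=lambda r: (-prefix_len(host_ip, r[0]), r[1]))

relay = pick_relay("10.1.2.3",
                   [("10.1.2.200", 40.0),   # same /24: topologically close
                    ("192.0.2.1", 5.0)])    # low delay but topologically far
# selects ("10.1.2.200", 40.0)
```

Address-prefix length is only a weak stand-in for real topological distance; the paper's measurement-based approach is what actually justifies minimizing both delay and the relay's transit traffic.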
∗University Carlos III Madrid †Institute IMDEA Networks
Abstract
Abstract—BitTorrent is the most successful peer-to-peer application. In recent years the research community has studied the BitTorrent ecosystem by collecting data from real BitTorrent swarms using different measurement techniques. In this paper we present the first survey of these techniques, which constitutes a first step in the design of future measurement techniques and tools for analyzing large-scale systems. The techniques are classified into Macroscopic, Microscopic, and Complementary. Macroscopic techniques collect aggregated information on torrents and offer very high scalability, being able to monitor up to hundreds of thousands of torrents in short periods of time. In contrast, Microscopic techniques operate at the peer level and focus on understanding performance aspects such as the peers' download rates. They offer higher granularity but do not scale as well as the Macroscopic techniques. Finally, Complementary techniques utilize recent extensions to the BitTorrent protocol in order to obtain both aggregated and peer-level information. The paper also summarizes the main challenges faced by the research community in accurately measuring the BitTorrent ecosystem, such as accurately identifying peers or estimating peers' upload rates. Furthermore, we provide possible solutions to address the described challenges.