Results 1 - 10 of 307
A Case for End System Multicast
In Proceedings of ACM SIGMETRICS, 2000
"... Abstract — The conventional wisdom has been that IP is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP Multicast is still plagued with concerns pertaining to scalability, network management, deployment and suppor ..."
Abstract
-
Cited by 1290 (24 self)
- Add to MetaCart
Abstract: The conventional wisdom has been that IP is the natural protocol layer for implementing multicast-related functionality. However, more than a decade after its initial proposal, IP Multicast is still plagued with concerns pertaining to scalability, network management, deployment, and support for higher-layer functionality such as error, flow, and congestion control. In this paper, we explore an alternative architecture that we term End System Multicast, where end systems implement all multicast-related functionality, including membership management and packet replication. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP Multicast. However, the key concern is the performance penalty associated with such a model. In particular, End System Multicast introduces duplicate packets on physical links and incurs larger end-to-end delays than IP Multicast. In this paper, we study these performance concerns in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. Further, end systems attempt to optimize the efficiency of the overlay by adapting to network dynamics and by considering application-level performance. We present details of Narada and evaluate it using both simulation and Internet experiments. Our results indicate that the performance penalties are low from both the application and the network perspectives. We believe the potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred.
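The core mechanic this abstract describes, replicating packets at end hosts over ordinary unicast, can be made concrete with a short sketch. This is not Narada itself (Narada additionally builds a mesh, runs a routing protocol over it, and adapts to measured performance); the OverlayNode class and its fields are invented for illustration.

    # Minimal sketch of end-system packet replication in an overlay multicast
    # tree. Hypothetical structure, not Narada's implementation.
    import socket

    class OverlayNode:
        def __init__(self, listen_port, children):
            # children: (host, port) pairs for overlay neighbors downstream of
            # this node in the per-source tree (assumed given here; Narada
            # derives them from its self-organizing mesh).
            self.children = children
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.sock.bind(("0.0.0.0", listen_port))

        def run(self):
            while True:
                packet, _sender = self.sock.recvfrom(65535)
                self.deliver(packet)             # hand to the local application
                for child in self.children:      # replicate over unicast links
                    self.sock.sendto(packet, child)

        def deliver(self, packet):
            print(f"delivered {len(packet)} bytes")

    # Usage: OverlayNode(9000, [("10.0.0.2", 9000), ("10.0.0.3", 9000)]).run()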
Resilient Overlay Networks
2001
"... A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today’s wide-area routing protocols that take at least several minutes to recover. A R ..."
Abstract
-
Cited by 1160 (31 self)
- Add to MetaCart
(Show Context)
Abstract: A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today’s wide-area routing protocols, which take at least several minutes to recover. A RON is an application-layer overlay on top of the existing Internet routing substrate. The RON nodes monitor the functioning and quality of the Internet paths among themselves and use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. Results from two sets of measurements of a working RON deployed at sites scattered across the Internet demonstrate the benefits of our architecture. For instance, over a 64-hour sampling period in March 2001 across a twelve-node RON, there were 32 significant outages, each lasting over thirty minutes, over the 132 measured paths. RON’s routing mechanism was able to detect, recover from, and route around all of them in less than twenty seconds on average, showing that its methods for fault detection and recovery work well at discovering alternate paths in the Internet. Furthermore, RON was able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 5% of the transfers doubled their TCP throughput, and 5% saw their loss probability reduced by 0.05. We found that forwarding packets via at most one intermediate RON node is sufficient to overcome faults and improve performance in most cases. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end systems.
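The per-packet decision RON makes, direct Internet path versus a detour through one other RON node, reduces to a comparison over measured path metrics. A minimal sketch, assuming a table of measured latencies; real RON also tracks loss rate and throughput and lets the application choose the metric.

    # Sketch of RON-style routing: pick the direct path or the best
    # single-intermediary detour from measured latencies (data invented).
    def best_path(src, dst, latency, nodes):
        """latency[(a, b)]: measured latency between overlay nodes a and b."""
        best_route, best_cost = [src, dst], latency[(src, dst)]
        for via in nodes:
            if via in (src, dst):
                continue
            cost = latency[(src, via)] + latency[(via, dst)]
            if cost < best_cost:
                best_route, best_cost = [src, via, dst], cost
        return best_route, best_cost

    # The direct A-C path is degraded, so the overlay routes via B.
    lat = {("A", "C"): 180.0, ("A", "B"): 20.0, ("B", "C"): 25.0}
    print(best_path("A", "C", lat, ["A", "B", "C"]))  # (['A', 'B', 'C'], 45.0)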
A Blueprint for Introducing Disruptive Technology into the Internet
2002
"... This paper argues that a new class of geographically distributed network services is emerging, and that the most effective way to design, evaluate, and deploy these services is by using an overlay-based testbed. Unlike conventional network testbeds, however, we advocate an approach that supports bot ..."
Abstract
-
Cited by 593 (43 self)
- Add to MetaCart
Abstract: This paper argues that a new class of geographically distributed network services is emerging, and that the most effective way to design, evaluate, and deploy these services is by using an overlay-based testbed. Unlike conventional network testbeds, however, we advocate an approach that supports both researchers who want to develop new services and clients who want to use them. This dual use, in turn, suggests four design principles that are not widely supported in existing testbeds: services should be able to run continuously and access a slice of the overlay’s resources, control over resources should be distributed, overlay management services should be unbundled and run in their own slices, and APIs should be designed to promote application development. We believe a testbed that supports these design principles will facilitate the emergence of a new service-oriented network architecture. Toward this end, the paper also briefly describes PlanetLab, an overlay network being designed with these four principles in mind.
On Inferring Autonomous System Relationships in the Internet
IEEE/ACM Transactions on Networking, 2000
"... ..."
(Show Context)
Enabling Conferencing Applications on the Internet using an Overlay Multicast Architecture
In Proceedings of ACM SIGCOMM, 2001
"... ..."
(Show Context)
Informed Content Delivery Across Adaptive Overlay Networks
2002
"... Overlay networks have emerged as a powerful and highly flexible method for delivering content. We study how to optimize through-put of large, multipoint transfers across richly connected overlay networks, focusing on the question of what to put in each transmit-ted packet. We first make the case for ..."
Abstract
-
Cited by 247 (8 self)
- Add to MetaCart
(Show Context)
Abstract: Overlay networks have emerged as a powerful and highly flexible method for delivering content. We study how to optimize throughput of large, multipoint transfers across richly connected overlay networks, focusing on the question of what to put in each transmitted packet. We first make the case for transmitting encoded content in this scenario, arguing for the digital fountain approach which enables end-hosts to efficiently restitute the original content of size n from a subset of any n symbols from a large universe of encoded symbols. Such an approach affords reliability and a substantial degree of application-level flexibility, as it seamlessly tolerates packet loss, connection migration, and parallel transfers. However, since the sets of symbols acquired by peers are likely to overlap substantially, care must be taken to enable them to collaborate effectively. We provide a collection of useful algorithmic tools for efficient estimation, summarization, and approximate reconciliation of sets of symbols between pairs of collaborating peers, all of which keep messaging complexity and computation to a minimum. Through simulations and experiments on a prototype implementation, we demonstrate the performance benefits of our informed content delivery mechanisms and how they complement existing overlay network architectures.
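One of the algorithmic tools this abstract alludes to is a compact summary a peer can send so that a collaborator avoids transmitting symbols the peer probably already holds. Here is a minimal sketch using a plain Bloom filter; the paper’s actual tools are more refined (e.g., approximate reconciliation of set differences), and every parameter below is illustrative.

    # Summarize a peer's symbol set in a few hundred bytes; membership tests
    # may yield false positives but never false negatives.
    import hashlib

    class BloomFilter:
        def __init__(self, bits=1024, hashes=4):
            self.bits, self.hashes = bits, hashes
            self.table = bytearray(bits // 8)

        def _positions(self, item):
            for i in range(self.hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:4], "big") % self.bits

        def add(self, item):
            for p in self._positions(item):
                self.table[p // 8] |= 1 << (p % 8)

        def probably_contains(self, item):
            return all(self.table[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    # Peer A summarizes its symbols; peer B forwards only what looks new.
    summary = BloomFilter()
    for sym in ["s1", "s2", "s3"]:
        summary.add(sym)
    print([s for s in ["s2", "s3", "s4", "s5"]
           if not summary.probably_contains(s)])  # ['s4', 's5']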
Network Topology Generators: Degree-Based vs. Structural
2002
"... Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks with a deliberately hierarchical structure. However, in 1999 a seminal paper by Faloutsos et al. revealed tha ..."
Abstract
-
Cited by 207 (17 self)
- Add to MetaCart
(Show Context)
Abstract: Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks with a deliberately hierarchical structure. However, in 1999 a seminal paper by Faloutsos et al. revealed that the Internet’s degree distribution follows a power law. Because the degree distributions produced by the Transit-Stub and Tiers generators are not power laws, the research community has largely dismissed them as inadequate and proposed new network generators that attempt to generate graphs with power-law degree distributions.
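Operationally, a power-law degree distribution means node degrees k are drawn so that P(k) is proportional to k^(-alpha). The sketch below draws such a degree sequence by inverse-transform sampling; the exponent and bounds are arbitrary illustration values, and this is not any specific generator discussed in the paper, which must also wire the degree stubs into an actual graph.

    # Draw node degrees from a truncated power law P(k) ~ k^(-alpha).
    import random

    def power_law_degrees(n, alpha=2.2, k_min=1, k_max=1000):
        degrees = []
        for _ in range(n):
            u = random.random()
            # Invert the CDF of the continuous power law on [k_min, k_max].
            k = ((k_max ** (1 - alpha) - k_min ** (1 - alpha)) * u
                 + k_min ** (1 - alpha)) ** (1 / (1 - alpha))
            degrees.append(max(k_min, round(k)))
        return degrees

    degs = power_law_degrees(10_000)
    # Heavy tail: the median degree is tiny while the maximum is large.
    print(sorted(degs)[5_000], max(degs))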
Towards an Accurate AS-Level Traceroute Tool
2003
"... Traceroute is widely used to detect routing problems, characterize end-to-end paths, and discover the Internet topology. Providing an accurate list of the Autonomous Systems (ASes) along the forwarding path would make traceroute even more valuable to researchers and network operators. However, conve ..."
Abstract
-
Cited by 193 (19 self)
- Add to MetaCart
Abstract: Traceroute is widely used to detect routing problems, characterize end-to-end paths, and discover the Internet topology. Providing an accurate list of the Autonomous Systems (ASes) along the forwarding path would make traceroute even more valuable to researchers and network operators. However, conventional approaches to mapping traceroute hops to AS numbers are not accurate enough. Address registries are often incomplete and out-of-date. BGP routing tables provide a better IP-to-AS mapping, though this approach has significant limitations as well. Based on our extensive measurements, about 10% of the traceroute paths have one or more hops that do not map to a unique AS number, and around 15% of the traceroute AS paths have an AS loop. In addition, some traceroute AS paths have extra or missing AS hops due to Internet eXchange Points, sibling ASes managed by the same institution, and ASes that do not advertise routes to their infrastructure. Using the BGP tables as a starting point, we propose techniques for improving the IP-to-AS mapping as an important step toward an AS-level traceroute tool. Our algorithms draw on analysis of traceroute probes, reverse DNS lookups, BGP routing tables, and BGP update messages collected from multiple locations. We also discuss how the improved IP-to-AS mapping allows us to home in on cases where the BGP and traceroute AS paths differ for legitimate reasons.
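The baseline the paper starts from, mapping each traceroute hop to the origin AS of its longest matching BGP prefix, is easy to sketch. The two-entry prefix table below is invented; the paper’s contribution is precisely the corrections this naive mapping needs (IXPs, sibling ASes, unadvertised infrastructure space).

    # Longest-prefix-match IP-to-AS lookup over a (hypothetical) BGP table.
    import ipaddress

    bgp_table = {"12.0.0.0/8": 7018, "12.34.0.0/16": 64512}
    prefixes = sorted(((ipaddress.ip_network(p), asn)
                       for p, asn in bgp_table.items()),
                      key=lambda e: e[0].prefixlen, reverse=True)

    def ip_to_as(hop_ip):
        addr = ipaddress.ip_address(hop_ip)
        for net, asn in prefixes:   # most-specific prefixes come first
            if addr in net:
                return asn
        return None                 # unmapped hop

    print(ip_to_as("12.34.5.6"))  # 64512: the /16 beats the covering /8
    print(ip_to_as("12.1.2.3"))   # 7018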
Meridian: A Lightweight Network Location Service without Virtual Coordinates
In SIGCOMM, 2005
"... This paper introduces a lightweight, scalable and accurate framework, called Meridian, for performing node selection based on network location. The framework consists of an overlay network structured around multi-resolution rings, query routing with direct measurements, and gossip protocols for diss ..."
Abstract
-
Cited by 190 (8 self)
- Add to MetaCart
Abstract: This paper introduces a lightweight, scalable, and accurate framework, called Meridian, for performing node selection based on network location. The framework consists of an overlay network structured around multi-resolution rings, query routing with direct measurements, and gossip protocols for dissemination. We show how this framework can be used to address three commonly encountered problems, namely closest node discovery, central leader election, and locating nodes that satisfy target latency constraints in large-scale distributed systems, without having to compute absolute coordinates. We show analytically that the framework is scalable with logarithmic convergence when Internet latencies are modeled as a growth-constrained metric, a low-dimensional Euclidean metric, or a metric of low doubling dimension. Large-scale simulations based on latency measurements from 6.25 million node pairs, as well as an implementation deployed on PlanetLab, show that the framework is accurate and effective.
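The multi-resolution rings can be made concrete: a Meridian node places a measured peer into ring i when the peer’s latency d satisfies alpha * s^(i-1) < d <= alpha * s^i. A sketch of that assignment, with alpha and s set to arbitrary illustrative values:

    # Assign peers to exponentially growing latency rings.
    import math

    ALPHA_MS = 1.0   # radius of the innermost ring (illustrative)
    S = 2.0          # per-ring growth factor (illustrative)

    def ring_index(latency_ms):
        """Ring i holds peers with ALPHA_MS*S**(i-1) < latency <= ALPHA_MS*S**i."""
        if latency_ms <= ALPHA_MS:
            return 0
        return math.ceil(math.log(latency_ms / ALPHA_MS, S))

    rings = {}
    for peer, rtt in [("p1", 0.8), ("p2", 3.0), ("p3", 70.0)]:
        rings.setdefault(ring_index(rtt), []).append(peer)
    print(rings)  # {0: ['p1'], 2: ['p2'], 7: ['p3']}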
Improving the Reliability of Internet Paths with One-Hop Source Routing
In OSDI, 2004
"... Recent work has focused on increasing availability in the face of Internet path failures. To date, proposed solutions have relied on complex routing and pathmonitoring schemes, trading scalability for availability among a relatively small set of hosts. This paper proposes a simple, scalable approach ..."
Abstract
-
Cited by 175 (9 self)
- Add to MetaCart
(Show Context)
Abstract: Recent work has focused on increasing availability in the face of Internet path failures. To date, proposed solutions have relied on complex routing and path-monitoring schemes, trading scalability for availability among a relatively small set of hosts. This paper proposes a simple, scalable approach to recover from Internet path failures. Our contributions are threefold. First, we conduct a broad measurement study of Internet path failures on a collection of 3,153 Internet destinations consisting of popular Web servers, broadband hosts, and randomly selected nodes. We monitored these destinations from 67 PlanetLab vantage points over a period of seven days, and found availabilities ranging from 99.6% for servers to 94.4% for broadband hosts. When failures do occur, many appear too close to the destination (e.g., last-hop and end-host failures) to be mitigated through alternative routing techniques of any kind. Second, we show that for the failures that can be addressed through routing, a simple, scalable technique, called one-hop source routing, can achieve close to the maximum benefit available with very low overhead. When a path failure occurs, our scheme attempts to recover from it by routing indirectly through a small set of randomly chosen intermediaries. Third, we implemented and deployed a prototype one-hop source routing infrastructure on PlanetLab. Over a three-day period, we repeatedly fetched documents from 982 popular Internet Web servers and used one-hop source routing to attempt to route around the failures we observed. Our results show that our prototype successfully recovered from 56% of network failures. However, we also found a large number of server failures that cannot be addressed through alternative routing. Our research demonstrates that one-hop source routing is easy to implement, adds negligible overhead, and achieves close to the maximum benefit available to indirect routing schemes, without the need for path monitoring, history, or a priori knowledge of any kind.
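The recovery policy itself fits in a few lines: when the direct path fails, retry through k randomly chosen intermediaries (the paper’s random-k policy, with k = 4 performing well). The network calls below are stubs standing in for real send attempts.

    # One-hop source routing, random-k recovery policy (stubbed I/O).
    import random

    def send_direct(dst, data):
        raise IOError("direct path failed")   # stub: pretend the path is down

    def send_via(intermediary, dst, data):
        return True                           # stub: detour succeeded

    def send(dst, data, peers, k=4):
        try:
            return send_direct(dst, data)
        except IOError:
            for via in random.sample(peers, min(k, len(peers))):
                try:
                    return send_via(via, dst, data)
                except IOError:
                    continue
            raise IOError(f"no one-hop detour to {dst} found")

    send("server.example.com", b"GET /", ["n1", "n2", "n3", "n4", "n5"])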