Resilient Overlay Networks, 2001
Cited by 1163 (34 self)
A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today’s wide-area routing protocols that take at least several minutes to recover. A RON is an application-layer overlay on top of the existing Internet routing substrate. The RON nodes monitor the functioning and quality of the Internet paths among themselves, and use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. Results from two sets of measurements of a working RON deployed at sites scattered across the Internet demonstrate the benefits of our architecture. For instance, over a 64-hour sampling period in March 2001 across a twelve-node RON, there were 32 significant outages, each lasting over thirty minutes, over the 132 measured paths. RON’s routing mechanism was able to detect, recover from, and route around all of them in less than twenty seconds on average, showing that its methods for fault detection and recovery work well at discovering alternate paths in the Internet. Furthermore, RON was able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 5% of the transfers doubled their TCP throughput and 5% of our transfers saw their loss probability reduced by 0.05. We found that forwarding packets via at most one intermediate RON node is sufficient to overcome faults and improve performance in most cases. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end-systems.
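The one-intermediate-hop route selection described in the abstract can be sketched as follows; the node names and latency figures are hypothetical illustration data, not from the paper's deployment.

```python
# Sketch of RON-style one-hop overlay routing: pick the best of the
# direct Internet path or a path through one intermediate overlay node,
# according to an application-chosen metric (here, measured latency).
# All node names and latencies below are hypothetical illustration data.

def best_route(src, dst, latency, nodes):
    """Return (path, cost) minimizing latency over direct and one-hop paths."""
    best_path, best_cost = [src, dst], latency[(src, dst)]
    for mid in nodes:
        if mid in (src, dst):
            continue
        cost = latency[(src, mid)] + latency[(mid, dst)]
        if cost < best_cost:
            best_path, best_cost = [src, mid, dst], cost
    return best_path, best_cost

# Hypothetical latency matrix (ms) where the direct A->C path is degraded.
latency = {
    ("A", "C"): 300.0,   # direct path suffering degraded performance
    ("A", "B"): 20.0,
    ("B", "C"): 25.0,
}
path, cost = best_route("A", "C", latency, ["A", "B", "C"])
# path == ["A", "B", "C"]: routing via one intermediate RON node
# recovers from the degraded direct path, matching the paper's finding
# that a single intermediate node suffices in most cases.
```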
End-to-end available bandwidth: Measurement methodology, dynamics, and relation with TCP throughput. In Proceedings of ACM SIGCOMM, 2002
Cited by 408 (20 self)
The available bandwidth (avail-bw) in a network path is of major importance in congestion control, streaming applications, QoS verification, server selection, and overlay networks. We describe an end-to-end methodology, called Self-Loading Periodic Streams (SLoPS), for measuring avail-bw. The basic idea in SLoPS is that the one-way delays of a periodic packet stream show an increasing trend when the stream’s rate is higher than the avail-bw. We implemented SLoPS in a tool called pathload. The accuracy of the tool has been evaluated with both simulations and experiments over real-world Internet paths. Pathload is non-intrusive, meaning that it does not cause significant increases in network utilization, delays, or losses. We used pathload to evaluate the variability (‘dynamics’) of the avail-bw in some paths that cross the USA and Europe. The avail-bw becomes significantly more variable in heavily utilized paths, as well as in paths with limited capacity (probably due to a lower degree of statistical multiplexing). Finally, we examine the relation between avail-bw and TCP throughput. A persistent TCP connection can be used to roughly measure the avail-bw in a path, but TCP saturates the path and significantly increases path delays and jitter.
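The core SLoPS test described above can be sketched as follows. This uses a simple consecutive-increase statistic as an illustration of "increasing trend"; pathload's actual trend metrics differ, and the delay samples are hypothetical.

```python
# Sketch of the SLoPS idea: send a periodic stream at rate R and check
# whether its one-way delays (OWDs) show an increasing trend; if so,
# R exceeds the available bandwidth. The trend statistic here (fraction
# of consecutive increases) is a simplified illustration, not pathload's
# exact metric.

def increasing_trend(owds, threshold=0.6):
    """Return True if the fraction of consecutive OWD increases exceeds threshold."""
    increases = sum(1 for a, b in zip(owds, owds[1:]) if b > a)
    return increases / (len(owds) - 1) > threshold

# Hypothetical OWD samples (ms): a stream probing above the avail-bw
# queues up behind itself, so its delays ramp upward; a stream probing
# below it sees roughly flat, noisy delays.
overloaded = [10.0, 11.2, 12.5, 13.9, 15.0, 16.4]
underloaded = [10.0, 10.3, 9.8, 10.1, 9.9, 10.2]
assert increasing_trend(overloaded) is True
assert increasing_trend(underloaded) is False
```

Pathload brackets the avail-bw by binary-searching on the stream rate R, using this per-stream verdict at each step.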
What Do Packet Dispersion Techniques Measure? In Proceedings of IEEE INFOCOM, 2001
Cited by 314 (8 self)
The packet pair technique estimates the capacity of a path (bottleneck bandwidth) from the dispersion (spacing) experienced by two back-to-back packets [1][2][3]. We demonstrate that the dispersion of packet pairs in loaded paths follows a multimodal distribution, and discuss the queueing effects that cause the multiple modes. We show that the path capacity is often not the global mode, and so it cannot be estimated using standard statistical procedures. The effect of the size of the probing packets is also investigated, showing that the conventional wisdom of using maximum sized packet pairs is not optimal. We then study the dispersion of long packet trains. Increasing the length of the packet train reduces the measurement variance, but the estimates converge to a value, referred to as Asymptotic Dispersion Rate (ADR), that is lower than the capacity. We derive the effect of the cross traffic in the dispersion of long packet trains, showing that the ADR is not the available bandwidth in the path, as was assumed in previous work. Putting all the pieces together, we present a capacity estimation methodology that has been implemented in a tool called pathrate.
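The packet-pair relation and the multimodality the abstract describes can be sketched as follows; the dispersion samples and bin width are hypothetical, and this is not pathrate's actual mode-selection procedure.

```python
# Sketch of packet-pair capacity estimation: each pair gives an estimate
# capacity = packet_size / dispersion. On a loaded path the per-pair
# estimates are multimodal, so we bin them and inspect the modes rather
# than taking a mean. Sample values and bin width are illustrative only.

from collections import Counter

def capacity_modes(packet_size_bits, dispersions_s, bin_mbps=5.0):
    """Histogram per-pair capacity estimates (Mb/s) into coarse bins."""
    caps = [packet_size_bits / d / 1e6 for d in dispersions_s]
    bins = Counter(round(c / bin_mbps) * bin_mbps for c in caps)
    return bins.most_common()

# Hypothetical dispersions (s) for 1500-byte pairs on a 100 Mb/s path:
# cross traffic expands some pairs (lower apparent capacity) and can
# compress others (higher apparent capacity), creating extra modes.
size_bits = 1500 * 8
dispersions = [120e-6, 121e-6, 119e-6, 240e-6, 243e-6, 60e-6]
modes = capacity_modes(size_bits, dispersions)
# The ~100 Mb/s bin (from ~120 us dispersions) is one mode; the ~50 and
# ~200 Mb/s bins are cross-traffic artifacts. As the paper shows, the
# true capacity need not be the *global* mode, which is why a simple
# "take the histogram peak" rule fails.
```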
A Measurement Study of Available Bandwidth Estimation Tools. In IMC, 2003
Cited by 295 (0 self)
Available bandwidth estimation is useful for route selection in overlay networks, QoS verification, and traffic engineering. Recent years have seen a surge in interest in available bandwidth estimation. A few tools have been proposed and evaluated in simulation and over a limited number of Internet paths, but there is still great uncertainty in the performance of these tools over the Internet at large.
Network Topology Generators: Degree-Based vs. Structural, 2002
Cited by 204 (16 self)
Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks with a deliberately hierarchical structure. However, in 1999 a seminal paper by Faloutsos et al. revealed that the Internet's degree distribution is a power-law. Because the degree distributions produced by the Transit-Stub and Tiers generators are not power-laws, the research community has largely dismissed them as inadequate and proposed new network generators that attempt to generate graphs with power-law degree distributions.
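The power-law property underlying the Faloutsos et al. observation can be sketched as follows: on a log-log plot of degree versus frequency, a power-law degree distribution appears as a straight line whose slope is the exponent. The degree histogram below is synthetic, and a least-squares fit on log-transformed data is only a rough illustration, not a rigorous power-law test.

```python
# Sketch: estimate a power-law exponent from a degree histogram via the
# slope of log(count) vs. log(degree). Synthetic data, illustrative only.
import math

def loglog_slope(degrees, counts):
    """Ordinary least-squares slope of log(count) vs. log(degree)."""
    xs = [math.log(d) for d in degrees]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical degree histogram following count ~ degree^(-2.2).
degrees = [1, 2, 4, 8, 16, 32]
counts = [round(10000 * d ** -2.2) for d in degrees]
slope = loglog_slope(degrees, counts)
# slope recovers roughly -2.2, the power-law exponent; hierarchical
# generators like Transit-Stub produce degree histograms that are not
# straight on this plot.
```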
Nettimer: A Tool for Measuring Bottleneck Link Bandwidth. In Proceedings of the USENIX Symposium on Internet Technologies and Systems, 2001
Cited by 203 (1 self)
Measuring the bottleneck link bandwidth along a path is important for understanding the performance of many Internet applications. Existing tools to measure bottleneck bandwidth are relatively slow, can only measure bandwidth in one direction, and/or actively send probe packets. We present the nettimer bottleneck link bandwidth measurement tool, the libdpcap distributed packet capture library, and experiments quantifying their utility. We test nettimer across a variety of bottleneck network technologies ranging from 19.2Kb/s to 100Mb/s, wired and wireless, symmetric and asymmetric bandwidth, across local-area and cross-country paths, while using both one and two packet capture hosts. In most cases, nettimer has an error of less than 10%, but at worst has an error of 40%, even on cross-country paths of 17 or more hops. It converges within 10KB of the first large packet arrival while consuming less than 7% of the network traffic being measured.
Low-Rate TCP-Targeted Denial of Service Attacks. In Proceedings of ACM SIGCOMM, 2003
Cited by 193 (2 self)
Denial of Service attacks present an increasing threat to the global inter-networking infrastructure. While TCP’s congestion control algorithm is highly robust to diverse network conditions, its implicit assumption of end-system cooperation results in a well-known vulnerability to attack by high-rate non-responsive flows. In this paper, we investigate a class of low-rate denial of service attacks which, unlike high-rate attacks, are difficult for routers and counter-DoS mechanisms to detect. Using a combination of analytical modeling, simulations, and Internet experiments, we show that maliciously chosen low-rate DoS traffic patterns that exploit TCP’s retransmission time-out mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection. Moreover, as such attacks exploit protocol homogeneity, we study fundamental limits of the ability of a class of randomized time-out mechanisms to thwart such low-rate DoS attacks.
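The mechanism the abstract describes can be sketched with a deliberately simplified throughput model: short bursts with period T force the victim into retransmission timeouts, so it is silent for roughly minRTO after each burst and transmits only until the next one. This loose model paraphrases the paper's reasoning; the exact analysis in the paper is more detailed.

```python
# Simplified model of a low-rate ("shrew") attack on TCP: a burst every
# T seconds causes loss, the flow backs off for minRTO, then transmits
# until the next burst, so normalized throughput ~ (T - minRTO) / T.
# This is a loose paraphrase of the paper's model, for illustration.

MIN_RTO = 1.0  # seconds; TCP's conventional minimum retransmission timeout

def normalized_throughput(period_s, min_rto=MIN_RTO):
    """Fraction of ideal throughput a TCP flow achieves under periodic bursts."""
    if period_s <= min_rto:
        return 0.0  # each recovery attempt collides with the next burst
    return (period_s - min_rto) / period_s

# A burst every 1 s (matching minRTO) throttles the flow to ~0 while the
# attacker's *average* rate stays low, eluding rate-based detection.
for t in (0.5, 1.0, 1.5, 2.0):
    print(t, normalized_throughput(t))
```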
Pathload: A Measurement Tool for End-to-End Available Bandwidth
Cited by 186 (6 self)
The available bandwidth of a network path P is the maximum throughput that P can provide to a flow without reducing the throughput of the cross traffic in P. We have developed an end-to-end active measurement tool, called pathload, that estimates the available bandwidth of a network path. The basic idea in pathload is that the one-way delays of a periodic packet stream show an increasing trend when the stream's rate is larger than the available bandwidth. In this paper, we describe pathload in detail and show some experimental results that illustrate the tool's accuracy.
Measuring and Analyzing the Characteristics of Napster and Gnutella Hosts, 2003
Cited by 150 (0 self)
The popularity of peer-to-peer multimedia file-sharing applications such as Gnutella and Napster has created a flurry of recent research activity into peer-to-peer architectures. We believe that the proper evaluation of a peer-to-peer system must take into account the characteristics of the peers that choose to participate in it. Surprisingly, however, few of the peer-to-peer architectures currently being developed are evaluated with respect to such considerations. In this paper, we remedy this situation by performing a detailed measurement study of two popular peer-to-peer file-sharing systems, Napster and Gnutella. In particular, our measurement study seeks to characterize the population of end-user hosts that participate in these two systems. This characterization includes the bottleneck bandwidths between these hosts and the Internet at large, IP-level latencies to send packets to these hosts, how often hosts connect and disconnect from the system, how many files hosts share and download, the degree of cooperation between the hosts, and several correlations between these characteristics. Our measurements show that there is significant heterogeneity and lack of cooperation across peers participating in these systems.
Inferring Link Loss Using Striped Unicast Probes, 2001
Cited by 143 (13 self)
In this paper we explore the use of end-to-end unicast traffic as measurement probes to infer link-level loss rates. We build on earlier work that produced efficient estimates for link-level loss rates based on end-to-end multicast traffic measurements. We design experiments based on the notion of transmitting stripes of packets (with no delay between transmission of successive packets within a stripe) to two or more receivers. The purpose of these stripes is to ensure that the correlation in receiver observations matches as closely as possible what would have been observed if the stripe had been replaced by a notional multicast probe that followed the same paths to the receivers. Measurements provide good evidence that a packet pair to distinct receivers introduces considerable correlation, which can be further increased by simply considering longer stripes. We then use simulation to explore how well these stripes translate into accurate link-level loss estimates. We observe good accuracy with packet pairs, with a typical error of about 1%, which significantly decreases as stripe length is increased to 4 packets.
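The inference idea for the simplest topology, a two-receiver tree, can be sketched as follows. If the stripe behaves like a multicast probe, the success rate A of the shared link satisfies A ≈ p1·p2/p12, a standard multicast-inference estimator, where p1 and p2 are each receiver's delivery rates and p12 the joint delivery rate. The simulated loss rates below are hypothetical.

```python
# Sketch of shared-link loss inference on a two-receiver tree: assuming
# stripes correlate receivers like a multicast probe and losses on each
# link are independent, p1 = A*B1, p2 = A*B2, p12 = A*B1*B2, so the
# shared-link success rate is A = p1*p2/p12. Loss rates are illustrative.
import random

def shared_link_loss(observations):
    """Estimate shared-link loss from (rx1, rx2) delivery observations."""
    n = len(observations)
    p1 = sum(r1 for r1, _ in observations) / n
    p2 = sum(r2 for _, r2 in observations) / n
    p12 = sum(r1 and r2 for r1, r2 in observations) / n
    return 1.0 - (p1 * p2) / p12

# Simulate the tree: shared link with 5% loss, then independent 2% / 3%
# losses on the two receiver branches.
random.seed(1)
obs = []
for _ in range(200_000):
    shared = random.random() > 0.05
    obs.append((shared and random.random() > 0.02,
                shared and random.random() > 0.03))
est = shared_link_loss(obs)
# est comes out close to the true shared-link loss rate of 0.05.
```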