Results 1 - 10 of 218
Congestion control for high bandwidth-delay product networks
- SIGCOMM '02, 2002
"... Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and mo ..."
Abstract
-
Cited by 454 (4 self)
- Add to MetaCart
(Show Context)
Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and more large-delay satellite links. To address this problem, we develop a novel approach to Internet congestion control that outperforms TCP in conventional environments, and remains efficient, fair, scalable, and stable as the bandwidth-delay product increases. This new eXplicit Control Protocol, XCP, generalizes the Explicit Congestion Notification proposal (ECN). In addition, XCP introduces the new concept of decoupling utilization control from fairness control. This allows a more flexible and analytically tractable protocol design and opens new avenues for service differentiation. Using a control theory framework, we model XCP and demonstrate it is stable and efficient regardless of the link capacity, the round trip delay, and the number of sources. Extensive packet-level simulations show that XCP outperforms TCP in both conventional and high bandwidth-delay environments. Further, XCP achieves fair bandwidth allocation, high utilization, small standing queue size, and near-zero packet drops, with both steady and highly varying traffic. Additionally, the new protocol does not maintain any per-flow state in routers and requires few CPU cycles per packet, which makes it implementable in high-speed routers.
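The decoupling described above can be pictured with a short sketch. The following is an illustrative approximation of an XCP-style router control interval, not the paper's code; the gain constants and the equal split in the fairness step are assumptions made for brevity (XCP apportions feedback with AIMD-style shuffling rather than an equal split).

```python
# Illustrative XCP-style router control interval (a sketch, not the paper's code).
# Quantities are measured over one average RTT; the gains below are assumed values.

ALPHA = 0.4    # efficiency-controller gain on spare bandwidth (assumption)
BETA = 0.226   # efficiency-controller gain on the standing queue (assumption)

def aggregate_feedback(capacity, input_rate, persistent_queue, avg_rtt):
    """Efficiency controller: compute how much total traffic to add or remove,
    using only aggregate spare bandwidth and queue, i.e. no per-flow state."""
    spare = capacity - input_rate
    return ALPHA * avg_rtt * spare - BETA * persistent_queue

def per_packet_feedback(phi, packets_this_interval):
    """Fairness controller: apportion the aggregate feedback phi over the packets
    seen in this interval. This toy version splits phi equally; XCP instead
    shuffles bandwidth AIMD-style so allocations converge to fairness while
    utilization is governed separately by phi."""
    n = max(len(packets_this_interval), 1)
    return [phi / n] * n
```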
A Duality Model of TCP and Queue Management Algorithms
- IEEE/ACM Trans. on Networking, 2002
"... We propose a duality model of congestion control and apply it to understand the equilibrium properties of TCP and active queue management schemes. Congestion control is the interaction of source rates with certain congestion measures at network links. The basic idea is to regard source rates as p ..."
Abstract
-
Cited by 307 (37 self)
- Add to MetaCart
We propose a duality model of congestion control and apply it to understand the equilibrium properties of TCP and active queue management schemes. Congestion control is the interaction of source rates with certain congestion measures at network links. The basic idea is to regard source rates as primal variables and congestion measures as dual variables, and congestion control as a distributed primal-dual algorithm carried out over the Internet to maximize aggregate utility subject to capacity constraints. The primal iteration is carried out by TCP algorithms such as Reno or Vegas, and the dual iteration is carried out by queue management such as DropTail, RED or REM. We present these algorithms and their generalizations, derive their utility functions, and study their interaction.
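The primal-dual reading sketched in this abstract corresponds to the standard network utility maximization setup; stated in generic notation (which may differ from the paper's):

```latex
% Primal: source rates x_s maximize aggregate utility under link capacities c_l
\max_{x \ge 0} \; \sum_s U_s(x_s)
\quad \text{s.t.} \quad \sum_{s:\, l \in L(s)} x_s \le c_l \quad \forall l
% Source (primal) update, driven by the end-to-end price q_s = \sum_{l \in L(s)} p_l:
x_s(t+1) = {U_s'}^{-1}\big(q_s(t)\big)
% Link (dual) update, a gradient step on the congestion measure p_l:
p_l(t+1) = \Big[\, p_l(t) + \gamma \big( y_l(t) - c_l \big) \Big]^{+},
\qquad y_l(t) = \sum_{s:\, l \in L(s)} x_s(t)
```

In this reading, TCP variants such as Reno or Vegas approximate the source update for particular utility functions, while queue management schemes such as DropTail, RED, or REM determine how the congestion measure p_l is generated and fed back.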
Internet Congestion Control for Future High Bandwidth-Delay Product Environments
- ACM SIGCOMM, 2002
"... Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and mo ..."
Abstract
-
Cited by 130 (0 self)
- Add to MetaCart
(Show Context)
Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and more large-delay satellite links. To address ...
TCP Nice: A Mechanism for Background Transfers
2002
"... background transfers transfers of data that humans are not waiting for to improve availability, reliability, latency or consistency. However, given the rapid fluctuations of available network bandwidth and changing resource costs due to technology trends, hand tuning the aggressiveness of background ..."
Abstract
-
Cited by 120 (12 self)
- Add to MetaCart
... background transfers, transfers of data that humans are not waiting for, to improve availability, reliability, latency or consistency. However, given the rapid fluctuations of available network bandwidth and changing resource costs due to technology trends, hand tuning the aggressiveness of background transfers risks (1) complicating applications, (2) being too aggressive and interfering with other applications, and (3) being too timid and not gaining the benefits of background transfers. Our goal is for the operating system to manage network resources in order to provide a simple abstraction of near zero-cost background transfers. Our system, TCP Nice, can provably bound the interference inflicted by background flows on foreground flows in a restricted network model. And our microbenchmarks and case study applications suggest that in practice it interferes little with foreground flows, reaps a large fraction of spare network bandwidth, and simplifies application construction and deployment. For example, in our prefetching case study application, aggressive prefetching improves demand performance by a factor of three when Nice manages resources; but the same prefetching hurts demand performance by a factor of six under standard network congestion control.
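One way to picture how a background protocol can yield to foreground traffic is a delay-triggered backoff check in the spirit of the abstract; the threshold and trigger fraction below are illustrative assumptions, not the paper's calibrated parameters.

```python
# Sketch of a Nice-style early-congestion check, evaluated over one RTT of samples.
# DELAY_THRESH and TRIGGER_FRAC are illustrative assumptions.

DELAY_THRESH = 0.2   # flag a sample whose RTT is >20% of the way from min to max RTT
TRIGGER_FRAC = 0.5   # back off if over half of the samples in the window are flagged

def nice_window_update(cwnd, rtt_samples, rtt_min, rtt_max):
    """Halve the congestion window when rising queueing delay signals congestion,
    i.e. before losses would force ordinary TCP to react."""
    if not rtt_samples:
        return cwnd
    cutoff = rtt_min + DELAY_THRESH * (rtt_max - rtt_min)
    flagged = sum(1 for rtt in rtt_samples if rtt > cutoff)
    if flagged / len(rtt_samples) > TRIGGER_FRAC:
        # A background sender may also let cwnd fall below one packet,
        # effectively pacing slower than one segment per RTT.
        return max(cwnd / 2.0, 0.125)
    return cwnd
```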
Dynamic Behavior of Slowly-Responsive Congestion Control Algorithms
- In Proceedings of ACM SIGCOMM, 2001
"... Abstract The recently developed notion of TCP-compatibility has led to a number of proposals for alternative congestion control algorithms whose long-term throughput as a function of a steady-state loss rate is similar to that of TCP. Motivated by the needs of some streaming and multicast applicati ..."
Abstract
-
Cited by 103 (10 self)
- Add to MetaCart
The recently developed notion of TCP-compatibility has led to a number of proposals for alternative congestion control algorithms whose long-term throughput as a function of a steady-state loss rate is similar to that of TCP. Motivated by the needs of some streaming and multicast applications, these algorithms seem poised to take the current TCP-dominated Internet to an Internet where many congestion control algorithms co-exist. An important characteristic of these alternative algorithms is that they are slowly-responsive, refraining from reacting as drastically as TCP to a single packet loss. However, the TCP-compatibility criteria explored so far in the literature consider only the static condition of a fixed loss rate. This paper investigates the behavior of slowly-responsive, TCP-compatible congestion control algorithms under more realistic dynamic network conditions, addressing the fundamental question of whether these algorithms are safe to deploy in the public Internet. We study persistent loss rates, long- and short-term fairness properties, bottleneck link utilization, and smoothness of transmission rates.
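The static TCP-compatibility criterion referred to here is usually expressed through the TCP throughput equation; one commonly used form, with segment size s, round-trip time R, retransmission timeout t_RTO, and steady-state loss rate p, is:

```latex
% TCP-friendly long-term sending rate as a function of the steady-state loss rate p:
T(p) \;\approx\;
\frac{s}{\,R\sqrt{\dfrac{2p}{3}} \;+\; t_{RTO}\,\Big(3\sqrt{\dfrac{3p}{8}}\Big)\, p\,\big(1 + 32p^{2}\big)}
```

An algorithm is deemed TCP-compatible if its long-term throughput stays at or below this curve; the paper's concern is that matching this static relation does not by itself establish safe behavior under dynamic loss conditions.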
Packet Loss Recovery for Streaming Video
- In 12th International Packet Video Workshop, 2002
"... While there is an increasing demand for streaming video applications on the Internet, various network characteristics make the deployment of these applications more challenging than traditional TCP-based applications like email and the Web. Packet loss can be detrimental to compressed video with int ..."
Abstract
-
Cited by 73 (2 self)
- Add to MetaCart
(Show Context)
While there is an increasing demand for streaming video applications on the Internet, various network characteristics make the deployment of these applications more challenging than traditional TCP-based applications like email and the Web. Packet loss can be detrimental to compressed video with interdependent frames because errors potentially propagate across many frames. While latency requirements do not permit retransmission of all lost data, we leverage the characteristics of MPEG-4 to selectively retransmit only the most important data in the bitstream. When latency constraints do not permit retransmission, we propose a mechanism for recovering this data using postprocessing techniques at the receiver. We quantify the effects of packet loss on the quality of MPEG-4 video, develop an analytical model to explain these effects, present a system to adaptively deliver MPEG-4 video in the face of packet loss and variable Internet conditions, and evaluate the effectiveness of the system under various network conditions.
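A simplified way to express the selective-retransmission decision the abstract describes (an illustrative policy, not necessarily the paper's exact rule) is shown below.

```python
# Illustrative selective-retransmission check for video with interdependent frames.
# Frame priorities and the timing test are assumptions based on the abstract.

FRAME_PRIORITY = {"I": 2, "P": 1, "B": 0}  # reference frames matter most, since
                                           # their loss propagates to later frames

def should_retransmit(frame_type, playout_deadline, now, rtt_estimate):
    """Request a lost packet again only if (a) it carries data other frames
    depend on and (b) the retransmission can plausibly arrive before playout.
    Otherwise rely on receiver-side post-processing / error concealment."""
    important = FRAME_PRIORITY.get(frame_type, 0) > 0
    arrives_in_time = now + rtt_estimate < playout_deadline
    return important and arrives_in_time
```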
One More Bit Is Enough
- In Proceedings of ACM SIGCOMM, 2005
"... Achieving efficient and fair bandwidth allocation while minimizing packet loss and bottleneck queue in high bandwidthdelay product networks has long been a daunting challenge. Existing end-to-end congestion control (e.g., TCP) and traditional congestion notification schemes (e.g., TCP+AQM/ ECN) have ..."
Abstract
-
Cited by 67 (1 self)
- Add to MetaCart
(Show Context)
Achieving efficient and fair bandwidth allocation while minimizing packet loss and bottleneck queue in high bandwidth-delay product networks has long been a daunting challenge. Existing end-to-end congestion control (e.g., TCP) and traditional congestion notification schemes (e.g., TCP+AQM/ECN) have significant limitations in achieving this goal. While the XCP protocol addresses this challenge, it requires multiple bits to encode the congestion-related information exchanged between routers and end-hosts. Unfortunately, there is no space in the IP header for these bits, and solving this problem involves a non-trivial and time-consuming standardization process. In this paper, we design and implement a simple, low-complexity protocol, called Variable-structure congestion Control Protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves comparable performance to XCP, i.e., high utilization, negligible packet loss rate, low persistent queue length, and reasonable fairness. On the downside, VCP converges significantly slower to a fair allocation than XCP. We evaluate the performance of VCP using extensive ns2 simulations over a wide range of network scenarios and find that it significantly outperforms many recently-proposed TCP variants, such as HSTCP, FAST, and CUBIC. To gain insight into the behavior of VCP, we analyze a simplified fluid model and prove its global stability for the case of a single bottleneck shared by synchronous flows with identical round-trip times.
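The two-bit feedback can be pictured as a three-level load signal that selects among multiplicative increase, additive increase, and multiplicative decrease at the sender; the thresholds and gains in the sketch below are illustrative and may differ from the protocol's actual constants.

```python
# Sketch of VCP-style behavior (illustrative values, not the protocol's spec).

def router_load_code(utilization):
    """Router side: quantize the measured load factor into one of three codes
    that fit in the two ECN bits."""
    if utilization < 0.80:
        return "LOW"        # link under-utilized
    if utilization <= 1.00:
        return "HIGH"       # operating near capacity
    return "OVERLOAD"       # demand exceeds capacity, queue building

def vcp_window_update(cwnd, code, xi=0.0625, alpha=1.0, beta=0.875):
    """Sender side: MI when under-utilized, AI near capacity, MD on overload."""
    if code == "LOW":
        return cwnd * (1 + xi)   # multiplicative increase
    if code == "HIGH":
        return cwnd + alpha      # additive increase
    return cwnd * beta           # multiplicative decrease
```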
Theories and Models for Internet Quality of Service
2002
"... We survey recent advances in theories and models for Internet Quality of Service (QoS). We start with the theory of network calculus, which lays the foundation for support of deterministic performance guarantees in networks, and illustrate its applications to integrated services, differentiated serv ..."
Abstract
-
Cited by 64 (1 self)
- Add to MetaCart
We survey recent advances in theories and models for Internet Quality of Service (QoS). We start with the theory of network calculus, which lays the foundation for support of deterministic performance guarantees in networks, and illustrate its applications to integrated services, differentiated services, and streaming media playback delays. We also present mechanisms and architecture for scalable support of guaranteed services in the Internet, based on the concept of a stateless core. Methods for scalable control operations are also briefly discussed. We then turn our attention to statistical performance guarantees, and describe several new probabilistic results that can be used for a statistical dimensioning of differentiated services. Lastly, we review recent proposals and results in supporting performance guarantees in a best effort context. These include models for elastic throughput guarantees based on TCP performance modeling, techniques for some quality of service differentiation without access control, and methods that allow an application to control the performance it receives, in the absence of network support.
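As a concrete instance of the deterministic guarantees network calculus yields, the classical single-node bounds for a token-bucket-constrained flow served at a guaranteed rate are (standard textbook results, stated in generic notation):

```latex
% Flow constrained by the arrival curve \alpha(t) = \sigma + \rho t (token bucket),
% served by a rate-latency service curve \beta(t) = R\,(t - T)^{+} with R \ge \rho:
\text{delay bound: } D \le T + \frac{\sigma}{R}
\qquad
\text{backlog bound: } B \le \sigma + \rho\, T
```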
Approximate Fairness through Differential Dropping
2001
"... Many researchers have argued that the Internet architecture would be more robust and more accommodating of heterogeneity if routers allocated bandwidth fairly. However, most of the mechanisms proposed to accomplish this, such as Fair Queueing [16], [6] and its many variants [2], [23], [15], involve ..."
Abstract
-
Cited by 61 (7 self)
- Add to MetaCart
Many researchers have argued that the Internet architecture would be more robust and more accommodating of heterogeneity if routers allocated bandwidth fairly. However, most of the mechanisms proposed to accomplish this, such as Fair Queueing [16], [6] and its many variants [2], [23], [15], involve complicated packet scheduling algorithms. These algorithms, while increasingly common in router designs, may not be inexpensively implementable at extremely high speeds; thus, finding more easily implementable variants of such algorithms may be of significant practical value. This paper proposes an algorithm that -- similar to FRED [13], CSFQ [24], and several other designs [17], [14], [5], [25] -- combines FIFO packet scheduling with differential dropping on arrival. Our design, called Approximate Fair Dropping (AFD), bases these dropping decisions on the recent history of packet arrivals. AFD retains a simple forwarding path and requires an amount of additional state that is small compared to current packet buffers. Simulation results, which we describe here, suggest that the design provides a reasonable degree of fairness in a wide variety of operating conditions. The performance of our approach is aided by the fact that the vast majority of Internet flows are slow but the fast flows send the bulk of the bits. This allows a small sample of recent history to provide accurate rate estimates of the fast flows.
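The differential-dropping idea reduces to a few lines; the sketch below is illustrative rather than the authors' pseudocode, and assumes per-flow arrival rates are estimated from a small sample of recent packets.

```python
import random

# Illustrative AFD-style drop decision: FIFO scheduling is kept, and fairness is
# approximated by dropping arrivals from fast flows with a rate-dependent
# probability. The rate estimate would come from a small sample of recent
# arrivals (a shadow buffer), not from per-flow counters.

def afd_drop(flow_rate_estimate, fair_rate):
    """Drop with probability 1 - fair/r so a flow arriving at rate r gets
    roughly fair_rate through the queue; slower flows are never dropped here."""
    if flow_rate_estimate <= fair_rate:
        return False
    p_drop = 1.0 - fair_rate / flow_rate_estimate
    return random.random() < p_drop
```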
Exploiting the Transients of Adaptation for RoQ Attacks on Internet Resources
- In Proceedings of the 12th IEEE International Conference on Network Protocols (ICNP '04), 2004
"... In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well orchestrated attack could introduce significant inefficiencies that could potentially deprive a network el ..."
Abstract
-
Cited by 59 (12 self)
- Add to MetaCart
(Show Context)
In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well-orchestrated attack could introduce significant inefficiencies that could potentially deprive a network element of much of its capacity, or significantly reduce its service quality, while evading detection by consuming an unsuspicious, small fraction of that element's hijacked capacity. This type of attack stands in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as recently proposed attacks that exploit specific protocol settings such as TCP timeouts. We exemplify what we term as Reduction of Quality (RoQ) attacks by exposing the vulnerabilities of common adaptation mechanisms. We develop control-theoretic models and associated metrics to quantify these vulnerabilities. We present numerical and simulation results, which we validate with observations from real Internet experiments. Our findings motivate the need for the development of adaptation mechanisms that are resilient to these new forms of attacks.
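A toy calculation of the trade-off such attacks exploit is sketched below; the burst/recovery model and the potency metric (damage per unit of attack traffic) are assumptions made for illustration, not the paper's exact formulation.

```python
# Toy model of a RoQ-style attack: short bursts with period T repeatedly knock an
# adaptive (AIMD-like) flow back, so it spends most of its time recovering while
# the attacker's average rate stays small.

def roq_potency(capacity_bps, burst_sec, period_sec, recovery_sec):
    """Return (avg_goodput_loss_bps, avg_attack_rate_bps, potency) per period."""
    # Crude stand-in for AIMD recovery: goodput is ~0 during the burst, then
    # ramps back linearly over recovery_sec (capped by the remaining period).
    ramp = min(recovery_sec, max(period_sec - burst_sec, 0.0))
    lost_bits = capacity_bps * (burst_sec + 0.5 * ramp)
    attack_bits = capacity_bps * burst_sec       # burst briefly fills the bottleneck
    avg_attack_rate = attack_bits / period_sec   # small when the period is long
    potency = lost_bits / attack_bits            # damage per unit of attack traffic
    return lost_bits / period_sec, avg_attack_rate, potency

# Example: 1-second bursts every 20 s against a 10 Mb/s link with 5 s recovery
# impose a goodput loss several times the attacker's average rate of 0.5 Mb/s.
```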