Results 1–10 of 105
Sizing Router Buffers
2004
Abstract

Cited by 352 (17 self)
All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP’s congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time, which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = RTT × C, where RTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10 Gb/s router line card needs approximately 250 ms × 10 Gb/s = 2.5 Gbits of buffers, and the amount of buffering grows linearly with the line rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = RTT × C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (RTT × C) / √n, for long-lived or short-lived TCP flows. The consequences for router design are enormous: a 2.5 Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10 Gb/s link carrying 50,000 flows requires only 10 Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.
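The abstract's arithmetic can be checked directly. Below is a minimal sketch (my own, not the authors' code) comparing the rule-of-thumb B = RTT × C with the proposed B = (RTT × C) / √n, using the 10 Gb/s, 250 ms, 50,000-flow figures from the abstract:

```python
# Compare the classic buffer-sizing rule-of-thumb (n_flows=1) with the
# proposed rule B = RTT * C / sqrt(n) from the abstract above.
from math import sqrt

def buffer_bits(rtt_s: float, capacity_bps: float, n_flows: int = 1) -> float:
    """Buffer size in bits; n_flows=1 reproduces the old rule-of-thumb."""
    return rtt_s * capacity_bps / sqrt(n_flows)

rtt = 0.250          # 250 ms average round-trip time
C = 10e9             # 10 Gb/s link
old = buffer_bits(rtt, C)            # rule-of-thumb
new = buffer_bits(rtt, C, 50_000)    # with 50,000 multiplexed flows

print(f"rule-of-thumb: {old/1e9:.2f} Gbits")   # 2.50 Gbits
print(f"RTT*C/sqrt(n): {new/1e6:.1f} Mbits")   # 11.2 Mbits
```

The √n result lands near the "only 10 Mbits" figure quoted in the abstract, which is what makes on-chip SRAM feasible.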
Samsara: Honor among thieves in peer-to-peer storage.
In Proc. SOSP’03, 2003
Abstract

Cited by 161 (2 self)
Peer-to-peer storage systems assume that their users consume resources in proportion to their contribution. Unfortunately, users are unlikely to do this without some enforcement mechanism. Prior solutions to this problem require centralized infrastructure, constraints on data placement, or ongoing administrative costs. All of these run counter to the design philosophy of peer-to-peer systems. Samsara enforces fairness in peer-to-peer storage systems without requiring trusted third parties, symmetric storage relationships, monetary payment, or certified identities. Each peer that requests storage of another must agree to hold a claim in return: a placeholder that accounts for available space. After an exchange, each partner checks the other to ensure faithfulness. Samsara punishes unresponsive nodes probabilistically. Because objects are replicated, nodes with transient failures are unlikely to suffer data loss, unlike those that are dishonest or chronically unavailable. Claim storage overhead can be reduced when necessary by forwarding among chains of nodes, and eliminated when cycles are created. Forwarding chains increase the risk of exposure to failure, but such risk is modest under reasonable assumptions of utilization and simultaneous, persistent failure.
Congestion control for high performance, stability, and fairness in general networks
IEEE/ACM Transactions on Networking, 2005
Abstract

Cited by 86 (14 self)
This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. The focus is on developing decentralized control laws at end-systems and routers at the level of fluid-flow models that can provably satisfy such properties in arbitrary networks, and on subsequently approximating these features through practical packet-level implementations. Two families of control laws are developed. The first, “dual” control law is able to achieve the first three objectives for arbitrary networks and delays, but is forced to constrain the resource allocation policy. We subsequently develop a “primal-dual” law that overcomes this limitation and allows sources to match their steady-state preferences at a slower time scale, provided a bound on round-trip times is known. We develop two packet-level implementations of this protocol, using 1) ECN marking, and 2) queueing delay, as means of communicating the congestion measure from links to sources. We demonstrate using ns-2 simulations the stability of the protocol and its equilibrium features in terms of utilization, queueing and fairness, under a variety of scaling parameters.
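As a rough illustration of the dual approach described in this abstract, the following toy loop (my own simplification, not the paper's control law) has a link integrate a congestion price from excess rate while sources pick rates from a decreasing demand function; with no propagation delay, the aggregate rate settles at link capacity:

```python
# Toy "dual" congestion controller: the link raises its price while
# demand exceeds capacity and lowers it otherwise; each source's rate
# is a decreasing function of price. All numbers here are illustrative.
capacity = 10.0
n_sources = 5
price = 0.0
gamma = 0.05                      # price adjustment step size
for _ in range(2000):
    # Each source chooses its rate from a simple demand curve.
    rates = [4.0 / (1.0 + price) for _ in range(n_sources)]
    total = sum(rates)
    # Dual/price update: integrate the excess rate at the link.
    price = max(0.0, price + gamma * (total - capacity))

# Equilibrium: 5 * 4/(1+p) = 10  =>  p = 1, aggregate rate = capacity.
print(round(sum(4.0 / (1.0 + price) for _ in range(n_sources)), 2))  # 10.0
```

In the fluid models the paper studies, the analogous price signal is conveyed to sources via ECN marks or queueing delay rather than an explicit variable.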
Linear Stability of TCP/RED and a Scalable Control
2003
Abstract

Cited by 72 (20 self)
We demonstrate that the dynamic behavior of queue and average window is determined predominantly by the stability of TCP/RED, not by AIMD probing or noise traffic. We develop a general multi-link multi-source model for TCP/RED and derive a local stability condition in the case of a single link with heterogeneous sources. We validate our model with simulations and illustrate the stability region of TCP/RED. These results suggest that TCP/RED becomes unstable when delay increases, or, more strikingly, when link capacity increases. The analysis illustrates the difficulty of setting RED parameters to stabilize TCP: they can be tuned to improve stability, but only at the cost of large queues, even when they are dynamically adjusted.
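For concreteness, the RED parameters the abstract refers to are the thresholds and maximum probability of RED's marking curve. A small sketch of the standard (non-gentle) RED marking function, included only to make the tuning knobs concrete; the paper's model may differ in details:

```python
# Standard RED marking probability as a function of average queue length:
# zero below min_th, a linear ramp up to max_p at max_th, and certain
# mark/drop beyond max_th. (Illustrative sketch; variants such as
# "gentle" RED change the region above max_th.)
def red_mark_prob(avg_q: float, min_th: float, max_th: float, max_p: float) -> float:
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

# With min_th=10, max_th=30, max_p=0.1, an average queue of 20 packets
# sits halfway up the ramp:
print(red_mark_prob(20, 10, 30, 0.1))  # 0.05
```

The stability trade-off in the abstract lives in these knobs: making the ramp gentler (larger max_th, smaller max_p) damps oscillation but lets the average queue grow.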
Exploiting the Transients of Adaptation for RoQ Attacks on Internet Resources
In Proceedings of the 12th IEEE International Conference on Network Protocols (ICNP’04), 2004
Abstract

Cited by 59 (12 self)
In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well-orchestrated attack could introduce significant inefficiencies that could potentially deprive a network element of much of its capacity, or significantly reduce its service quality, while evading detection by consuming an unsuspicious, small fraction of that element's hijacked capacity. This type of attack stands in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as to recently proposed attacks that exploit specific protocol settings such as TCP timeouts. We exemplify what we term Reduction of Quality (RoQ) attacks by exposing the vulnerabilities of common adaptation mechanisms. We develop control-theoretic models and associated metrics to quantify these vulnerabilities. We present numerical and simulation results, which we validate with observations from real Internet experiments. Our findings motivate the need for the development of adaptation mechanisms that are resilient to these new forms of attacks.
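The transient mechanism can be caricatured in a few lines. In this toy model (my own simplification, not the paper's control-theoretic one), an attacker's short burst every T seconds forces a multiplicative window decrease on an AIMD flow, and additive recovery between bursts pins average utilization well below capacity even though the attacker is active only a tiny fraction of the time:

```python
# Toy AIMD transient model: a loss burst every burst_period seconds halves
# the window; between bursts the window grows additively. The steady cycle
# is a sawtooth whose mean can sit far below capacity.
def avg_utilization(capacity: float, burst_period: float,
                    increase_per_s: float = 1.0) -> float:
    """Average window as a fraction of capacity under periodic loss bursts.

    In the steady cycle the window ramps linearly from w/2 back up to
    w = min(capacity, w/2 + a*T) at the next burst (fixed point of the
    halve-then-ramp map)."""
    a, T = increase_per_s, burst_period
    w_peak = min(capacity, 2 * a * T)   # fixed point of w = w/2 + a*T, capped
    w_low = w_peak / 2
    return (w_low + w_peak) / 2 / capacity   # mean of the linear ramp

# A flow that needs ~100 s to fill the link, knocked down every 20 s:
print(avg_utilization(capacity=100.0, burst_period=20.0))  # 0.3
```

Shrinking `burst_period` (attacking more often) drives utilization toward zero, which is the transient vulnerability the abstract describes.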
A Mathematical Framework for Designing a Low-Loss, Low-Delay Internet
Network and Spatial Economics, 2003
Abstract

Cited by 54 (7 self)
We survey some recent results on modeling, analysis and design of congestion control schemes for the Internet. Using tools from convex optimization and control theory, we show that congestion controllers can be viewed as distributed algorithms for achieving fair resource allocation among competing sources. We illustrate the use of simple mathematical models to analyze the behavior of currently deployed Internet congestion control protocols, as well as to design new protocols for networks with large capacities, delays, and general topologies. These new protocols are designed to nearly eliminate loss and queueing delay in the Internet, while achieving high utilization and any desired fairness.
Limit Behavior of ECN/RED Gateways Under a Large Number of TCP Flows
In Proceedings of IEEE INFOCOM, 2003
Abstract

Cited by 51 (6 self)
We consider a stochastic model of an ECN/RED gateway with competing TCP sources sharing the capacity. As the number of competing flows becomes large, the queue behavior at the gateway can be described by a two-dimensional recursion, and the throughput behavior of individual TCP flows becomes asymptotically independent. The steady-state regime of the limiting behavior can be calculated from a well-known TCP throughput model with fixed loss probability. In addition, a Central Limit Theorem is presented, yielding insight into the relationship between the queue fluctuation and the marking probability function. We confirm the results by simulations and discuss their implications for network dimensioning.
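The "well-known TCP throughput model with fixed loss probability" is usually written in square-root form, throughput ≈ MSS · √(3/2) / (RTT · √p). A hedged sketch follows; this is the common simplified formula and may not be the exact variant the paper uses:

```python
# Square-root TCP throughput formula: steady-state throughput for a fixed
# per-packet loss (or ECN marking) probability p. Illustrative sketch only.
from math import sqrt

def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_prob: float) -> float:
    """Approximate long-run TCP throughput in bits/s."""
    return 8 * mss_bytes * sqrt(1.5) / (rtt_s * sqrt(loss_prob))

# 1500-byte segments, 100 ms RTT, 1% marking probability:
print(f"{tcp_throughput_bps(1500, 0.1, 0.01) / 1e6:.2f} Mb/s")  # 1.47 Mb/s
```

In the large-flow limit the abstract describes, each flow sees an approximately constant marking probability, which is what makes this fixed-p formula applicable.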
Stabilized Vegas
2002
Abstract

Cited by 43 (14 self)
We show that the current TCP Vegas algorithm can become unstable in the presence of network delay and propose a modification that stabilizes it. The stabilized Vegas remains completely source-based and can be implemented without any network support. We suggest an incremental deployment strategy for stabilized Vegas when the network contains a mix of links, some with active queue management and some without.
A new TCP/AQM for Stable Operation in Fast Networks
In Proc. IEEE INFOCOM, 2003
Abstract

Cited by 37 (7 self)
This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. In earlier work we had developed fluid-level control laws that achieve the first three objectives for arbitrary networks and delays, but were forced to constrain the resource allocation policy. In this paper we extend the theory to include dynamics at TCP sources, preserving the earlier features at fast time scales, but permitting sources to match their steady-state preferences, provided a bound on round-trip times is known.
Rate-Based versus Queue-Based Models of Congestion Control
In Proceedings of ACM SIGMETRICS, 2004
Abstract

Cited by 30 (3 self)
Mathematical models of congestion control capture the congestion indication mechanism at the router in two different ways: rate-based models, where the queue length at the router does not explicitly appear in the model, and queue-based models, where the queue length at the router is explicitly a part of the model. Even though most congestion indication mechanisms use the queue length to compute the packet marking or dropping probability to indicate congestion, we argue that, depending upon the choice of the parameters of the AQM scheme, one would obtain a rate-based model or a rate-and-queue-based model as the deterministic limit of a stochastic system with a large number of users.