Results 1–10 of 156
Sizing Router Buffers
, 2004
"... All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP’s congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100 % of the time; which is equivalen ..."
Abstract

Cited by 350 (18 self)
 Add to MetaCart
(Show Context)
All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = RTT × C, where RTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10 Gb/s router linecard needs approximately 250 ms × 10 Gb/s = 2.5 Gbits of buffers; and the amount of buffering grows linearly with the line rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = RTT × C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (RTT × C)/√n, for long-lived or short-lived TCP flows. The consequences on router design are enormous: A 2.5 Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10 Gb/s link carrying 50,000 flows requires only 10 Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.
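The abstract's arithmetic is easy to check with a short sketch; the helper name and the sample flow counts below are illustrative (the flow counts are the ones quoted in the abstract):

```python
from math import sqrt

def buffer_bits(rtt_s, rate_bps, n_flows=1):
    """Buffer size in bits: the classic rule-of-thumb B = RTT * C,
    scaled down by sqrt(n) when n flows share the link."""
    return rtt_s * rate_bps / sqrt(n_flows)

rule_of_thumb = buffer_bits(0.250, 10e9)          # 2.5e9 bits = 2.5 Gbit
many_flows    = buffer_bits(0.250, 10e9, 50_000)  # ~11e6 bits, i.e. "only 10 Mbits"
saving        = 1 - 1 / sqrt(10_000)              # 99% buffer reduction at 10,000 flows
```

The 99% figure follows directly from the √n scaling: dividing by √10,000 = 100 leaves 1% of the rule-of-thumb buffer.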
A measurement-based admission control algorithm for integrated services packet networks
IEEE/ACM Transactions on Networking
, 1997
"... Many designs for integrated service networks offer a bounded delay packet delivery service to support realtime applications. To provide bounded delay service, networks must use admission control to regulate their load. Previous work on admission control mainly focused on algorithms that compute the ..."
Abstract

Cited by 339 (10 self)
 Add to MetaCart
Many designs for integrated service networks offer a bounded delay packet delivery service to support real-time applications. To provide bounded delay service, networks must use admission control to regulate their load. Previous work on admission control mainly focused on algorithms that compute the worst-case theoretical queueing delay to guarantee an absolute delay bound for all packets. In this paper we describe a measurement-based admission control algorithm for predictive service, which allows occasional delay violations. We have tested our algorithm through simulations on a wide variety of network topologies and driven with various source models, including some that exhibit long-range dependence, both in themselves and in their aggregation. Our simulation results suggest that, at least for the scenarios studied here, the measurement-based approach combined with the relaxed service commitment of predictive service enables us to achieve a high ...
Admission Control for Statistical QoS: Theory and Practice
, 1999
"... In networks that support Quality of Service (QoS), an admission control algorithm determines whether or not a new traffic flow can be admitted to the network such that all users will receive their required performance. Such an algorithm is a key component of future multiservice networks as it deter ..."
Abstract

Cited by 130 (13 self)
 Add to MetaCart
In networks that support Quality of Service (QoS), an admission control algorithm determines whether or not a new traffic flow can be admitted to the network such that all users will receive their required performance. Such an algorithm is a key component of future multi-service networks as it determines the extent to which network resources are utilized and whether the promised QoS parameters are actually delivered. Our goals in this paper are threefold. First, we describe and classify a broad set of proposed admission control algorithms. Second, we evaluate the accuracy of these algorithms via experiments using both on-off sources and long traces of compressed video; we compare the admissible regions and QoS parameters predicted by our implementations of the algorithms with those obtained from trace-driven simulations. Finally, we identify the key aspects of an admission control algorithm necessary for achieving a high degree of accuracy and hence a high statistical multiplexing gain ...
Measurement-Based Connection Admission Control
, 1997
"... ... In this paper we continue the development of a modelling approach which attempts to integrate these several timescales, and illustrate its application to the analysis of a family of simple and robust measurementbased admission controls. A subsidiary aim of the paper is to shed light on the rel ..."
Abstract

Cited by 90 (2 self)
 Add to MetaCart
... In this paper we continue the development of a modelling approach which attempts to integrate these several timescales, and illustrate its application to the analysis of a family of simple and robust measurement-based admission controls. A subsidiary aim of the paper is to shed light on the relationship between the admission control proposed for ATM networks by Gibbens et al [9] and that proposed for controlled-load Internet services by Floyd [7]. We shall see that their common origin in Chernoff bounds allows the definition of a simple and general family of admission controls, capable of being tailored to several implementation scenarios.
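Since the family described here originates in Chernoff bounds, a minimal sketch of one such test may help. Everything below (the function name, the grid of tilt parameters s, the target overflow probability) is an illustrative assumption, not the exact rule from either cited paper:

```python
import math

def admit(samples, new_peak, capacity, target=1e-6,
          s_grid=(0.01, 0.1, 0.5, 1.0, 2.0, 5.0)):
    """Chernoff-bound admission test on measured aggregate rates (illustrative).
    Bound the post-admission overflow probability by
        P(X + new_peak > C) <= min_s exp(Lambda(s) - s * (C - new_peak)),
    where Lambda(s) = log((1/n) * sum_i exp(s * x_i)) is the empirical
    log-moment-generating function of the rate measurements x_i.
    Admit the new flow only if the bound meets the target."""
    n = len(samples)
    bound = min(math.exp(math.log(sum(math.exp(s * x) for x in samples) / n)
                         - s * (capacity - new_peak))
                for s in s_grid)
    return bound <= target
```

For example, with a steadily measured load of 10 Mb/s, a new flow with 5 Mb/s peak is admitted on a 100 Mb/s link but rejected on a 12 Mb/s link, since in the latter case no tilt s makes the bound fall below the target.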
A Network Calculus with Effective Bandwidth
, 2003
"... We present a statistical network calculus in a setting where both arrivals and service are specified interms of probabilistic bounds. We provide explicit bounds on delay, backlog, and output burstiness in a network. By formulating wellknown effective bandwidth expressions in terms of envelope func ..."
Abstract

Cited by 67 (13 self)
 Add to MetaCart
(Show Context)
We present a statistical network calculus in a setting where both arrivals and service are specified in terms of probabilistic bounds. We provide explicit bounds on delay, backlog, and output burstiness in a network. By formulating well-known effective bandwidth expressions in terms of envelope functions, we are able to apply our calculus to a wide range of traffic source models, including Fractional Brownian Motion. We present probabilistic lower bounds on the service for three scheduling algorithms: Static Priority (SP), Earliest Deadline First (EDF), and Generalized Processor Sharing (GPS).
Allocating Bandwidth for Bursty Connections
 SIAM J. Comput
, 1997
"... Abstract. In this paper, we undertake the first study of statistical multiplexing from the perspective of approximation algorithms. The basic issue underlying statistical multiplexing is the following: in highspeed networks, individual connections (i.e., communication sessions) are very bursty, wit ..."
Abstract

Cited by 66 (0 self)
 Add to MetaCart
Abstract. In this paper, we undertake the first study of statistical multiplexing from the perspective of approximation algorithms. The basic issue underlying statistical multiplexing is the following: in high-speed networks, individual connections (i.e., communication sessions) are very bursty, with transmission rates that vary greatly over time. As such, the problem of packing multiple connections together on a link becomes more subtle than in the case when each connection is assumed to have a fixed demand. We consider one of the most commonly studied models in this domain: that of two communicating nodes connected by a set of parallel edges, where the rate of each connection between them is a random variable. We consider three related problems: (1) stochastic load balancing, (2) stochastic bin-packing, and (3) stochastic knapsack. In the first problem the number of links is given and we want to minimize the expected value of the maximum load. In the other two problems the link capacity and an allowed overflow probability p are given, and the objective is to assign connections to links, so that the probability that the load of a link exceeds the link capacity is at most p. In bin-packing we need to assign each connection to a link using as few links as possible. In the knapsack problem each connection has a value, and we have only one link. The problem is to accept as many ...
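For the stochastic bin-packing variant, a common heuristic (not the paper's approximation algorithm) treats each link's load as approximately Gaussian and packs first-fit against an "effective size" of mean plus a quantile of the standard deviation. The sketch below assumes connections are given as (mean, variance) pairs; all names are illustrative:

```python
import math

def z_quantile(p):
    """z with P(N(0,1) > z) = p, via bisection on the normal CDF (stdlib only)."""
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < 1 - p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pack_first_fit(conns, capacity, p_overflow):
    """First-fit packing of (mean, variance) connections onto identical links.
    Gaussian approximation: a link accepts a connection while
    sum(mu) + z * sqrt(sum(var)) <= capacity, z the (1 - p) normal quantile."""
    z = z_quantile(p_overflow)
    links = []        # per link: [sum of means, sum of variances]
    assignment = []
    for mu, var in conns:
        for i, (m, v) in enumerate(links):
            if (m + mu) + z * math.sqrt(v + var) <= capacity:
                links[i] = [m + mu, v + var]
                assignment.append(i)
                break
        else:
            links.append([mu, var])
            assignment.append(len(links) - 1)
    return assignment, len(links)
```

For example, five identical connections with mean 10 and variance 4 need two links of capacity 50 at p = 0.01: four fit on the first link, but the fifth would push the estimated 99th-percentile load past capacity.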
Charging and Accounting for Bursty Connections
 Internet Economics
, 1996
"... Statistical sharing over several timescales is a key feature of the Internet, and is likely to be an essential aspect of future ATM networks. ..."
Abstract

Cited by 60 (5 self)
 Add to MetaCart
Statistical sharing over several timescales is a key feature of the Internet, and is likely to be an essential aspect of future ATM networks.
Pricing Network Resources for Adaptive Applications in a Differentiated Services Network
, 2001
"... The Differentiated Services framework (DiffServ) has been proposed to provide multiple Quality of Service (QoS) classes over IP networks. A network supporting multiple classes of service also requires a differentiated pricing structure. We propose a pricing scheme in a DiffServ environment based on ..."
Abstract

Cited by 59 (2 self)
 Add to MetaCart
The Differentiated Services framework (DiffServ) has been proposed to provide multiple Quality of Service (QoS) classes over IP networks. A network supporting multiple classes of service also requires a differentiated pricing structure. We propose a pricing scheme in a DiffServ environment based on the cost of providing different levels of quality of service to different classes, and on long-term demand. Pricing of network services dynamically based on the level of service, usage, and congestion allows a more competitive price to be offered, allows the network to be used more efficiently, and provides a natural and equitable incentive for applications to adapt their service contract according to network conditions. We develop a DiffServ simulation framework to compare the performance of a network supporting congestion-sensitive pricing and adaptive service negotiation to that of a network with a static pricing policy. Adaptive users adapt to price changes by adjusting their sending rate or selecting a different service class. We also develop the demand behavior of adaptive users based on a perceptually reasonable user utility function. Simulation results show that a congestion-sensitive pricing policy coupled with user rate adaptation is able to control congestion and allow a service class to meet its performance assurances under large or bursty offered loads, even without explicit admission control. Users are able to maintain a stable expenditure. Allowing users to migrate between service classes in response to price increases further stabilizes the individual service prices. When admission control is enforced, congestion-sensitive pricing still provides an advantage in terms of a much lower connection blocking rate at high loads.
Effective bandwidths with priorities
 IEEE/ACM Transactions on Networking
, 1998
"... Abstract — The notion of effective bandwidths has provided a useful practical framework for connection admission control and capacity planning in highspeed communication networks. The associated admissible set with a single linear boundary makes it possible to apply stochasticlossnetwork (general ..."
Abstract

Cited by 51 (1 self)
 Add to MetaCart
(Show Context)
Abstract — The notion of effective bandwidths has provided a useful practical framework for connection admission control and capacity planning in high-speed communication networks. The associated admissible set with a single linear boundary makes it possible to apply stochastic-loss-network (generalized-Erlang) models for capacity planning. In this paper we consider the case of network nodes that use a priority-service discipline to support multiple classes of service, and we wish to determine an appropriate notion of effective bandwidths. Just as was done previously for the first-in first-out discipline, we use large-buffer asymptotics (large deviations principles) for workload tail probabilities as a theoretical basis. We let each priority class have its own buffer and its own constraint on the probability of buffer overflow. Unfortunately, however, this leads to a constraint for each priority class. Moreover, the large-buffer asymptotic theory with priority classes does not produce an admissible set with linear boundaries, but we show that it nearly does and that a natural bound on the admissible set does have this property. We propose it as an approximation for priority classes. Then there is one linear constraint for each priority class. This linear-admissible-set structure implies a new notion of effective bandwidths, where a given connection is associated with multiple effective bandwidths: one for the priority level of the given connection and one for each lower priority level. This structure can be used regardless of whether the individual effective bandwidths are determined by large-buffer asymptotics or by some other method.
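As a concrete (illustrative) instance of the one-linear-constraint-per-priority-class idea: for an i.i.d. on-off source, a standard effective-bandwidth formula is alpha(s) = (1/s) log((1 - p) + p e^{s·peak}), which lies between the source's mean and peak rates, and each priority level can be checked against the traffic at that level and above. The encoding below (class tuples, a single space parameter s per class) is an assumption for illustration, not the paper's construction:

```python
import math

def eff_bw_onoff(peak, p_on, s):
    """Effective bandwidth of an i.i.d. on-off source:
    alpha(s) = (1/s) * log((1 - p_on) + p_on * exp(s * peak)).
    Lies between the mean rate (s -> 0) and the peak rate (s -> inf)."""
    return math.log((1 - p_on) + p_on * math.exp(s * peak)) / s

def admissible(classes, capacity):
    """One linear constraint per priority level: traffic at level j shares the
    link with all higher-priority traffic (priority 0 = highest).
    classes: list of (priority, peak, p_on, s, count) tuples."""
    for j in {c[0] for c in classes}:
        load = sum(n * eff_bw_onoff(peak, p_on, s)
                   for (pr, peak, p_on, s, n) in classes if pr <= j)
        if load > capacity:
            return False
    return True
```

For peak 1, p_on 0.5, and s = 1 the effective bandwidth is about 0.62, between the mean 0.5 and the peak 1, so ten such sources are admissible on a link of capacity 10 but not on one of capacity 5.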
Measurement-Based Usage Charges in Communications Networks
 Operations Research
, 1997
"... This paper describes methods of computing usage charges from simple measurements and relating these to bounds on the effective bandwidth. Thus we show that charging for usage on the basis of effective bandwidths can be wellapproximated by charges based on simple measurements. Charging and pricing a ..."
Abstract

Cited by 46 (8 self)
 Add to MetaCart
This paper describes methods of computing usage charges from simple measurements and relating these to bounds on the effective bandwidth. Thus we show that charging for usage on the basis of effective bandwidths can be well-approximated by charges based on simple measurements. Charging and pricing are essential requirements in the operation of a communication network. They are needed not only to recover costs and make a profit. Even if a generous operator is willing to offer a network for free, there are still compelling reasons to charge for services in order to exercise control. The congestion that has plagued the Internet because it lacks any mechanism for charging and pricing highlights the fact that without charges it is difficult to control congestion or divide network resources amongst users in a workable and stable way. Subject classifications: Communications: measurement-based charging. Of course there are many considerations that influence the prices at which an operator will choose to sell network services. Marketing and regulation are certainly important, but these considerations are not unique to the operation of a communications network. Special considerations do, however, arise from the fact that a broadband communications network is intended simultaneously to carry a wide variety of traffic types. Our conception of a broadband network is that of a collection of resources (links, buffers, switches, etc.) which can be used to provide a wide variety of communications services. These services are distinguished by traffic contracts, which specify parameters to which the traffic must adhere (a maximum peak rate, for example), and the quality of service which the network undertakes to guarantee (typically, cell loss or delay). These concepts are accepted as ...