Results 1-10 of 222
Bandwidth Sharing: Objectives and Algorithms
 IEEE/ACM Transactions on Networking
, 1999
"... This paper concerns the design of distributed algorithms for sharing network bandwidth resources among contending flows. The classical fairness notion is the socalled maxmin fairness; F. Kelly [8] has recently introduced the alternative proportional fairness criterion; we introduce a third crit ..."
Abstract

Cited by 334 (11 self)
This paper concerns the design of distributed algorithms for sharing network bandwidth resources among contending flows. The classical fairness notion is the so-called max-min fairness; F. Kelly [8] has recently introduced the alternative proportional fairness criterion; we introduce a third criterion, which is naturally interpreted in terms of the delays experienced by ongoing transfers. We prove that fixed size window control can achieve fair bandwidth sharing according to any of these criteria, provided scheduling at each link is performed in an appropriate manner. We next consider a distributed random scheme where each traffic source varies its sending rate randomly, based on binary feedback information from the network. We show how to select the source behaviour so as to achieve an equilibrium distribution concentrated around the considered fair rate allocations. This stochastic analysis is then used to assess the asymptotic behaviour of deterministic rate adaption proc...
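The max-min criterion this abstract compares against can be computed, for a fixed set of flows, by progressive filling: grow all rates together and freeze every flow crossing the first link to saturate. The sketch below is ours, not the paper's; the `routes`/`capacity` encoding and the function name are illustrative assumptions.

```python
def max_min_share(routes, capacity):
    """Progressive filling for max-min fairness: grow all rates together;
    when a link saturates, freeze every flow crossing it at its current share.

    routes: one set of link ids per flow; capacity: link id -> capacity.
    """
    rate = [None] * len(routes)          # None marks a still-unfrozen flow
    cap = dict(capacity)                 # residual capacity per link
    while any(r is None for r in rate):
        # fair share each remaining link could offer its unfrozen flows
        share = {}
        for l, c in cap.items():
            n = sum(1 for r, links in enumerate(routes)
                    if rate[r] is None and l in links)
            if n:
                share[l] = c / n
        l_star = min(share, key=share.get)          # first link to saturate
        s = share[l_star]
        for r, links in enumerate(routes):
            if rate[r] is None and l_star in links:
                rate[r] = s
                for l in links:                     # charge s along the route
                    if l in cap:
                        cap[l] -= s
        del cap[l_star]
    return rate
```

On the classic linear network (one flow across links A and B, one flow on each single link, unit capacities) progressive filling gives every flow rate 1/2, since both links saturate at that common rate.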
Impact of Fairness on Internet Performance
 IN PROCEEDINGS OF ACM SIGMETRICS
, 2000
"... We discuss the relevance of fairness as a design objective for congestion control mechanisms in the Internet. Specifically, we consider a backbone network shared by a dynamic number of shortlived flows, and study the impact of bandwidth sharing on network performance. In particular, we prove that f ..."
Abstract

Cited by 219 (14 self)
We discuss the relevance of fairness as a design objective for congestion control mechanisms in the Internet. Specifically, we consider a backbone network shared by a dynamic number of short-lived flows, and study the impact of bandwidth sharing on network performance. In particular, we prove that for a broad class of fair bandwidth allocations, the total number of flows in progress remains finite if the load of every link is less than one. We also show that provided the bandwidth allocation is "sufficiently" fair, performance is optimal in the sense that the throughput of the flows is mainly determined by their access rate. Neither property is guaranteed with unfair bandwidth allocations, when priority is given to one class of flow with respect to another. This suggests current proposals for a differentiated services Internet may lead to suboptimal utilization of network resources.
Statistical bandwidth sharing: a study of congestion at flow level
, 2001
"... In this paper we study the statistics of the realized throughput of elastic document transfers, accounting for the way network bandwidth is shared dynamically between the randomly varying number of concurrent flows. We first discuss the way TCP realizes statistical bandwidth sharing, illustrating es ..."
Abstract

Cited by 218 (23 self)
In this paper we study the statistics of the realized throughput of elastic document transfers, accounting for the way network bandwidth is shared dynamically between the randomly varying number of concurrent flows. We first discuss the way TCP realizes statistical bandwidth sharing, illustrating essential properties by means of packet level simulations. Mathematical flow level models based on the theory of stochastic networks are then proposed to explain the observed behavior. A notable result is that first order performance (e.g., mean throughput) is insensitive with respect both to the flow size distribution and the flow arrival process, as long as “sessions” arrive according to a Poisson process. Perceived performance is shown to depend most significantly on whether demand at flow level is less than or greater than available capacity. The models provide a key to understanding the effectiveness of techniques for congestion management and service differentiation.
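The insensitivity result can be illustrated with a toy flow-level simulation: a single bottleneck shared by Processor Sharing, fed by Poisson flow arrivals, shows roughly the same time-average number of flows in progress whether flow sizes are exponential or deterministic with the same mean. This is our own sketch under those assumptions, not the paper's model, and the helper name is hypothetical.

```python
import random

def ps_sim(size_sampler, lam=0.8, horizon=50000.0, seed=1):
    """One link of capacity 1 shared by Processor Sharing: each of the n
    flows in progress is served at rate 1/n. Poisson(lam) flow arrivals,
    i.i.d. sizes drawn by size_sampler(rng). Returns the time-average
    number of flows in progress."""
    rng = random.Random(seed)
    t, next_arr = 0.0, rng.expovariate(lam)
    jobs = []                # remaining sizes of flows in progress
    area = 0.0               # integral of the flow count over time
    while t < horizon:
        # next departure: the smallest remaining size, served at rate 1/n
        t_dep = t + min(jobs) * len(jobs) if jobs else float("inf")
        t_next = min(next_arr, t_dep, horizon)
        dt = t_next - t
        area += len(jobs) * dt
        if jobs:
            jobs = [s - dt / len(jobs) for s in jobs]
        t = t_next
        if t >= horizon:
            break
        if t == next_arr:                      # a new flow arrives
            jobs.append(size_sampler(rng))
            next_arr = t + rng.expovariate(lam)
        else:                                  # a flow completes
            jobs = [s for s in jobs if s > 1e-12]
    return area / horizon
```

At load 0.8 both size distributions give a time-average flow count near 0.8/(1-0.8) = 4, consistent with the insensitivity the abstract describes.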
Analysis of SRPT scheduling: Investigating unfairness
 In Proceedings of ACM SIGMETRICS
, 2001
"... The ShortestRemainingProcessingTime (SRPT) scheduling policy has long been known to be optimal for minimizing mean response time (sojourn time). Despite this fact, SRPT scheduling is rarely used in practice. It is believed that the performance improvements of SRPT over other scheduling policies s ..."
Abstract

Cited by 174 (17 self)
The Shortest-Remaining-Processing-Time (SRPT) scheduling policy has long been known to be optimal for minimizing mean response time (sojourn time). Despite this fact, SRPT scheduling is rarely used in practice. It is believed that the performance improvements of SRPT over other scheduling policies stem from the fact that SRPT unfairly penalizes the large jobs in order to help the small jobs. This belief has led people to instead adopt “fair” scheduling policies such as Processor-Sharing (PS), which produces the same expected slowdown for jobs of all sizes. This paper investigates formally the problem of unfairness in SRPT scheduling as compared with PS scheduling. The analysis assumes an M/G/1 model, and emphasizes job size distributions with a heavy-tailed property, as are characteristic of empirical workloads. The analysis shows that the degree of unfairness under SRPT is surprisingly small. The M/G/1/SRPT and M/G/1/PS queues are also analyzed under overload and closed-form expressions for mean response time as a function of job size are proved in this setting.
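A small deterministic single-server simulator makes the SRPT policy itself concrete; this is an illustrative sketch of preemptive SRPT, not the paper's M/G/1 analysis, and `srpt_schedule` is a name of our own.

```python
def srpt_schedule(jobs):
    """Preemptive Shortest-Remaining-Processing-Time on one server.

    jobs: list of (arrival_time, size). Returns {job index: completion time}.
    """
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    remaining, done = {}, {}
    t, nxt = 0.0, 0
    while nxt < len(order) or remaining:
        if not remaining:                 # server idle: jump to next arrival
            i = order[nxt]
            t = max(t, jobs[i][0])
            remaining[i] = jobs[i][1]
            nxt += 1
            continue
        i = min(remaining, key=remaining.get)   # smallest remaining work wins
        # run it until it finishes or the next arrival can preempt
        t_arr = jobs[order[nxt]][0] if nxt < len(order) else float("inf")
        run = min(remaining[i], t_arr - t)
        t += run
        remaining[i] -= run
        if remaining[i] <= 1e-12:
            done[i] = t
            del remaining[i]
        if nxt < len(order) and t >= t_arr - 1e-12:
            j = order[nxt]
            remaining[j] = jobs[j][1]
            nxt += 1
    return done
```

On jobs [(0, 10), (1, 1)] the short job preempts at time 1 and completes at time 2, while the long job completes at 11; non-preemptive FCFS would instead complete them at 10 and 11, roughly doubling the mean response time.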
Sizebased Scheduling to Improve Web Performance
"... Is it possible to reduce the expected response time ofevery request at a web server, simply by changing the order in which we schedule the requests? That is the question we ask in this paper. This paper proposes a method for improving the performance of web servers servicing static HTTP requests. Th ..."
Abstract

Cited by 140 (14 self)
Is it possible to reduce the expected response time of every request at a web server, simply by changing the order in which we schedule the requests? That is the question we ask in this paper. This paper proposes a method for improving the performance of web servers servicing static HTTP requests. The idea is to give preference to those requests which are short, or have small remaining processing requirements, in accordance with the SRPT (Shortest Remaining Processing Time) scheduling policy. The implementation is at the kernel level and involves controlling the order in which socket buffers are drained into the network. Experiments are executed both in a LAN and a WAN environment. We use the Linux operating system and the Apache and Flash web servers. Results indicate that SRPT-based scheduling of connections yields significant reductions in delay at the web server. These result in a substantial reduction in mean response time, mean slowdown, and variance in response time for both the LAN and WAN environments. Significantly, and counter to intuition, the large requests are only negligibly penalized or not at all penalized as a result of SRPT-based scheduling.
Stability and Performance Analysis of Networks Supporting Elastic Services
 IEEE/ACM Transactions on Networking
, 2001
"... AbstractWe consider the stability and performance of a model for networks supporting services that adapt their transmission to the available bandwidth. Not unlike real networks, in our model, connection arrivals are stochastic, each has a random amount of data to send, and the number of ongoing co ..."
Abstract

Cited by 120 (6 self)
We consider the stability and performance of a model for networks supporting services that adapt their transmission to the available bandwidth. Not unlike real networks, in our model, connection arrivals are stochastic, each has a random amount of data to send, and the number of ongoing connections in the system changes over time. Consequently, the bandwidth allocated to, or throughput achieved by, a given connection may change during its lifetime as feedback control mechanisms react to network loads. Ideally, if there were a fixed number of ongoing connections, such feedback mechanisms would reach an equilibrium bandwidth allocation typically characterized in terms of its "fairness" to users, e.g., max-min or proportionally fair. In this paper we prove the stability of such networks when the offered load on each link does not exceed its capacity. We use simulation to investigate performance, in terms of average connection delays, for various fairness criteria. Finally, we pose an architectural problem in TCP/IP's decoupling of the transport and network layer from the point of view of guaranteeing connection-level stability, which we claim may explain congestion phenomena on the Internet. Index Terms: ABR service, bandwidth allocation, Lyapunov functions, performance analysis, proportional fairness, rate control, stability, TCP/IP, weighted max-min fairness.
Classifying scheduling policies with respect to unfairness in an M/GI/1
 Proc. of SIGMETRICS’03
, 2003
"... It is common to classify scheduling policies based on their mean response times. Another important, but sometimes opposing, performance metric is a scheduling policy’s fairness. For example, a policy that biases towards short jobs so as to minimize mean response time, may end up being unfair to long ..."
Abstract

Cited by 97 (18 self)
It is common to classify scheduling policies based on their mean response times. Another important, but sometimes opposing, performance metric is a scheduling policy’s fairness. For example, a policy that biases towards short jobs so as to minimize mean response time, may end up being unfair to long jobs. In this paper we define three types of unfairness and demonstrate large classes of scheduling policies that fall into each type. We end with a discussion on which jobs are the ones being treated unfairly.
Fluid Model for a Network Operating under a Fair Bandwidth-Sharing Policy
 Annals of Applied Probability
, 2004
"... We consider a model of Internet congestion control, that represents the randomly varying number of ows present in a network where bandwidth is shared fairly between document transfers. We study critical uid models, obtained as formal limits under law of large numbers scalings when the average lo ..."
Abstract

Cited by 75 (8 self)
We consider a model of Internet congestion control that represents the randomly varying number of flows present in a network where bandwidth is shared fairly between document transfers. We study critical fluid models, obtained as formal limits under law of large numbers scalings when the average load on at least one resource is equal to its capacity. We establish convergence to equilibria for fluid models, and identify the invariant manifold. The form of the invariant manifold gives insight into the phenomenon of entrainment, whereby congestion at some resources may prevent other resources from working at their full capacity.
Insensitive bandwidth sharing in data networks
 QUEUEING SYSTEMS
, 2003
"... We represent a data network as a set of links shared by a dynamic number of competing flows. These flows are generated within sessions and correspond to the transfer of a random volume of data on a predefined network route. The evolution of the stochastic process describing the number of flows on a ..."
Abstract

Cited by 72 (8 self)
We represent a data network as a set of links shared by a dynamic number of competing flows. These flows are generated within sessions and correspond to the transfer of a random volume of data on a predefined network route. The evolution of the stochastic process describing the number of flows on all routes, which determines the performance of the data transfers, depends on how link capacity is allocated between competing flows. We use some key properties of Whittle queueing networks to characterize the class of allocations which are insensitive in the sense that the stationary distribution of this stochastic process does not depend on any traffic characteristics (session structure, data volume distribution) except the traffic intensity on each route. We show in particular that this insensitivity property does not hold in general for well-known allocations such as max-min fairness or proportional fairness. These results are illustrated by several examples on a number of network topologies.
Network optimization and control
 Foundations and Trends in Networking
"... We study how protocol design for various functionalities within a communication network architecture can be viewed as a distributed resource allocation problem. This involves understanding what resources are, how to allocate them fairly, and perhaps most importantly, how to achieve this goal in a di ..."
Abstract

Cited by 66 (4 self)
We study how protocol design for various functionalities within a communication network architecture can be viewed as a distributed resource allocation problem. This involves understanding what resources are, how to allocate them fairly, and perhaps most importantly, how to achieve this goal in a distributed and stable fashion. We start with ideas of a centralized optimization framework and show how congestion control, routing and scheduling in wired and wireless networks can be thought of as fair resource allocation. We then move to the study of controllers that allow a decentralized solution of this problem. These controllers are the analytical equivalent of protocols in use on the Internet today, and we describe existing protocols as realizations of such controllers. The Internet is a dynamic system with feedback delays and flows that arrive and depart, which means that stability of the system cannot be taken for granted. We show how to incorporate
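The decentralized controllers this survey describes follow the utility-maximization view of congestion control. A minimal dual-decomposition sketch for weighted proportional fairness is given below: links adjust a congestion price by their excess load, and each source sets its rate to its weight divided by the total price on its route. The encoding and function name are our own illustration, not a protocol from the text.

```python
def pf_rates(routes, capacity, w, steps=50000, gamma=0.005):
    """Dual-decomposition sketch of weighted proportional fairness.

    Each link l keeps a price p[l], raised when its load exceeds capacity
    and lowered (but kept nonnegative) otherwise; each flow r sets its
    rate to w[r] / (total price along its route). routes is one set of
    link ids per flow.
    """
    p = {l: 1.0 for l in capacity}
    x = [0.0] * len(routes)
    for _ in range(steps):
        for r, links in enumerate(routes):
            q = sum(p[l] for l in links)        # path price seen by flow r
            x[r] = w[r] / max(q, 1e-9)
        for l in capacity:
            load = sum(x[r] for r, links in enumerate(routes) if l in links)
            p[l] = max(p[l] + gamma * (load - capacity[l]), 0.0)
    return x
```

For two unit-capacity links with one flow crossing both and one flow on each single link (all weights 1), the rates converge near (1/3, 2/3, 2/3), the proportionally fair allocation; the long flow gets less because it consumes capacity on two links.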