Results 11–20 of 259
Revenue-Maximizing Pricing and Capacity Expansion in a Many-Users Regime
, 2002
Cited by 94 (7 self)

Abstract
In this paper, we consider a network where each user is charged a fixed price per unit of bandwidth used, but where there is no congestion-dependent pricing. However, the transmission rate of each user is assumed to be a function of network congestion (like TCP) and the price per unit bandwidth. We are interested in answering the following question: how should the network choose the price to maximize its overall revenue? To obtain a tractable solution, we consider a single link accessed by many users where the capacity is increased in proportion to the number of users. We show the following result: as the number of users increases, the optimal price per unit bandwidth charged by the service provider may increase or decrease depending upon the bandwidth of the link. However, for all values of the link capacity, the service provider's revenue per unit bandwidth increases and the overall performance of each user (measured in terms of a function of its throughput, the network congestion and the cost incurred by the user for bandwidth usage) improves. Since the revenue per unit bandwidth increases, it provides an incentive for the service provider to increase the available bandwidth in proportion to the number of users.
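As a rough illustration of the question this abstract poses, the sketch below lets users back off as price and congestion rise and grid-searches for the revenue-maximizing price. The demand function, congestion model, and all parameters are illustrative assumptions, not the paper's actual formulation.

```python
def toy_rate(price, congestion):
    # Illustrative demand: falls linearly in price, throttled by congestion.
    return max(0.0, 1.0 - 0.2 * price) / (1.0 + congestion)

def revenue(price, n_users, capacity):
    # Damped fixed-point iteration: congestion reflects aggregate overload.
    congestion = 0.0
    for _ in range(200):
        load = n_users * toy_rate(price, congestion)
        target = max(0.0, load / capacity - 1.0)
        congestion = 0.5 * congestion + 0.5 * target
    return price * n_users * toy_rate(price, congestion)

def best_price(n_users, capacity, grid=None):
    # Grid search for the revenue-maximizing per-unit-bandwidth price.
    grid = grid or [i / 100 for i in range(1, 500)]
    return max(grid, key=lambda p: revenue(p, n_users, capacity))
```

With these toy choices, the optimum sits where aggregate demand just fills the link; varying `capacity` relative to `n_users` moves the optimal price, mirroring the abstract's observation that the answer depends on link bandwidth.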
Interference-Aware Fair Rate Control in Wireless Sensor Networks
 In Proceedings of the ACM SIGCOMM
, 2006
Cited by 92 (3 self)

Abstract
In a wireless sensor network of N nodes transmitting data to a single base station, possibly over multiple hops, what distributed mechanisms should be implemented in order to dynamically allocate fair and efficient transmission rates to each node? Our interference-aware fair rate control (IFRC) detects incipient congestion at a node by monitoring the average queue length, communicates congestion state to exactly the set of potential interferers using a novel low-overhead congestion sharing mechanism, and converges to a fair and efficient rate using an AIMD control law. We evaluate IFRC extensively on a 40-node wireless sensor network testbed. IFRC achieves a fair and efficient rate allocation that is within 20-40% of the optimal fair rate allocation on some network topologies. Its rate adaptation mechanism is highly effective: we did not observe a single instance of queue overflow in our many experiments. Finally, IFRC can be extended easily to support situations where only a subset of the nodes transmit, where the network has multiple base stations, or where nodes are assigned different transmission weights.
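The queue-driven AIMD control law the IFRC abstract describes can be sketched as follows. The threshold, gains, and toy queue dynamics are illustrative assumptions, not IFRC's actual parameters.

```python
def aimd_step(rate, avg_queue, threshold=10.0, add=0.1, mult=0.5):
    """One AIMD update driven by the monitored average queue length."""
    if avg_queue > threshold:   # incipient congestion detected
        return rate * mult      # multiplicative decrease
    return rate + add           # additive increase

def simulate(capacity=5.0, steps=200):
    # Toy single-node loop: the queue absorbs any rate above link capacity.
    rate, queue = 0.0, 0.0
    history = []
    for _ in range(steps):
        queue = max(0.0, queue + rate - capacity)
        rate = aimd_step(rate, queue)
        history.append(rate)
    return history
```

Running the loop shows the familiar AIMD sawtooth: the rate climbs past capacity, the queue builds, and multiplicative decrease pulls the rate back before the queue can grow without bound.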
Global Stability of Congestion Controllers for the Internet
 IEEE Transactions on Automatic Control
, 2002
Cited by 76 (6 self)

Abstract
We consider a single link accessed by a single source which responds to congestion signals from the network. The design of controllers for such sources in the presence of feedback delay has received much attention recently. Here we present conditions for the global, asymptotic stability and semi-global exponential stability of congestion controllers which are natural extensions of earlier linearized analysis of such systems. Our result on exponential stability provides the missing link in the proof of how one obtains a single deterministic congestion control equation from a system with many congestion-controlled sources and random disturbances. Using numerical examples, we compare the conditions on the congestion-control parameters obtained using local and global stability analysis.
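A single source reacting to delayed congestion feedback, of the general shape studied in this line of work, can be simulated numerically with an Euler scheme for x'(t) = k(w - x(t) p(x(t - tau))). The gain, price function, and parameters below are illustrative assumptions, chosen inside the stable region.

```python
def simulate(k=0.5, w=1.0, tau=1.0, dt=0.01, t_end=50.0):
    # Euler integration of a delay-differential congestion controller.
    n_delay = int(tau / dt)
    xs = [0.1] * (n_delay + 1)          # constant initial history
    price = lambda x: x ** 2            # toy link price (congestion) function
    for _ in range(int(t_end / dt)):
        x, x_delayed = xs[-1], xs[-1 - n_delay]
        xs.append(x + dt * k * (w - x * price(x_delayed)))
    return xs
```

For these parameters the equilibrium solves w = x p(x), i.e. x = 1; the trajectory overshoots because the price is sensed a delay tau late, then rings down to the equilibrium. Increasing tau far enough pushes the system past the stability boundary, which is exactly the regime the global-stability conditions address.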
A time-scale decomposition approach to adaptive ECN marking
 Presented at IEEE INFOCOM
, 2001
Cited by 76 (24 self)
A traffic characterization of popular online games
 IEEE/ACM Transactions on Networking
, 2005
Cited by 73 (1 self)
Abstract—This paper describes the results of the first comprehensive analysis of a range of popular online, multiplayer game servers. The results show that the traffic behavior of these servers is highly predictable and can be attributed to the fact that current game designs target the saturation of the narrowest, last-mile link. Specifically, in order to maximize the interactivity of the game itself and to provide relatively uniform experiences between players playing over different network speeds, online games typically fix their usage requirements in such a way as to saturate the network link of their lowest-speed players. While the traffic observed is highly predictable, the traces also indicate that these online games provide significant challenges to current network infrastructure. As a result of synchronous game logic requiring an extreme amount of interactivity, a close look at the trace reveals the presence of large, highly periodic bursts of small packets. With such stringent demands on interactivity, routers must be designed with enough capacity to quickly route such bursts without delay. Index Terms—Communication system traffic, games, measurement, network servers, networks.
Stochastic Hybrid Systems: Application to Communication Networks
 in Hybrid Systems: Computation and Control, ser. Lect. Notes in Comput. Science
, 2004
Cited by 68 (14 self)
Abstract. We propose a model for Stochastic Hybrid Systems (SHSs) where transitions between discrete modes are triggered by stochastic events, much like transitions between states of a continuous-time Markov chain. However, the rate at which transitions occur is allowed to depend both on the continuous and the discrete states of the SHS. Based on results available for Piecewise-Deterministic Markov Processes (PDPs), we provide a formula for the extended generator of the SHS, which can be used to compute expectations and the overall distribution of the state. As an application, we construct a stochastic model for on-off TCP flows that considers both the congestion-avoidance and slow-start modes and takes directly into account the distribution of the number of bytes transmitted. Using the tools derived for SHSs, we model the dynamics of the moments of the sending rate by an infinite system of ODEs, which can be truncated to obtain an approximate finite-dimensional model. This model shows that, for transfer-size distributions reported in the literature, the standard deviation of the sending rate is much larger than its average. Moreover, the latter seems to vary little with the probability of packet drop. This has significant implications for the design of congestion control mechanisms.
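A minimal stochastic hybrid system in the spirit of the on-off TCP model above can be simulated directly: two discrete modes ("off" and "on"), linear rate growth while on, and exponentially distributed events (transfer start, transfer end, packet drop) whose rates trigger mode transitions. All rates and parameters below are illustrative assumptions, not the paper's calibrated model.

```python
import random

def simulate_shs(t_end=500.0, lam_on=0.5, lam_off=0.1, lam_drop=0.2,
                 growth=1.0, seed=0):
    # Each event time is exponential with the total active rate; which event
    # fired is chosen in proportion to the individual rates.
    rng = random.Random(seed)
    mode, rate, t = "off", 0.0, 0.0
    samples = []
    while t < t_end:
        total = lam_on if mode == "off" else (lam_off + lam_drop)
        dwell = rng.expovariate(total)
        if mode == "on":
            rate += growth * dwell       # congestion-avoidance-like growth
        samples.append(rate)
        if mode == "off":
            mode = "on"                  # a new transfer starts
        elif rng.random() < lam_drop / total:
            rate *= 0.5                  # packet drop: multiplicative decrease
        else:
            mode, rate = "off", 0.0      # transfer ends, source goes idle
        t += dwell
    return samples
```

Averaging many such runs approximates the moments that the abstract's truncated ODE system computes analytically.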
One More Bit Is Enough
 in Proceedings of ACM SIGCOMM
, 2005
Cited by 67 (1 self)

Abstract
Achieving efficient and fair bandwidth allocation while minimizing packet loss and bottleneck queue in high bandwidth-delay product networks has long been a daunting challenge. Existing end-to-end congestion control (e.g., TCP) and traditional congestion notification schemes (e.g., TCP+AQM/ECN) have significant limitations in achieving this goal. While the XCP protocol addresses this challenge, it requires multiple bits to encode the congestion-related information exchanged between routers and end-hosts. Unfortunately, there is no space in the IP header for these bits, and solving this problem involves a non-trivial and time-consuming standardization process. In this paper, we design and implement a simple, low-complexity protocol, called Variable-structure congestion Control Protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves comparable performance to XCP, i.e., high utilization, negligible packet loss rate, low persistent queue length, and reasonable fairness. On the downside, VCP converges significantly more slowly to a fair allocation than XCP. We evaluate the performance of VCP using extensive ns-2 simulations over a wide range of network scenarios and find that it significantly outperforms many recently proposed TCP variants, such as HSTCP, FAST, and CUBIC. To gain insight into the behavior of VCP, we analyze a simplified fluid model and prove its global stability for the case of a single bottleneck shared by synchronous flows with identical round-trip times.
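The core VCP idea described above, routers classifying their load factor into three regions encoded in the two ECN bits, and hosts reacting with multiplicative increase (MI), additive increase (AI), or multiplicative decrease (MD), can be sketched as below. The thresholds and gains are illustrative placeholders, not necessarily the values the paper uses.

```python
LOW, HIGH, OVERLOAD = 0b01, 0b10, 0b11   # two-bit feedback codepoints

def router_feedback(load_factor):
    # Router side: quantize the measured load factor into a 2-bit code.
    if load_factor < 0.8:
        return LOW
    if load_factor < 1.0:
        return HIGH
    return OVERLOAD

def host_update(cwnd, feedback, xi=0.0625, alpha=1.0, beta=0.875):
    # Host side: switch control law based on the echoed feedback.
    if feedback == LOW:
        return cwnd * (1 + xi)      # MI: ramp up quickly when underloaded
    if feedback == HIGH:
        return cwnd + alpha         # AI: probe gently near full utilization
    return cwnd * beta              # MD: back off on overload
```

The "variable structure" is exactly this mode switch: the same two bits select among three control laws, which is how VCP approximates XCP's multi-bit feedback.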
Network optimization and control
 Foundations and Trends in Networking
Cited by 66 (4 self)

Abstract
We study how protocol design for various functionalities within a communication network architecture can be viewed as a distributed resource allocation problem. This involves understanding what resources are, how to allocate them fairly, and perhaps most importantly, how to achieve this goal in a distributed and stable fashion. We start with ideas of a centralized optimization framework and show how congestion control, routing and scheduling in wired and wireless networks can be thought of as fair resource allocation. We then move to the study of controllers that allow a decentralized solution of this problem. These controllers are the analytical equivalent of protocols in use on the Internet today, and we describe existing protocols as realizations of such controllers. The Internet is a dynamic system with feedback delays and flows that arrive and depart, which means that stability of the system cannot be taken for granted. We show how to incorporate …
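The centralized optimization framework this survey starts from is network utility maximization (NUM): maximize the sum of source utilities subject to link capacity, with congestion prices emerging as dual variables. A minimal dual (price-based) gradient sketch for weighted log utilities on one shared link, with illustrative parameters:

```python
def num_dual(weights, capacity, step=0.01, iters=5000):
    # Dual decomposition: each source picks its rate selfishly given the
    # link price; the link adjusts its price toward supply = demand.
    price = 1.0
    for _ in range(iters):
        # x_i = argmax_x ( w_i * log(x) - price * x )  =>  x_i = w_i / price
        rates = [w / price for w in weights]
        price = max(1e-6, price + step * (sum(rates) - capacity))
    return rates, price
```

At convergence the rates are weighted proportionally fair (x_i proportional to w_i) and exactly fill the link, which is the sense in which decentralized price updates recover the centralized fair allocation.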
Theories and Models for Internet Quality of Service
, 2002
Cited by 64 (1 self)

Abstract
We survey recent advances in theories and models for Internet Quality of Service (QoS). We start with the theory of network calculus, which lays the foundation for support of deterministic performance guarantees in networks, and illustrate its applications to integrated services, differentiated services, and streaming media playback delays. We also present mechanisms and architecture for scalable support of guaranteed services in the Internet, based on the concept of a stateless core. Methods for scalable control operations are also briefly discussed. We then turn our attention to statistical performance guarantees, and describe several new probabilistic results that can be used for a statistical dimensioning of differentiated services. Lastly, we review recent proposals and results in supporting performance guarantees in a best-effort context. These include models for elastic throughput guarantees based on TCP performance modeling, techniques for some quality-of-service differentiation without access control, and methods that allow an application to control the performance it receives, in the absence of network support.
Global Stability of Internet Congestion Controllers with Heterogeneous Delays
, 2004
Cited by 57 (1 self)

Abstract
In this paper, we study the problem of designing globally stable, scalable congestion control algorithms for the Internet. Prior work has primarily used linear stability as the criterion for such a design. Global stability has been studied only for single-node, single-source problems. Here, we obtain conditions for a general topology network accessed by sources with heterogeneous delays. We obtain a sufficient condition for global stability in terms of the increase/decrease parameters of the congestion control algorithm and the price functions used at the links.