Results 1–10 of 131
Stability, queue length and delay of deterministic and stochastic queueing networks
 IEEE Transactions on Automatic Control
, 1994
Abstract

Cited by 171 (20 self)
Motivated by recent developments in high-speed networks, in this paper we study two types of stability problems: (i) conditions for queueing networks that render bounded queue lengths and bounded delay for customers, and (ii) conditions for queueing networks in which the queue length distribution of a queue has an exponential tail with rate θ. To answer these two types of stability problems, we introduce two new notions of traffic characterization: minimum envelope rate (MER) and minimum envelope rate with respect to θ. Based on these two new notions of traffic characterization, we develop a set of rules for network operations such as superposition, the input-output relation of a single queue, and routing. Specifically, we show that (i) the MER of a superposition process is less than or equal to the sum of the MERs of the individual processes, (ii) a queue is stable in the sense of bounded queue length if the MER of the input traffic is smaller than the capacity, (iii) the MER of a departure process from a stable queue is less than or equal to that of the input process, and (iv) the MER of a routed process from a departure process is less than or equal to the MER of the departure process multiplied by the MER of the routing process. Similar results hold for MER with respect to θ under a further assumption of independence. These rules provide a natural way to analyze feedforward networks with multiple classes of customers. For single-class networks with non-feedforward routing, we provide a new method to show that similar stability results hold for such networks under the FCFS policy. Moreover, when restricting to the family of two-state Markov modulated arrival processes, the notion of MER with respect to θ is shown to be ...
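The MER rules (i) and (ii) can be illustrated numerically on finite traces. This is a sketch under our own conventions, not the paper's notation: the asymptotic MER takes the window length to infinity, so a long-window worst-case rate is only a finite-trace stand-in.

```python
def empirical_mer(arrivals, window):
    """Worst-case average arrival rate over any `window` consecutive
    slots -- a finite-trace stand-in for the asymptotic MER, which
    takes the window length to infinity."""
    prefix = [0]
    for a in arrivals:
        prefix.append(prefix[-1] + a)
    n = len(arrivals)
    return max(prefix[i + window] - prefix[i]
               for i in range(n - window + 1)) / window

# Two periodic example sources and their superposition.
a = [1, 0] * 100
b = [0, 0, 1, 0] * 50
merged = [x + y for x, y in zip(a, b)]
w = 50

# Rule (i): MER of a superposition <= sum of the individual MERs.
assert empirical_mer(merged, w) <= empirical_mer(a, w) + empirical_mer(b, w)

# Rule (ii): bounded queue length if the input MER is below capacity.
capacity = 0.9
print(empirical_mer(merged, w) < capacity)  # True
```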
Bandwidth Sharing and Admission Control for Elastic Traffic
 Telecommunication Systems
, 1998
Abstract

Cited by 160 (15 self)
We consider the performance of a network like the Internet handling so-called elastic traffic, where the rate of flows adjusts to fill available bandwidth. Realized throughput depends both on the way bandwidth is shared and on the random nature of traffic. We assume traffic consists of point-to-point transfers of individual documents of finite size arriving according to a Poisson process. Notable results are that weighted sharing has limited impact on perceived quality of service and that discrimination in favour of short documents leads to considerably better performance than fair sharing. In a linear network, max-min fairness is preferable to proportional fairness under random traffic, while the converse is true under the assumption of a static configuration of persistent flows. Admission control is advocated as a necessary means to maintain goodput in case of traffic overload.
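Max-min fair rates on a network like the paper's linear topology can be computed by the standard progressive-filling (water-filling) algorithm. The encoding below (routes as link-sets, capacities as a dict) and the function names are our own illustration, not taken from the paper.

```python
def max_min_fair(routes, capacity):
    """Progressive filling: raise all flow rates at an equal pace and
    freeze the flows crossing each link as it saturates.
    routes: list of link-sets (one per flow); capacity: dict link -> cap."""
    rates = [0.0] * len(routes)
    frozen = set()
    cap = dict(capacity)
    while len(frozen) < len(routes):
        # flows still increasing on each link
        active = {l: [f for f in range(len(routes))
                      if f not in frozen and l in routes[f]]
                  for l in cap}
        # smallest equal increment that saturates some link
        inc = min(cap[l] / len(fl) for l, fl in active.items() if fl)
        for f in range(len(routes)):
            if f not in frozen:
                rates[f] += inc
        for l, fl in active.items():
            if fl:
                cap[l] -= inc * len(fl)
                if cap[l] < 1e-12:          # link saturated
                    frozen.update(fl)
    return rates

# Linear network, unit-capacity links: one long flow crossing both
# links plus one short flow per link -- max-min gives everyone 0.5.
print(max_min_fair([{0, 1}, {0}, {1}], {0: 1.0, 1: 1.0}))
```

Under proportional fairness the long flow would instead get less than the short flows, which is the trade-off the abstract compares.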
Load Balanced Birkhoff-von Neumann Switches, Part II: Multistage Buffering
, 2001
Abstract

Cited by 103 (13 self)
The main objective of this sequel is to solve the out-of-sequence problem that occurs in the load balanced Birkhoff-von Neumann switch with one-stage buffering. We do this by adding a load-balancing buffer in front of the first stage and a resequencing-and-output buffer after the second stage. Moreover, packets are distributed at the first stage according to their flows, instead of their arrival times as in Part I. In this paper, we consider multicasting flows with two types of scheduling policies: the First Come First Served (FCFS) policy and the Earliest Deadline First (EDF) policy. The FCFS policy requires a jitter control mechanism in front of the second stage to ensure proper ordering of the traffic entering the second stage. For the EDF scheme, there is no need for jitter control. It uses the departure times of the corresponding FCFS output-buffered switch as deadlines and schedules packets according to their deadlines. For both policies, we show that the end-to-end delay through our multistage switch is bounded above by the sum of the delay from the corresponding FCFS output-buffered switch and a constant that depends only on the size of the switch and the number of multicasting flows supported by the switch.
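The EDF order of service can be sketched with a priority queue. In the paper the deadlines come from the shadow FCFS output-buffered switch; the slotted model and packet tuples below are our simplification.

```python
import heapq

def edf_schedule(packets):
    """Earliest Deadline First: one service per time slot, always the
    queued packet with the smallest deadline.
    packets: list of (arrival_slot, deadline, packet_id) tuples."""
    packets = sorted(packets)          # by arrival slot
    heap, order, t, i = [], [], 0, 0
    while i < len(packets) or heap:
        # admit everything that has arrived by slot t
        while i < len(packets) and packets[i][0] <= t:
            arr, dl, pid = packets[i]
            heapq.heappush(heap, (dl, pid))
            i += 1
        if heap:
            order.append(heapq.heappop(heap)[1])
        elif i < len(packets):
            t = packets[i][0] - 1      # idle until the next arrival
        t += 1
    return order

# e.g. edf_schedule([(0, 5, 'a'), (0, 2, 'b'), (1, 1, 'c')]) -> ['b', 'c', 'a']
```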
Analytic Evaluation of RED Performance
, 2000
Abstract

Cited by 86 (1 self)
End-to-end congestion control mechanisms such as those in TCP are not enough to prevent congestion collapse in the Internet (for starters, not all applications might be willing to use them), and they must be supplemented by control mechanisms inside the network. The IRTF has singled out Random Early Detection (RED) as one queue management scheme recommended for rapid deployment throughout the Internet. However, RED is not a thoroughly understood scheme: witness, for example, how the recommended parameter settings, or even the various benefits RED is claimed to provide, have changed over the past few years. In this paper, we describe simple analytic models for RED, and use these models to quantify the benefits (or lack thereof) brought about by RED. In particular, we examine the impact of RED on the loss and delay suffered by bursty and less bursty traffic (such as TCP and UDP traffic, respectively). We find that (i) RED does eliminate the higher loss bias against bursty traffic obser...
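The mechanism being modeled is compact: RED tracks an EWMA of the queue length and drops or marks arriving packets with a probability that ramps up between two thresholds. The threshold and weight values below are arbitrary example settings, not recommended parameters.

```python
def red_drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """RED drop/marking probability as a function of the EWMA average
    queue size: 0 below min_th, a linear ramp up to max_p at max_th,
    and certain drop above max_th (basic, non-gentle variant)."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def update_avg(avg_q, q, weight=0.002):
    """EWMA of the instantaneous queue length q."""
    return (1 - weight) * avg_q + weight * q
```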
A minimum cost heterogeneous sensor network with a lifetime constraint
 IEEE Transactions on Mobile Computing
, 2005
Abstract

Cited by 68 (1 self)
We consider a heterogeneous sensor network in which nodes are to be deployed over a unit area for the purpose of surveillance. An aircraft visits the area periodically and gathers data about the activity in the area from the sensor nodes. There are two types of nodes that are distributed over the area using two-dimensional homogeneous Poisson point processes: type 0 nodes with intensity (average number per unit area) λ0 and battery energy E0, and type 1 nodes with intensity λ1 and battery energy E1. Type 0 nodes do the sensing, while type 1 nodes act as the cluster heads besides doing the sensing. Nodes use multihopping to communicate with their closest cluster heads. We determine the optimum node intensities (λ0, λ1) and node energies (E0, E1) that guarantee a lifetime of at least T units, while ensuring connectivity and coverage of the surveillance area with a high probability. We minimize the overall cost of the network under these constraints. Lifetime is defined as the number of successful data gathering trips (or cycles) that are possible until connectivity and/or coverage are lost. Conditions for a sharp cutoff are also taken into account, i.e., we ensure that almost all the nodes run out of energy at about the same time so that there is very little energy waste due to residual energy. We compare the results for random deployment with those of a grid deployment in which nodes are placed deterministically along grid points. We observe that in both cases λ1 scales approximately as √λ0. Our results can be directly extended to take into account unreliable nodes.
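Sampling the random deployment model is straightforward with the standard library: draw a Poisson number of nodes, then scatter them uniformly. Knuth's sampler is adequate for moderate intensities; the intensities below are illustrative values chosen to echo the reported λ1 ≈ √λ0 scaling, not the paper's optimal ones.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's algorithm for a Poisson(lam) variate (fine for
    moderate lam; it runs in O(lam) time)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def deploy(intensity, rng):
    """Homogeneous 2-D Poisson point process on the unit square:
    a Poisson(intensity) number of points, each uniform on [0,1]^2."""
    n = poisson_sample(intensity, rng)
    return [(rng.random(), rng.random()) for _ in range(n)]

rng = random.Random(1)
lam0 = 100.0                 # type-0 (sensing) node intensity
lam1 = math.sqrt(lam0)       # type-1 (cluster-head) intensity ~ sqrt(lam0)
sensors = deploy(lam0, rng)
heads = deploy(lam1, rng)
```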
Effective Bandwidth and Fast Simulation of ATM Intree Networks
, 1992
Abstract

Cited by 49 (13 self)
We consider the efficient estimation, via simulation, of very low buffer overflow probabilities in certain acyclic ATM queueing networks. We apply the theory of effective bandwidths and Markov additive processes to derive an asymptotically optimal simulation scheme for estimating such probabilities for a single queue with multiple independent sources, each of which may be either a Markov modulated process or an autoregressive process. This result extends earlier work on queues with either independent arrivals or with a single Markov modulated arrival source. The results are then extended to estimating loss probabilities for intree networks of such queues. Experimental results show that the method can provide many orders of magnitude of variance reduction in complex queueing systems that are not amenable to analysis.
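The flavor of effective-bandwidth-based fast simulation can be shown on the simplest possible case: estimating a rare tail probability of an i.i.d. Bernoulli sum by exponential tilting. The paper's scheme handles Markov modulated and autoregressive sources; this toy version is our own sketch of the underlying change of measure.

```python
import math
import random

def tilted_bernoulli_tail(p, n, a, theta, samples, seed=0):
    """Importance-sampling estimate of P(S_n >= a), where S_n is a sum
    of n i.i.d. Bernoulli(p) variables, by exponentially tilting the
    per-step law with parameter theta.  Likelihood ratio for a sample
    with sum s: exp(-theta*s) * M(theta)**n, M(theta) = 1 - p + p*e^theta."""
    rng = random.Random(seed)
    M = 1 - p + p * math.exp(theta)
    p_tilt = p * math.exp(theta) / M     # tilted success probability
    total = 0.0
    for _ in range(samples):
        s = sum(rng.random() < p_tilt for _ in range(n))
        if s >= a:
            total += math.exp(-theta * s) * M ** n
    return total / samples

# theta chosen so the tilted mean sits at the rare level a = 8 of 10:
# p_tilt = 0.8 when p = 0.3 and theta = log(28/3).
est = tilted_bernoulli_tail(0.3, 10, 8, math.log(28 / 3), 20000)
```

Under the tilted law the "rare" event is typical, so almost every sample contributes, which is where the variance reduction comes from.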
Queueing dynamics and maximal throughput scheduling in switched processing systems
 Queueing Systems
Abstract

Cited by 45 (13 self)
We study a processing system comprised of parallel queues, whose individual service rates are specified by a global service mode (configuration). The issue is how to switch the system between the various possible service modes, so as to maximize its throughput and maintain stability under the most workload-intensive input traffic traces (arrival processes). Stability preserves the job inflow-outflow balance at each queue on the traffic traces. Two key families of service policies are shown to maximize throughput, under the mild condition that traffic traces have long-term average workload rates. In the first family of cone policies, the service mode is chosen based on the cone to which the system backlog state belongs. Two distinct policy classes of that nature are investigated, MaxProduct and FastEmpty. In the second family of batch policies (BatchAdapt), jobs are collectively scheduled over adaptively chosen horizons, according to an asymptotically optimal, robust schedule. The issues of non-preemptive job processing and non-negligible switching times between service modes are addressed. The analysis is extended to cover feedforward networks of such processing systems/nodes. The approach taken unifies and generalizes prior studies by developing a general trace-based modeling framework (sample-path approach) for addressing the queueing stability problem. It treats the queueing structure as a deterministic dynamical system and directly analyzes its evolution trajectories. It does not require any probabilistic superstructure, which is ...
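In the simplest reading of the cone idea, the chosen mode is the rate vector maximizing the inner product with the current backlog (each mode "owns" the cone of backlogs on which it wins). The sketch below, including the serve-then-admit slot convention, is our own illustration.

```python
def max_product_mode(backlog, modes):
    """Pick the service mode whose rate vector maximizes the inner
    product with the current backlog vector; the backlog's cone
    determines the mode chosen."""
    return max(range(len(modes)),
               key=lambda m: sum(q * r for q, r in zip(backlog, modes[m])))

def step(backlog, arrivals, modes):
    """One slot of the queue dynamics: serve under the chosen mode,
    then admit the new arrivals."""
    m = max_product_mode(backlog, modes)
    return [max(q - s, 0.0) + a
            for q, s, a in zip(backlog, modes[m], arrivals)]
```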
Sample Path Large Deviations and Intree Networks
 Queueing Systems
, 1994
Abstract

Cited by 40 (8 self)
Using the contraction principle, in this paper we derive a set of closure properties for sample path large deviations. These properties include sum, reduction, composition and reflection mapping. Using these properties, we show that the exponential decay rates of the steady state queue length distributions in an intree network with routing can be derived by a set of recursive equations. The solution of this set of equations is related to the recently developed theory of effective bandwidth for high speed digital networks, especially ATM networks. We also prove a conditional limit theorem that illustrates how a queue builds up in an intree network.
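For a single queue, the exponential decay rate that such large-deviations arguments deliver is the positive root of the effective-bandwidth equation. A bisection sketch for i.i.d. Bernoulli arrivals (this source model and the function names are our illustration, not the paper's network recursion):

```python
import math

def decay_rate(p, c, lo=1e-9, hi=50.0, iters=200):
    """Exponential decay rate of the steady-state queue length tail for
    a discrete-time queue with i.i.d. Bernoulli(p) arrivals and service
    rate c per slot (p < c < 1): the positive root theta* of
    log(1 - p + p*e^theta) = c*theta, found by bisection.
    Then P(Q > b) decays roughly like exp(-theta* * b)."""
    def f(theta):
        return math.log(1 - p + p * math.exp(theta)) - c * theta
    # f(0) = 0, f < 0 just above 0 when p < c, and f -> +inf for large
    # theta when c < 1, so the sign change brackets the positive root.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```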
Push-to-peer video-on-demand system: Design and evaluation
 In UMass Computer Science Technical Report 2006-59
, 2006
"... Number: CRPRL2006110001 ..."
AIMD, Fairness and Fractal Scaling of TCP Traffic
 in Proceedings of IEEE INFOCOM
, 2002
Abstract

Cited by 33 (4 self)
We propose a natural and simple model for the joint throughput evolution of a set of TCP sessions sharing a common tail-drop bottleneck router, via products of random matrices. This model allows one to predict the fluctuations of the throughput of each session as a function of the synchronization rate in the bottleneck router; several other, more refined properties of the protocol are analyzed, such as the instantaneous imbalance between sessions, the autocorrelation function, and the performance degradation due to synchronization of losses. When aggregating traffic obtained from this model, one obtains, for certain ranges of the parameters, short-time-scale statistical properties that are consistent with a fractal scaling similar to what was identified on real traces using wavelets.
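The product-of-random-matrices dynamic can be simulated directly: rates grow additively until the link saturates, and at each loss event every session halves independently with probability equal to the synchronization rate, i.e. a random diagonal matrix with entries in {1, 1/2} is applied. The slotted model and parameter names are our own simplification.

```python
import random

def aimd_trajectory(n_sessions, capacity, sync_rate, steps, seed=0):
    """Joint AIMD throughput evolution at a tail-drop bottleneck.
    sync_rate = 1 means fully synchronized losses; at least one session
    always loses at a congestion event."""
    rng = random.Random(seed)
    x = [capacity / (2 * n_sessions)] * n_sessions
    history = [list(x)]
    for _ in range(steps):
        if sum(x) >= capacity:              # congestion: loss event
            halved = [rng.random() < sync_rate for _ in range(n_sessions)]
            if not any(halved):             # force at least one loss
                halved[rng.randrange(n_sessions)] = True
            x = [xi / 2 if h else xi for xi, h in zip(x, halved)]
        else:                               # additive increase
            x = [xi + 1.0 for xi in x]
        history.append(list(x))
    return history
```

With full synchronization the sessions stay perfectly balanced; lowering `sync_rate` produces the instantaneous imbalance between sessions that the model is used to study.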