Stability and Performance Analysis of Networks Supporting Elastic Services
 IEEE/ACM Transactions on Networking, 2001
Cited by 87 (6 self)
We consider the stability and performance of a model for networks supporting services that adapt their transmission to the available bandwidth. Not unlike real networks, in our model connection arrivals are stochastic, each has a random amount of data to send, and the number of ongoing connections in the system changes over time. Consequently, the bandwidth allocated to, or throughput achieved by, a given connection may change during its lifetime as feedback control mechanisms react to network loads. Ideally, if there were a fixed number of ongoing connections, such feedback mechanisms would reach an equilibrium bandwidth allocation, typically characterized in terms of its "fairness" to users, e.g., max-min or proportionally fair. In this paper we prove the stability of such networks when the offered load on each link does not exceed its capacity. We use simulation to investigate performance, in terms of average connection delays, for various fairness criteria. Finally, we pose an architectural problem in TCP/IP's decoupling of the transport and network layers from the point of view of guaranteeing connection-level stability, which we claim may explain congestion phenomena on the Internet.
Index Terms: ABR service, bandwidth allocation, Lyapunov functions, performance analysis, proportional fairness, rate control, stability, TCP/IP, weighted max-min fairness.
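The max-min fair allocation mentioned in the abstract can be computed by the classical progressive-filling (water-filling) procedure. The sketch below is our own illustration, not code from the paper; the network model (one capacity per link, one set of links per flow) is a deliberate simplification:

```python
def max_min_fair(links, flows):
    """Progressive filling: raise all unfrozen flow rates equally until
    some link saturates, then freeze the flows crossing that link.
    links: dict link -> capacity; flows: dict flow -> set of links used."""
    rate = {f: 0.0 for f in flows}
    frozen = set()
    cap = dict(links)
    while len(frozen) < len(flows):
        # number of still-active flows crossing each link
        load = {l: sum(1 for f in flows if f not in frozen and l in flows[f])
                for l in links}
        # largest equal increment before some loaded link saturates
        inc = min(cap[l] / load[l] for l in links if load[l] > 0)
        for f in flows:
            if f not in frozen:
                rate[f] += inc
        for l in links:
            if load[l] > 0:
                cap[l] -= inc * load[l]
        # flows on (numerically) saturated links keep their current rate
        for l in links:
            if load[l] > 0 and cap[l] < 1e-9:
                frozen |= {f for f in flows if l in flows[f]}
    return rate
```

For example, two flows sharing a unit-capacity link each get 0.5, while a third flow alone on a larger link fills its remaining capacity.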
Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise
 Stochastic Processes and their Applications
Cited by 52 (14 self)
The ergodic properties of SDEs, and various time discretizations for SDEs, are studied. The ergodicity of SDEs is established by using techniques from the theory of Markov chains on general state spaces. Application of these Markov chain results leads to straightforward proofs of ergodicity for a variety of SDEs, in particular for problems with degenerate noise and for problems with locally Lipschitz vector fields. The key points which need to be verified are the existence of a Lyapunov function inducing returns to a compact set, a uniformly reachable point from within that set, and some smoothness of the probability densities; the last two points imply a minorization condition. Together, the minorization condition and Lyapunov structure give geometric ergodicity. Applications include the Langevin equation, the Lorenz equation with degenerate noise, and gradient systems. The ergodic theorems proved are strong, yielding exponential convergence of expectations for classes of measurable functions restricted only by the condition that they grow no faster than the Lyapunov function. The same Markov chain theory is then used to study time-discrete approximations of these SDEs. It is shown that the minorization condition is robust under approximation. For globally Lipschitz vector fields this is also true of the Lyapunov condition. However, in the locally Lipschitz case the Lyapunov ...
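The time discretizations the abstract refers to include the Euler-Maruyama scheme. Purely as an illustration (the parameter names and the choice of a cubic force, i.e. a locally but not globally Lipschitz drift, are ours, not the paper's), here is Euler-Maruyama applied to a Langevin-type equation dq = p dt, dp = (−∇V(q) − γp) dt + σ dW:

```python
import math
import random

def euler_maruyama_langevin(grad_V, q0, p0, gamma, sigma, dt, n_steps, rng):
    """Euler-Maruyama discretization of the Langevin equation
        dq = p dt,   dp = (-grad_V(q) - gamma * p) dt + sigma dW.
    Returns the list of (q, p) states along the trajectory."""
    q, p = q0, p0
    path = [(q, p)]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))          # Brownian increment
        q = q + p * dt
        p = p + (-grad_V(q) - gamma * p) * dt + sigma * dw
        path.append((q, p))
    return path
```

With a quartic potential V(q) = q^4/4 the drift grad_V(q) = q^3 is only locally Lipschitz, which is exactly the regime where the paper shows the discrete-time Lyapunov condition can fail.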
Throughput and fairness guarantees through maximal scheduling in wireless networks
 IEEE Transactions on Information Theory, 2008
Cited by 28 (1 self)
We address the question of providing throughput guarantees through distributed scheduling, which has remained an open problem for some time. We consider a simple distributed scheduling strategy, maximal scheduling, and prove that it attains a guaranteed fraction of the maximum throughput region in arbitrary wireless networks. The guaranteed fraction depends on the "interference degree" of the network, which is the maximum number of transmitter-receiver pairs that interfere with any given transmitter-receiver pair in the network and do not interfere with each other. Depending on the nature of communication, the transmission powers and the propagation models, the guaranteed fraction can be lower-bounded in terms of the maximum link degrees in the underlying topology, or even by constants that are independent of the topology. We prove that the guarantees are tight in that they cannot be improved any further with maximal scheduling. Our results can also be generalized to end-to-end multi-hop sessions. Finally, we enhance maximal scheduling to guarantee fairness of rate allocation among different sessions.
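A maximal schedule in the sense used here can be produced by a simple greedy pass over the backlogged links (shown centralized for clarity; the paper's point is that such schedules can also be computed distributedly). This sketch and its interface are our own illustration:

```python
def maximal_schedule(links, interferes):
    """Greedy maximal scheduling: scan the backlogged links and activate
    each one that does not interfere with an already-activated link.
    The result is maximal: no further link can be added to it.
    links: iterable of link ids (e.g., in arrival order);
    interferes(a, b): True if links a and b cannot transmit together."""
    active = []
    for l in links:
        if all(not interferes(l, m) for m in active):
            active.append(l)
    return active
```

On a path of four links where adjacent links interfere, the greedy pass activates links 0 and 2, and neither remaining link can be added.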
The natural work-stealing algorithm is stable
 In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS), 2001
Cited by 26 (1 self)
In this paper we analyse a very simple dynamic work-stealing algorithm. In the work-generation model, there are n (work) generators. A generator-allocation function is simply a function from the n generators to the n processors. We consider a fixed, but arbitrary, distribution D over generator-allocation functions. During each time step of our process, a generator-allocation function h is chosen from D, and the generators are allocated to the processors according to h. Each generator may then generate a unit-time task which it inserts into the queue of its host processor. It generates such a task independently with probability λ. After the new tasks are generated, each processor removes one task from its queue and services it. For many choices of D, the work-generation model allows the load to become arbitrarily imbalanced, even when λ < 1. For example, D could be the point distribution containing a single function h which allocates all of the generators to just one processor. For this choice of D, the chosen processor receives around λn units of work at each step and services one. The natural work-stealing algorithm that we analyse is widely used in practical applications and works as follows. During each time step, each empty ...
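One step of the work-generation model can be sketched as below. The abstract's description of the stealing rule is cut off, so the "steal half of a random victim's queue" rule here is our own stand-in, not necessarily the algorithm analysed in the paper:

```python
import random

def step(queues, allocation, lam, rng):
    """One time step of the work-generation model from the abstract:
    generator i is allocated to processor allocation[i] and, with
    probability lam, adds one unit-time task to that processor's queue;
    then every processor with a nonempty queue serves one task.
    The stealing rule is a hypothetical stand-in (the abstract is
    truncated): each empty processor picks a processor uniformly at
    random and steals half of its queue before service happens."""
    n = len(queues)
    for i, proc in enumerate(allocation):
        if rng.random() < lam:              # generator i emits a task
            queues[proc] += 1
    for p in range(n):                      # stealing by empty processors
        if queues[p] == 0:
            victim = rng.randrange(n)
            stolen = queues[victim] // 2
            queues[victim] -= stolen
            queues[p] += stolen
    for p in range(n):                      # each processor serves one task
        if queues[p] > 0:
            queues[p] -= 1
    return queues
```

Even the adversarial point distribution from the abstract (all generators hosted on one processor) spreads its load across the machine once empty processors steal.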
Walks in the quarter plane: Kreweras’ algebraic model
, 2004
Cited by 22 (7 self)
We consider planar lattice walks that start from (0, 0), remain in the first quadrant i, j ≥ 0, and are made of three types of steps: North-East, West and South. These walks are known to have remarkable enumerative and probabilistic properties:
– they are counted by nice numbers (Kreweras 1965),
– the generating function of these numbers is algebraic (Gessel 1986),
– the stationary distribution of the corresponding Markov chain in the quadrant has an algebraic probability generating function (Flatto and Hahn 1984).
These results are not well understood, and have been established via complicated proofs. Here we give a uniform derivation of all of them, which is more elementary than those previously published. We then go further by computing the full law of the Markov chain. This helps to delimit the border of algebraicity: the associated probability generating function is no longer algebraic, unless a diagonal symmetry holds. Our proofs are based on the solution of certain functional equations, which are very simple to establish. Finding purely combinatorial proofs remains an open problem.
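These walks can be counted directly by dynamic programming over (steps remaining, position), which reproduces Kreweras' "nice numbers" 1, 2, 16, 192, … for walks of length 0, 3, 6, 9, … returning to the origin. A minimal sketch:

```python
from functools import lru_cache

# Steps of the Kreweras model: North-East, West, South.
STEPS = [(1, 1), (-1, 0), (0, -1)]

def count_walks(length, end=(0, 0)):
    """Count lattice walks of the given length that start at (0, 0),
    stay in the quarter plane i, j >= 0, and finish at `end`."""
    @lru_cache(maxsize=None)
    def walks(steps_left, i, j):
        if i < 0 or j < 0:
            return 0                     # left the quarter plane
        if steps_left == 0:
            return 1 if (i, j) == end else 0
        return sum(walks(steps_left - 1, i + di, j + dj)
                   for di, dj in STEPS)
    return walks(length, 0, 0)
```

Since a walk returning to the origin must use equally many steps of each type, only lengths divisible by 3 give a nonzero count.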
Arbitrary Throughput Versus Complexity Tradeoffs in Wireless Networks using Graph Partitioning
, 2007
Cited by 22 (6 self)
Several policies have recently been proposed for attaining the maximum throughput region, or a guaranteed fraction thereof, through dynamic link scheduling. Among these policies, the ones that attain the maximum throughput region require a computation time which is linear in the network size, and the ones that require constant or logarithmic computation time attain only certain fractions of the maximum throughput region. In contrast, in this paper we propose policies that can attain any desirable fraction of the maximum throughput region using a computation time that is largely independent of the network size. First, using a combination of graph partitioning techniques and Lyapunov arguments, we propose a simple policy for tree topologies under the primary interference model that requires each link to exchange only one bit of information with its adjacent links and approximates the maximum throughput region using a computation time that depends only on the maximum degree of nodes and the approximation factor. Then we develop a framework for attaining arbitrarily close approximations of the maximum throughput region in arbitrary networks, and use this framework to obtain any desired tradeoff between throughput guarantees and computation times for a large class of networks and interference models. Specifically, given any ε > 0, the maximum throughput region can be approximated in these networks within a factor of 1 − ε using a computation time that depends only on the maximum node degree and ε.
Fairness in MIMD Congestion Control Algorithms
, 2005
Cited by 16 (3 self)
The Multiplicative Increase Multiplicative Decrease (MIMD) congestion control algorithm, in the form of Scalable TCP, has been proposed for high-speed networks. We study fairness among sessions sharing a common bottleneck link, where one or more sessions use the MIMD algorithm. Losses, or congestion signals, occur when the capacity is reached but could also be initiated before that. Both synchronous and asynchronous losses are considered. In the asynchronous case, only one session suffers a loss at a loss instant. Two models are then considered to determine which source loses a packet: a rate-dependent model in which the packet loss probability of a session is proportional to its rate at the congestion instant, and a rate-independent loss model. We first study how two MIMD sessions share the capacity in the presence of general combinations of synchronous and asynchronous losses. We show that, in the presence of rate-dependent losses, the capacity is fairly shared, whereas rate-independent losses lead to high unfairness. We then study inter-protocol fairness: how the capacity is shared in the presence of synchronous losses among sessions some of which use Additive Increase Multiplicative Decrease (AIMD) protocols while the others use MIMD protocols.
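The synchronous-loss unfairness is easy to see in a toy simulation: under MIMD, both rates are always scaled by the same factors, so the ratio of an initially unequal pair never changes. The model below (the round structure and parameter values are ours) is a sketch, not the paper's exact setup:

```python
def mimd_synchronous(x1, x2, alpha, beta, rounds, capacity):
    """Two MIMD (Scalable-TCP-like) flows sharing one link: each rate
    is multiplied by alpha every round; when the combined rate reaches
    the link capacity, a synchronous loss multiplies BOTH rates by beta.
    Since both flows always see identical multiplicative factors, the
    ratio x1/x2 is invariant, so an initial imbalance persists."""
    for _ in range(rounds):
        x1, x2 = x1 * alpha, x2 * alpha          # multiplicative increase
        if x1 + x2 >= capacity:                  # congestion signal
            x1, x2 = x1 * beta, x2 * beta        # synchronous decrease
    return x1, x2
```

Starting from rates 1 and 4, the flows still end with a 1:4 split after any number of rounds, which is the persistent unfairness the abstract describes for synchronous (rate-independent) losses.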
Spectral gaps in Wasserstein distances and the 2D stochastic NavierStokes equations
, 2006
Cited by 16 (8 self)
We develop a general method that allows one to show the existence of spectral gaps for Markov semigroups on Banach spaces. Unlike most previous work, the type of norm we consider for this analysis is neither a weighted supremum norm nor an L^p-type norm, but involves the derivative of the observable as well, and hence can be seen as a type of 1-Wasserstein distance. This turns out to be a suitable approach for infinite-dimensional spaces where the usual Harris or Doeblin conditions, which are geared towards total variation convergence, regularly fail to hold. In the first part of this paper, we consider semigroups that have uniform behaviour, which one can view as an extension of Doeblin's condition. We then proceed to study situations where the behaviour is not so uniform, but the system has a suitable Lyapunov structure, leading to a type of Harris condition. We finally show that the latter condition is satisfied by the two-dimensional stochastic Navier-Stokes equations, even in situations where the forcing is extremely degenerate. Using the convergence result, we show that the stochastic Navier-Stokes equations' invariant measures depend continuously on the viscosity and the structure of the forcing.
An Optimal Data Propagation Algorithm for Maximizing the Lifespan of Sensor Networks
 In DCOSS, 2006
Cited by 13 (3 self)
We consider the problem of data propagation in wireless sensor networks and revisit the family of mixed strategy routing schemes. We show that maximizing the lifespan, balancing the energy among individual sensors and maximizing the message flow in the network are equivalent. We propose a distributed and adaptive data propagation algorithm for balancing the energy among sensors in the network. The mixed routing algorithm we propose allows each sensor node either to send a message to one of its immediate neighbors, or to send it directly to the base station, the decision being based on a potential function depending on its remaining energy. By considering a simple model of the network and using a linear programming description of the message flow, we prove the strong result that an energy-balanced mixed strategy beats every other possible routing strategy in terms of lifespan maximization. Moreover, we provide sufficient conditions for ensuring the dynamic stability of the algorithm. The algorithm is inspired by the gradient-based routing scheme, but by allowing messages to be sent directly to the base station we improve the lifespan of the network considerably. As a matter of fact, we show experimentally that our algorithm is close to optimal and that it even beats the best centralized multi-hop routing strategy.
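The hop-or-direct decision can be sketched as below. The specific probability rule is a hypothetical stand-in for the paper's potential function, shown only to illustrate an energy-based mixed strategy:

```python
import random

def forward(node_energy, node_id, next_hop, cost_hop, cost_direct, rng):
    """One routing decision in a mixed strategy (our sketch of the idea
    in the abstract): a sensor either relays a message to its neighbor
    next_hop (cheap) or transmits it straight to the base station
    (expensive).  The probability of the direct transmission grows as
    the node's remaining energy exceeds its neighbor's -- a simple,
    hypothetical stand-in for the paper's potential function.
    Returns the receiver ('base' means the base station) and charges
    the sender's battery accordingly."""
    e_self = node_energy[node_id]
    e_next = node_energy[next_hop]
    p_direct = max(0.0, min(1.0, (e_self - e_next) / max(e_self, 1e-9)))
    if rng.random() < p_direct:
        node_energy[node_id] -= cost_direct
        return "base"
    node_energy[node_id] -= cost_hop
    return next_hop
```

A node richer in energy than its neighbor thus tends to pay for the expensive direct transmission, draining itself toward the neighbor's level, which is the energy-balancing effect the abstract argues maximizes lifespan.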
Stable scheduling policies for maximizing throughput in generalized constrained queueing systems
 In Proceedings of IEEE INFOCOM, 2006
Cited by 13 (5 self)
We consider a class of queueing networks referred to as "generalized constrained queueing networks", which form the basis of several different communication networks and information systems. These networks consist of a collection of queues such that only certain sets of queues can be concurrently served. Whenever a queue is served, the system receives a certain reward. Different rewards are obtained for serving different queues, and furthermore, the reward obtained for serving a queue depends on the set of concurrently served queues. We demonstrate that the dependence of the rewards on the schedules alters fundamental relations between performance metrics like throughput and stability. Specifically, maximizing the throughput is no longer equivalent to maximizing the stability region; we therefore need to maximize one subject to certain constraints on the other. Since stability is critical for bounding packet delays and buffer overflow, we focus on maximizing the throughput subject to stabilizing the system. We design provably optimal scheduling strategies that attain this goal by scheduling the queues for service based on the queue lengths and the rewards provided by different selections. The proposed scheduling strategies are, however, computationally complex. We subsequently develop techniques to reduce the complexity and yet attain the same throughput and stability region. We demonstrate that our framework is general enough to accommodate random rewards and random scheduling constraints.
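Selection based on queue lengths and schedule-dependent rewards can be illustrated by a simple enumerative rule in the spirit of max-weight scheduling; the paper's provably optimal policy is more involved, and the interface below is our own:

```python
def best_schedule(queue_len, schedules, reward):
    """Pick the feasible service schedule maximizing the sum, over the
    queues it serves, of queue_length * reward -- a queue-length-weighted
    rule in the spirit of the strategies the abstract describes.
    queue_len: dict queue -> current length;
    schedules: list of feasible sets of queues that can be served together;
    reward(q, s): reward for serving queue q within schedule s (this is
    where the schedule-dependence of rewards enters)."""
    def weight(s):
        return sum(queue_len[q] * reward(q, s) for q in s)
    return max(schedules, key=weight)
```

Because the reward callback sees the whole schedule, a small set of queues with schedule-boosted rewards can outweigh a larger set, which is exactly the throughput-versus-stability tension the abstract highlights.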