Results 1–10 of 28
Effective Bandwidths for Multiclass Markov Fluids and Other ATM Sources, 1993
Abstract
Cited by 187 (14 self)
We show the existence of effective bandwidths for multiclass Markov fluids and other types of sources that are used to model ATM traffic. More precisely, we show that when such sources share a buffer with deterministic service rate, a constraint on the tail of the buffer occupancy distribution is a linear constraint on the number of sources. That is, for a small loss probability one can assume that each source transmits at a fixed rate called its effective bandwidth. When traffic parameters are known, effective bandwidths can be calculated and may be used to obtain a circuit-switched style call acceptance and routing algorithm for ATM networks. The important feature of the effective bandwidth of a source is that it is a characteristic of that source and the acceptable loss probability only. Thus, the effective bandwidth of a source does not depend on the number of sources sharing the buffer nor on the model parameters of other types of sources sharing the buffer.
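The abstract states that effective bandwidths can be calculated when traffic parameters are known, but does not spell out the formula. A standard characterization for a Markov fluid (used here only as an illustrative sketch, not quoted from the paper) takes the effective bandwidth at quality parameter δ to be the maximal real eigenvalue of diag(r) + Q/δ, where Q is the generator of the modulating chain and r the per-state emission rates. A minimal NumPy sketch for a hypothetical on-off source:

```python
import numpy as np

def effective_bandwidth(Q, r, delta):
    # alpha(delta): maximal real eigenvalue of diag(r) + Q/delta,
    # where Q is the generator of the modulating chain and r the
    # per-state fluid emission rates.
    return max(np.linalg.eigvals(np.diag(r) + Q / delta).real)

# Hypothetical on-off source: off->on rate a, on->off rate b, peak rate h.
a, b, h = 1.0, 2.0, 3.0
Q = np.array([[-a, a], [b, -b]])
r = np.array([0.0, h])
mean_rate = h * a / (a + b)          # stationary mean rate (= 1.0 here)

alpha = effective_bandwidth(Q, r, 1.0)
# alpha lies between the mean rate and the peak rate, and increases
# with delta: a stricter loss constraint costs more bandwidth.
```

As δ → 0 the eigenvalue tends to the mean rate and as δ → ∞ to the peak rate, which is the sanity check applied below.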
Large Deviations, the Shape of the Loss Curve, and Economies of Scale in Large Multiplexers, 1995
Abstract
Cited by 114 (11 self)
We analyse the queue Q^L at a multiplexer with L inputs. We obtain a large deviation result, namely that under very general conditions lim_{L→∞} L^{−1} log P[Q^L > Lb] = −I(b), provided the offered load is held constant, where the shape function I is expressed in terms of the cumulant generating functions of the input traffic. This provides an improvement on the usual effective bandwidth approximation P[Q^L > b] ≈ e^{−δb}, replacing it with P[Q^L > b] ≈ e^{−L I(b/L)}. The difference I(b) − δb determines the economies of scale which are to be obtained in large multiplexers. If the limit ν = −lim_{t→∞} λ_t(δ) exists (here λ_t is the finite-time cumulant of the workload process) then lim_{b→∞} (I(b) − δb) = ν. We apply this idea to a number of examples of arrival processes: heterogeneous superpositions, Gaussian processes, Markovian additive processes and Poisson processes. We obtain expressions for ν in these cases. ν is zero for independent arrivals, but positive for arrivals with positive correlations. Thus economies of scale are obtainable for the highly bursty traffic expected in ATM multiplexing.
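As a worked special case of the shape function (a sketch assuming Brownian per-source input with mean μ and variance σ² per unit time and service c per source; these parameters are illustrative, not from the paper), the standard form I(b) = inf_{t>0} (b + (c − μ)t)²/(2σ²t) minimizes at t* = b/(c − μ), giving the closed form I(b) = 2(c − μ)b/σ². In this independent-increments case I(b) − δb ≡ 0 for δ = 2(c − μ)/σ², consistent with ν being zero for independent arrivals. A quick numerical check:

```python
import numpy as np

# Hypothetical per-source parameters (illustrative, not from the paper):
# Brownian input with mean mu and variance sigma2 per unit time,
# deterministic service c per source, buffer level b.
mu, sigma2, c, b = 1.0, 0.5, 1.5, 2.0

def rate_fn(t):
    # Large-deviations cost of building a backlog b over a window of length t.
    return (b + (c - mu) * t) ** 2 / (2 * sigma2 * t)

ts = np.linspace(1e-3, 100.0, 200_000)
I_numeric = rate_fn(ts).min()          # inf over t, on a grid
I_closed = 2 * (c - mu) * b / sigma2   # closed form at minimiser t* = b/(c - mu)
```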
Effective Bandwidth and Fast Simulation of ATM Intree Networks, 1992
Abstract
Cited by 49 (13 self)
We consider the efficient estimation, via simulation, of very low buffer overflow probabilities in certain acyclic ATM queueing networks. We apply the theory of effective bandwidths and Markov additive processes to derive an asymptotically optimal simulation scheme for estimating such probabilities for a single queue with multiple independent sources, each of which may be either a Markov modulated process or an autoregressive process. This result extends earlier work on queues with either independent arrivals or with a single Markov modulated arrival source. The results are then extended to estimating loss probabilities for intree networks of such queues. Experimental results show that the method can provide many orders of magnitude reduction in variance in complex queueing systems that are not amenable to analysis.
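The exponential change of measure underlying such schemes can be illustrated on the simplest possible case (a hypothetical scalar random walk, not one of the paper's Markov-modulated models): to estimate the overflow probability P(max_n S_n > b) for i.i.d. Gaussian(−μ, 1) steps, twist by θ* = 2μ, the positive root of the cumulant Λ(θ) = θ²/2 − μθ, so the simulated drift flips to +μ and each crossing is weighted by exp(−θ* S_τ):

```python
import math
import random

random.seed(1)

# Hypothetical scalar model (not the paper's Markov-modulated sources):
# random walk with Gaussian(-mu, 1) steps; overflow event {max_n S_n > b}.
mu, b = 0.5, 5.0
theta = 2 * mu        # positive root of Lambda(theta) = theta**2/2 - mu*theta

def one_sample():
    # Simulate under the exponentially twisted law (drift flipped to +mu),
    # so level b is hit almost surely; return the likelihood-ratio weight.
    s = 0.0
    while s <= b:
        s += random.gauss(mu, 1.0)
    return math.exp(-theta * s)

n = 2000
est = sum(one_sample() for _ in range(n)) / n
# Every weight is at most exp(-theta*b), so the estimate never exceeds
# the Chernoff bound exp(-2*mu*b) -- the hallmark of the optimal twist.
```

Under the twisted law the rare event happens on every run, which is where the many-orders-of-magnitude variance reduction reported in the abstract comes from.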
Sample Path Large Deviations and Intree Networks, Queueing Systems, 1994
Abstract
Cited by 40 (8 self)
Using the contraction principle, in this paper we derive a set of closure properties for sample path large deviations. These properties include sum, reduction, composition and reflection mapping. Using these properties, we show that the exponential decay rates of the steady state queue length distributions in an intree network with routing can be derived by a set of recursive equations. The solution of this set of equations is related to the recently developed theory of effective bandwidth for high speed digital networks, especially ATM networks. We also prove a conditional limit theorem that illustrates how a queue builds up in an intree network.
Spectral theory and limit theorems for geometrically ergodic Markov processes. Part II: Empirical measures & unbounded functionals, 2001
Abstract
Cited by 37 (15 self)
Consider the partial sums {S_t} of a real-valued functional F(Φ(t)) of a Markov chain {Φ(t)} with values in a general state space. Assuming only that the Markov chain is geometrically ergodic and that the functional F is bounded, the following conclusions are obtained:

Spectral theory. Well-behaved solutions f̌ can be constructed for the "multiplicative Poisson equation" (e^{αF} P) f̌ = λ f̌, where P is the transition kernel of the Markov chain and α ∈ C is a constant. The function f̌ is an eigenfunction, with corresponding eigenvalue λ, for the kernel (e^{αF} P)(x, dy) = e^{αF(x)} P(x, dy).

A "multiplicative" mean ergodic theorem. For all complex α in a neighborhood of the origin, the normalized mean of exp(αS_t) (and not the logarithm of the mean) converges to f̌ exponentially fast, where f̌ is a solution of the multiplicative Poisson equation.

Edgeworth expansions. Rates are obtained for the convergence of the distribution function of the normalized partial sums S_t to the standard Gaussian distribution. The first term in this expansion is of order (1/√t) and it depends on the initial condition of the Markov chain through the solution F̂ of the associated Poisson equation (and not the solution f̌ of the multiplicative Poisson equation).

Large deviations. The partial sums are shown to satisfy a large deviations principle in a neighborhood of the mean. This result, proved under geometric ergodicity alone, cannot in general be extended to the whole real line.

Exact large deviations asymptotics. Rates of convergence are obtained for the large deviations estimates above. The polynomial pre-exponent is of order (1/√t) and its coefficient depends on the initial condition of the Markov chain through the solution f̌ of the multiplicative Poisson equation.

Extensions of these results to continuous-time Markov processes are also given.
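On a finite state space these spectral objects are plain matrix quantities, which makes the multiplicative mean ergodic theorem easy to verify numerically. A sketch with a hypothetical 3-state chain (all numbers illustrative): the twisted kernel is P̂ = diag(e^{αF}) P, its Perron eigenpair gives (λ, f̌), and E_x[exp(αS_n)] = (P̂ⁿ 1)(x) grows geometrically at rate λ.

```python
import numpy as np

# Hypothetical 3-state chain (all numbers illustrative).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
F = np.array([1.0, -0.5, 0.25])
alpha = 0.3

# Twisted kernel Phat(x, y) = exp(alpha*F(x)) * P(x, y).
Phat = np.diag(np.exp(alpha * F)) @ P

# Perron eigenpair: Phat @ fcheck = lam * fcheck.
eigvals, eigvecs = np.linalg.eig(Phat)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
fcheck = eigvecs[:, k].real
if fcheck[0] < 0:
    fcheck = -fcheck     # fix the sign; the Perron eigenvector is positive

# Multiplicative mean ergodic check: E_x[exp(alpha*S_n)] = (Phat^n 1)(x),
# so ratios of successive values converge to lam.
n = 200
v1 = np.linalg.matrix_power(Phat, n) @ np.ones(3)
v0 = np.linalg.matrix_power(Phat, n - 1) @ np.ones(3)
ratio = v1[0] / v0[0]
```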
Exponential Bounds with Applications to Call Admission, 1996
Abstract
Cited by 21 (10 self)
In this paper we develop a framework for computing upper and lower bounds of an exponential form for a large class of single-resource systems with Markov additive inputs. Specifically, the bounds are on quantities such as backlog, queue length, and response time. Explicit or computable expressions for our bounds are given in the context of queueing theory and numerical comparisons with other bounds are presented. The paper concludes with two applications to admission control in multimedia systems. Keywords: Tail distribution; Exponential bound; Large deviation principle; Ergodicity; Markov chain; Matrix analysis; Queues; Markov additive process; Effective bandwidth; Call admission control.
Large deviations asymptotics and the spectral theory of multiplicatively regular Markov processes, Electron. J. Probab.
Abstract
Cited by 16 (6 self)
In this paper we continue the investigation of the spectral theory and exponential asymptotics of primarily discrete-time Markov processes, following Kontoyiannis and Meyn [32]. We introduce a new family of nonlinear Lyapunov drift criteria, which characterize distinct subclasses of geometrically ergodic Markov processes in terms of simple inequalities for the nonlinear generator. We concentrate primarily on the class of multiplicatively regular Markov processes, which are characterized via simple conditions similar to (but weaker than) those of Donsker–Varadhan. For any such process Φ = {Φ(t)} with transition kernel P on a general state space X, the following are obtained. Spectral theory: For a large class of (possibly unbounded) functionals F: X → C, the kernel P̂(x, dy) = e^{F(x)} P(x, dy) has a discrete spectrum in an appropriately defined Banach space. It follows that there exists a "maximal" solution (λ, f̌) to the multiplicative Poisson equation, defined as the eigenvalue problem P̂ f̌ = λ f̌. The functional Λ(F) = log(λ) is convex, smooth, and its convex dual Λ* is convex, with compact sublevel sets.
Machine learning and nonparametric bandit theory, IEEE Trans. Automat. Contr., 1995
Abstract
Cited by 14 (1 self)
Abstract—In its most basic form, bandit theory is concerned with the design problem of sequentially choosing members from a given collection of random variables so that the regret, i.e., R_n = Σ_j (μ* − μ_j) E T_n(j), grows as slowly as possible with increasing n. Here μ_j is the expected value of the bandit arm (i.e., random variable) indexed by j, T_n(j) is the number of times arm j has been selected in the first n decision stages, and μ* = sup_j μ_j. The present paper contributes to the theory by considering the situation in which observations are dependent. To begin with, the dependency is presumed to depend only on past observations of the same arm, but later, we allow that it may be with respect to the entire past and that the set of arms is infinite. This brings queues and, more generally, controlled Markov processes into our purview. Thus our "black-box" methodology is suitable for the case when the only observables are cost values and, in particular, the probability structure and loss function are unknown to the designer. The conclusion of the analysis is that under lenient conditions, using algorithms prescribed herein, regret growth is commensurate with that in the simplest i.i.d. cases. Our methods represent an alternative to recent stochastic-approximation/perturbation-analysis ideas for tuning queues.
... that asymptotically the relative number of times the best arm is chosen converges a.s. to one, assuming only independence and finite first moments. The present work follows Robbins' nonparametric/non-Bayesian formulation, but drops the independence assumption. It employs concepts established in the i.i.d. analysis by Lai and Robbins [4], who proposed a strategy for K-armed parametric bandits which achieves a regret growth, with number n of decision epochs, of (c_K + o(1)) log n (1.1). Furthermore, they established that growth rate (1.1) is a lower bound for any strategy for which the regret is of order o(n^a), any a > 0, uniformly over all parameters. That is, the regret rate in (1.1) cannot be improved, even by Bayesian strategies, even for a two-armed Bernoulli bandit, and even if the probability of one of the arms is specified. Thus it may be surprising that in Section II, we show that without making ...
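For contrast with the dependent-observation setting treated in the paper, a minimal i.i.d. baseline in the Lai–Robbins tradition is the UCB1 index rule, which also attains logarithmic regret. This is not the paper's algorithm; it is included only as a runnable reference point, with arm means chosen for illustration:

```python
import math
import random

random.seed(0)

# Two hypothetical Bernoulli arms; arm 1 is best.
p = [0.3, 0.7]
n = 5000
counts = [0, 0]      # T_n(j): number of pulls of arm j
sums = [0.0, 0.0]    # accumulated rewards per arm

for t in range(1, n + 1):
    if 0 in counts:
        arm = counts.index(0)    # play each arm once first
    else:
        # UCB1 index: empirical mean plus exploration bonus.
        ucb = [sums[j] / counts[j] + math.sqrt(2 * math.log(t) / counts[j])
               for j in range(2)]
        arm = ucb.index(max(ucb))
    counts[arm] += 1
    sums[arm] += 1.0 if random.random() < p[arm] else 0.0

frac_best = counts[1] / n        # fraction of pulls on the best arm
```

The suboptimal arm is pulled only O(log n) times, so the fraction of pulls on the best arm tends to one, matching the a.s. convergence described above.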
Dynamic importance sampling for uniformly recurrent Markov chains, Annals of Applied Probability, 2005
Abstract
Cited by 13 (3 self)
Importance sampling is a variance reduction technique for efficient estimation of rare-event probabilities by Monte Carlo. In standard importance sampling schemes, the system is simulated using an a priori fixed change of measure suggested by a large deviation lower bound analysis. Recent work, however, has suggested that such schemes do not work well in many situations. In this paper we consider dynamic importance sampling in the setting of uniformly recurrent Markov chains. By "dynamic" we mean that in the course of a single simulation, the change of measure can depend on the outcome of the simulation up till that time. Based on a control-theoretic approach to large deviations, the existence of asymptotically optimal dynamic schemes is demonstrated in great generality. The implementation of the dynamic schemes is carried out with the help of a limiting Bellman equation. Numerical examples are presented to contrast the dynamic and standard schemes.
Importance sampling techniques for the multidimensional ruin problem for general Markov additive sequences of random vectors, Ann. Appl. Probab., 2002
Abstract
Cited by 9 (0 self)
Let {(X_n, S_n) : n = 0, 1, ...} be a Markov additive process, where {X_n} is a Markov chain on a general state space and S_n is an additive component on R^d. We consider P{S_n ∈ A/ε for some n} as ε → 0, where A ⊂ R^d is open and the mean drift of {S_n} is away from A. Our main objective is to study the simulation of P{S_n ∈ A/ε for some n} using the Monte Carlo technique of importance sampling. If the set A is convex, then we establish: (i) the precise dependence (as ε → 0) of the estimator variance on the choice of the simulation distribution; (ii) the existence of a unique simulation distribution which is efficient and optimal in the asymptotic sense of Siegmund (1976). We then extend our techniques to the case where A is not convex. Our results lead to positive conclusions which complement the multidimensional counterexamples of Glasserman and Wang (1997).