Results 1-10 of 11
General Conditions for Bounded Relative Error in Simulations of Highly Reliable Markovian Systems
Advances in Applied Probability, 1996
Abstract

Cited by 19 (4 self)
We establish a necessary condition for any importance sampling scheme to give bounded relative error when estimating a performance measure of a highly reliable Markovian system. Also, a class of importance sampling methods is defined for which we prove a necessary and sufficient condition for bounded relative error for the performance measure estimator. This class of probability measures includes all of the currently existing failure biasing methods in the literature. Similar conditions for derivative estimators are established. SIMULATION; IMPORTANCE SAMPLING; LIKELIHOOD RATIOS; GRADIENT ESTIMATION; RELIABILITY; MARKOV CHAINS. AMS 1991 Subject Classifications: Primary: 65C05 Secondary: 60J10, 60K10 1 Introduction There is an increasing demand for systems, such as computing systems or transaction processing systems, to be highly reliable. A designer faced with developing such a system usually constructs and evaluates a mathematical model of the system to determine if it will perform...
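The bounded-relative-error criterion analyzed in this paper can be illustrated with a minimal importance-sampling estimator. The sketch below is a textbook exponential-tilting example, not the paper's Markovian setting, and all names are illustrative: it estimates the rare tail probability P(X > t) for X ~ Exp(1) by sampling from a tilted distribution and reports the estimator's relative error.

```python
import math
import random

def is_tail_estimate(t, theta, n=200_000, seed=1):
    """Importance-sampling estimate of P(X > t) for X ~ Exp(1).

    Samples are drawn from Exp(theta) with theta < 1, which shifts mass
    toward the rare region, and each sample is reweighted by the
    likelihood ratio f(x)/g(x) = exp(-x) / (theta * exp(-theta * x)).
    Returns the estimate and its estimated relative error.
    """
    rng = random.Random(seed)
    wsum = wsq = 0.0
    for _ in range(n):
        x = rng.expovariate(theta)
        w = math.exp(-(1.0 - theta) * x) / theta if x > t else 0.0
        wsum += w
        wsq += w * w
    mean = wsum / n
    var = wsq / n - mean * mean
    rel_err = math.sqrt(var / n) / mean  # std. error relative to the estimate
    return mean, rel_err
```

With t = 20 and theta = 0.05 the true value is exp(-20), roughly 2.1e-9; crude Monte Carlo with the same sample size would almost never observe the event, while the tilted estimator's relative error stays small.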
Fast Simulation of Packet Loss Rates in a Shared Buffer Communications Switch
ACM Transactions on Modeling and Computer Simulation, 2001
Abstract

Cited by 19 (1 self)
This paper describes an efficient technique for estimating, via simulation, the probability of buffer overflows in a queueing model that arises in the analysis of ATM (Asynchronous Transfer Mode) communication switches. There are multiple streams of (autocorrelated) traffic feeding the switch that has a buffer of finite capacity. Each stream is designated as either being of high or low priority. When the queue length reaches a certain threshold, only high priority packets are admitted to the switch's buffer. The problem is to estimate the loss rate of high priority packets. An asymptotically optimal importance sampling approach is developed for this rare event simulation problem. In this approach, the importance sampling is done in two distinct phases. In the first phase, an importance sampling change of measure is used to bring the queue length up to the threshold at which low priority packets get rejected. In the second phase, a different importance sampling change of measure is used to move the queue length from the threshold to the buffer capacity.
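As a generic illustration of an importance-sampling change of measure for buffer overflow, the sketch below uses the classic single-queue arrival/service-rate swap, not this paper's two-phase multi-stream scheme; the model and all names are illustrative. It estimates the probability that an M/M/1 queue reaches level B before emptying.

```python
import random

def overflow_prob_is(lam, mu, B, n=100_000, seed=1):
    """Estimate P(an M/M/1 queue reaches level B before emptying,
    starting from 1 customer), for arrival rate lam < service rate mu.

    Importance sampling: the embedded random walk is simulated under the
    swapped rates (arrivals at rate mu, services at rate lam), which makes
    overflow likely, and each step is reweighted by its likelihood ratio.
    """
    rng = random.Random(seed)
    p_up = lam / (lam + mu)   # up-step probability under the original measure
    q_up = mu / (lam + mu)    # up-step probability under the swapped measure
    total = 0.0
    for _ in range(n):
        x, lr = 1, 1.0
        while 0 < x < B:
            if rng.random() < q_up:
                x += 1
                lr *= p_up / q_up                  # = lam/mu per up step
            else:
                x -= 1
                lr *= (1.0 - p_up) / (1.0 - q_up)  # = mu/lam per down step
        if x == B:
            total += lr
    return total / n
```

For lam = 1, mu = 2, B = 20 the exact value is 1/(2^20 - 1), about 9.5e-7. Every path that reaches B carries the same likelihood ratio (lam/mu)^(B-1), which is what keeps the estimator's relative error under control as B grows.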
Importance Sampling For Large ATM-Type Queueing Networks
1996
Abstract

Cited by 10 (2 self)
We estimate, by simulation, the cell-loss rate in an ATM switch modeled as a queueing network. Cell losses are rare events, so estimating their frequency by simulation is hard. We experiment with importance sampling as a means of improving the simulation efficiency in that context. 1. INTRODUCTION An Asynchronous Transfer Mode (ATM) communication switch can be modeled as a network of queues with finite buffer sizes. Cells (or packets) of information join the network in a stochastic manner and some may be lost due to buffer overflow. The long-term (or steady-state) fraction of cells that are lost at a given node is called the cell-loss rate (CLR) at that node. Typical CLRs are small and the cell losses also tend to occur in bunches. They are therefore rare events, so estimating the CLRs with reasonable precision by straightforward simulation is extremely time-consuming, in some cases practically impossible. Efficiency improvement methods have been proposed to deal with such a situati...
An Environment For Importance Sampling Based On Stochastic Activity Networks
1994
Abstract

Cited by 9 (5 self)
Model-based evaluation of reliable distributed and parallel systems is difficult due to the complexity of these systems and the nature of the dependability measures of interest. The complexity creates problems for analytical model solution techniques, and the fact that reliability and availability measures are based on rare events makes traditional simulation methods inefficient. Importance sampling is a well-known technique for addressing this inefficiency. However, finding an importance sampling strategy that works well in general is a difficult problem. The best strategy for importance sampling depends on the characteristics of the system and the dependability measure of interest. This fact motivated the development of an environment for importance sampling that would support the wide variety of model characteristics and interesting measures. The environment is based on stochastic activity networks, and importance sampling strategies are specified using the new concept of the importance sampling governor. The governor supports dynamic importance sampling strategies by allowing the stochastic elements of the model to be redefined based on the evolution of the simulation. The utility of the new environment is demonstrated by evaluating the unreliability of a highly dependable fault-tolerant unit used in the well-known MARS architecture. The model is non-Markovian, with Weibull distributed failure times and uniformly distributed repair times.
Techniques for the Fast Simulation of Models of Highly Dependable Systems
IEEE Transactions on Reliability, 2001
Abstract

Cited by 9 (0 self)
this paper, we review some of the importance-sampling techniques that have been developed in recent years to efficiently estimate dependability measures in Markovian and non-Markovian models of highly dependable systems. 1 Acronyms: MTTF, mean time to failure; MTBF, mean time between failures; CTMC, continuous-time Markov chain; DTMC, discrete-time Markov chain; GSMP, generalized semi-Markov process; SAVE, System AVailability Estimator; CLT, central limit theorem; VRR, variance reduction ratio; TRR, total effort reduction ratio; MSDIS, measure-specific dynamic importance sampling; BLBLR, balance over links balanced likelihood ratio; BLBLRC, balance over links balanced likelihood ratio with cuts. 1 INTRODUCTION High dependability requirements of today's critical and/or commercial systems often lead to complicated and costly designs. The ability to predict relevant dependability measures for such complex systems is essential, not only to guarantee hig...
Estimation of Blocking Probabilities in Cellular Networks with Dynamic Channel Assignment
2002
Abstract

Cited by 3 (2 self)
this paper we study two regimes under which blocking is a rare event: low load and high cell capacity. Our simulations use the standard clock (SC) method. For low load, we propose a change of measure that we call static ISSC, which has bounded relative error. For high capacity, we use a change of measure that depends on the current state of the network occupancy. This is the dynamic ISSC method. We prove that this method yields zero variance estimators for single clique models, and we empirically show the advantages of this method over naive simulation for networks of moderate size and traffic loads.
The Balanced Likelihood Ratio Method for Estimating Performance Measures of Highly Reliable Systems
1998
Abstract

Cited by 3 (1 self)
Over the past several years importance sampling in conjunction with regenerative simulation has been presented as a promising method for estimating reliability parameters in highly reliable systems. Existing methods fail to provide benefits over crude Monte Carlo for the analysis of systems that contain significant component redundancies. This paper presents refined importance sampling techniques along with a generalized regenerative model. The proposed methods have solid theoretical properties and work well in practice.
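The failure-biasing idea that underlies such regenerative importance-sampling methods can be sketched on a toy model (three identical components, one repairman; all names and parameters are illustrative, and this is not the paper's refined method): boost the failure-transition probability in the embedded chain and correct each step with the likelihood ratio.

```python
import random

def gamma_failure_biasing(eps, bias=0.5, n=200_000, seed=1):
    """Estimate gamma = P(all three components fail before the system
    returns to the all-up state), starting just after the first failure.

    Toy model: three identical components, per-component failure rate eps,
    one repairman with repair rate 1.  Failure biasing: in the embedded
    discrete-time chain the failure transition is taken with probability
    `bias` instead of its true (small) probability, and every step is
    corrected by the likelihood ratio.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        k, lr = 1, 1.0   # k = number of failed components
        while 0 < k < 3:
            p_fail = (3 - k) * eps / ((3 - k) * eps + 1.0)  # true probability
            if rng.random() < bias:
                k += 1
                lr *= p_fail / bias
            else:
                k -= 1
                lr *= (1.0 - p_fail) / (1.0 - bias)
        if k == 3:
            total += lr
    return total / n
```

In a regenerative estimate of, say, the MTTF, gamma would be combined with the expected cycle length. With eps = 0.001, gamma is on the order of 2e-6, which crude sampling over the same number of regenerative cycles would essentially never observe.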
Regenerative Techniques for Estimating Performance Measures . . .
1997
Abstract

Cited by 2 (1 self)
Regenerative Techniques for Estimating Performance Measures of Highly Dependable Systems with Repairs. Consider a system with N components which are subject to failures and repairs. Suppose that each component has a single operating state denoted by 1 and a single failed state denoted by 0. Let X_i(t) be the state of component i at time t and let X(t) = (X_1(t), ..., X_N(t)) be the state of the system. Associated with the system states x = (x_1, ..., x_N) ∈ Ω, where Ω = {0, 1}^N, is the structure function φ defined by φ(x) = 1 if the system operates in state x, and φ(x) = 0 if the system is failed in state x. (For a review of definitions from reliability, see Barlow and Proschan [2].) Let 1 be the vector with all components equal to one and assume X(0) = 1. Now define the set F = {x ∈ Ω : φ(x) = 0} of failure states and the following performance measures: U = lim_{t→∞} P[X(t) ∈ F] and T = E[inf{t : t > 0, X(t) ∈ F}]. U is the long-run system unavaila...
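The structure function and long-run unavailability defined in this abstract can be made concrete with a small sketch. The system chosen here (a hypothetical 2-out-of-3 arrangement with independent components) and all names are illustrative, not taken from the paper.

```python
from itertools import product

def phi(x):
    """Structure function of a hypothetical 2-out-of-3 system:
    it operates iff at least two of the three components are up."""
    return 1 if sum(x) >= 2 else 0

def unavailability(avail):
    """Long-run unavailability U = P(phi(X) = 0) for independent
    components, where avail[i] = mu_i / (lambda_i + mu_i) is the
    steady-state availability of component i."""
    u = 0.0
    for x in product((0, 1), repeat=len(avail)):
        p = 1.0
        for xi, ai in zip(x, avail):
            p *= ai if xi == 1 else 1.0 - ai
        if phi(x) == 0:
            u += p
    return u
```

For three components each with availability 0.9, U = 0.1^3 + 3(0.9)(0.1)^2 = 0.028. The enumeration over Ω = {0, 1}^N is only feasible for small N, which is precisely why the papers in this list resort to simulation for large systems.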
Estimating Small Cell-Loss Ratios in ATM Switches via Importance Sampling
Abstract

Cited by 2 (0 self)
this paper, importance sampling is applied to estimate the cell-loss ratio in an ATM switch modeled as a queueing network fed by several sources emitting cells according to a Markov-modulated on/off process, and where all the cells from the same source have the same destination. The numerical experiments show impressive efficiency improvements. Categories and Subject Descriptors: G.3 [Probability and Statistics]
Fast Simulation of Cellular Networks with Dynamic Channel Assignment
ACM Transactions on Modeling and Computer Simulation, 2002
Abstract

Cited by 1 (0 self)
Blocking probabilities in cellular mobile communication networks using dynamic channel assignment are hard to compute for realistically sized systems. This computational difficulty is due to the structure of the state space, which imposes strong coupling constraints amongst components of the occupancy vector. Approximate tractable models have been proposed, which have product form stationary state distributions. However, for real channel assignment schemes, the product form is a poor approximation and it is necessary to simulate the actual occupancy process in order to estimate the blocking probabilities. Meaningful estimates of the blocking probability typically require an enormous amount of CPU time for simulation, since blocking events are usually rare. Advanced simulation approaches use importance sampling (IS) to overcome this problem. In this paper we study two regimes under which blocking is a rare event: low load and high cell capacity. Our simulations use the standard clock (SC) method. For low load, we propose a change of measure that we call static ISSC, which has bounded relative error. For high capacity, we use a change of measure that depends on the current state of the network occupancy. This is the dynamic ISSC method. We prove that this method yields zero variance estimators for single clique models, and we empirically show the advantages of this method over naïve simulation for networks of moderate size and traffic loads.