Results 1–10 of 16
Simulation run lengths to estimate blocking probabilities
ACM Transactions on Modeling and Computer Simulation, 1996
Abstract

Cited by 24 (19 self)
We derive formulas approximating the asymptotic variance of four estimators for the steady-state blocking probability in a multiserver loss system, exploiting diffusion process limits. These formulas can be used to predict simulation run lengths required to obtain desired statistical precision before the simulation has been run, which can aid in the design of simulation experiments. They also indicate that one estimator can be much better than another, depending on the loading. An indirect estimator based on estimating the mean occupancy is significantly more (less) efficient than a direct estimator for heavy (light) loads. A major concern is the way computational effort scales with system size. For all the estimators, the asymptotic variance tends to be inversely proportional to the system size, so that the computational effort (regarded as proportional to the product of the asymptotic variance and the arrival rate) does not grow as system size increases. Indeed, holding the blocking probability fixed, the computational effort with a good estimator decreases to 0 as the system size increases. The asymptotic variance formulas also reveal the impact of the arrival-process and service-time variability on the statistical precision. We validate these formulas by comparing them to exact numerical …
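To make the direct-versus-indirect comparison concrete, here is a minimal sketch (not from the paper; function names and parameters are illustrative): a discrete-event simulation of the M/M/s/0 (Erlang loss) model that returns both the direct estimator (blocked arrivals / total arrivals) and the indirect estimator 1 − N̄/a, where N̄ is the time-average occupancy and a = λ/μ is the known offered load, with the exact Erlang-B formula for reference.

```python
import heapq
import random

def simulate_mm_s_loss(lam, mu, s, horizon, seed=1):
    """Simulate an M/M/s/0 loss system over [0, horizon] and return
    (direct, indirect): the direct estimator blocked/arrivals and the
    indirect estimator 1 - Nbar/a, where Nbar is the time-average
    number in service and a = lam/mu is the known offered load."""
    rng = random.Random(seed)
    t, n = 0.0, 0                       # simulation clock, busy servers
    arrivals = blocked = 0
    area = 0.0                          # integral of n(t) dt
    next_arrival = rng.expovariate(lam)
    departures = []                     # min-heap of departure times
    while t < horizon:
        t_next = min(next_arrival,
                     departures[0] if departures else float('inf'))
        if t_next > horizon:
            t_next = horizon
        area += n * (t_next - t)
        t = t_next
        if t >= horizon:
            break
        if departures and departures[0] <= next_arrival:
            heapq.heappop(departures)   # service completion
            n -= 1
        else:                           # arrival
            arrivals += 1
            if n < s:
                n += 1
                heapq.heappush(departures, t + rng.expovariate(mu))
            else:
                blocked += 1            # all servers busy: loss
            next_arrival = t + rng.expovariate(lam)
    a = lam / mu
    direct = blocked / arrivals
    indirect = 1.0 - (area / horizon) / a
    return direct, indirect

def erlang_b(a, s):
    """Exact Erlang-B blocking probability, via the standard recursion."""
    b = 1.0
    for k in range(1, s + 1):
        b = a * b / (k + a * b)
    return b
```

Comparing the two sample estimators against `erlang_b(a, s)` for a range of loads illustrates the abstract's point that their relative efficiency depends on the loading.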
Efficiency Improvement And Variance Reduction
, 1994
Abstract

Cited by 20 (0 self)
We give an overview of the main techniques for improving the statistical efficiency of simulation estimators. Efficiency improvement is typically (but not always) achieved through variance reduction. We discuss methods such as common random numbers, antithetic variates, control variates, importance sampling, conditional Monte Carlo, stratified sampling, and some others, as well as the combination of certain of those methods. We also survey the recent literature on this topic.
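As a small illustration of one of the techniques surveyed, here is a hedged sketch (not from the paper) of antithetic variates: to estimate E[f(U)] with U uniform on (0, 1), average f(U) with f(1 − U) so that negatively correlated pairs cancel part of the noise.

```python
import random
import statistics

def crude_and_antithetic(f, n_pairs, seed=1):
    """Estimate E[f(U)], U ~ Uniform(0,1), two ways on the same budget of
    2*n_pairs evaluations: (i) crude Monte Carlo on 2*n_pairs independent
    draws, and (ii) antithetic variates on n_pairs pairs (U, 1-U).
    Returns (crude_estimate, antithetic_estimate)."""
    rng = random.Random(seed)
    crude = [f(rng.random()) for _ in range(2 * n_pairs)]
    rng2 = random.Random(seed + 1)
    anti = []
    for _ in range(n_pairs):
        u = rng2.random()
        anti.append(0.5 * (f(u) + f(1.0 - u)))  # antithetic pair average
    return statistics.fmean(crude), statistics.fmean(anti)
```

For monotone f the pair values are negatively correlated, so the antithetic estimate typically has smaller variance than crude Monte Carlo at equal cost.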
Variance reduction in simulation of loss models
Operations Research, 1999
Abstract

Cited by 10 (8 self)
We propose a new estimator of steady-state blocking probabilities for simulations of stochastic loss models that can be much more efficient than the natural estimator (ratio of losses to arrivals). The proposed estimator is a convex combination of the natural estimator and an indirect estimator based on the average number of customers in service, obtained from Little’s law (L = λW). It exploits the known offered load (product of the arrival rate and the mean service time). The variance reduction is dramatic when the blocking probability is high and the service times are highly variable. The advantage of the combination estimator in this regime is partly due to the indirect estimator, which itself is much more efficient than the natural estimator in this regime, and partly due to strong correlation (most often negative) between the natural and indirect estimators. In general, when the variances of two component estimators are very different, the variance reduction from the optimal convex combination is about 1 − ρ², where ρ is the correlation between the component estimators. For loss models, the variances of the natural and indirect estimators are very different under both light and heavy loads. The combination estimator is effective for estimating multiple blocking probabilities in loss networks with multiple traffic classes, some of which are in normal …
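The optimal convex combination mentioned above has a standard closed form. As a sketch (the function is illustrative, not code from the paper): for unbiased estimators X₁, X₂ with variances σ₁², σ₂² and correlation ρ, minimizing Var(αX₁ + (1 − α)X₂) over α gives α* = (σ₂² − ρσ₁σ₂)/(σ₁² + σ₂² − 2ρσ₁σ₂), with minimal variance σ₁²σ₂²(1 − ρ²)/(σ₁² + σ₂² − 2ρσ₁σ₂); when one variance dominates, this tends to (1 − ρ²) times the smaller variance, matching the 1 − ρ² reduction cited in the abstract.

```python
def optimal_combination(var1, var2, rho):
    """Weight alpha* on estimator 1 minimizing
    Var(alpha*X1 + (1 - alpha)*X2) for unbiased X1, X2 with variances
    var1, var2 and correlation rho; returns (alpha*, minimal variance)."""
    cov = rho * (var1 * var2) ** 0.5
    denom = var1 + var2 - 2.0 * cov
    alpha = (var2 - cov) / denom
    # Minimal variance; equals var1*var2*(1 - rho^2)/denom.
    var_min = (var1 * var2 - cov * cov) / denom
    return alpha, var_min
```

For example, with var1 = 4, var2 = 1 and ρ = −0.5 (negative correlation, as the abstract reports for natural and indirect estimators), the combination is strictly better than either component alone.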
Real-time delay estimation based on delay history
, 2007
Abstract

Cited by 8 (4 self)
Motivated by interest in making delay announcements to arriving customers who must wait in call centers and related service systems, we study the performance of alternative real-time delay estimators based on recent customer delay experience. The main estimators considered are: (i) the delay of the last customer to enter service (LES), (ii) the delay experienced so far by the customer at the head of the line (HOL), and (iii) the delay experienced by the customer to have arrived most recently among those who have already completed service (RCS). We compare these delay-history estimators to the estimator based on the queue length (QL), which requires knowledge of the mean interval between successive service completions in addition to the queue length. We characterize performance by the mean squared error (MSE). We do analysis and conduct simulations for the standard GI/M/s multiserver queueing model, emphasizing the case of large s. We obtain analytical results for the conditional distribution of the delay given the observed HOL delay. An approximation to its mean value serves as a refined estimator. For all three candidate delay estimators, the MSE relative to the square of the mean is asymptotically negligible in the many-server and classical heavy-traffic limiting regimes.
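A minimal sketch of the QL-versus-LES comparison, assuming the M/M/s special case of the paper's GI/M/s model (names and setup are illustrative): for a customer who sees q waiting ahead on arrival, the QL estimator predicts (q + 1)/(sμ), the mean of the q + 1 exponential service-completion intervals the customer must wait through, while LES predicts the delay of the last customer to have entered service; the simulation compares their mean squared errors.

```python
import heapq
import random
import statistics

def mse_ql_vs_les(lam, mu, s, n_customers, seed=1):
    """Simulate an M/M/s FIFO queue and, for each customer who must wait,
    record two real-time delay predictions made at arrival:
      QL  = (q + 1) / (s * mu), q = queue length seen on arrival,
      LES = delay of the last customer to have entered service.
    Returns (mse_ql, mse_les) over the waiting customers."""
    rng = random.Random(seed)
    t = 0.0
    free = s                         # idle servers
    queue = []                       # waiting: (arrival, ql_pred, les_pred)
    dep = []                         # min-heap of service-completion times
    last_delay = 0.0                 # delay of last customer to enter service
    errs_ql, errs_les = [], []
    next_arr = rng.expovariate(lam)
    served = 0
    while served < n_customers:
        if dep and dep[0] <= next_arr:        # service completion
            t = heapq.heappop(dep)
            if queue:                         # head of line enters service
                arr, ql, les = queue.pop(0)
                d = t - arr                   # actual delay
                errs_ql.append((ql - d) ** 2)
                errs_les.append((les - d) ** 2)
                last_delay = d
                heapq.heappush(dep, t + rng.expovariate(mu))
                served += 1
            else:
                free += 1
        else:                                 # arrival
            t = next_arr
            next_arr = t + rng.expovariate(lam)
            if free > 0:                      # enters service immediately
                free -= 1
                last_delay = 0.0
                heapq.heappush(dep, t + rng.expovariate(mu))
                served += 1
            else:                             # must wait: record predictions
                q = len(queue)
                queue.append((t, (q + 1) / (s * mu), last_delay))
    return statistics.fmean(errs_ql), statistics.fmean(errs_les)
```

In the M/M/s case the QL prediction is the exact conditional mean delay, so its MSE lower-bounds that of the delay-history estimators; the interest of the paper is that LES-type estimators come close while needing no system-state information.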
Large Sample Properties of Weighted Monte Carlo Estimators
Working Paper DRO200207, Columbia Business School, 2003
Abstract

Cited by 3 (1 self)
A general approach to improving simulation accuracy uses information about auxiliary control variables with known expected values to improve the estimation of unknown quantities. We analyze weighted Monte Carlo estimators that implement this idea by applying weights to independent replications. The weights are chosen to constrain the weighted averages of the control variables. We distinguish two cases (unbiased and biased) depending on whether the weighted averages of the controls are constrained to equal their expected values or some other values. In both cases, the number of constraints is usually smaller than the number of replications, so there may be many feasible weights. We select maximally uniform weights by minimizing a separable convex function of the weights subject to the control variable constraints. Estimators of this form arise (sometimes implicitly) in several settings, including at least two in finance: calibrating a model to market data (as in work of Avellaneda et al.) and calculating conditional expectations in order to price American options. We analyze properties of these estimators as the number of replications increases. We show that in the unbiased case, weighted Monte Carlo reduces variance and that all convex objective functions within a large class produce estimators that are very close to each other in a strong sense. In contrast, in the biased case the choice of objective function does matter. We show explicitly how the choice of objective determines the limit to which the estimator converges.
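To illustrate the weighted-estimator idea in its simplest instance, here is a hedged sketch (not code from the paper) for one control variable and a quadratic objective: minimizing Σᵢ (wᵢ − 1/n)² subject to Σᵢ wᵢ = 1 and Σᵢ wᵢcᵢ = μ_c has the closed form wᵢ = 1/n + b(cᵢ − c̄) with b = (μ_c − c̄)/Σᵢ (cᵢ − c̄)², which reproduces the classical regression control-variate estimator, one of the settings the abstract says these estimators arise in implicitly.

```python
import random
import statistics

def weighted_mc(ys, cs, mu_c):
    """Weighted Monte Carlo with one control variable: choose weights
    minimizing sum (w_i - 1/n)^2 subject to sum w_i = 1 and
    sum w_i * c_i = mu_c (the known mean of the control), then return
    (weighted estimate sum w_i * y_i, weights)."""
    n = len(ys)
    cbar = statistics.fmean(cs)
    ss = sum((c - cbar) ** 2 for c in cs)
    b = (mu_c - cbar) / ss
    ws = [1.0 / n + b * (c - cbar) for c in cs]   # maximally uniform weights
    return sum(w * y for w, y in zip(ws, ys)), ws
```

By construction the weights satisfy both constraints exactly, and the weighted estimate corrects the plain average by the regression of y on the control's observed deviation from its known mean.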
Stationary IPA Estimates for Non-Smooth Functions of the GI/G/1/∞ Workload
, 1992
Abstract
We give stationary estimates for the derivative of the expectation of a non-smooth function of bounded variation f of the workload in a GI/G/1/∞ queue, with respect to a parameter influencing the distribution of the input process. For this, we use an idea of Konstantopoulos and Zazanis [12] based on the Palm inversion formula, however avoiding a limiting argument by performing the level-crossing analysis thereof globally, via Fubini's theorem. This method of proof allows us to treat the case where the workload distribution has a mass at discontinuities of f and where the formula of [12] has to be modified. The case where the parameter is the speed of service and/or the time scale factor of the input process is also treated using the same approach.
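For background, here is a hedged sketch of the classical smooth-case IPA that the paper generalizes to non-smooth f (the setup and names are illustrative, not from the paper): with service times θS_k, the Lindley recursion W_{k+1} = max(W_k + θS_k − A_k, 0) can be differentiated pathwise, dW_{k+1}/dθ = dW_k/dθ + S_k when the queue stays busy and 0 otherwise, giving a single-run derivative estimate of d E[W]/dθ.

```python
import random

def ipa_waiting_time(theta, n, lam, seed=1):
    """Classical (smooth-case) IPA along the Lindley recursion for a
    single-server queue with service times theta*S_k, S_k ~ Exp(1), and
    interarrival times A_k ~ Exp(lam).  The pathwise derivative dW/dtheta
    accumulates S_k within a busy period and resets to 0 when the queue
    empties.  Returns (IPA estimate of d E[W]/d theta, average wait)."""
    rng = random.Random(seed)
    w = dw = 0.0
    sum_w = sum_dw = 0.0
    for _ in range(n):
        s = rng.expovariate(1.0)          # base service time S_k
        a = rng.expovariate(lam)          # interarrival time A_k
        x = w + theta * s - a             # Lindley recursion input
        if x > 0.0:
            w, dw = x, dw + s             # busy: derivative accumulates
        else:
            w, dw = 0.0, 0.0              # idle period: derivative resets
        sum_w += w
        sum_dw += dw
    return sum_dw / n, sum_w / n
```

As a sanity check, for the M/M/1 case the steady-state waiting time is λθ²/(1 − λθ), whose θ-derivative at λ = 0.5, θ = 1 is 3; this pathwise estimate converges to that value, whereas for non-smooth f the plain pathwise derivative can be biased, which is the situation the paper addresses.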
On the FCLT Version of L = λW
Abstract
The functional central limit theorem (FCLT) version of Little’s law (L = λW) shows that the fundamental relation between cumulative processes underlying L = λW leads to a corresponding relation among the limits for the FCLT-scaled stochastic processes. It supports statistical analysis, e.g., estimating confidence intervals. Here, this statistical motivation is reviewed and then the FCLT in Glynn and Whitt (1986) is extended to show that a bivariate FCLT for the number in system and the waiting times implies an FCLT for the arrival process, and thus the joint FCLT for all processes. The new result is based on a converse to the preservation of convergence by the composition map with centering, exploiting monotonicity, which should have other applications. Keywords: Little’s law, L = λW, central limit theorem, functional central limit theorem, confidence intervals, confidence intervals based on L = λW, continuous mapping theorem, inverse map, composition map. 2000 MSC: 60K25, 90B22.
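The statistical motivation can be illustrated with a minimal sketch (not from the paper; the setup is illustrative): in a simulated M/M/1 queue, the mean time in system can be estimated directly as the average sojourn time, or indirectly via Little’s law as L̂/λ̂ from the time-average number in system and the observed arrival rate; the FCLT machinery is what justifies confidence intervals for one estimate in terms of the other.

```python
import random

def little_check(lam, mu, horizon, seed=1):
    """Simulate an M/M/1 FIFO queue over [0, horizon] and estimate the
    mean time in system two ways: directly as the average sojourn time,
    and indirectly via Little's law as Lhat / lamhat, where Lhat is the
    time-average number in system and lamhat the observed arrival rate.
    Returns (direct, indirect)."""
    rng = random.Random(seed)
    t = 0.0
    next_arr = rng.expovariate(lam)
    depart = float('inf')            # completion time of customer in service
    fifo = []                        # arrival times of customers in system
    area = 0.0                       # integral of number-in-system
    arrivals = 0
    sojourns = []
    while t < horizon:
        t_next = min(next_arr, depart, horizon)
        area += len(fifo) * (t_next - t)
        t = t_next
        if t >= horizon:
            break
        if depart <= next_arr:       # departure
            sojourns.append(t - fifo.pop(0))
            depart = t + rng.expovariate(mu) if fifo else float('inf')
        else:                        # arrival
            if not fifo:             # server idle: start service now
                depart = t + rng.expovariate(mu)
            fifo.append(t)
            arrivals += 1
            next_arr = t + rng.expovariate(lam)
    direct = sum(sojourns) / len(sojourns)
    indirect = (area / horizon) / (arrivals / horizon)
    return direct, indirect
```

Up to end-of-run edge effects the two estimates agree, reflecting the sample-path version of L = λW; the FCLT version characterizes the joint fluctuations around that relation.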
Extending the FCLT version of L = λW
Abstract
The functional central limit theorem (FCLT) version of Little’s law (L = λW) established by Glynn and Whitt is extended to show that a bivariate FCLT for the number in system and the waiting times implies the joint FCLT for all processes. It is based on a converse to the preservation of convergence by the composition map with centering on the function space containing the sample paths, exploiting monotonicity. Keywords: Little’s law, L = λW, functional central limit theorem, confidence intervals, continuous mapping theorem, composition with centering.