Results 1–10 of 26
Efficiency Improvement And Variance Reduction
1994
Abstract
Cited by 29 (0 self)
We give an overview of the main techniques for improving the statistical efficiency of simulation estimators. Efficiency improvement is typically (but not always) achieved through variance reduction. We discuss methods such as common random numbers, antithetic variates, control variates, importance sampling, conditional Monte Carlo, stratified sampling, and some others, as well as the combination of certain of those methods. We also survey the recent literature on this topic.
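The survey covers control variates among the methods listed; a minimal self-contained sketch of that one technique (my own toy example, estimating E[exp(U)] for U uniform on (0,1), not code from the survey) might look like:

```python
import math
import random
import statistics

def control_variate_demo(n=100_000, seed=0):
    """Toy control-variate sketch: estimate E[exp(U)] for U ~ Uniform(0,1),
    using U itself as the control variate (known mean 1/2).

    The adjusted estimator averages y - b*(x - 0.5), with b estimated as
    Cov(Y, X) / Var(X) from the same sample.  Illustrative only; the survey
    above treats this and the other methods in full generality.
    """
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]        # control observations, E[X] = 0.5
    ys = [math.exp(x) for x in xs]               # target observations, E[Y] = e - 1
    mx = statistics.fmean(xs)
    my = statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    b = cov / statistics.variance(xs)            # estimated optimal coefficient
    adj = [y - b * (x - 0.5) for x, y in zip(xs, ys)]
    return (statistics.fmean(adj), statistics.fmean(ys),
            statistics.variance(adj), statistics.variance(ys))

cv_mean, naive_mean, cv_var, naive_var = control_variate_demo()
# cv_var is roughly (1 - rho^2) times naive_var, a large reduction here
# because U and exp(U) are strongly correlated
```

Both means estimate e − 1; the adjusted sample has strictly smaller sample variance whenever the estimated covariance is nonzero.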
Simulation run lengths to estimate blocking probabilities
ACM Transactions on Modeling and Computer Simulation, 1996
Abstract
Cited by 29 (20 self)
We derive formulas approximating the asymptotic variance of four estimators for the steady-state blocking probability in a multiserver loss system, exploiting diffusion process limits. These formulas can be used to predict simulation run lengths required to obtain desired statistical precision before the simulation has been run, which can aid in the design of simulation experiments. They also indicate that one estimator can be much better than another, depending on the loading. An indirect estimator based on estimating the mean occupancy is significantly more (less) efficient than a direct estimator for heavy (light) loads. A major concern is the way computational effort scales with system size. For all the estimators, the asymptotic variance tends to be inversely proportional to the system size, so that the computational effort (regarded as proportional to the product of the asymptotic variance and the arrival rate) does not grow as system size increases. Indeed, holding the blocking probability fixed, the computational effort with a good estimator decreases to 0 as the system size increases. The asymptotic variance formulas also reveal the impact of the arrival-process and service-time variability on the statistical precision. We validate these formulas by comparing them to exact numerical
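As a toy illustration of how such asymptotic-variance formulas feed into run-length planning (the rule of thumb below is a generic normal-approximation argument, not a formula from the paper):

```python
import math

def required_run_length(asym_var, half_width, z=1.96):
    """Generic run-length planning rule implied by an asymptotic variance
    (my illustration of how variance formulas would be used in practice,
    not a formula from the paper above).

    If an estimator from a run of length T is approximately normal with
    variance asym_var / T, a confidence interval of half-width half_width
    needs roughly T >= (z * sqrt(asym_var) / half_width)^2.
    """
    return (z * math.sqrt(asym_var) / half_width) ** 2

# e.g. asymptotic variance 4.0, target half-width 0.01 at 95% confidence:
T = required_run_length(4.0, 0.01)
```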
Real-time delay estimation based on delay history
2007
Abstract
Cited by 12 (4 self)
Motivated by interest in making delay announcements to arriving customers who must wait in call centers and related service systems, we study the performance of alternative real-time delay estimators based on recent customer delay experience. The main estimators considered are: (i) the delay of the last customer to enter service (LES), (ii) the delay experienced so far by the customer at the head of the line (HOL), and (iii) the delay experienced by the customer to have arrived most recently among those who have already completed service (RCS). We compare these delay-history estimators to the estimator based on the queue length (QL), which requires knowledge of the mean interval between successive service completions in addition to the queue length. We characterize performance by the mean squared error (MSE). We perform analysis and conduct simulations for the standard GI/M/s multiserver queueing model, emphasizing the case of large s. We obtain analytical results for the conditional distribution of the delay given the observed HOL delay. An approximation to its mean value serves as a refined estimator. For all three candidate delay estimators, the MSE relative to the square of the mean is asymptotically negligible in the many-server and classical heavy-traffic limiting regimes.
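A minimal sketch of the three simplest estimators named above (function and argument names are mine; the QL form assumes the mean interval between successive service completions is known, as the abstract notes):

```python
def delay_estimates(les_delay, hol_wait_so_far, queue_length, mean_completion_gap):
    """The three simplest real-time delay estimators compared above
    (names are illustrative, not from the paper's code):

    - LES: announce the delay of the last customer to have entered service.
    - HOL: announce the wait accumulated so far by the head-of-line customer.
    - QL:  (queue_length + 1) times the mean interval between successive
           service completions, which QL requires as extra model knowledge.
    """
    return {
        "LES": les_delay,
        "HOL": hol_wait_so_far,
        "QL": (queue_length + 1) * mean_completion_gap,
    }

# e.g. 9 customers waiting, completions every 0.1 time units on average:
est = delay_estimates(les_delay=2.0, hol_wait_so_far=1.5,
                      queue_length=9, mean_completion_gap=0.1)
```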
Variance reduction in simulation of loss models
Operations Research, 1999
Abstract
Cited by 10 (8 self)
We propose a new estimator of steady-state blocking probabilities for simulations of stochastic loss models that can be much more efficient than the natural estimator (ratio of losses to arrivals). The proposed estimator is a convex combination of the natural estimator and an indirect estimator based on the average number of customers in service, obtained from Little’s law (L = λW). It exploits the known offered load (product of the arrival rate and the mean service time). The variance reduction is dramatic when the blocking probability is high and the service times are highly variable. The advantage of the combination estimator in this regime is partly due to the indirect estimator, which itself is much more efficient than the natural estimator in this regime, and partly due to strong correlation (most often negative) between the natural and indirect estimators. In general, when the variances of two component estimators are very different, the variance reduction from the optimal convex combination is about 1 − ρ², where ρ is the correlation between the component estimators. For loss models, the variances of the natural and indirect estimators are very different under both light and heavy loads. The combination estimator is effective for estimating multiple blocking probabilities in loss networks with multiple traffic classes, some of which are in normal
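The quoted 1 − ρ² claim can be checked directly from the variance of a convex combination of two unbiased estimators; the following sketch (my own, not the paper's code) computes the optimal mixing weight and the resulting variance:

```python
import math

def optimal_combination(v1, v2, rho):
    """Optimal convex combination alpha*X1 + (1-alpha)*X2 of two unbiased
    estimators with variances v1, v2 and correlation rho (a sketch of the
    variance claim in the abstract above, not the paper's code)."""
    c = rho * math.sqrt(v1 * v2)              # covariance of the components
    alpha = (v2 - c) / (v1 + v2 - 2 * c)      # minimizes the quadratic in alpha
    var = alpha ** 2 * v1 + (1 - alpha) ** 2 * v2 + 2 * alpha * (1 - alpha) * c
    return alpha, var

# With very different component variances, the combined variance is close to
# (1 - rho^2) times the smaller variance, as the abstract states:
alpha, var = optimal_combination(v1=100.0, v2=1.0, rho=-0.8)
# var == 36/117 ≈ 0.308, versus (1 - 0.64) * 1.0 = 0.36
```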
Large Sample Properties of Weighted Monte Carlo Estimators
Working Paper DRO200207, Columbia Business School, 2003
Abstract
Cited by 6 (1 self)
A general approach to improving simulation accuracy uses information about auxiliary control variables with known expected values to improve the estimation of unknown quantities. We analyze weighted Monte Carlo estimators that implement this idea by applying weights to independent replications. The weights are chosen to constrain the weighted averages of the control variables. We distinguish two cases (unbiased and biased) depending on whether the weighted averages of the controls are constrained to equal their expected values or some other values. In both cases, the number of constraints is usually smaller than the number of replications, so there may be many feasible weights. We select maximally uniform weights by minimizing a separable convex function of the weights subject to the control variable constraints. Estimators of this form arise (sometimes implicitly) in several settings, including at least two in finance: calibrating a model to market data (as in work of Avellaneda et al.) and calculating conditional expectations in order to price American options. We analyze properties of these estimators as the number of replications increases. We show that in the unbiased case, weighted Monte Carlo reduces variance and that all convex objective functions within a large class produce estimators that are very close to each other in a strong sense. In contrast, in the biased case the choice of objective function does matter. We show explicitly how the choice of objective determines the limit to which the estimator converges.
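For the quadratic objective sum((w_i − 1/n)²), the maximally uniform weights have a closed form; the sketch below (illustrative, not the paper's implementation) enforces the two constraints exactly:

```python
import statistics

def weighted_mc(ys, xs, mu):
    """Weighted Monte Carlo with a quadratic objective (illustrative sketch,
    not the paper's implementation).

    Chooses weights minimizing sum((w_i - 1/n)^2) subject to
    sum(w_i) = 1 and sum(w_i * x_i) = mu, the known mean of the control x.
    For this particular convex objective the solution is in closed form;
    other convex objectives (e.g. entropy) give different weights in general.
    """
    n = len(xs)
    xbar = statistics.fmean(xs)
    s = sum((x - xbar) ** 2 for x in xs)
    w = [1 / n + (mu - xbar) * (x - xbar) / s for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)), w

# Four replications of a target y with a control x of known mean 2.0:
ys = [2.0, 3.0, 5.0, 4.0]
xs = [1.0, 2.0, 4.0, 3.0]
est, w = weighted_mc(ys, xs, mu=2.0)
# the weighted average of the control hits its known mean exactly
```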
Stationary IPA Estimates For Non-Smooth Functions Of The GI/G/1/∞ Workload
1992
Abstract
Cited by 2 (1 self)
We give stationary estimates for the derivative of the expectation of a non-smooth function of bounded variation f of the workload in a GI/G/1/∞ queue, with respect to a parameter influencing the distribution of the input process. For this, we use an idea of Konstantopoulos and Zazanis [12] based on the Palm inversion formula, however avoiding a limiting argument by performing the level-crossing analysis thereof globally, via Fubini's theorem. This method of proof allows us to treat the case where the workload distribution has a mass at discontinuities of f and where the formula of [12] has to be modified. The case where the parameter is the speed of service and/or the time scale factor of the input process is also treated using the same approach.
Parameter and State Estimation in Queues and Related Stochastic Models: A Bibliography, http://www.maths.uq.edu.au/~pkp/papers/Qest/Qest.html
2011
Abstract
Cited by 1 (0 self)
This is an annotated bibliography on estimation and inference results for queues and related stochastic models. The purpose of this document is to collect and categorise works in the field, allowing researchers and practitioners to explore the various types of results that exist. This bibliography attempts to include all known works that satisfy both of these requirements:
• Works that deal with queueing models.
• Works that contain contributions related to the methodology of parameter estimation, state estimation, hypothesis testing, confidence intervals, and/or actual datasets from application areas.
It also includes references to selected additional related material in Section 8. There are additional works not mentioned in this bibliography that are only mildly related; these include methods for parameter estimation of point processes, methods for parameter estimation of stochastic matrix-analytic models, and inference, estimation and tomography of communication networks not directly modelled as queueing networks. Our attempt is to make this bibliography exhaustive, yet there are possibly some papers that we have missed. As it is updated continuously, additions and comments are welcome. The sections below categorise the works based on several categories; a single paper may appear in several categories simultaneously. The final section lists all works in chronological order along with short descriptions of the contributions. This bibliography is maintained at
Stationary IPA Estimates for Non-Smooth G/G/1/∞ Functionals via Palm Inversion and Level-Crossing Analysis
1993
Abstract
Cited by 1 (0 self)
We give stationary estimates for the derivative of the expectation of a non-smooth function of bounded variation f of the workload in a G/G/1/∞ queue, with respect to a parameter influencing the distribution of the input process. For this, we use an idea of Konstantopoulos and Zazanis [15] based on the Palm inversion formula, however avoiding a limiting argument by performing the level-crossing analysis thereof globally, via Fubini's theorem. This method of proof allows us to treat the case where the workload distribution has a mass at discontinuities of f and where the formula of [15] has to be modified. The case where the parameter is the speed of service and/or the time scale factor of the input process is also treated using the same approach.
Extending the FCLT version of L = λW
Abstract
The functional central limit theorem (FCLT) version of Little’s law (L = λW) established by Glynn and Whitt is extended to show that a bivariate FCLT for the number in system and the waiting times implies the joint FCLT for all processes. It is based on a converse to the preservation of convergence by the composition map with centering on the function space containing the sample paths, exploiting monotonicity.
Keywords: Little’s law, L = λW, functional central limit theorem, confidence intervals, continuous mapping theorem, composition with centering
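The ordinary relation L = λW that this FCLT refines can be sanity-checked numerically; the toy model below (my construction: deterministic arrivals every 1/λ, each customer staying W time units) computes the time-average number in system:

```python
def little_check(lam=2.0, wait=3.0, horizon=1000.0):
    """Numerical sanity check of the plain Little's law L = lam * W that the
    FCLT result above refines (a deterministic toy model of my construction).

    Customers arrive every 1/lam time units and each stays wait time units;
    the time-average number in system over [0, horizon] should be close to
    lam * wait, up to edge effects of order wait / horizon.
    """
    n_arrivals = int(horizon * lam)
    occupancy_time = 0.0
    for k in range(n_arrivals):
        t = k / lam                      # arrival epoch
        depart = min(t + wait, horizon)  # truncate stays at the horizon
        occupancy_time += depart - t
    return occupancy_time / horizon

L = little_check()   # lam * wait = 6.0; truncation makes this slightly less
```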