Results 1–10 of 23
Markov chain Monte Carlo convergence diagnostics
JASA, 1996
"... A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise ..."
Abstract

Cited by 338 (6 self)
 Add to MetaCart
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution.
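Among the diagnostics this paper reviews is the Gelman and Rubin potential scale reduction factor, which compares between-chain and within-chain variance across parallel chains. A minimal sketch for a scalar quantity (the function name and the simulated "stuck" chains are illustrative, not from the paper):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m parallel chains.

    chains: array of shape (m, n) -- m chains of n draws each of one
    scalar quantity.  Values near 1 suggest (but cannot prove) that the
    chains have mixed; values well above 1 indicate non-convergence.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # Between-chain variance of the chain means (this is B/n)
    b_over_n = chain_means.var(ddof=1)
    # W: average within-chain variance
    w = chains.var(axis=1, ddof=1).mean()
    # Pooled variance estimate and R-hat
    var_plus = (n - 1) / n * w + b_over_n
    return np.sqrt(var_plus / w)

rng = np.random.default_rng(0)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))            # four chains, same target
stuck = mixed + np.array([[0.0], [0.0], [5.0], [5.0]])  # two chains trapped far away
r_mixed = gelman_rubin(mixed)  # close to 1
r_stuck = gelman_rubin(stuck)  # well above 1
```

Consistent with the paper's conclusion, a value near 1 is necessary but not sufficient evidence of convergence: chains that are all stuck in the same region would also yield R-hat near 1.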
Advanced methods for simulation output analysis
Proc. of the 1998 Winter Simulation Conference, 1998
"... ..."
Sequential Estimation of Quantiles
1998
"... Quantiles are convenient measures of the entire range of values of simulation outputs. However, unlike the mean and standard deviation, the observations have to be stored since calculation of quantiles requires several passes through the data. Thus, quantile estimation (QE) requires a large amount o ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
(Show Context)
Quantiles are convenient measures of the entire range of values of simulation outputs. However, unlike the mean and standard deviation, the observations have to be stored, since calculation of quantiles requires several passes through the data. Thus, quantile estimation (QE) requires a large amount of computer storage and computation time. Several approaches for estimating quantiles in RS (regenerative simulation) and non-RS settings, which can avoid the difficulties of QE, have been proposed in [Igl76], [Sei82b], and [JC85].
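The storage burden described here is easy to see in code: the mean and standard deviation admit a one-pass update with O(1) state (Welford's method), whereas an exact quantile needs the whole sample retained and sorted. A minimal sketch (all names are illustrative; the storage-avoiding estimators cited above are not shown):

```python
import math
import random

def running_moments(stream):
    """One-pass mean and sample standard deviation: O(1) state,
    no observations stored (Welford's online update)."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return mean, math.sqrt(m2 / (n - 1))

def exact_quantile(sample, p):
    """Exact p-quantile: requires storing and sorting every observation."""
    s = sorted(sample)
    k = max(0, min(len(s) - 1, math.ceil(p * len(s)) - 1))
    return s[k]

random.seed(1)
data = [random.expovariate(1.0) for _ in range(10_000)]  # Exp(1) test stream
mean, sd = running_moments(data)       # both should be near 1 for Exp(1)
q90 = exact_quantile(data, 0.90)       # near ln(10) for Exp(1)
```

The contrast motivates the sequential estimators the paper develops: `exact_quantile` costs O(n) memory and O(n log n) time per evaluation, while `running_moments` costs O(1) memory regardless of run length.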
The Impact Of Transients On Simulation Variance Estimators
Proceedings of the 1997 Winter Simulation Conference, 234–239, 1997
"... Given a stationary simulation process with unknown mean , interest frequently lies in, and various methods exist for, developing estimates and confidence intervals for . Typically, the sample mean is used as the point estimate for . It is also useful to estimate the variance parameter, s 2 , a mea ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
Given a stationary simulation process with unknown mean μ, interest frequently lies in, and various methods exist for, developing estimates and confidence intervals for μ. Typically, the sample mean is used as the point estimate for μ. It is also useful to estimate the variance parameter, σ², a measure of the sample mean's precision. While there are many methods for estimating the variance parameter for such processes, they usually assume that the process has reached steady state before data collection begins. If this is not the case, then transient behavior can have a significant impact on the estimates of μ and σ². We present ...
A Methodology for Initialisation Bias Reduction in Computer Simulation Output
1992
"... We present a new methodology for detecting when the steady state behaviour of a discretetime stochastic process has been approached after starting from a nontypical initial state. The main application and motivation for this method is the determination of a suitable truncation point for reduction ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
We present a new methodology for detecting when the steady-state behaviour of a discrete-time stochastic process has been approached after starting from a non-typical initial state. The main application and motivation for this method is the determination of a suitable truncation point for reduction of initialisation bias in computer simulation output. An implementation of this methodology is given which is most powerful for an exponential initial transient.
Keywords: computer simulation, initialization bias
This paper is based on the first author's Masters thesis at the Royal Melbourne Institute of Technology, Melbourne, Australia. Affiliations: Centre for Signal Processing Research, Queensland University of Technology, Brisbane, Australia (zsmajackway@qut.edu.au); Royal Melbourne Institute of Technology, Melbourne, Australia.
1 Introduction. In computer simulation studies an estimate is often required for the steady-state mean of some quantity of interest. For each computer run the ...
A Comparison Of Five Steady-State Truncation Heuristics For Simulation
2000
"... We compare the performance of five wellknown truncation heuristics for mitigating the effects of initialization bias in the output analysis of steadystate simulations. Two of these rules are variants of the MSER heuristic studied by White (1997); the remaining rules are adaptations of biasdetecti ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
We compare the performance of five well-known truncation heuristics for mitigating the effects of initialization bias in the output analysis of steady-state simulations. Two of these rules are variants of the MSER heuristic studied by White (1997); the remaining rules are adaptations of bias-detection tests based on the seminal work of Schruben (1982). Each heuristic was tested in each of 168 different experiments. Each experiment comprised multiple tests on different realizations of the sample path of a second-order autoregressive process with known (deterministic) bias function. Different experiments employed alternative process parameters, generating a range of damped and underdamped stochastic responses. These were combined with alternative damped, underdamped, and mean-shift bias functions. The performance of each rule was evaluated based on the ability of the rule to remove bias from the mean estimator for the steady-state process. Results confirmed that four of the five rules were effective and reliable, consistently yielding truncated sequences with reduced bias. In general, the MSER heuristics outperformed the three rules based on bias detection, with Spratt's (1998) MSER-5 the most effective and robust choice for a general-purpose method.
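The MSER rule has a compact definition: choose the truncation point d that minimizes the sum of squared deviations of the retained observations from their own mean, divided by the square of the retained count; MSER-5 applies the same rule to non-overlapping batch means of size five. A sketch under those assumptions (search limited to the first half of the series, a common safeguard; the exact conventions in the compared papers may differ):

```python
import numpy as np

def mser(y):
    """MSER truncation point: d minimizing
    sum_{i>=d}(y_i - mean(y[d:]))^2 / (n - d)^2,
    searched over the first half of the series."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_d, best_val = 0, np.inf
    for d in range(n // 2):
        tail = y[d:]
        val = np.sum((tail - tail.mean()) ** 2) / (n - d) ** 2
        if val < best_val:
            best_d, best_val = d, val
    return best_d

def mser5(x):
    """MSER-5: apply MSER to non-overlapping batch means of size 5,
    then convert the batch index back to a raw-observation index."""
    x = np.asarray(x, dtype=float)
    m = len(x) // 5
    batch_means = x[: m * 5].reshape(m, 5).mean(axis=1)
    return 5 * mser(batch_means)

# Synthetic output with a known exponentially decaying initialization bias
# (this test process is illustrative, not the AR(2) design from the paper).
rng = np.random.default_rng(42)
n = 2000
bias = 8.0 * np.exp(-np.arange(n) / 100.0)
x = 10.0 + bias + rng.normal(0.0, 1.0, n)
d = mser5(x)  # truncation point should fall after most of the bias decays
```

The statistic balances two effects: truncating too little leaves bias in the retained mean, while truncating too much inflates its variance through the (n - d)² denominator.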
Rigorous Analysis of (Distributed) Simulation Results
IEEE Transactions on Software Engineering, 1989
"... Formal static analysis of the correctness and complexity of scalable and adaptive algorithms for distributed systems is difficult and often not appropriate. Rather, tool support is required to facilitate the 'trial and error' approach which is often adopted. Simulation supports this experi ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Formal static analysis of the correctness and complexity of scalable and adaptive algorithms for distributed systems is difficult and often not appropriate. Rather, tool support is required to facilitate the 'trial and error' approach which is often adopted. Simulation supports this experimental approach well. In this paper we discuss the need for a rigorous approach to simulation results analysis and model validation. These aspects are often neglected in simulation studies, particularly in distributed simulation. Our aim is to provide the practitioner with a set of guidelines which can be used as a 'recipe' in different simulation environments, making sound techniques (simulation and statistics) accessible to users. We demonstrate use of the suggested analysis method with two different distributed simulators, CNCSIM [8] and NEST [3], thus illustrating its generality. The same guidelines may be used with other simulation tools to ensure meaningful results while obviating the need to a...
Reducing The Variance Of Cycle Times In Semiconductor Manufacturing Systems
In International Conference on Improving Manufacturing Performance in a Distributed Enterprise: Advanced Systems and Tools, 1995
"... : In semiconductor manufacturing, due to rework and reentrant flow, overtaking of wafers can occur. The effect of overtaking is that cycle times at successive service centers are not mutually independent. As far as the distribution of cycle times is concerned, only higher moments are affected, the ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
In semiconductor manufacturing, due to rework and re-entrant flow, overtaking of wafers can occur. The effect of overtaking is that cycle times at successive service centers are not mutually independent. As far as the distribution of cycle times is concerned, only higher moments are affected; the mean cycle time remains unchanged by overtaking. Further, it is conjectured in the literature that the variance of cycle times increases when overtaking increases. Taking this conjecture into account, we attempt to reduce the variability of cycle times by diminishing the magnitude of overtaking. This can be done by reversing the overtaking through appropriate sequencing rules. To achieve this goal, we examine several sequencing rules by means of simulation studies based on real data sampled at four different semiconductor manufacturing facilities. Our results show that there is no general correlation between the magnitude of overtaking and the variance of cycl...
Output Analysis For Simulations
2000
"... This paper reviews statistical methods for analyzing output data from computer simulations of single systems. In particular, it focuses on the estimation of steadystate system parameters. The estimation techniques include the replication /deletion approach, the regenerative method, the batch means ..."
Abstract
 Add to MetaCart
This paper reviews statistical methods for analyzing output data from computer simulations of single systems. In particular, it focuses on the estimation of steady-state system parameters. The estimation techniques include the replication/deletion approach, the regenerative method, the batch means method, and the standardized time series method.
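Of the techniques listed, batch means is perhaps the simplest to sketch: one long run is split into contiguous batches whose means are treated as approximately independent, giving a confidence interval for the steady-state mean despite autocorrelation in the raw output. A minimal illustration on an AR(1) process (the normal critical value 1.96 stands in for the t quantile; all names and parameters are illustrative):

```python
import math
import numpy as np

def batch_means_ci(x, n_batches=20, z=1.96):
    """Batch means CI for the steady-state mean.  Splits one long run
    into contiguous batches and treats the batch means as approximately
    i.i.d.  Assumes the warm-up period has already been deleted."""
    x = np.asarray(x, dtype=float)
    b = len(x) // n_batches
    means = x[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    grand = means.mean()
    half = z * means.std(ddof=1) / math.sqrt(n_batches)
    return grand, (grand - half, grand + half)

# AR(1) output with mean 5: autocorrelated, as simulation output usually is.
rng = np.random.default_rng(7)
n, phi = 100_000, 0.8
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 5.0 + phi * (x[t - 1] - 5.0) + rng.normal()

mean, (lo, hi) = batch_means_ci(x[1000:])  # crude warm-up deletion
```

The batch size must be large relative to the autocorrelation time so that adjacent batch means are nearly uncorrelated; choosing it is the central practical difficulty of the method.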