## Computing Densities for Markov Chains via Simulation (1999)

Venue: Mathematics of Operations Research

Citations: 7 (1 self)

### BibTeX

```
@ARTICLE{Henderson99computingdensities,
  author  = {Shane G. Henderson and Peter W. Glynn},
  title   = {Computing Densities for Markov Chains via Simulation},
  journal = {Mathematics of Operations Research},
  year    = {1999},
  volume  = {26},
  pages   = {375--400}
}
```

### Abstract

We introduce a new class of density estimators, termed look-ahead density estimators, for performance measures associated with a Markov chain. Look-ahead density estimators are given for both transient and steady-state quantities. Look-ahead density estimators converge faster (especially in multi-dimensional problems) and empirically give visually superior results relative to more standard estimators, such as kernel density estimators. Several numerical examples that demonstrate the potential applicability of look-ahead density estimation are given.

**1 Introduction.** Visualization is becoming increasingly popular as a means of enhancing one's understanding of a stochastic system. In particular, rather than just reporting the mean of a distribution, one often finds that more useful conclusions may be drawn by seeing the density of the underlying random variable. We will consider the problem of computing the densities of performance measures associated with a Markov chain. For chains on a ...
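As a concrete illustration of the look-ahead idea, here is a minimal runnable sketch (not from the paper: the AR(1) chain, its parameters, and every name below are illustrative assumptions) that averages a known Gaussian transition density along a simulated path to estimate the stationary density:

```python
import math
import random

# Hypothetical example: AR(1) chain X(n+1) = A*X(n) + N(0, SIGMA^2),
# whose one-step transition density p(x, y) is known in closed form.
A, SIGMA = 0.5, 1.0

def p(x, y):
    """Transition density p(x, y) = N(y; A*x, SIGMA^2)."""
    return math.exp(-(y - A * x) ** 2 / (2 * SIGMA ** 2)) / (SIGMA * math.sqrt(2 * math.pi))

def lookahead_density(path, y):
    """Look-ahead estimator pi_n(y) = (1/n) * sum_i p(X(i), y)."""
    return sum(p(x, y) for x in path) / len(path)

# Simulate the chain and evaluate the estimator at y = 0.
random.seed(0)
n, x, path = 20000, 0.0, []
for _ in range(n):
    path.append(x)
    x = A * x + random.gauss(0.0, SIGMA)

# The stationary law of this chain is N(0, SIGMA^2 / (1 - A^2)), so we
# can compare against the exact stationary density at 0.
var = SIGMA ** 2 / (1 - A ** 2)
true_at_0 = 1.0 / math.sqrt(2 * math.pi * var)
est_at_0 = lookahead_density(path, 0.0)
print(est_at_0, true_at_0)  # close for large n
```

No smoothing bandwidth appears anywhere, which is the structural reason look-ahead estimators can converge faster than kernel estimators: under stationarity each term p(X(i), y) is already an unbiased evaluation of the stationary density at y.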

### Citations

3016 | Convergence of Probability Measures
- Billingsley
- 1968
Citation Context: ...that we are using $\pi$ to represent both the stationary distribution and its density with respect to $\gamma$. The appropriate interpretation should be clear from the context.) Then, $P_\pi(X(1) \in B) = P_\pi(X(0) \in B)$ (3) for all (suitably) measurable $B \subseteq S$. But $P_\pi(X(0) \in B) = \pi(B)$. And $P_\pi(X(1) \in B) = \int_S \pi(dx)\, P(X(1) \in B \mid X(0) = x) = \int_S \int_B \pi(dx)\, p(x, y)\, \gamma(dy) = \int_B \int_S \pi(dx)\, p(x, y)\, \gamma(dy)$. (4) It follows from (3) and (4) ...

617 | Approximation Theorems of Mathematical Statistics
- Serfling
- 1980
Citation Context: ...$\le \int_{-\infty}^{\infty} |p_n(y) - p(y)|\, dy \Rightarrow 0$ as $n \to \infty$, from which it follows that $Q_n \Rightarrow q$ as $n \to \infty$. But $F_n(Q_n) - F_n(q) = F(q) - F_n(q)$ (22) and $F_n(Q_n) - F_n(q) = p_n(\xi_n)(Q_n - q)$ (23) where $\xi_n$ lies between $Q_n$ and $q$. Relations 2 and 3 imply that $p_n(\xi_n) \Rightarrow p(q)$ as $n \to \infty$. The result then follows from (22), (23), and relation 4. □ Remark 15: Similar issues to those discussed in Rema...

572 | Multivariate Density Estimation: Theory, Practice and Visualization
- Scott
- 1992
Citation Context: ...$\tilde N_2$ as $n \to \infty$. Furthermore, $n^{1/2}(p_n(y_n^*) - p(y^*)) = n^{1/2}(p_n(y_n^*) - p_n(y^*)) + n^{1/2}(p_n(y^*) - p(y^*)) = \nabla p_n(\xi_n) \cdot n^{1/2}(y_n^* - y^*) + n^{1/2}(p_n(y^*) - p(y^*))$ (21) where $\xi_n$ lies on the line segment joining $y_n^*$ and $y^*$. Since $\nabla p_n(\xi_n) \to \nabla p(y^*) = 0$ a.s. and we have established weak convergence of $n^{1/2}(y_n^* - y^*)$ above, evidently the first term in (21) conver...

527 | Applied Probability and Queues
- Asmussen
- 2003
Citation Context: ...may be interpreted as the "cost" associated with running $X$ over $[n-1, n)$. Then, $C(n) = \sum_{i=1}^n \Gamma(i)$ is the cumulative cost corresponding to the time interval $[0, n)$. We assume that $P(\Gamma(1) \in dy_1, \ldots, \Gamma(n) \in dy_n \mid X) = \prod_{i=1}^n P(\Gamma(i) \in dy_i \mid X(i-1), X(i))$ (1) so that, conditional on $X$, the $\Gamma(i)$'s are independent r.v.'s and the conditional distribution of $\Gamma$...

169 | A Course in Density Estimation
- Devroye
- 1987
Citation Context: ...the stationary distribution $\pi$ has a $\gamma$-density $\pi(\cdot)$ having value $\pi(y) = \int_S \pi(dx)\, p(x, y)$ (5) at $y \in S$. □ According to Proposition 2, the density $\pi(y)$ may be expressed as an expectation, namely $\pi(y) = E_\pi p(X(j), y)$; (6) see (5). Relation (6) suggests using the estimator $\pi_n(y) = \frac{1}{n}\sum_{i=0}^{n-1} p(X(i), y)$ to compute $\pi(y)$; $\pi_n(y)$ requires simulating $X$ up to time $n - 1$. To establish laws of large numbers and CLT...

61 | Nonparametric Functional Estimation
- Rao
- 1983
Citation Context: ...so large that $y_n^*$ lies in the $\varepsilon$-neighbourhood specified by relations 3--5.) So, $\nabla p_n(y_n^*) - \nabla p_n(y^*) = -(\nabla p_n(y^*) - \nabla p(y^*))$. But $\nabla p_n(y_n^*) - \nabla p_n(y^*) = H_n(y_n^* - y^*)$, (20) where $H_n \to H(y^*)$ a.s. as $n \to \infty$. It follows that $n^{1/2}(y_n^* - y^*) \Rightarrow -H(y^*)^{-1} \tilde N_2$ as $n \to \infty$. Furthermore, $n^{1/2}(p_n(y_n^*) - p(y^*)) = n^{1/2}(p_n(y_n^*) - p_n(y^*)) + n^{1/2}(p_n$...

55 | Likelihood Ratio Gradient Estimation for Stochastic Systems
- Glynn
- 1990
Citation Context: ...$\|p_{in}(\cdot) - p(\cdot, i)\|_q \Rightarrow 0$ as $n \to \infty$. Proof: Evidently, $E|\psi_{i1}(y)|^q < \infty$ (8) for $\gamma$-a.e. $y$. Note that $\left|\frac{1}{n}\sum_{j=1}^n (\psi_{ij}(y) - p(y, i))\right|^q \le \frac{1}{n}\sum_{j=1}^n |\psi_{ij}(y) - p(y, i)|^q$, (9) due to the convexity of $|x|^q$. For each $y$ satisfying (8), the right-hand side of (9) converges a.s. and in expectation to $E|\psi_{i1}(y) - p(y, i)|^q$. Consequently, the right-hand side of (9) is u...

34 | Nonparametric Density Estimation: The L1 View
- Devroye, Gyorfi
- 1985
Citation Context: ...$P(X(1) \in B \mid X(0) = x) = \int_S \int_B \pi(dx)\, p(x, y)\, \gamma(dy) = \int_B \int_S \pi(dx)\, p(x, y)\, \gamma(dy)$. (4) It follows from (3) and (4) that the stationary distribution $\pi$ has a $\gamma$-density $\pi(\cdot)$ having value $\pi(y) = \int_S \pi(dx)\, p(x, y)$ (5) at $y \in S$. □ According to Proposition 2, the density $\pi(y)$ may be expressed as an expectation, namely $\pi(y) = E_\pi p(X(j), y)$; (6) see (5). Relation (6) suggests using the estimator $\pi_n(y) = \frac{1}{n}\sum_{i=0}^{n-1} p(X(i)$...

32 | A Guide to Simulation. 2nd ed
- Bratley, Fox, et al.
- 1987
Citation Context: ...$P_\pi(X(1) \in B) = P_\pi(X(0) \in B)$ (3) for all (suitably) measurable $B \subseteq S$. But $P_\pi(X(0) \in B) = \pi(B)$. And $P_\pi(X(1) \in B) = \int_S \pi(dx)\, P(X(1) \in B \mid X(0) = x) = \int_S \int_B \pi(dx)\, p(x, y)\, \gamma(dy) = \int_B \int_S \pi(dx)\, p(x, y)\, \gamma(dy)$. (4) It follows from (3) and (4) that the stationary distribution $\pi$ has a $\gamma$-density $\pi(\cdot)$ having value $\pi(y) = \int_S \pi(dx)\, p(x, y)$ (5) at $y \in S$. □ According to Proposition 2, the density $\pi(y)$ may be expressed ...

21 | Correlation-induction techniques for estimating quantiles in simulation. Operations Research 46:574--591
- Avramidis, Wilson
- 1998
Citation Context: ...and covariance matrix $\Sigma_i(\tilde y)$ having $(j, k)$th element given by $\mathrm{cov}(\psi_{i1}(y_j), \psi_{i1}(y_k))$. Proposition 1 suggests the approximation $\tilde p_{in}(\tilde y) \approx_D \tilde p(\tilde y, i) + n^{-1/2}\, \Sigma_i^{1/2}(\tilde y)\, N(0, I)$ (2) for $n$ large, where $\approx_D$ denotes the (non-rigorous) relation "has approximately the same distribution as", and $\Sigma_i^{1/2}(\tilde y)$ is a square root (Cholesky factor) of the non-negative definite symmetric ...
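The CLT-based approximation quoted here suggests attaching normal-theory confidence intervals to a look-ahead estimate computed from i.i.d. replications. A minimal one-dimensional sketch, under hypothetical Gaussian AR(1) assumptions that are not taken from the paper (all names and parameters below are illustrative):

```python
import math
import random

# Hypothetical sketch: estimate the density of X(2) at y = 0 for the AR(1)
# chain X(n+1) = A*X(n) + N(0, S^2) started at X(0) = 0, by averaging the
# known transition density over i.i.d. replications of X(1), then attach a
# CLT-based normal-approximation confidence interval.
A, S = 0.5, 1.0

def p(x, y):
    # One-step Gaussian transition density N(y; A*x, S^2).
    return math.exp(-(y - A * x) ** 2 / (2 * S * S)) / (S * math.sqrt(2 * math.pi))

random.seed(1)
m, y = 10000, 0.0
# One look-ahead term per replication: p(X_j(1), y), with X_j(1) ~ N(0, S^2).
samples = [p(random.gauss(0.0, S), y) for _ in range(m)]
est = sum(samples) / m
var = sum((v - est) ** 2 for v in samples) / (m - 1)
half = 1.96 * math.sqrt(var / m)  # 95% normal-approximation half-width
print(est, "+/-", half)
```

In the multivariate setting of the excerpt, the scalar standard error is replaced by a Cholesky factor of the estimated covariance matrix, giving approximate confidence regions rather than intervals.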

18 | Quantile estimation in dependent sequences
- Heidelberger, Lewis
- 1984
Citation Context: ...ble. Also, for each such $y$, the left-hand side of (9) converges to zero a.s. Since the left-hand side is dominated by a uniformly integrable sequence, it follows that $E|p_{in}(y) - p(y, i)|^q \to 0$ (10) as $n \to \infty$ for $\gamma$-a.e. $y$. Also, taking the expectation of both sides of (9) yields the inequality $E|p_{in}(y) - p(y, i)|^q \le E|\psi_{i1}(y) - p(y, i)|^q \le E \max(|\psi_{i1}(y)|^q, p(y, i)^q) \le E(|\psi_{i1}(y)|^q + $...

15 | Nonparametric Density and Regression Estimation for Markov Sequences Without Mixing Assumptions
- YAKOWITZ
- 1989
Citation Context: ...where $F_n(x) = \frac{1}{n}\sum_{j=0}^{n-1} \int_{-\infty}^x p(X_j, y)\, dy$. The proof of the following proposition rests primarily on the observation that $\int_{-\infty}^x p(X_j, y)\, dy = E[I(X_{j+1} \le x) \mid X_0, \ldots, X_j]$, (25) so that the estimators $F_n$ and $\tilde F_n$ are related through the principle of extended conditional Monte Carlo (Bratley et al. 1987, p. 71; Glasserman 1993). Let $p$ be a density of the stationary distributio...
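The identity quoted here replaces the indicator $I(X_{j+1} \le x)$ by its conditional expectation given the path so far, which is the integrated transition density. A minimal runnable sketch under hypothetical Gaussian AR(1) assumptions (the chain, parameters, and names are illustrative, not from the paper), comparing the smoothed CDF estimator with the ordinary empirical one:

```python
import math
import random

# Hypothetical sketch: with Gaussian transitions, the integrated transition
# density is a normal CDF, so the look-ahead CDF estimator averages
# Phi((x - A*X_j)/S) instead of the raw indicator I(X_{j+1} <= x).
A, S = 0.5, 1.0

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(2)
n, state, path = 20000, 0.0, []
for _ in range(n):
    path.append(state)
    state = A * state + random.gauss(0.0, S)

x = 0.0
F_lookahead = sum(phi((x - A * xj) / S) for xj in path) / n   # smoothed terms
F_empirical = sum(1.0 for xj in path if xj <= x) / n          # raw indicators
print(F_lookahead, F_empirical)  # both near 0.5 by symmetry of this chain
```

The conditional Monte Carlo version typically has smaller per-term variance than the indicator average, since replacing a random variable by its conditional expectation never increases variance.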

13 | Control variates for probability and quantile estimation
- Hesterberg, Nelson
- 1998
Citation Context: ...$p(x_i, y) < \varepsilon$ whenever $y \notin K_i(x_i, \varepsilon)$. Put $K = K_1(x_1, \varepsilon) \cup \cdots \cup K_r(x_r, \varepsilon)$ and note that $K$ is compact. Theorem 6 establishes that $\sup_{y \in K} |\pi_n(y) - \pi(y)| \to 0$ a.s. (13) as $n \to \infty$. To deal with $y \notin K$, construct the sequence $(X'(n) : n \ge 0)$ so that $X'(n) = X(n)$ whenever $X(n) \notin K(\varepsilon)$ and $X'(n)$ is the closest point within the collection $\{x_1, \ldots, x_r\}$ whene...

12 | Nonparametric density estimation, prediction, and regression for Markov sequences
- Yakowitz
- 1985
Citation Context: ...$p_n(\cdot)$ may be expressed as $n^{-1}\sum_{i=1}^n G_i(\cdot)$, for some sequence of random functions $(G_i(\cdot) : i \ge 1)$, and then $F_n(x) = \int_{-\infty}^x p_n(y)\, dy = \frac{1}{n}\sum_{i=1}^n \int_{-\infty}^x G_i(y)\, dy$. (24) Evidently, (24) is a sample mean over a sequence of real-valued r.v.'s, and the sequence is either i.i.d. or regenerative, depending on the context. Therefore, standard methods may be applied to esti...

11 | Simulating Stable Stochastic Systems, VI: Quantile Estimation
- Iglehart
- 1976
Citation Context: ...$p(X(j), y)\, I(X(j) \notin K(\varepsilon))\bigr| \le \bigl|\frac{1}{n}\sum_{j=1}^n (p(X(j), y) - p(X'(j), y))\, I(X(j) \in K(\varepsilon))\bigr| + \frac{1}{n}\sum_{j=1}^n p(X'(j), y)\, I(X(j) \in K(\varepsilon)) + \frac{b}{n}\sum_{j=1}^n I(X(j) \notin K(\varepsilon))$ (14) $\le 2\varepsilon\, \frac{1}{n}\sum_{j=1}^n I(X(j) \in K(\varepsilon)) + \frac{b}{n}\sum_{j=1}^n I(X(j) \notin K(\varepsilon))$. Sending $n \to \infty$ allows us to conclude that $\pi(y) \le (2 + b)\varepsilon$ for $y \notin K$. The inequality (14) then yields $\limsup_{n \to \infty} \sup_{y \notin K} |\pi_n(y) - \pi$...

9 | Regenerative Steady-State Simulation of Discrete-Event Systems
- Glynn, Henderson
- 2001
Citation Context: ...as $n \to \infty$, from which the theorem follows. □ We turn next to obtaining the analogous result for the steady-state density estimator $\pi_n(\cdot)$ of Section 3. Theorem 5 Suppose $\int_S p(x, y)^q\, \gamma(dy) \le V(x)$ (11) for $x \in S$, with $q \ge 1$. If A3 holds and the initial distribution $\mu$ has a density with respect to $\gamma$, then $\|\pi_n(\cdot) - \pi(\cdot)\|_q \Rightarrow 0$ as $n \to \infty$. Proof: Condition (11) guarantees that $E_\mu \int_S p(X(0), y)$...

8 | A batching approach to quantile estimation in regenerative simulations
- Seila
- 1982
Citation Context: ...as $n \to \infty$; see Remark 9. Hence, $\sup_x |F_n(x) - F(x)| \le \int_{-\infty}^{\infty} |p_n(y) - p(y)|\, dy \Rightarrow 0$ as $n \to \infty$, from which it follows that $Q_n \Rightarrow q$ as $n \to \infty$. But $F_n(Q_n) - F_n(q) = F(q) - F_n(q)$ (22) and $F_n(Q_n) - F_n(q) = p_n(\xi_n)(Q_n - q)$ (23) where $\xi_n$ lies between $Q_n$ and $q$. Relations 2 and 3 imply that $p_n(\xi_n) \Rightarrow p(q)$ as $n \to \infty$. The result then follows from (22), (23), and relation 4...

6 | Filtered Monte Carlo
- Glasserman
- 1993
Citation Context: ...positive definite and there exists a consistent estimator $\Sigma_n(\tilde y)$ for $\Sigma(\tilde y)$, then Theorem 3 asserts that for $D \subseteq \mathbb{R}^d$, $P_\pi(\tilde\pi(\tilde y) \in \tilde\pi_n(\tilde y) - \Sigma_n(\tilde y)^{1/2} D) \approx P(N(0, I) \in n^{1/2} D)$ (7) for large $n$, where $\Sigma_n(\tilde y)^{1/2} D$ is defined to be the set $\{x : x = \Sigma_n(\tilde y)^{1/2} \tilde w$ for some $\tilde w \in D\}$ etc. Approximate confidence regions for $\tilde\pi(\tilde y)$ can then be easily obtained from (7). If...

6 | A batch means methodology for estimation of a nonlinear function of a steady-state mean
- Munoz, Glynn
- 1997
Citation Context: ...if $n^{1/2}(p_n(y_1) - p(y_1),\, p_n(y_2) - p(y_2)) \Rightarrow (N_1, N_2)$ as $n \to \infty$, where $(N_1, N_2)$ is bivariate Gaussian and $p(y_2) > 0$, then $n^{1/2}\left(\frac{p_n(y_1)}{p_n(y_2)} - \frac{p(y_1)}{p(y_2)}\right) \Rightarrow (N_1 - (p(y_1)/p(y_2)) N_2)/p(y_2)$ (19) as $n \to \infty$. If the covariance matrix of $(N_1, N_2)$ can be consistently estimated (as with, for example, the regenerative method), then confidence intervals for the relative likelihood (based on (19)) ...

2 | Estimation of stationary densities for Markov chains
- Glynn, Henderson
- 1998
Citation Context: ...look-ahead density estimators introduced in Section 2. Theorem 4 Suppose that $E \int |\psi_{i1}(y)|^q\, \gamma(dy) < \infty$ for $q \ge 1$. Then, $\|p_{in}(\cdot) - p(\cdot, i)\|_q \Rightarrow 0$ as $n \to \infty$. Proof: Evidently, $E|\psi_{i1}(y)|^q < \infty$ (8) for $\gamma$-a.e. $y$. Note that $\left|\frac{1}{n}\sum_{j=1}^n (\psi_{ij}(y) - p(y, i))\right|^q \le \frac{1}{n}\sum_{j=1}^n |\psi_{ij}(y) - p(y, i)|^q$, (9) due to the convexity of $|x|^q$. For each $y$ satisfyi...

2 | Improved distribution quantile estimation
- Kappenman
- 1987
Citation Context: ...$I(X(j) \in K(\varepsilon)) + \frac{b}{n}\sum_{j=1}^n I(X(j) \notin K(\varepsilon))$. Sending $n \to \infty$ allows us to conclude that $\pi(y) \le (2 + b)\varepsilon$ for $y \notin K$. The inequality (14) then yields $\limsup_{n \to \infty} \sup_{y \notin K} |\pi_n(y) - \pi(y)| \le (4 + 2b)\varepsilon$. (15) Since $\varepsilon$ was arbitrary, (13) and (15) together imply the theorem. □ The following consequence of Theorem 7 improves Theorem 5 from convergence in probability to a.s. convergence when $q = 1$, and i...

1 | Asymptotic results for steady-state quantile estimation in Markov chains
- Henderson, Glynn
- 1999
Citation Context: ...(Billingsley 1968), there exists a compact set $K(\varepsilon)$ for which $\pi$ assigns at most $\varepsilon$ mass to its complement. Write $\pi_n(y) = \frac{1}{n}\sum_{j=1}^n p(X(j), y)\, I(X(j) \in K(\varepsilon)) + \frac{1}{n}\sum_{j=1}^n p(X(j), y)\, I(X(j) \notin K(\varepsilon))$. (12) Let $b = \sup\{p(x, y) : x, y \in S\} < \infty$. The second term on the right-hand side of (12) may be bounded by $b$ times $n^{-1}\sum_{j=1}^n I(X(j) \notin K(\varepsilon))$, and the latter average has an a.s. limit supremum of at most $\varepsilon$. As for the f...

1 | A batch means methodology for the estimation of quantiles of the steady-state distribution
- Munoz
- 1998
Citation Context: ...$|\pi'_n(y) - \pi'(y)| \to 0$ a.s. (16) as $n \to \infty$ for each compact $K \subseteq S$. Furthermore, if $|p'(x, y)| \le V^{1/2}(x)$ for $x \in S$, then there exists $d(y)$ such that $n^{1/2}(\pi'_n(y) - \pi'(y)) \Rightarrow d(y)\, N(0, 1)$ (17) as $n \to \infty$. Proof: Note that $h^{-1}(\pi(y + h) - \pi(y)) = \int_S \pi(dx)\, [h^{-1}(p(x, y + h) - p(x, y))]$. (18) But $h^{-1}(p(x, y + h) - p(x, y)) = p'(x, \xi)$ where $\xi$ lies between $y$ and $y + h$...

1 | Multivariate standardized time series for output analysis in simulation experiments
- Munoz, Glynn
- 1998
Citation Context: ...for $x \in S$, then there exists $d(y)$ such that $n^{1/2}(\pi'_n(y) - \pi'(y)) \Rightarrow d(y)\, N(0, 1)$ (17) as $n \to \infty$. Proof: Note that $h^{-1}(\pi(y + h) - \pi(y)) = \int_S \pi(dx)\, [h^{-1}(p(x, y + h) - p(x, y))]$. (18) But $h^{-1}(p(x, y + h) - p(x, y)) = p'(x, \xi)$ where $\xi$ lies between $y$ and $y + h$. Because the derivative is assumed to be bounded, the Bounded Convergence Theorem then ensures that the limit in ...