### Table 1. The DGE algorithm. The irreducibility of the Markov chain guarantees that the stationary distribution vector exists and is unique.

### Table 1: The stationary distribution of the median algorithm. The table lists all the transitions in the Markov chain generated by the conversations and the median algorithm. The state of the Markov chain is the set of times since the last request for each conversation, together with which circuits are open.

1999

"... In PAGE 15: ... Note that for i = 1, 2, 3, Ci only has a packet when the current time modulo 3 is i, so at each time step at most one conversation has a packet. Table 1 describes the stationary distribution for the median algorithm as the parameter tends to 0. From this table it can be seen that the expected online cost tends to 7/36 as the parameter tends to 0. ... ..."

Cited by 2
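The stationary distribution referenced in captions like the one above can be computed by solving the balance equations πP = π together with the normalization Σπ_i = 1. A minimal sketch (the 3-state transition matrix below is invented for illustration, not the chain from the cited paper):

```python
import numpy as np

# Illustrative 3-state transition matrix (not the chain from the cited paper).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

def stationary_distribution(P):
    """Solve pi P = pi with sum(pi) = 1 for an irreducible chain."""
    n = P.shape[0]
    # Stack the balance equations (P^T - I) pi = 0 with the
    # normalization constraint 1^T pi = 1, then least-squares solve.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = stationary_distribution(P)
print(pi)
```

Irreducibility is what makes the solution unique here: the null space of P^T − I is one-dimensional, and the appended normalization row pins down the single probability vector in it.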

### Table 2. The error in the l1 norm of the stationary probabilities between the rotated Markov chain and its approximating Markov chain with fewer states K1, for different values of the model parameters. Imbedded at the arrival epochs: the transition matrices of the imbedded Markov chain and the rotated Markov chain can be found similarly. a) Use the infinite model as an approximation. The transition probability matrix for the rotated Markov chain is the matrix obtained by augmenting the last row of the northwest corner of the transition probability matrix for the infinite model. The infinite model is stable since the traffic intensity ρ < 1. The solution of the stationary probabilities for the infinite model is the same as in (4). The stationary probabilities for the rotated Markov chain are given by

"... In PAGE 10: ... b) Use a finite model of fewer states to approximate the stationary probabilities of the rotated Markov chain. Similarly to b) for the M/G/1/K queue, one may find the error e2(K, K1) between π_k^(K) and π_k^(K1) in the l1 norm, which is given by e2(K, K1) = 2(ρ^(K1+1) + ρ^(K1+2) + ··· + ρ^(K+1)) / (1 + ρ + ··· + ρ^(K+1)) = e1(K+1, K1). Therefore, Table 2 provides e2 too. The M/G^c/1 queue: a) Use the corresponding infinite model. ... ..."
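The truncate-and-augment idea in this snippet can be checked numerically. A minimal sketch, using a birth-death chain with ratio ρ = 0.5 as a stand-in for the infinite model; the chain, the sizes N and K1, and the add-the-lost-mass-back augmentation convention are my assumptions, not the paper's:

```python
import numpy as np

def stationary(P):
    """Solve pi P = pi, sum(pi) = 1 by least squares."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Stand-in "infinite" model: a reflecting birth-death chain with rho < 1,
# so the stationary probabilities decay geometrically, pi_k ~ rho^k.
rho = 0.5
N = 200                       # large enough to mimic the infinite model
p, q = rho / (1 + rho), 1 / (1 + rho)
P = np.zeros((N, N))
for i in range(N):
    if i + 1 < N:
        P[i, i + 1] = p       # birth
    if i > 0:
        P[i, i - 1] = q       # death
P[0, 0] = q                   # reflecting boundaries keep rows stochastic
P[N - 1, N - 1] = p

# Northwest-corner truncation to K1 + 1 states, then augment so each
# truncated row is stochastic again (assumed convention: lost mass is
# returned to the last retained state).
K1 = 10
Q = P[:K1 + 1, :K1 + 1].copy()
Q[:, -1] += 1 - Q.sum(axis=1)

pi_full = stationary(P)
pi_trunc = stationary(Q)

# l1 distance, padding the truncated distribution with zeros on the tail.
err = np.abs(pi_trunc - pi_full[:K1 + 1]).sum() + pi_full[K1 + 1:].sum()
print(err)
```

For this geometric chain the measured error comes out close to 2ρ^(K1+1), which matches the shape of the e1/e2 bounds the snippet describes.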

### Table 1: Markov chain sampling set size as a function of w.

2006

"... In PAGE 33: ... For larger networks we allocated 100-200 seconds, depending on the complexity of the network, which was only a fraction of the exact computation time. Table 1 reports the size of the sampling set used by each algorithm, where each column reports the size of the corresponding w-cutset. For example, for cpcs360b, the average size of a Gibbs sample (all nodes except evidence) is 345, the loop-cutset size is 26, the size of the 2-cutset is 22, and so on. ... In PAGE 33: ... For example, for cpcs360b, loop-cutset sampling and 2-cutset sampling generated 600 samples per second while the Gibbs sampler was able to generate only 400 samples. We attribute this to the size of the cutset sample (26 nodes or less, as reported in Table 1) compared to the size of the Gibbs sample (over 300 nodes). CPCS networks. ... In PAGE 35: ... 3, its loop-cutset is relatively small, |LC| = 47, but wLC = 14, and thus sampling just one new loop-cutset variable value is exponential in the big adjusted induced width. As a result, loop-cutset sampling computes only 4 samples per second while the 2-, 3- and 4-cutsets, which are only slightly larger, having 65, 57, and 50 nodes respectively (see Table 1), compute samples at rates of 200, 150, and 90 samples per second (see Table 2). ... ..."

### Table 1 Markov chain of five states

"... In PAGE 2: ... Thus, we have two parallel sequences of Bernoulli trials, which are intermittently shifted forward. The shifting process can be described as a finite Markov chain with the five states shown in Table 1. ... ..."

### Table 1: Markov chain based methods for PMS

1999

"... In PAGE 3: ... 1 Literature survey In this section we present and compare the Markov chain-based approaches [2, 3, 5, 10, 16, 21, 22] and the one based on SAN in [4] for the dependability modeling and analysis of PMS. The most relevant aspects of the comparison are summarized in Table 1. A key point that impacts most of the other aspects is represented by the single/separate modeling of the phases: it affects the reusability/flexibility of previously built models, the modeling of dependencies among phases, and the complexity of the solution steps. ... ..."

Cited by 11

### Table 2 Markov chain of seven states

"... In PAGE 4: ... Therefore, let A be the channel and B the processor; then Π_A = 1 and Π_{B_i} = 0 for all i. The stretching factors then become T_0*/T_0 = 1 (the channel is unaffected). The number of states increases from five to the seven shown in Table 2 as we add a third processor to the case of two processors and one storage unit; however, the number of independent parameters increases from three to eight. This means that explicit general formulas are more difficult to obtain and more cumbersome to use. ... ..."

### Table 1: Classical algorithm for the computation of PAV(t). 2.2 Stationarity detection The stationarity detection that we consider is based on the control of the sequence of vectors V_n = P^n 1_U. Let the row vector π denote the stationary probability distribution of the Markov process X. This vector verifies πA = 0 and πP = π. The steady-state availability is given by PAV(∞) = π 1_U. To ensure the convergence of the sequence of vectors V_n, we require that the uniformization rate Λ verifies Λ > max(−A(i, i), i ∈ S), since this guarantees that the transition probability matrix P is aperiodic. We then have, for every i ∈ S,

1996

"... In PAGE 11: ... For every n ≥ 0, we have 0 ≤ v'_n ≤ 1. It follows that, using the truncation step N defined in Relation (2), we get the classical algorithm to compute the expected interval availability, by writing EIAV(t) = Σ_{n=0}^{N} e^{−Λt} (Λt)^n / n! · v'_n + e'(N), where e'(N) = Σ_{n=N+1}^{∞} e^{−Λt} (Λt)^n / n! · v'_n ≤ Σ_{n=N+1}^{∞} e^{−Λt} (Λt)^n / n! = 1 − Σ_{n=0}^{N} e^{−Λt} (Λt)^n / n! ≤ ε. This algorithm is basically the same as the one depicted in Table 1. More precisely, the computation of v_n in Table 1 must be followed by the recursion (8), with v'_0 = v_0, and in the last loop over j, v_n must be replaced by v'_n in order to get EIAV(t_j) instead of PAV(t_j). 3. ... ..."

Cited by 4
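The uniformization scheme described in this caption and snippet can be sketched in a few lines: build P = I + A/Λ with Λ > max(−A(i, i)), iterate v_n = P^n 1_U, and weight the iterates by Poisson probabilities until their mass reaches 1 − ε. A minimal illustration, assuming a two-state up/down model with failure rate λ = 1 and repair rate μ = 10 (the model, rates, and function name are mine, not the paper's):

```python
import numpy as np

# Two-state availability model: state 0 = up, state 1 = down.
lam, mu = 1.0, 10.0
A = np.array([[-lam,  lam],
              [  mu,  -mu]])       # generator of the Markov process
up = np.array([1.0, 0.0])          # indicator 1_U of the "up" states
alpha = np.array([1.0, 0.0])       # initial distribution (start up)

# Uniformization rate strictly above max(-A(i, i)) keeps P aperiodic.
Lam = 1.1 * max(-A[i, i] for i in range(A.shape[0]))
P = np.eye(A.shape[0]) + A / Lam

def pav(t, eps=1e-12):
    """Point availability PAV(t) via uniformization with Poisson truncation."""
    v = up.copy()                  # v_n = P^n 1_U, computed iteratively
    term = np.exp(-Lam * t)        # Poisson weight e^{-Lam t} (Lam t)^n / n!
    total = term * (alpha @ v)
    acc = term                     # accumulated Poisson mass
    n = 0
    # Stop once the neglected Poisson tail is below eps (truncation step N).
    while 1 - acc > eps and n < 10_000:
        n += 1
        v = P @ v
        term *= Lam * t / n
        acc += term
        total += term * (alpha @ v)
    return total

print(pav(5.0))
```

For this two-state model the exact answer is PAV(t) = μ/(λ+μ) + λ/(λ+μ)·e^{−(λ+μ)t}, so pav(5.0) should sit essentially at the steady-state availability μ/(λ+μ) = 10/11.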

### Table 4: Mapping the algorithm semantics onto the Markov chain. Properties are with respect to P1 only. Entries "s1, s2: a, b" imply transitions a and b from states s1 and s2. "All" = all states.

"... In PAGE 4: ... This is done by mapping the semantics of each coherence algorithm (as described in Table 2) onto the Markov model of Figure 2. This mapping is shown in Table 4. Here, for each algorithm, we show which transitions in the Markov chain correspond to cache misses, invalidations, writebacks, and region status vector checks. ... In PAGE 11: ... These are derived by mapping the semantics of each coherence algorithm onto the Markov chain shown in Figure 2. This mapping is shown for each algorithm in Table 4. The symbols M, Is, Ir, Es, and Er are defined in Table 3. ... ..."