### Table 2.1: Reinforcement Learning for Spoken Dialogue Management (in a Nutshell). The first group of references uses MDPs and the second group uses POMDPs.

2005

### Table 4: Process State Transition Probabilities

1997

"... In PAGE 5: ... Erlangian mes- sage delay distribution was chosen to study the e#0Bect of changes in variance on the behavior of the protocol. We will assume the notation given in Table4 . There are three events whose probabilities are direct functions of the stochastic model assumed: process crash, omis- sion failure and performance failure.... In PAGE 7: ... In fact, #5B11#5D and others have analyzed how consensus can be achieved with non- negligible probabilityeven when the number of crashes exceeds the resiliency of the protocol. In Table4 , we present the results for a case where ignoring resiliency constraints leads to the target consensus probability #280.999999#29 being achieved, but where enforcing them would have led to unacceptable behavior.... ..."

Cited by 5

### Table 11. Parameters of the hierarchic Markov process: Transition probabilities of main process and initial state probabilities of subprocesses.

1996

### Table 3: Transition probabilities of the Markov Model.

"... In PAGE 12: ...Table 3: Transition probabilities of the Markov Model. The transition probabilities stemming from the generic state (i, j) are summarized in Table3 . The first row of the table is the self transition corresponding to the case in which both flows do not start transmitting a new packet.... ..."

### Table 1: Transition probabilities of Markov chain.

1993

"... In PAGE 4: ... The medium moves to the free state when no sta- tion wants to transmit (Pi ki=0), to the success- ful state when exactly one station wants to trans- mit (Pi ki=1), and to the collision state otherwise (Pi ki gt;1). The transition probabilities pw(v) are given in Table1 . In (2) ki=0 for all i, in (3) ki=1 and kj=0 for j6 =i, and in the last case Pi ki gt;1.... ..."

Cited by 1
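The three-way split described in the excerpt above (free / successful / collision, according to how many stations attempt to transmit) can be sketched generically. This is only an illustration assuming independent per-station attempt probabilities, not the cited paper's exact model; the function and parameter names (`medium_transition_probs`, `p_attempt`) are invented for the sketch:

```python
import itertools

def medium_transition_probs(p_attempt):
    # `p_attempt[i]` is the (assumed independent) probability that
    # station i attempts a transmission in the slot. The medium goes
    # to `free` when no station transmits, `success` when exactly one
    # does, and `collision` otherwise -- the three cases in the excerpt.
    probs = {"free": 0.0, "success": 0.0, "collision": 0.0}
    for attempts in itertools.product([0, 1], repeat=len(p_attempt)):
        p = 1.0
        for a, pa in zip(attempts, p_attempt):
            p *= pa if a else (1.0 - pa)
        n = sum(attempts)
        if n == 0:
            probs["free"] += p
        elif n == 1:
            probs["success"] += p
        else:
            probs["collision"] += p
    return probs
```

By construction the three probabilities sum to 1, since every joint attempt pattern falls into exactly one of the three medium states.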

### Table 1: Transition probabilities out of state (x, y, z) of the Markov chain (columns: Current State, Next State, Transition Probability)

"... In PAGE 9: ... We note that denotes modulo-M addition, where M is the number of arrival slots per frame; also, If(x) is an indicator function which is equal to 1 if the boolean condition f(x) is true, and it is 0 otherwise. From Table1 we note that the next state after (x; y; z) always has an arrival slot number equal to x 1. In the rst row of Table 1, we assume that the arrival process makes a transition from state z to state z0 (from (2), this event has a probability q(zz0) i of occurring), and a segment arrives and is bu ered by the queue.... In PAGE 9: ... From Table 1 we note that the next state after (x; y; z) always has an arrival slot number equal to x 1. In the rst row of Table1 , we assume that the arrival process makes a transition from state z to state z0 (from (2), this event has a probability q(zz0) i of occurring), and a segment arrives and is bu ered by the queue. This event can only occur if z0 is positive (see Figure 4) and either vic(x) gt; 0 or y lt; Bic.... In PAGE 10: ... Since at most vic(x 1) segments are serviced during arrival slot x 1, and since exactly one segment arrives, the queue length at the end of the slot is equal to maxf0; y ? vic(x 1) + 1g. In the second row of Table1 , we assume that the arrival process makes a transition from state z to state z0 such that the arriving segment is not enqueued. This event will occur unconditionally only if the bu er has already over owed or the source is idle (i.... In PAGE 10: ... Again, at most vic(x 1) segments are serviced during arrival slot x 1, resulting in the queue length at the end of the slot being maxf0; y ? vic(x 1)g. Finally, the third row of Table1 assumes that a segment arrives to the input queue causing it to over ow. This event occurs if and only if the queue has not yet over owed, the bu er is full, and the bu er receives no service during the arrival slot (i.... ..."

### Table 2. Fourth order Markov model statistics.

2001

"... In PAGE 6: ... We could have chosen C3 to be larger than 4, but we did not want to significantly increase the complexity of the Markov model. Table2 shows the probabilities of the trace being in each state and the associated transition probabilities. The transition proba- bilities were also calculated by frequency counting.... ..."

Cited by 47
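Estimating transition probabilities "by frequency counting", as the excerpt puts it, can be sketched as a generic k-th order estimator. This is an illustration, not the cited paper's code; the function and variable names are my own:

```python
from collections import Counter, defaultdict

def transition_probs(trace, order=4):
    # Each length-`order` window of the trace is a state; count the
    # symbol that follows every occurrence of a state, then normalise
    # the counts to per-state transition probabilities.
    counts = defaultdict(Counter)
    for i in range(len(trace) - order):
        state = tuple(trace[i:i + order])
        counts[state][trace[i + order]] += 1
    probs = {}
    for state, followers in counts.items():
        total = sum(followers.values())
        probs[state] = {sym: n / total for sym, n in followers.items()}
    return probs
```

State probabilities can be obtained the same way, by normalising the window counts themselves rather than the follower counts.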
