### Table 1 State Transition Matrix for a Hidden Markov Model

"... In PAGE 25: ...values such as Sunny = 1.0, Rainy = 0, and Foggy = 0. 2. A state transition matrix (Table 1) that stores the probability of going from one state to another. For example, the first row gives the probability of a sunny day following a sunny day, a rainy day following a sunny day, a foggy day following a sunny day, and so on.... ..."
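The transition matrix described above can be sketched as follows. The snippet does not reproduce the actual probabilities from Table 1, so the values below are illustrative placeholders; only the structure (rows indexed by today's weather, columns by tomorrow's, each row summing to 1) and the initial distribution (Sunny = 1.0) come from the text.

```python
import numpy as np

# Hidden states of the weather HMM.
states = ["Sunny", "Rainy", "Foggy"]

# Row i gives P(next state | current state = states[i]).
# These probabilities are hypothetical; the real ones are in the paper's Table 1.
A = np.array([
    [0.80, 0.05, 0.15],  # from Sunny
    [0.20, 0.60, 0.20],  # from Rainy
    [0.20, 0.30, 0.50],  # from Foggy
])

# Initial state distribution from the snippet: Sunny = 1.0, Rainy = 0, Foggy = 0.
pi = np.array([1.0, 0.0, 0.0])

# Distribution over tomorrow's weather: one step of the Markov chain.
tomorrow = pi @ A
```

Since the initial distribution puts all mass on Sunny, `tomorrow` is just the first row of `A`.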

### Table 2. Hidden Markov Model (HMM) (Section 2.3.2)

"... In PAGE 5: ...2.3.1 Video Analysis. The video analysis was performed by two expert surgeons encoding the video of each step of the surgical procedure frame by frame (NTSC, 30 frames per second). The encoding process used a code-book of 14 different discrete tool maneuvers in which the endoscopic tool was interacting with the tissue (Table 2). Each identified surgical tool/tissue interaction had a unique F/T pattern.... In PAGE 8: ... 1b. (a) Forces (b) Torques. Studying the magnitudes of F/T applied by R1 and ES during each step of the MIS procedures for the different tool/tissue interactions (Table 2) using the grand median analysis showed that the F/T magnitudes applied by these groups were significantly different (p < 0.05) and task dependent (Fig.... ..."

### Table 2. Performance of the hierarchical Markov model

2003

"... In PAGE 5: ...traces (i.e., rows 1 and 2 of Table 2) provide a reference value for performance evaluation of the hMM. It is obvious that the hMM incurs approximately 60% (Table 2, column I, rows 3 and 4) overhead for the inter-arrival-rate and, therefore, is rendering unsatisfactory performance. The burst-length random variable usually takes on small values since most of the bits are not corrupted during transmission and, hence, result in small (bit error) bursts.... In PAGE 5: ... Therefore, it is important to quantify the hMM burst-length performance with respect to the source-based traces. It is obvious that for the burst-length random variable, the ENK distance between the hMM- and source-based traces (Table 2, column B, rows 3 and 4) is much larger as opposed to the ENK between two source-based traces (Table 2, column B, rows 1 and 2). We conclude that although the hMM performs adequately in characterizing hybrid (i.... In PAGE 7: ... Table 6 enumerates the performance of the HMM. Comparing the I column of Table 2 (rows 3 and 4) with Table 6 outlines that the HMM shows clear improvement in the inter-arrival-rate performance, for instance, 40.33% as opposed to 58.72% for the hMM. However, the ENK for the burst-length random variable in the HMM case (Table 6, column B) is orders of magnitude greater than the respective ENK for the hMM traces (Table 2, column B, rows 3 and 4). Hence we conclude that, while the HMM improves the modeling of good bursts (when compared to the hMM), the hidden Markov model cannot approximate the bad bursts adequately.... ..."

Cited by 11

### Table 1. Comparison of the factorial HMM on four problems of varying size. The negative log likelihood for the training and test set, plus or minus one standard deviation, is shown for each problem size and algorithm, measured in bits per observation (log likelihood in bits divided by NT) relative to the log likelihood under the true generative model for that data set. True is the true generative model (the log likelihood per symbol is defined to be zero for this model by our measure); HMM is the hidden Markov model with K^M states; Exact is the factorial HMM trained using an exact E step; Gibbs is the factorial HMM trained using Gibbs sampling; CFVA is the factorial HMM trained using the completely factorized variational approximation; SVA is the factorial HMM trained using the structured variational approximation. M K Algorithm Training Set Test Set

1997

"... In PAGE 14: ... This provides a measure of how well the model generalizes to a novel observation sequence from the same distribution as the training data. Results averaged over 15 runs for each algorithm on each of the four problem sizes (a total of 300 runs) are presented in Table 1. Even for the smallest problem size (M = 3 and K = 2), the standard HMM with K^M states suffers from overfitting: the test set log likelihood is significantly worse than the training set log likelihood.... ..."

Cited by 279
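The "HMM with K^M states" baseline above reflects a standard equivalence: a factorial HMM with M independent chains of K states each can be flattened into an ordinary HMM over the K^M joint states, whose transition matrix is the Kronecker product of the per-chain transition matrices. A minimal sketch, with random illustrative transition matrices (the paper's actual parameters are not given in the snippet):

```python
import numpy as np

M, K = 3, 2  # smallest problem size mentioned in the snippet

rng = np.random.default_rng(0)

# One random row-stochastic K x K transition matrix per chain (illustrative only).
chains = []
for _ in range(M):
    P = rng.random((K, K))
    chains.append(P / P.sum(axis=1, keepdims=True))

# Flatten: the joint transition matrix over K**M states is the Kronecker
# product of the independent per-chain matrices.
A = chains[0]
for P in chains[1:]:
    A = np.kron(A, P)
```

This makes the overfitting result intuitive: the flat HMM has O(K^(2M)) free transition parameters, while the factorial HMM has only M·K² of them.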


### Table 6: Multiple Fault Diagnosis - Part 2

"... the diagnosis process, e.g. the non-intermittency of faults assumption or the single fault assumption. ..."

1993

Cited by 9

### Table 1. Observations of a Hidden Markov Model for a meeting of 4 individuals

2005

"... In PAGE 2: ... One observation is a vector containing a binary value (speaking, not speaking) for each individual that is recorded. This vector is transformed to a 1-dimensional discrete code used as input for the HMM (see Table 1). The automatic speech detector has a sampling rate of 62.... ..."

Cited by 5
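The transformation described above (a binary speaking/not-speaking vector for 4 individuals mapped to one discrete code) can be sketched by reading the vector as the bits of an integer, giving 2^4 = 16 possible observation symbols. The exact code assignment in the paper's Table 1 is not reproduced in the snippet, so this particular mapping is an assumption for illustration:

```python
def encode(speaking):
    """Map a binary vector like [1, 0, 0, 1] to a single symbol in {0, ..., 15}.

    Each entry is 1 if that individual is speaking, 0 otherwise; the vector
    is interpreted as a big-endian bit string. This mapping is hypothetical,
    chosen only to illustrate the 1-dimensional discrete encoding.
    """
    code = 0
    for bit in speaking:
        code = (code << 1) | int(bit)
    return code

# Example: individuals 1 and 4 speaking, 2 and 3 silent.
encode([1, 0, 0, 1])  # -> 9
```

Any bijection from the 16 binary vectors to the symbols {0, ..., 15} would serve equally well as HMM input; only consistency across the recording matters.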
