### Table 2: Comparison of the multiple Markov chain simulation results for the proposed method (PSSM+SOV) and the standard Gibbs sampling method. An alignment is counted as correct when the number of correctly aligned sequences is equal to or larger than the cutoff in the second column. The rates of correct alignments are obtained from multiple Markov chain simulations: 200 Markov chains for the proposed method (PSSM+SOV) and 100 Markov chains for the standard Gibbs sampling (PSSM alone). The numbers of Markov chains that find the correct alignments are reported in the third and fourth columns.

2006

"... In PAGE 16: ... For this type of data, the predicted secondary structure enhances the motif pattern, therefore the true motif is easier to be identi ed under the new model. As demonstrated in Table2 , the proposed alignment method with secondary structure information nds the true motif of 1r69 much more frequently (3.85 more times) than the standard Gibbs sampling method.... In PAGE 16: ...ore frequently (3.85 more times) than the standard Gibbs sampling method. [Figure 3 about here.] Table2 shows comparisons of our proposed model with the standard Gibbs sampling method. For each data set, the alignments obtained by both methods are compared to the structural alignments in BAliBASE.... In PAGE 16: ... A good alignment is de ned when a large number of sequences out of the total number in each data set are correctly aligned. The criteria of determining good alignments are listed in the second column in Table2 . Multiple Markov chain simulations are used for the proposed method (PSSM+SOV) and the standard Gibbs sampling, where the proposed method runs 200 Markov chains, 50 runs at each of four =0:5; 1; 1:5; 2, and the standard Gibbs sampling runs 100 Markov chains.... In PAGE 16: ...5%. [ Table2 about here.] Further comparisons of the proposed method (PSSM+SOV) with ClustalW, Dialign, and PRRP are displayed in Table 3.... ..."

### Table 1: Markov chain based methods for PMS

1999

"... In PAGE 3: ...1 Literature survey In this section we present and compare the Markov chain-based approaches [2, 3, 5, 10, 16, 21, 22] and the one based on SAN in [4] for the dependability model- ing and analysis of PMS. The most relevant aspects of the comparison are summa- rized in Table1 .A key point that impacts most of the other aspects is represented by the sin- gle/separate modeling of the phases: it affects the reusability/flexibility of previously built models, the modeling of dependencies among phases and the complexity of the solution steps.... ..."

Cited by 11

### Table 1. Maximized log-likelihood and BIC for the three nonparametric Markov chain models.

"... In PAGE 9: ... The number of observations is a17 a4 a3 a19 a21 a17a9a19a9a19a41a4a7a6a14a9a12a8a86a17a15a12 a3 . Table1 displays the maximized log- likelihood a18a21a20a23a22 a38 a9 a75 , a78 a4 a21a27a8a15a6 a8a11a10 and the corresponding BIC for the four geographic regions. Table 2 shows the resulting values of the likelihood ratio test statistic and the corresponding p-values.... In PAGE 20: ... Evidently, when performance is gauged in terms of the area under the ROC curve, the models perform better in the North (both Northeast and Northern tor- nado alley) than in the South. This is the same pattern displayed according to the BIC criterion ( Table1 ). In terms of the reliability MSE, the model performs well in SE and NT but its performance in NE is only marginal.... In PAGE 27: ... Table1 . Maximized log-likelihood and BIC for the three nonparametric Markov chain models.... ..."

Cited by 1

### TABLE II THE BERNOULLI AND 2-STATE MARKOV CHAIN MODEL PARAMETERS

1999

Cited by 178

### Table 4.3: Markov chain model results for 68020 workload

1989

Cited by 2

### Table 4.7: Markov chain model results for 88100 workload

1989

Cited by 2

### Table 5: Values of the interaction energy parameters for different polymer blend systems obtained by using the GA-Markov chain modeling technique.

"... In PAGE 15: ... After this the next 1000 generations were saved (every individual in these generations had g lt; 0) for use in forming the transition matrix described below. Table5 gives the results. In order to be sure that running the GA for more time would give an additional improvement we evaluated the standard deviation of g for the last 1000 generations and computed the variance.... In PAGE 15: ... We found the variance was small if the GA did reach a steady state. The variance values are also listed in Table5 . From the variance values we see that there is a very small fluctuation in the g values.... In PAGE 16: ...16 from the best fit individual in each of the 1000 generations sampled after a steady state was reached and before the Markov Chain method was applied. The error bars in this case were over four orders of magnitude higher than that shown in Table5 . This was just not acceptable.... In PAGE 24: ...bove. The exponent n was set also to 1000. From the final vector obtained (q(1000)) the final 1000 individual generation is reconstructed and searched for the individual with the best fitness value. Table5 gives these results. Error bars are from five separate simulations starting with a new random generation in the GA modeling phase.... ..."

### Table 4: Results of IMM based classifiers on different orders of Markov chain.

"... In PAGE 10: ...5 Performance of the Interpolated Markov Models and their Feature Spaces The third set of experiments is focused on evaluating the performance of the interpolated Markov models and the performance of SVM when the various sequences are represented in the feature space used by these models. The classification accuracy achieved by these techniques is shown in Table4 . This table contain three sets of experiments.... In PAGE 10: ...3 shows, this is done automatically by SVM. From the results shown in Table4 we can see that the SVM classifier outperforms the IMM based techniques for the majority of the datasets. The only class of datasets that IMM does better than SVM are the ones derived from peptidias (P-CM, P-CS, P-MS), in which the higher order IMM models do considerably better than the corresponding SVM models.... ..."

### Table 3. Number of rounds till convergence: Markov chain distributions

2000

"... In PAGE 29: ...2 3 4 6 7 8 910 35 10 15 20 25 30 5 5 1 2 3 4 6 7 8 9 35 10 15 25 30 5 5 1 15 20 probability of occurence % % probability of occurence round of convergence 30 33 20 15 round of convergence 10 average Figure 6. Complete distribution for test instance 1 and 4 of Table3 : for pattern axxbxxxax on the right the uniform Markov chain over the 2 letter alphabet, on the left the distribution given in Figure 3 for 3 letters. shown in Table 4 shows the expected behavior when comparing it to the case without empty substitutions.... ..."

Cited by 3