### Table 2. Test results of classification of 8 vowel data clusters projected from 12 dimensions to 2D and 3D.

### Table 1. Comparison of observed compositions with possibilistic, extended possibilistic, and probabilistic models (all ps < .0005).

"... In PAGE 3: ...ramework (e.g., negative predicted values). In order to reinforce the case against Possibility Theory, those measures were withdrawn from computation of the agreement between data and the probabilistic model. Results show that the probabilistic and possibilistic models both fitted the data (Table 1 ... ..."

### Table 1: Comparing clustering alternatives for a vertically clustered relation (column pairs: Projection, Selection, Join)

### Table 2: Cascaded Chunking model vs Probabilistic model

"... In PAGE 5: ... 5.2 Experimental Results The results for the new cascaded chunking model as well as for the previous probabilistic model based on SVMs (Kudo and Matsumoto, 2000) are summarized in Table 2. We could not run the experiments for the probabilistic model on the large dataset, since the data size is too large for our current SVM learning program to terminate in a realistic time period.... In PAGE 5: ...3 Probabilistic model vs. Cascaded Chunking model As can be seen in Table 2, the cascaded chunking model is more accurate, efficient, and scalable than the probabilistic model. It is difficult to apply the probabilistic model to the large data set, since it takes no less than 336 hours (2 weeks) to carry out the experiments even with the standard data set, and SVMs require quadratic or greater computational cost in the number of training examples.... ..."

### Table 1. Probabilistic User Model

2006

"... In PAGE 4: ... In these studies, users marked 85% of formula cells on average when testing and debugging spreadsheets, often placing check-marks on cells, and rarely placing X-marks on cells. Of the cells that users marked, users in our earlier studies made mistakes according to the probabilities given in Table 1, so for our study, we simulated user behavior based on these probabilities. The bold numbers in Table 1 highlight false positive (check-mark on an incorrect value) and false negative (X-mark on a correct value) oracle mistakes.... ..."

Cited by 2
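The excerpt above describes simulating users whose check-marks and X-marks go wrong with the probabilities given in the paper's Table 1. A minimal sketch of that kind of simulation (the mistake rates below are placeholders, not the paper's actual Table 1 values):

```python
import random

# Hypothetical oracle-mistake rates; the real values are the bold
# entries of the paper's Table 1, which this excerpt does not reproduce.
P_FALSE_POSITIVE = 0.05  # check-mark placed on an incorrect value
P_FALSE_NEGATIVE = 0.10  # X-mark placed on a correct value

def simulated_mark(cell_is_correct: bool, rng: random.Random) -> str:
    """Return the mark a simulated user places on one formula cell."""
    if cell_is_correct:
        # Correct value: usually a check, occasionally a mistaken X-mark.
        return "x" if rng.random() < P_FALSE_NEGATIVE else "check"
    # Incorrect value: usually an X, occasionally a mistaken check-mark.
    return "check" if rng.random() < P_FALSE_POSITIVE else "x"

# Simulate marking 10,000 correct cells; the observed X-mark rate
# should settle near P_FALSE_NEGATIVE.
rng = random.Random(0)
marks = [simulated_mark(True, rng) for _ in range(10_000)]
fn_rate = marks.count("x") / len(marks)
print(round(fn_rate, 3))
```

Each cell mark is a single Bernoulli draw, so averaging over many simulated cells recovers the configured mistake probabilities.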

### Table 2: Computational Power of Deterministic and Probabilistic Discrete-Time Analog Neural Networks with the Saturated-Linear Activation Function.

"... In PAGE 17: ...4 depends on the descriptive complexity of their weights. The respective results are summarized in Table 2 as presented by Siegelmann (1994), including the comparison with the probabilistic recurrent networks discussed in section 2.... In PAGE 24: ..., 2000). This implies that the results on the computational power of deterministic asymmetric networks summarized in Table 2 are still valid for Hopfield nets with an external oscillator of a certain type. Especially for rational weights, these devices are Turing universal.... In PAGE 26: ...4 (Siegelmann, 1999b). The results are summarized and compared to the corresponding deterministic models in Table 2. Thus, for integer weights, the results coincide with those for deterministic networks (see section 2.... In PAGE 39: ... Figure 2). Furthermore, Table 2, summarizing the results concerning the computational power of recurrent neural networks, shows that the only difference between deterministic and probabilistic models is in polynomial-time computations with rational weights, which are characterized by the corresponding Turing complexity classes P and BPP. This means that from the computational power point of view, stochasticity plays a similar role in neural networks as in conventional Turing computations.... ..."
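The table in question concerns discrete-time analog recurrent networks with the saturated-linear activation σ(x) = min(1, max(0, x)). As a minimal illustration of one synchronous update of such a deterministic network (the two-neuron weight matrix here is invented for the example):

```python
def sat_linear(x: float) -> float:
    """Saturated-linear activation: identity on [0, 1], clipped outside."""
    return max(0.0, min(1.0, x))

def step(state, weights, inputs):
    """One synchronous update of a discrete-time analog recurrent net.

    state: current neuron activations; weights[i][j]: weight from
    neuron j to neuron i; inputs[i]: external input to neuron i.
    """
    return [
        sat_linear(sum(w * s for w, s in zip(row, state)) + u)
        for row, u in zip(weights, inputs)
    ]

# Tiny example: a 2-neuron net whose weight matrix swaps the activations.
state = [0.5, 0.2]
weights = [[0.0, 1.0], [1.0, 0.0]]
print(step(state, weights, [0.0, 0.0]))  # [0.2, 0.5]
```

The cited results are about what such iterated updates can compute as the weights range over integers, rationals, and reals; the update rule itself stays this simple.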


### Table 8. Classification of probabilistic structure classes (second phase data)

2006

"... In PAGE 16: ...hem hard), the Mneimneh-Sakallah suite (s27, 298, ..., a total of 52 instances of which 40 are hard), and the Katz suite (jmc quant, 20 instances, of which 13 are hard). In Table 8 we show the classification of the probabilistic structure classes included in the evaluation test set according to the solvers admitted to the second phase. Table 8 is arranged similarly to Table 7, where the data about Nested Counterfactuals has been summarised in a single entry, QHorn instances are divided into Horn ("Horn") and renamable Horn (renHorn) families, and Robot instances are presented split into four families corresponding to the number of obstacles known in advance.... In PAGE 16: ... Model A instances are split into 24 classes corresponding to six different values of the alternation depth (from 0 to 5 alternations), each family comprised of instances with a different number of variables (20, 40, 80, 160). According to the data summarised in Table 8, this part of the evaluation second phase consisted of 2640 instances, of which 2029 have been solved, 1126 declared satisfiable and 903 declared unsatisfiable, resulting in 126 easy, 1687 medium, 216 medium-hard, and 611 hard instances. These results indicate that overall the selected probabilistic classes are within the capabilities of current state-of-the-art QBF solvers, but in some cases they are as challenging as structured ones.... ..."

Cited by 3

### Table 2: Classification accuracy of 7-EPPC projected clusters (70% training data)

2004

"... In PAGE 8: ... They are the projected clusters with 17 principal components as projected dimensions obtained by ORCLUS, and the projected clusters with projected dimensions generated by a set of seven EPs with EPPC. Their classification performance with respect to different combinations of initial and final cluster numbers is compared in Table 2. The bold entries in Table 2 are the experimental results where the 7-EPPCs give a better performance when compared with the ORCLUS projected clusters with 17 projected dimensions in Table 1. We found that the performance of the 7-EPPCs obtained from our proposed EPPC algorithm is slightly better than the representative ORCLUS projected clusters.... ..."

Cited by 3
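Both ORCLUS and EPPC evaluate each cluster only in its own subspace of "projected dimensions". A minimal sketch of assigning a point to the nearest cluster centroid within such a subspace (all coordinates and the choice of dimensions below are invented for illustration):

```python
def project(point, dims):
    """Keep only the chosen projected dimensions of a full-space point."""
    return [point[d] for d in dims]

def sq_dist(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_projected_centroid(point, centroids, dims):
    """Index of the centroid closest to `point` in the projected subspace."""
    return min(
        range(len(centroids)),
        key=lambda i: sq_dist(project(point, dims), project(centroids[i], dims)),
    )

centroids = [[0.0, 0.0, 9.0], [1.0, 1.0, -9.0]]
# The third coordinate is ignored: only dimensions 0 and 1 are projected,
# so the point is assigned by its first two coordinates alone.
print(nearest_projected_centroid([0.9, 0.8, 100.0], centroids, dims=[0, 1]))  # 1
```

Classification accuracy of the resulting clusters, as reported in the excerpt, is then simply the fraction of points whose assigned cluster matches their true class label.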

### Table 5. A decomposable probabilistic model is in-

in A Simple Approach to Building Ensembles of Naive Bayesian Classifiers for Word Sense Disambiguation