### Table 1: Decision Tree Accuracy

"... In PAGE 7: ... In practice, this will lead to the utility selector selecting the wrong characteristic utility function. Table 1 summarizes the accuracy of the decision tree for each of the 17 utility classes obtained during the simulation experiments. The overall classification accuracy... ..."

### Table 3. AUC and EC results for pruned and unpruned decision trees.

"... In PAGE 4: ... 2(a)) and the EC (Fig. 2(b)) graphs for the pima data set and pruned trees (see Table 3). The AUC for the ROC graph is 81.... In PAGE 5: ...algorithm to induce decision trees [12]. Firstly, we ran C4.5 over the data sets and calculated the AUC and EC for pruned (default parameter settings) and unpruned trees induced for each data set using 10-fold stratified cross-validation. Table 3 summarizes these results, reporting mean values and their respective standard deviations. It should be observed that for two data sets, Sonar and Glass, C4.... In PAGE 6: ... As skewed class distributions are more likely to include rare or exceptional cases, it is desirable for the induced concepts to cover these cases, even if they can only be covered by augmenting the number of small disjuncts in a concept. The Table 3 results indicate that the decision not to prune the decision trees systematically increases the AUC values. For all data sets in which the algorithm was able to prune the induced trees, there is an increase in the AUC values.... In PAGE 8: ... This occurs when no Tomek links, or just a few of them, are found in the data sets. Table 6 shows a ranking of the AUC and EC results obtained in all experiments for unpruned decision trees, where O indicates the original data set (Table 3), R and S stand respectively for Random and Smote over-sampling (Table 4), while S+E and S+T stand for Smote + ENN and Smote + Tomek (Table 5). p 1 indicates that the method is ranked among the best and p 2 among the second best for the corresponding data set.... ..."
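The pruned-versus-unpruned AUC comparison described in the snippet above can be sketched as follows. This is a minimal stand-in, not the paper's setup: scikit-learn's `DecisionTreeClassifier` with cost-complexity pruning (`ccp_alpha`) substitutes for C4.5 and its default pruning, and the built-in breast-cancer data set is a placeholder for the paper's benchmark data sets.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder data set; the paper used UCI sets such as Pima, Sonar, Glass.
X, y = load_breast_cancer(return_X_y=True)

# 10-fold stratified cross-validation, as in the snippet.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Unpruned tree (grown to purity, ccp_alpha=0) vs. a pruned one.
# C4.5 itself is not in scikit-learn; cost-complexity pruning
# stands in for C4.5's pruning here.
for name, alpha in [("unpruned", 0.0), ("pruned", 0.01)]:
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
    auc = cross_val_score(tree, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```

Reporting the mean and standard deviation over the ten folds mirrors how Table 3 in the quoted paper summarizes its results.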

### Table 1 Performance of decision trees

2005

"... In PAGE 14: ... First, a classifier was constructed using the training data; the test data was then classified as normal or attack with the constructed classifier. Table 1 summarizes the results on the test data. It shows the training and testing times of the classifier in seconds for each of the five classes, together with their accuracy.... ..."
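Measuring training time, testing time, and accuracy as in the snippet above can be sketched generically. The data here is synthetic and hypothetical (binary labels standing in for "normal" vs. "attack"); the original paper's five-class intrusion data is not reproduced.

```python
import time

from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for the intrusion data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(random_state=0)

t0 = time.perf_counter()
clf.fit(X_tr, y_tr)                 # training time
train_s = time.perf_counter() - t0

t0 = time.perf_counter()
pred = clf.predict(X_te)            # testing time
test_s = time.perf_counter() - t0

print(f"train {train_s:.3f}s, test {test_s:.3f}s, "
      f"accuracy {accuracy_score(y_te, pred):.3f}")
```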

### Table 4. Known Concepts and Implementations of Thermal Probe Level Sensors - TPLSs

2005

"... In PAGE 26: ... Starting from these basic points, many possible routes and variants are available to accomplish the same goal, and the concepts differ quite a bit. Table 4 highlights these differences. In addition to the techniques previously summarized, there is another concept described in Wenran et al. [25], but because of missing details and incomplete figures it could not be included in the table.... In PAGE 27: ... As shown in Table 4, all TPLSs give discrete level position indications. Reference [23], however, mentions the development of a new TRICOTH variant in which, for the region the level indication falls in, they plan to use a combination of the DTC profiles that yield the region determination to recover a continuous level indication within that region.... ..."

### Table 5. Prediction using the decision tree

"... In PAGE 7: ...5. Predicting the Classes The results are illustrated in Table5 . As can be seen in the table, the results are somewhat different from those obtained using logistic regression.... ..."

### Table 2: Mean Squared Error on Deterministic Stream Data (columns: Decision Trees, Naive Bayes, Logistic Regression, Changes)

2007

"... In PAGE 8: ...5, Naive Bayes, and logistic regression as base learners. The results are shown in Table 2 (deterministic) and Table 3 (stochastic), respectively. It is clearly seen that, no matter how the concept changes, our proposed method (SE) greatly improves the mean squared error of the positive class in both deterministic and stochastic data streams.... ..."

Cited by 2
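The metric in the snippet above, the mean squared error of the positive class on a drifting stream, can be sketched in isolation. This is a hypothetical toy stream with a deterministic concept change; it illustrates only the metric, not the paper's SE ensemble method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def batch(mean):
    """One stream chunk: positives drawn around `mean`, negatives around 0."""
    Xp = rng.normal(mean, 1.0, size=(100, 2))
    Xn = rng.normal(0.0, 1.0, size=(100, 2))
    return np.vstack([Xp, Xn]), np.array([1] * 100 + [0] * 100)

# Deterministic concept change: the positive class drifts between chunks.
X_old, y_old = batch(mean=2.0)
model = LogisticRegression().fit(X_old, y_old)

X_new, y_new = batch(mean=3.0)           # drifted chunk
p = model.predict_proba(X_new)[:, 1]     # estimated P(+) per example
pos = y_new == 1
mse_pos = np.mean((1.0 - p[pos]) ** 2)   # MSE of the positive class
print(f"MSE on positive class after drift: {mse_pos:.3f}")
```

A stream method like the one quoted would update or re-weight the model on each new chunk to drive this error back down after a change.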

### Table 3: Mean Squared Error on Stochastic Stream Data (columns: Decision Trees, Naive Bayes, Logistic Regression, Changes)

2007

"... In PAGE 8: ...5, Naive Bayes, and logistic regression as base learners. The results are shown in Table 2 (deterministic) and Table 3 (stochastic), respectively. It is clearly seen that, no matter how the concept changes, our proposed method (SE) greatly improves the mean squared error of the positive class in both deterministic and stochastic data streams.... ..."

Cited by 2

### Table 7 DA classification using prosodic decision trees (chance = 35%).

"... In PAGE 17: ... For the purpose of model integration, the likelihoods of the Other class were assigned to all DA types comprised by that class. As shown in Table 7, the tree with the dialogue grammar performs significantly better than chance on the raw DA distribution, although not as well as the word-based methods (cf. Table 6).... ..."
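The "better than chance" comparison in the snippet above has a simple generic form: the chance rate is the frequency of the most common class, and a classifier is informative only if it beats that baseline. A minimal sketch with hypothetical three-class data (not the paper's prosodic features or DA labels), where the class weights are skewed so chance sits around 60%:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical 3-class stand-in for DA labels; the weights skew the
# class priors so the majority-class "chance" baseline is about 0.6.
X, y = make_classification(n_samples=3000, n_features=10, n_informative=5,
                           n_classes=3, weights=[0.6, 0.25, 0.15],
                           random_state=0)

chance = DummyClassifier(strategy="most_frequent")  # always predicts the mode
tree = DecisionTreeClassifier(max_depth=5, random_state=0)

chance_acc = cross_val_score(chance, X, y, cv=5).mean()
tree_acc = cross_val_score(tree, X, y, cv=5).mean()
print(f"chance = {chance_acc:.2f}, tree = {tree_acc:.2f}")
```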