### Table 5. Learning techniques

2003

"... In PAGE 22: ... underlying technologies, e.g., support vector machines [Cristianini and Shawe-Taylor, 2000]; improved links between reinforcement learning and stochastic control theory [Bertsekas and Tsitsiklis, 1996]; advances in planning and learning methods for stochastic environments [Littman, 1996; Parr, 1998]; and improved theoretical models of simple genetic algorithms [Vose, 1999]. Major types of learning techniques are summarized in Table 5 [Zimmerman and Kambhampati, 2001; Nordlander, 2001]. ... ..."

Cited by 2

### Table 1: Comparison of AUC and AUC-50 values for different learning methods evaluated

"... In PAGE 6: ... This procedure was repeated ten times using diverse sub-samples of negative pairs. The results of the estimation of AUC and AUC-50 scores for the OCC performance evaluation are shown in Table 1, where the mean and standard deviation are given. Table 1: Comparison of AUC and AUC-50 values for different learning methods evaluated... In PAGE 14: ... The kernel type and its respective parameters can be varied in this implementation in order to obtain the optimal performance conditions. S5 - List of potential new PPI targets. In this supplementary section we list the top 50 new potential PPI targets predicted with the Parzen density OCC approach (Table 1). This table includes the following information: Column 1 enumerates the 50 examples; columns 2 and 3 give the systematic ORF names (i.... In PAGE 15: ...imbio.de/ Table 1: List of 50 highly ranked new potential PPI targets predicted by the Parzen OCC method No ID-1 ID-2 P 1 YDR025W YLR029C 0.93420 2 YOL039W YOL139C 0.... ..."
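The snippet above describes estimating AUC for a Parzen-density one-class classifier (OCC) by repeatedly sub-sampling negative pairs and reporting mean and standard deviation. A minimal sketch of that protocol, with illustrative synthetic data and an assumed Gaussian kernel bandwidth (not the paper's actual settings):

```python
# Hedged sketch: AUC estimation for a Parzen-density one-class classifier
# with ten sub-samples of negatives, mirroring the protocol in the snippet.
# The data, bandwidth, and sample sizes are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
pos = rng.normal(0.0, 1.0, size=(200, 5))        # positive examples
neg_pool = rng.normal(2.0, 1.0, size=(2000, 5))  # large pool of negatives

# Parzen window density estimate fitted on positives only (one-class setup).
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(pos)

aucs = []
for _ in range(10):  # ten diverse negative sub-samples, as in the snippet
    neg = neg_pool[rng.choice(len(neg_pool), size=200, replace=False)]
    scores = kde.score_samples(np.vstack([pos, neg]))  # log-density as score
    labels = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
    aucs.append(roc_auc_score(labels, scores))

print(f"AUC: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```

Reporting mean and standard deviation over the sub-samples, as the authors do, guards against results that depend on one particular choice of negatives.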

### Table 2: General loss matrix for two-class learning problems.

1999

"... In PAGE 4: ... To incorporate classification costs, we can consider manipulating these weights. For example, consider the general two-class loss matrix shown in Table 2. We can place a weight of cost1 on every training example in class 1 and a weight of cost2 on every example in class 2.... ..."

Cited by 5
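The weighting scheme described in the snippet (weight cost1 on every class-1 example, cost2 on every class-2 example) can be sketched with any learner that accepts per-example weights. The data and cost values below are illustrative assumptions:

```python
# Minimal sketch of cost-sensitive learning via example weighting, as the
# snippet describes: weight cost1 on class-1 examples, cost2 on class-2.
# Data and cost values are illustrative, not from the cited paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)

cost1, cost2 = 5.0, 1.0                  # misclassification cost per class
weights = np.where(y == 1, cost1, cost2)

# The heavier weight on class 1 shifts the decision boundary so that
# class-1 errors become more expensive to make.
clf = LogisticRegression().fit(X, y, sample_weight=weights)
```

Compared with an unweighted fit, the weighted model predicts class 1 more readily, trading class-2 accuracy for fewer costly class-1 mistakes.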


### Table 2: Performance of CSP with Look-back General No Learning

1999

"... In PAGE 5: ... We have obtained the best results with rel-SAT, and it is indeed the only algorithm which solves 3 rounds of DES within a few minutes. The results are reported in Table 2. SATO can also solve 3 rounds of DES in a few minutes of actual CPU time, but requires over 24 hours of "wall-clock" time due to the memory requirements of its data structure: in practice the time is spent simply swapping SATO data back and forth between memory and disk.... ..."

Cited by 4

### Table 7. Classification AUC on the Testing Set (%)

2005

"... In PAGE 31: ....01,0.05 Evaluation Criteria To evaluate the classification results of the generated models, we use both the AUC and prediction accuracy on the testing set. These results are shown in Table 6 and Table 7. To compare our algorithm with the commonly used classifiers, we use AUC and prediction accuracy on the testing set, as well as the size of reduction in the set of variables.... In PAGE 32: ... Table 5. Characteristics of the PCA Data Set Task Variables #Samples Variable Types Target Variable Type Prostate Cancer 779 326 discretized 5 binary Diagnosis (cancer/normal) Experimental Results and Analysis Table 6 and Table 7 present the classification accuracy and AUC for different combinations of the experimental parameters (Comparisons with other methods are shown in Section 7). The results are averaged over cross-validation runs.... In PAGE 32: ...9 (1.9) As shown in Table 6 and Table 7, the optimal parameter configurations for best prediction accuracy (259/67, AUC or Accuracy, Structure I, 0.05) is different from the optimal parameter configurations for best AUC (293/33, AUC or Accuracy, Structure I, 0.... ..."

### Table 1: The General Multilevel Model and Its Sub-Models

2002

"... In PAGE 13: ... The general model contains a wide variety of sub-models that are well-known in political science. Table 1 lists these models with the components of equation [7] that are required to derive them. Although all of these models are familiar, we shall spend some... ..."

Cited by 2

### Table 7: Predictive results from the three ANNs optimized by AUC:acc, AUC, and accuracy.

2007

"... In PAGE 5: ... Each cell indicates the number of win-draw-loss. AUC:acc ANNAUC ANNacc ANNAUC:acc 8-11-1 12-8-0 ANNAUC 10-9-1 AUC ANNAUC ANNacc ANNAUC:acc 8-11-1 12-8-0 ANNAUC 10-9-1 acc ANNAUC ANNacc ANNAUC:acc 8-12-0 11-8-1 ANNAUC 11-8-1 Note that in Table 7 the predictive results of different models on each dataset can only be compared vertically, because it is not meaningful to compare results horizontally, as values of accuracy, AUC, and AUC:acc are not comparable. We perform a paired t-test with the 95% confidence level on each of the 20 datasets comparing the models of ANNAUC:acc, ANNAUC and ANNacc, measured by AUC:acc, AUC and accuracy, respectively.... In PAGE 5: ... The data in each cell indicates the win-draw-loss number of datasets that the model in the corresponding row over the model in the corresponding column. Several interesting conclusions can be drawn from the results in Table 7 and 8. Clearly, the result shows that the ANNAUC:acc model performs significantly better than ANNAUC and ANNacc, and ANNAUC performs significantly better than ANNacc in terms of the three different measures.... ..."

Cited by 2
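The evaluation in the last snippet pairs per-dataset win-draw-loss counts with a paired t-test at the 95% confidence level over 20 datasets. A small sketch of that comparison, using illustrative stand-in scores rather than the paper's data:

```python
# Hedged sketch: paired t-test over per-dataset AUC scores of two models,
# plus a win-draw-loss tally, mirroring the evaluation in the snippet.
# The score arrays are illustrative stand-ins, not the paper's results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
auc_model_a = rng.uniform(0.80, 0.95, size=20)               # e.g. ANN tuned on AUC:acc
auc_model_b = auc_model_a - rng.uniform(0.0, 0.05, size=20)  # a slightly worse model

t_stat, p_value = stats.ttest_rel(auc_model_a, auc_model_b)

wins = int(np.sum(auc_model_a > auc_model_b))
draws = int(np.sum(auc_model_a == auc_model_b))
losses = 20 - wins - draws
print(f"win-draw-loss: {wins}-{draws}-{losses}, p = {p_value:.4f}")
```

The pairing matters: because both models are scored on the same 20 datasets, `ttest_rel` tests the per-dataset differences directly, which is more sensitive than an unpaired test when dataset difficulty varies.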