### Table 2: Average AUC and R50 scores. LR: Logistic regression; NB: Naive Bayes; RF: Random Forest; SVM: Support Vector Machine; ME: mixture of feature experts.

"... In PAGE 3: ...ere, we use 50 as a cut-off, i.e. R50 is a partial AUC score that measures the area under the ROC curve until reaching 50 negative predictions. Table 2 lists the AUC and R50 scores of the six methods. As can be seen, the ME method achieves the best values for both criteria.... ..."
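The R50 measure described in this snippet is a truncated ROC computation: rank all examples by decreasing score and accumulate the number of positives ranked above each negative until 50 negatives have been seen. A minimal sketch, assuming binary 0/1 labels; the function name and the normalization (perfect ranking scores 1.0) are our own and may differ from the cited paper:

```python
import numpy as np

def r50(scores, labels, max_negatives=50):
    """Truncated-ROC score: area under the ROC curve until `max_negatives`
    negatives have been ranked, normalized so a perfect ranking scores 1.0.
    Illustrative sketch only; the paper's exact normalization may differ."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(labels)[order]
    total_pos = int((y == 1).sum())
    area, negatives_seen, positives_seen = 0, 0, 0
    for label in y:
        if label == 1:
            positives_seen += 1
        else:
            negatives_seen += 1
            area += positives_seen  # positives ranked above this negative
            if negatives_seen == max_negatives:
                break
    denom = negatives_seen * total_pos
    return area / denom if denom else 0.0
```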

### Table 7: Effectiveness of sparse Bayesian models compared with that of standard text categorization approaches. The results for SVM (support vector machine), kNN (k-nearest neighbor), and Rocchio classifiers are as reported by Lewis et al. (2003). They use a different text representation from ours, so the comparison is not purely of learning algorithms. F1 values are averages over 101 Topics categories on the RCV1-v2 collection.

2003

Cited by 4

### Table 8: Other classifiers, ANU.

2003

"... In PAGE 7: ...5 we use support vector machines (SVM) and k-nearest neighbor (kNN). Results are in Table 8 and Table 9. None of the other classifiers provides a significant advantage over C4.... ..."

Cited by 12

### Table 3: Mean test error rates (standard errors) over 50 simulations, from various cancer microarray data sets. SVM (OVO) is the support vector machine, using the one-versus-one approach; each pairwise classifier uses a large value for the cost parameter, to yield the maximal margin classifier; MT are the margin tree methods, with different tree-building strategies.

"... In PAGE 8: ... The sampling was done in a stratified way to preserve the balance of class sizes. This entire process was repeated 50 times, and the mean and standard errors of the test set misclassification rates are shown in Table 3. The nearest centroid method is as described in... ..."
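The protocol in this snippet (repeated stratified train/test splits, then mean and standard error of the test misclassification rate) can be sketched in a few lines. Everything below is illustrative: the toy data stands in for a microarray matrix, and a hand-rolled nearest-centroid classifier is used since the snippet mentions that method:

```python
import numpy as np

def stratified_split(y, test_frac, rng):
    """Indices for one stratified train/test split: each class contributes
    the same fraction of its examples to the test set."""
    test_idx = []
    for c in np.unique(y):
        idx = rng.permutation(np.where(y == c)[0])
        n_test = max(1, int(round(test_frac * len(idx))))
        test_idx.extend(idx[:n_test])
    test_mask = np.zeros(len(y), dtype=bool)
    test_mask[test_idx] = True
    return ~test_mask, test_mask

def nearest_centroid_error(X, y, train, test):
    """Misclassification rate of a nearest-centroid classifier."""
    classes = np.unique(y[train])
    centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in classes])
    d = ((X[test][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = classes[d.argmin(axis=1)]
    return float((pred != y[test]).mean())

rng = np.random.default_rng(0)
# Synthetic two-class data standing in for a cancer microarray data set.
X = np.vstack([rng.normal(0.0, 1.0, (60, 10)), rng.normal(1.5, 1.0, (60, 10))])
y = np.repeat([0, 1], 60)

errs = []
for _ in range(50):  # 50 repetitions, as in the protocol above
    train, test = stratified_split(y, 1 / 3, rng)
    errs.append(nearest_centroid_error(X, y, train, test))
errs = np.array(errs)
mean_err = errs.mean()
se = errs.std(ddof=1) / np.sqrt(len(errs))
```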

Cited by 1

### Table 7: Sparse Bayesian results compared to other methods reported in Lewis et al. (2003). SVM refers to Support Vector Machines; kNN refers to k-nearest neighbor. Results are for the RCV1-v2 collection, 101 Topics categories.

2003

"... In PAGE 16: ... We were thus able to increase the number of features for the logistic regression model up to 3,000. Table 7 shows average results and provides a comparison with several other methods reported in Lewis et al. (2003).... ..."

Cited by 4

### Table 4: The nearest neighbors of the 13 proteins misclassified by the design scoring function. The number of native protein support vectors among the top 3, 5, and 11 nearest neighbors (NNs) is listed. Except for protein 1bx7, the majority of the nearest neighbors of every misclassified protein are decoys.

"... In PAGE 13: ... We calculate the Euclidean distance of each of the 13 proteins from the 220 native proteins and 1,685 decoys that participate in the kernel design scoring function. The results are shown in Table 4, where the number of native proteins among the top 3, 5, and 11 nearest neighboring vectors to each failed protein is listed. Except for protein 1bx7, all misclassifications are due to native vectors being too close to decoys.... ..."
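The diagnostic this snippet describes (counting how many of a failed protein's k nearest neighbors, by Euclidean distance, are native rather than decoy vectors) reduces to a short computation. A sketch with synthetic data; the function name and feature layout are our own:

```python
import numpy as np

def native_counts(query, natives, decoys, ks=(3, 5, 11)):
    """For a query feature vector, count how many of its k nearest
    neighbors (Euclidean distance) come from the native set, for each k.
    Illustrative sketch of the diagnostic described above."""
    ref = np.vstack([natives, decoys])
    is_native = np.arange(len(ref)) < len(natives)
    d = np.linalg.norm(ref - query, axis=1)
    order = np.argsort(d)
    return {k: int(is_native[order[:k]].sum()) for k in ks}
```

For a misclassified protein, low counts at every k would indicate (as in the snippet) that decoys dominate its neighborhood.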

Cited by 5

### Table 1. Classification accuracies for the Lipschitz classifier and Support Vector Machine on ten 2-dimensional randomly generated test sets. Each test set contains 400 data points.

### Table 8.1. Classification results with different techniques: SVM (support vector machine), k-NN (k-nearest neighbour), PNN (probabilistic neural network). Columns: Classifier | Correct classification rate | Time for set-up | Time for classification.

1995

Cited by 1

### Table 3. Results of 10-fold cross-validations, according to emotions and types of classifiers.

"... In PAGE 11: ...the types of classifier: Naive Bayes, Random Forest, and Support Vector Machine. Table 3 contains the resulting precision and recall measures for each category and anti-category (e.g.... ..."
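Per-category precision and recall of the kind tabulated here reduce to simple counts over the pooled cross-validation predictions. A minimal, dependency-free sketch (function and label names are illustrative, not taken from the cited paper):

```python
def precision_recall(y_true, y_pred, positive):
    """Precision and recall for one category, treating `positive` as the
    target class and everything else as its complement."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Calling it once per emotion label, and once with the labels inverted, yields the per-category and anti-category figures the snippet refers to.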

### Table 1: Ten random words and their nearest neighbors.

1993

Cited by 40