### Table 2: K-Nearest Neighbors Precision and Recall

2004

"... In PAGE 5: ... For K-NN k = 4 gave the best results over a large range of k, and we expect this k would be ideal for stories of similar length. As shown in Table 2, despite its simplicity this algorithm performs fairly well. It is not surprising that features based primarily on word distributions such as LSA could correctly discriminate the non-poor from the poor rewritten stories.... ..."

Cited by 5

### Table 2: K-nearest-neighbor coding performance

1996

"... In PAGE 5: ... 3 Results 3.1 K-nearest-neighbor Results Table 2 shows k-nearest-neighbor performance on the five measures described above for the baseline, for the best document-... In PAGE 6: ...1.1 K-nearest-neighbor baseline accuracy The rows labeled Base in Table 2 show performance for the baseline condition. Average 11-point precision for full codes in the baseline condition is 37.... In PAGE 6: ... However, in all of the tuning experiments reported in this paper, we maximized average precision in the tuning set, since this is the only measure that summarizes the performance of the full ordering of codes. As can be seen in Table 2 in the row labeled Princ, this weighting scheme produced a 2.7% increase in average precision over the baseline, a 26.... In PAGE 6: ...1.3 Structured queries Table 2 also shows the results when the test document is converted into a query which is a weighted sum of sections. Formulating the query as a weighted sum with weights of 1, combined with a principal DX weight of 1.... ..."

Cited by 96
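The 1996 entry above describes turning a test document into a query that is a weighted sum of its sections, then pooling codes from the k nearest training documents. A minimal pure-Python sketch of that idea; the section names, weights, corpus, and code labels here are all invented for illustration:

```python
from collections import Counter

def section_query(sections, weights):
    """Build one query vector as a weighted sum of per-section term counts."""
    query = Counter()
    for name, text in sections.items():
        for term in text.lower().split():
            query[term] += weights.get(name, 1.0)
    return query

def knn_codes(query, corpus, k=2):
    """Rank training documents by dot-product score; pool codes of the top k."""
    scored = sorted(
        corpus,
        key=lambda d: sum(query[t] * c for t, c in d["terms"].items()),
        reverse=True,
    )
    votes = Counter()
    for doc in scored[:k]:
        votes.update(doc["codes"])
    return [code for code, _ in votes.most_common()]

# Tiny invented corpus: each training document carries term counts and codes.
corpus = [
    {"terms": Counter("engine noise vibration".split()), "codes": ["C1"]},
    {"terms": Counter("brake pad wear".split()), "codes": ["C2"]},
]
sections = {"summary": "engine vibration and noise", "history": "brake pad replaced"}
ranked = knn_codes(section_query(sections, {"summary": 2.0, "history": 0.5}), corpus)
```

Upweighting the "summary" section, as the paper does for its principal section, lets terms from that section dominate the ranking.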

### Table 1. Confusion matrices for generalised linear model (a), k-nearest neighbour (b), multilayer perceptron (c) and support vector machine (d) classification of acceptable and unacceptable images.

2002

"... In PAGE 5: ... We adopt a 10-fold cross-validation strategy to obtain an almost unbiased estimate of generalisation performance [13]. Table 1 shows the composite confusion... ..."

Cited by 11
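The 2002 entry combines 10-fold cross-validation with a composite confusion matrix. A small sketch of that pattern for a k-NN classifier, using invented 1-D synthetic data and the two class labels from the caption:

```python
import random
from collections import Counter

def knn_predict(train, x, k=3):
    """Majority vote among the k nearest training points (1-D Euclidean)."""
    near = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(lbl for _, lbl in near).most_common(1)[0][0]

def cv_confusion(data, k=3, folds=10):
    """Accumulate one composite confusion matrix over all folds of k-fold CV."""
    confusion = Counter()
    for f in range(folds):
        test = data[f::folds]
        train = [p for i, p in enumerate(data) if i % folds != f]
        for x, y in test:
            confusion[(y, knn_predict(train, x, k))] += 1
    return confusion

# Two well-separated synthetic classes, 50 points each.
rng = random.Random(0)
data = [(rng.gauss(0.0, 0.5), "acceptable") for _ in range(50)] + \
       [(rng.gauss(5.0, 0.5), "unacceptable") for _ in range(50)]
rng.shuffle(data)
cm = cv_confusion(data, k=3, folds=10)
```

Because every point appears in exactly one test fold, the matrix entries sum to the dataset size, giving the "almost unbiased" composite estimate the snippet refers to.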

### Table 1: Results for k-Nearest Neighbor

1993

"... In PAGE 4: ...demonstrating poor results on the original set of 564 features, the set was reduced to a smaller set of 223 features that showed some significance as measured by standard statistical tests. In Table 1, we list the results for k-nearest neighbor, where k is varied from 1 to 25. 3.... ..."

Cited by 4
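The 1993 entry varies k from 1 to 25 and reports the results per k. A sketch of that sweep in pure Python, selecting the best k on a held-out split; the synthetic data and the restriction to odd k are my own simplifications:

```python
import random
from collections import Counter

def knn_accuracy(train, test, k):
    """Fraction of test points whose k-NN majority vote matches the label."""
    correct = 0
    for x, y in test:
        near = sorted(train, key=lambda p: abs(p[0] - x))[:k]
        pred = Counter(lbl for _, lbl in near).most_common(1)[0][0]
        correct += pred == y
    return correct / len(test)

rng = random.Random(1)
points = [(rng.gauss(mu, 1.0), lbl)
          for mu, lbl in ((0.0, "a"), (3.0, "b")) for _ in range(100)]
rng.shuffle(points)
train, test = points[:150], points[150:]

# Sweep odd k from 1 to 25 and keep the best-performing value.
accuracy = {k: knn_accuracy(train, test, k) for k in range(1, 26, 2)}
best_k = max(accuracy, key=accuracy.get)
```

Odd k avoids two-way ties in binary voting; in practice the sweep would be run on a tuning set separate from the final test set.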

### Table 7: Sparse Bayesian results compared to other methods reported in Lewis et al. (2003). SVM refers to Support Vector Machines. kNN refers to k-nearest neighbor. Results for the RCV1-v2 collection, 101 topic categories.

2003

"... In PAGE 16: ... We were thus able to increase the number of features for the logistic regression model up to 3,000. Table 7 shows average results and provides a comparison with several other methods reported in Lewis et al. (2003).... ..."

Cited by 4

### Table 3: Performance summary of the k-nearest neighbor regression models.

### Table 1. Symbols in the context of k-nearest neighbor search

### Table 2. Experimental results for instance cloning local naive Bayes (ICLNB) versus instance-based k-nearest neighbor (KNN), instance-based k-nearest neighbor with distance weighting (KNNDW), instance-based k-nearest neighbor naive Bayes (KNNNB), locally weighted naive Bayes (LWNB) and naive Bayes (NB): percentage of correct classifications and standard deviation when k = 5.

"... In PAGE 8: ...05 level according to the corrected two-tailed t-test [7]. Tables 2 and 3 show the accuracy and standard deviations of each algorithm on each data set, and the average accuracy and deviation over all the data sets are summarized at the bottom of the table. Tables 4 and 5 show the results of two-tailed t-tests between each pair of algorithms, and each entry w/t/l means that the algorithm at the corresponding row wins in w data sets, ties in t data sets, and loses in l data sets, compared to the algorithm at the corresponding column.... In PAGE 8: ... The detailed results displayed in Tables 2 and 3 show that our algorithm outperforms all the other algorithms significantly. Now, we summarize the highlights as follows: 1.... ..."
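The comparison above contrasts plain KNN with its distance-weighted variant (KNNDW). A toy illustration of the difference, where 1/d vote weighting flips the prediction; the data points are invented so that two very close neighbours outweigh three distant ones:

```python
from collections import defaultdict

def knn_vote(train, x, k, weighted=False):
    """Plain majority vote (KNN) or 1/d distance-weighted vote (the KNNDW idea)."""
    near = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    scores = defaultdict(float)
    for v, label in near:
        # Small epsilon guards against division by zero on exact matches.
        scores[label] += 1.0 / (abs(v - x) + 1e-9) if weighted else 1.0
    return max(scores, key=scores.get)

# Three scattered "neg" points and two "pos" points hugging the query at 1.05.
train = [(0.0, "neg"), (0.9, "neg"), (1.0, "pos"), (1.1, "pos"), (4.0, "neg")]
plain = knn_vote(train, 1.05, k=5)                    # 3 neg vs 2 pos by raw count
weighted = knn_vote(train, 1.05, k=5, weighted=True)  # close pos points dominate
```

This is why KNNDW is less sensitive to the exact choice of k: far-away neighbours are counted but contribute little.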