### Table 1: Results for k-Nearest Neighbor

1993

"... In PAGE 4: ... demonstrating poor results on the original set of 564 features, the set was reduced to a smaller set of 223 features that showed some significance as measured by standard statistical tests. In Table 1, we list the results for k-nearest neighbor, where k is varied from 1 to 25. 3.... ..."

Cited by 4

### Table 8: Nearest Neighbors Method

"... In PAGE 11: ... Table 8: Nearest Neighbors Method. For every two objects $x, y \in U$ the Euclidean distance is defined as: $E(x, y) = \sqrt{\sum_{a \in A} (a(x) - a(y))^2}$. For different numbers of nearest neighbors k we obtain the leave-one-out results presented in Table 8 and in Figure 1. [Figure 1: Nearest Neighbors Method — classification accuracy (0–100%) plotted for k = 1 to 6.] 7 Conclusions. The diabetes mellitus data set has been drawn from a real-life medical problem.... ..."
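The distance and evaluation protocol in the excerpt lend themselves to a short sketch. Below is a minimal, self-contained Python illustration of k-NN with the Euclidean distance E(x, y) and leave-one-out evaluation over a range of k; the toy data and function names are hypothetical stand-ins, not the paper's diabetes data set.

```python
from collections import Counter
import math

def euclidean(x, y):
    # E(x, y) = sqrt of the sum over attributes of (a(x) - a(y))^2
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def knn_predict(train, query, k):
    # train: list of (feature_vector, label); majority vote among k nearest
    nearest = sorted(train, key=lambda p: euclidean(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def loo_accuracy_by_k(data, ks):
    # leave-one-out classification accuracy for each k in ks
    results = {}
    for k in ks:
        correct = sum(
            knn_predict(data[:i] + data[i + 1:], x, k) == y
            for i, (x, y) in enumerate(data)
        )
        results[k] = correct / len(data)
    return results
```

Plotting `loo_accuracy_by_k(data, range(1, 7))` against k would reproduce the shape of a curve like the one the excerpt's Figure 1 reports.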

### TABLE III COMPARISON OF ACCURACY AND EFFICIENCY ON ONE NEAREST NEIGHBOR CLIP QUERIES.

### Table 2. Recognition results with 90% confidence interval. HBN: hierarchical Bayesian network, NBC: naive Bayesian classifier, KNN: k-nearest neighbor classifier, NN: backpropagation neural network

### Table 3. K-NN algorithm to compute the exact K nearest neighbors of a query time series Q using a multi-dimensional index structure

"... In PAGE 16: ... This function is visualized in Fig. 12. Having defined LB_PAA and MINDIST(Q,R), we are now ready to introduce the K-nearest neighbor search (K-NN) algorithm. The basic algorithm is shown in Table 3. It is an optimization of the GEMINI K-NN algorithm (Faloutsos et al.... In PAGE 17: ... Like the classic K-NN algorithm (Roussopoulos et al. 1995), the algorithm in Table 3 uses a priority queue to visit nodes/objects in the index in increasing order of their distances from Q in the indexed (i.e.... ..."
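The excerpt describes a best-first K-NN traversal: a priority queue visits index nodes and objects in increasing order of their (lower-bound) distance to the query Q. The Python sketch below illustrates that idea under simplifying assumptions — plain dictionaries stand in for the multi-dimensional index, and `mindist` is a generic bounding-box lower bound, not the paper's MINDIST(Q,R) or LB_PAA.

```python
import heapq
import math

def euclidean(q, x):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(q, x)))

def mindist(q, node):
    # lower bound on the distance from q to anything inside node's bounding box
    return math.sqrt(sum(max(lo - a, 0.0, a - hi) ** 2
                         for a, (lo, hi) in zip(q, node["box"])))

def knn_search(root, q, k):
    # best-first search: pop the entry with the smallest (bound on) distance
    heap = [(0.0, 0, root)]          # (priority, tiebreak, entry)
    counter = 1
    results = []                     # up to k (distance, id) pairs, sorted
    while heap:
        dist, _, entry = heapq.heappop(heap)
        if len(results) == k and dist >= results[-1][0]:
            break                    # nothing closer can remain in the queue
        if isinstance(entry, tuple):             # data object: (vector, id)
            results = sorted(results + [(dist, entry[1])])[:k]
        elif "children" in entry:                # internal node
            for child in entry["children"]:
                heapq.heappush(heap, (mindist(q, child), counter, child))
                counter += 1
        else:                                    # leaf: push exact distances
            for obj in entry["objects"]:
                heapq.heappush(heap, (euclidean(q, obj[0]), counter, obj))
                counter += 1
    return results
```

Because `mindist` never overestimates the true distance, any node whose bound is no better than the current k-th result can be skipped — the pruning that GEMINI-style index search relies on.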

### Table 1. Matching of the subjects in target and nearest neighbors

2001

"... In PAGE 7: ... Specifically, we computed the p = 20 closest neighbors to the target, and calculated the percentage of neighbors from the same or a hierarchically related class. The results are presented in Table 1. The first column indicates the class label of the target document; whereas the second and third columns indicate some statistics of the class distributions of the search results for the textual and conceptual neighbors respectively, for all levels of the Yahoo! hierarchy which are related to the target.... In PAGE 7: ... Clearly the percentage of matching neighbors would always be higher while trying to make a partial match with a hierarchically related node. The values reported in each entry of Table 1 are determined by averaging over all targets in the corresponding Yahoo! class. It is also apparent that it is desirable for these accuracy numbers to be as high as possible, if we assume that Yahoo! class labels reflect topical behavior well.... In PAGE 7: ... It is also apparent that it is desirable for these accuracy numbers to be as high as possible, if we assume that Yahoo! class labels reflect topical behavior well. As illustrated in Table 1, an exact match between the class labels of the target document and the nearest neighbors was found a very small percentage of the time for the textual nearest neighbor. We note that we are only using the matching percentage of class labels of an unsupervised similarity search procedure in order to demonstrate the qualitative advantages of conceptual similarity.... ..."

Cited by 5

### Table 2: K-nearest-neighbor coding performance

1996

"... In PAGE 5: ... 3 Results 3.1 K-nearest-neighbor Results. Table 2 shows k-nearest-neighbor performance on the five measures described above for the baseline, for the best docu-... In PAGE 6: ...1.1 K-nearest-neighbor baseline accuracy. The rows labeled Base in Table 2 show performance for the baseline condition. Average 11-point precision for full codes in the baseline condition is 37.... In PAGE 6: ... However, in all of the tuning experiments reported in this paper, we maximized average precision in the tuning set, since this is the only measure that summarizes the performance of the full ordering of codes. As can be seen in Table 2 in the row labeled Princ, this weighting scheme produced a 2.7% increase in average precision over the baseline, a 26.... In PAGE 6: ...1.3 Structured queries. Table 2 also shows the results when the test document is converted into a query which is a weighted sum of sections. Formulating the query as a weighted sum with weights of 1, combined with a principal DX weight of 1.... ..."
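Since the excerpt reports "average 11-point precision", a brief sketch of that standard IR measure may be useful: precision is interpolated at the 11 recall levels 0.0, 0.1, ..., 1.0 (the interpolated precision at level r being the maximum precision at any recall ≥ r) and the 11 values are averaged. This is a generic illustration; the ranked list and relevant set in the test are hypothetical, not the paper's code assignments.

```python
def eleven_point_avg_precision(ranked, relevant):
    # ranked: retrieved items in rank order; relevant: set of relevant items
    relevant = set(relevant)
    hits = 0
    points = []                      # (recall, precision) at each relevant hit
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / i))
    levels = [r / 10 for r in range(11)]
    # interpolated precision at level r = max precision at recall >= r
    interp = [max((p for r, p in points if r >= level), default=0.0)
              for level in levels]
    return sum(interp) / len(levels)
```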

Cited by 96