### Table 2. Recognition results with 90% confidence interval. HBN: hierarchical Bayesian network, NBC: naive Bayesian classifier, KNN: k-nearest neighbor classifier, NN: backpropagation neural network

### Table 1. Symbols in the context of k-nearest neighbor search

### Table 3. Mean query time (in ms) for k-nearest neighbor search

"... In PAGE 22: ... extent e = 0.005 were used. However, instead of searching for only the single nearest neighbor, we searched for the k-nearest neighbors of each query point. Table 3 illustrates the mean query time for the k-nearest neighbor search, k = 2, 4, .... ..."
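The measurement in the excerpt above, mean query time for k-nearest neighbor search at increasing k, can be sketched with a brute-force scan. The paper's actual index structure and data set are not reproduced here; the random 2-D points and timing loop below are purely illustrative.

```python
import heapq
import random
import time

# Illustrative setup: the paper's data set and index are not available,
# so this sketch times a brute-force k-NN scan over random 2-D points.
random.seed(0)
points = [(random.random(), random.random()) for _ in range(2000)]

def knn(query, pts, k):
    """Return the k points in `pts` nearest to `query` (squared Euclidean)."""
    return heapq.nsmallest(
        k, pts, key=lambda p: (p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2)

for k in (2, 4, 8, 16):
    queries = [(random.random(), random.random()) for _ in range(20)]
    start = time.perf_counter()
    for q in queries:
        knn(q, points, k)
    mean_ms = (time.perf_counter() - start) * 1000 / len(queries)
    print(f"k={k:2d}: mean query time {mean_ms:.2f} ms")
```

A spatial index (k-d tree, R-tree, or the quasi-Voronoi structures cited elsewhere in this listing) would replace the linear scan in practice; the brute-force version is only meant to show the shape of the experiment.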

### Table 1. Matching of the subjects in target and nearest neighbors

2001

"... In PAGE 7: ... Specifically, we computed the p = 20 closest neighbors to the target, and calculated the percentage of neighbors from the same or a hierarchically related class. The results are presented in Table 1. The first column indicates the class label of the target document, whereas the second and third columns indicate some statistics of the class distributions of the search results for the textual and conceptual neighbors respectively, for all levels of the Yahoo! hierarchy which are related to the target.... In PAGE 7: ... Clearly the percentage of matching neighbors would always be higher when making a partial match with a hierarchically related node. The values reported in each entry of Table 1 are determined by averaging over all targets in the corresponding Yahoo! class. It is desirable for these accuracy numbers to be as high as possible, if we assume that Yahoo! class labels reflect topical behavior well.... In PAGE 7: ... As illustrated in Table 1, an exact match between the class labels of the target document and the nearest neighbors was found a very small percentage of the time for the textual nearest neighbor. We note that we are only using the matching percentage of class labels of an unsupervised similarity search procedure in order to demonstrate the qualitative advantages of conceptual similarity.... ..."

Cited by 5
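The procedure in the excerpt, computing the p = 20 closest neighbors and the percentage belonging to the same or a hierarchically related class, can be sketched as follows. The labels and the `related` set are hypothetical stand-ins for the Yahoo! hierarchy.

```python
def matching_percentage(target_label, neighbor_labels, related=None):
    """Percentage of neighbors whose label matches the target's exactly,
    optionally also counting hierarchically related labels (partial match)."""
    related = related or set()
    hits = sum(1 for lab in neighbor_labels
               if lab == target_label or lab in related)
    return 100.0 * hits / len(neighbor_labels)

# Toy example with p = 20 neighbors, as in the excerpt; the label names
# are invented for illustration.
neighbors = ["Science"] * 7 + ["Science/Physics"] * 5 + ["Arts"] * 8
exact = matching_percentage("Science", neighbors)
partial = matching_percentage("Science", neighbors, related={"Science/Physics"})
print(exact, partial)  # 35.0 60.0
```

As the excerpt notes, the partial-match percentage is always at least as high as the exact-match percentage, since every exact match is also counted by the partial criterion.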

### Table 2: Time of two nearest neighbor query approaches. Columns: variation ratio; time (sec., linear); time (sec., hierarchical)

"... In PAGE 20: ... In [14], we present an efficient algorithm which uses a hierarchical quasi-Voronoi diagram to search for the nearest neighbor. Table 2 shows the average computation time for each sequence on the SGI INDIGO 2. The time was obtained based on the two different nearest neighbor query approaches, namely, the... ..."

### Table 4. Error rates for neural network, k-nearest-neighbor and decision tree

"... In PAGE 14: ... Meanwhile, the combined output on the 5-fold test data for model Mj(-k) is generated as meta-learning training data for stacking purposes. Table 4 shows test and prediction error rates for ... ..."

### Table 2. Nearest neighbor search - Hungarian method

2003

"... In PAGE 2: ... This was done at different levels of noise. In Table 2 the improvement of the candidate matches can be seen. If we have regular objects the procedure works very well.... ..."

Cited by 2

### Table 1: Results for k-Nearest Neighbor

1993

"... In PAGE 4: ...demonstrating poor results on the original set of 564 features, the set was reduced to a smaller set of 223 features that showed some significance as measured by standard statistical tests. In Table 1, we list the results for k-nearest neighbor, where k is varied from 1 to 25. 3.... ..."

Cited by 4
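The sweep described above, k-nearest neighbor classification with k varied from 1 to 25, can be sketched with a leave-one-out loop. The two-dimensional Gaussian toy data below is a hypothetical stand-in for the paper's 223-feature set; odd values of k are used so the majority vote never ties.

```python
import random

random.seed(1)

# Hypothetical two-class toy data standing in for the paper's feature set;
# the original data is not available, so the numbers below are illustrative.
data = ([([random.gauss(0, 1), random.gauss(0, 1)], 0) for _ in range(60)] +
        [([random.gauss(2, 1), random.gauss(2, 1)], 1) for _ in range(60)])

def knn_predict(query, train, k):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    nearest = sorted(
        train,
        key=lambda xy: sum((a - b) ** 2 for a, b in zip(query, xy[0])))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

# Leave-one-out error rate for odd k in 1 .. 25, echoing the excerpt's sweep.
for k in range(1, 26, 6):
    errors = sum(knn_predict(x, data[:i] + data[i + 1:], k) != y
                 for i, (x, y) in enumerate(data))
    print(f"k={k:2d}: error rate {errors / len(data):.3f}")
```

The same loop structure applies unchanged to any feature set; only the distance computation inside `knn_predict` grows with the number of features.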