### Table 8: Nearest Neighbors Method

"... In PAGE 11: ...Table 8: Nearest Neighbors Method. For every two objects x, y ∈ U the Euclidean distance is defined as E(x, y) = sqrt(Σ_{a ∈ A} (a(x) − a(y))²). For different numbers of nearest neighbors k we obtain the leave-one-out results presented in Table 8 and in Figure 1. [Figure 1: Nearest Neighbors Method; classification accuracy (0-100%) plotted against k = 1, ..., 6.] 7 Conclusions. The diabetes mellitus data set has been drawn from a real-life medical problem.... ..."
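The leave-one-out k-NN evaluation the snippet describes can be sketched as follows. This is a minimal illustration under the snippet's Euclidean-distance definition, not the paper's implementation; the dataset and function names are placeholders:

```python
import numpy as np

def euclidean(x, y):
    # E(x, y) = sqrt(sum over attributes a in A of (a(x) - a(y))^2)
    return float(np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

def loo_knn_accuracy(X, labels, k):
    """Leave-one-out accuracy of a k-nearest-neighbors classifier."""
    n = len(X)
    correct = 0
    for i in range(n):
        # rank every other object by its Euclidean distance to object i
        others = [j for j in range(n) if j != i]
        others.sort(key=lambda j: euclidean(X[i], X[j]))
        # plain majority vote among the k nearest neighbors
        votes = [labels[j] for j in others[:k]]
        prediction = max(set(votes), key=votes.count)
        correct += prediction == labels[i]
    return correct / n
```

Sweeping k over 1..6, as in the snippet's Figure 1, is then a matter of calling `loo_knn_accuracy` once per k.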

### Table 1. The Condensed Nearest Neighbor Algorithm (CNN).

"... In PAGE 3: ...minimal consistent subset (i.e., the subset with the minimum cardinality) to minimize the cost of storage and computation. The CNN algorithm is given in Table 1. Starting from an empty stored subset, we pass one by one over the patterns and add a pattern to the subset if it cannot be classified correctly with the already stored subset.... ..."
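The pass-and-add rule quoted above can be sketched like this. It is a generic CNN sketch (1-NN consistency check, repeated passes until the stored subset stabilizes), not the exact procedure from the cited table:

```python
def squared_dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

def condensed_nn(X, labels):
    """Condensed Nearest Neighbor: keep only the patterns that the
    already-stored subset cannot classify correctly (1-NN rule)."""
    stored = []               # indices of stored patterns
    changed = True
    while changed:            # repeat passes until no pattern is added
        changed = False
        for i in range(len(X)):
            if i in stored:
                continue
            if not stored:
                stored.append(i)   # first pattern is always stored
                changed = True
                continue
            # classify pattern i with the current stored subset
            nearest = min(stored, key=lambda j: squared_dist(X[i], X[j]))
            if labels[nearest] != labels[i]:
                stored.append(i)   # misclassified -> add to the subset
                changed = True
    return stored
```

On well-separated classes the stored subset is typically a small fraction of the training set, which is the storage gain the snippet is after.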

### Table 4. Fast classification with numerosity reduction.

2006

"... In PAGE 6: ... This is easy since only those instances that chose op as their nearest neighbor will be affected, and only their corresponding entries need to be updated. The algorithm is given in Table 4. As we will show, this optimization speeds up the reduction process by many orders of magnitude.... In PAGE 6: ... In each experiment, we randomly split the data into a training set and a test set, which have zero intersection. For the AWARD approach, we use the training set to learn the thresholding (the subset of data to keep) and the corresponding warping window size, which are recorded in the table WarpingT, as shown in Table 4. The other three approaches follow the same procedure, except that their warping window sizes are fixed (RankFix and RandomFix) at the value that maximizes accuracy for the full training dataset, or they have no warping window size to record (RandomEu).... ..."
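The update optimization described in the first snippet can be sketched as follows: when an instance `op` is discarded, only the entries in the nearest-neighbor table that point at `op` are recomputed. This is a hedged sketch of that bookkeeping idea only, not the paper's AWARD algorithm, and it uses squared Euclidean distance in place of DTW for brevity:

```python
def squared_dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

def discard(op, keep, nn, X):
    """Remove instance op from the kept set and repair the
    nearest-neighbor table nn: only entries whose nearest
    neighbor was op are affected, so only those are recomputed."""
    keep.remove(op)
    del nn[op]
    for i, j in list(nn.items()):
        if j == op:
            # recompute the nearest neighbor of i among the kept instances
            nn[i] = min((k for k in keep if k != i),
                        key=lambda k: squared_dist(X[i], X[k]))
```

Because each discard touches only the instances that pointed at `op`, the full reduction pass avoids recomputing every pairwise distance, which is the source of the claimed speedup.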

Cited by 2

### Table 1: Performance of the fast nearest neighbor search when implemented within the classification scheme of Fisher.

1995

Cited by 16

### Table 2: Performance of the fast nearest neighbor search when implemented within the major classes of the classification scheme of Fisher.

1995

Cited by 16

### Table 3: Accuracy results of applying nearest neighbor (NN), condensed nearest neighbor (CNN), and simple and weighted voting over three voters. The percentage of the training set stored by CNN is also given to show the gain in memory.

1995

"... In PAGE 8: ... This can only be used with a method like k-NN where effects are local; one cannot, for example, train multiple multi-layer perceptrons, combine all hidden units together as one big hidden layer, and expect any improvement. By taking a vote over as few as three CNN subsets, in three out of six datasets, although one collectively stores a subset of the training set, one achieves higher classification accuracy than the nearest neighbor, where the whole training set is stored (Table 3). In GLASS and THYROID, with three... ..."
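The voting scheme in the snippet can be sketched as a simple majority vote over per-subset 1-NN predictions. A minimal sketch, assuming three stored subsets already exist (e.g., produced by CNN on different orderings of the data); the names are placeholders:

```python
def squared_dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

def nn_predict(x, subset, X, labels):
    """1-NN prediction of x using only the stored subset of indices."""
    j = min(subset, key=lambda j: squared_dist(x, X[j]))
    return labels[j]

def vote_predict(x, subsets, X, labels):
    """Simple (unweighted) majority vote over the per-subset 1-NN
    predictions -- workable precisely because k-NN effects are local."""
    votes = [nn_predict(x, s, X, labels) for s in subsets]
    return max(set(votes), key=votes.count)
```

A weighted variant, as in the table's caption, would simply scale each vote by a per-voter weight before taking the maximum.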

### Table 2: Storage requirements, average MC1 classification accuracy and average baseline 1-nearest neighbor classification accuracy using five-fold cross validation.

1994

"... In PAGE 3: ... In a real application, all previously seen instances would be used as training instances. We report in Table 2 the storage requirements (the percentage of members of the data set that were retained) and the classification accuracy of the MC1 algorithm, applying five-fold cross validation. These experiments show that MC1, a very simple approach based on random sampling, does quite well on these four data sets, with a large reduction in storage.... ..."
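The snippet only says that MC1 is "a very simple approach based on random sampling"; one way such a selector can look is sketched below. This is an illustration of the random-sampling idea only, with made-up parameter names (`m`, `trials`); the actual MC1 procedure is specified in the cited 1994 paper:

```python
import random

def squared_dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

def nn_label(x, subset, X, labels):
    return labels[min(subset, key=lambda j: squared_dist(x, X[j]))]

def mc1_sketch(X, labels, m, trials, seed=0):
    """Draw `trials` random size-m subsets of the training data and
    keep the one whose 1-NN rule scores best on the full training set.
    (A hedged sketch of a Monte-Carlo prototype selector, not MC1 itself.)"""
    rng = random.Random(seed)
    best_subset, best_acc = None, -1.0
    for _ in range(trials):
        subset = rng.sample(range(len(X)), m)
        acc = sum(nn_label(X[i], subset, X, labels) == labels[i]
                  for i in range(len(X))) / len(X)
        if acc > best_acc:
            best_subset, best_acc = subset, acc
    return best_subset, best_acc
```

The storage saving is direct: only the m sampled instances are retained, mirroring the percentage-retained column reported in the table.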

Cited by 123

### Table 8 Classification Results Using a Nearest Neighbor Classifier

1998

"... In PAGE 15: ... That is, each clip is classified by calculating the distance of its feature vector to the mean feature vectors of the five classes and identifying the class to which the distance is the shortest. The classification results using this method for the testing data are shown in Table 8, and they are quite poor. The reason is that there is more than one cluster for each audio class in the feature space, and not all of these clusters are closer to the centroid of their own class than to those of other classes.... In PAGE 26: ...Table 7 Diagonal Entries of Intra- and Inter-Cluster Scattering Matrices; Table 8 Classification Results Using a Nearest Neighbor Classifier.... ..."
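The classify-by-closest-class-mean rule in the snippet is the standard nearest-centroid classifier, which can be sketched as below. This is a generic sketch (not the paper's audio features or five classes), and it makes visible why multi-cluster classes break it: a single mean cannot represent two separated clusters:

```python
import numpy as np

def fit_centroids(X, labels):
    """Mean feature vector of each class."""
    return {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_centroid(x, centroids):
    """Assign x to the class whose mean feature vector is closest."""
    return min(centroids,
               key=lambda c: np.linalg.norm(np.asarray(x) - centroids[c]))
```

If a class occupies two clusters on opposite sides of another class, its centroid falls between them and points from both clusters get misassigned, which matches the explanation the snippet gives for the poor results in Table 8.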

Cited by 67