### Table 1: Comparison between ADCM, K-means, Hierarchical clustering and Self-Organizing Maps

"... In PAGE 7: ... Some clusters of ADCM merged or were partitioned when K-means was used, and some cluster radii changed significantly. A comparison between the results of ADCM and K-means is listed in Table 1. Note that the average noise and radius of the ADCM result are smaller than those of K-means.... ..."

### Table VI. Estimated number of clusters by consensus clustering (CC) and by the Gap statistic, in combination with hierarchical clustering (HC) and self-organizing map (SOM). Application to gene-expression data. In parentheses is the estimated number of clusters based on visual inspection of the consensus matrices (when this differs from the one based on the consensus distribution).

Cited by 1

### Table IV. Estimated number of clusters by consensus clustering (CC) and by the Gap statistic, in combination with hierarchical clustering (HC) and self-organizing map (SOM). Application to simulated data. The numbers in parentheses represent local maxima of the Gap statistic (see text).

Cited by 1
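The Gap statistic referenced in these captions compares the within-cluster dispersion of the data against that of a uniform reference distribution over the same bounding box, for a range of candidate cluster counts. The following is a minimal sketch of that idea in plain NumPy; the small k-means routine, the number of reference sets, and the restart counts are illustrative choices, not the setup used in the cited paper.

```python
import numpy as np

def within_dispersion(X, k, rng, n_init=5, iters=50):
    """Smallest within-cluster sum of squares over n_init random k-means runs."""
    best = np.inf
    for _ in range(n_init):
        centers = X[rng.choice(len(X), k, replace=False)].copy()
        for _ in range(iters):
            labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(0)
        best = min(best, float(((X - centers[labels]) ** 2).sum()))
    return best

def gap_statistic(X, k_max, n_refs=10, seed=0):
    """Gap(k) = mean_b log W_k(reference_b) - log W_k(data), for k = 1..k_max."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(0), X.max(0)  # bounding box for the uniform reference sets
    gaps = []
    for k in range(1, k_max + 1):
        log_w = np.log(within_dispersion(X, k, rng))
        ref = [np.log(within_dispersion(rng.uniform(lo, hi, X.shape), k, rng))
               for _ in range(n_refs)]
        gaps.append(float(np.mean(ref) - log_w))
    return np.array(gaps)
```

On data with two well-separated groups, the gap curve typically peaks at k = 2; the local maxima mentioned in the caption correspond to secondary peaks of this curve.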

### Table 5. Mean values and deviations of the quantization error of different algorithms. The leftmost column indicates how many clusters were formed. Abbreviations: mn = mean value, dv = deviation, HKM = hard k-means clustering algorithm, FKM = fuzzy k-means clustering algorithm, SOM = Kohonen self-organizing feature map.
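The quantization error compared across HKM, FKM, and SOM in this table is, in its usual form, the mean distance from each sample to its nearest prototype (cluster centre or SOM codebook vector). A minimal sketch of that computation, assuming the standard Euclidean definition rather than any variant specific to the cited paper:

```python
import numpy as np

def quantization_error(X, prototypes):
    """Mean Euclidean distance from each sample in X to its nearest prototype."""
    # pairwise distances: shape (n_samples, n_prototypes)
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())
```

The same function applies to all three algorithms in the table: only the way the prototypes are fitted differs.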

### Table 1. Algorithms for self-organizing modeling

"... In PAGE 2: ... nonparametric models. Known nonparametric model selection methods include Analog Complexing (AC), which selects nonparametric prediction models from a given data set representing one or more patterns of a trajectory of past behaviour that are analogous to a chosen reference pattern, and Objective Cluster Analysis (OCA). Table 1 shows some data mining functions and the more appropriate self-organizing modeling algorithms for addressing them.... ..."

### Table 7: Kohonen self-organizing feature map of the iris data.

in Input Data Coding in Multivariate Data Analysis: Techniques and Practice in Correspondence Analysis

"... In PAGE 11: ..., 1997). Table 7 shows a Kohonen map of the original Fisher iris data. The user can trace a curve separating observation sequence numbers less than or equal to 50 (class 1) from those from 51 to 100 (class 2) and from 101 upwards (class 3).... In PAGE 11: ... The zero values indicate map nodes or units with no assigned observations. The map of Table 7, like that of Table 8, has 20 × 20 units. The number of epochs used in training was 133 in both cases.... In PAGE 11: ... version of the Fisher data. The user in this case can demarcate class 1 observations. Classes 2 and 3 are more confused, even if large contiguous "islands" can be found for both of these classes. The result shown in Table 8 is degraded compared to Table 7, though not badly so, insofar as sub-classes of classes 2 and 3 are found in adjacent and contiguous areas.... ..."
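The map described above assigns each observation to a best-matching unit on a rectangular grid, trained over a number of epochs (133 for the 20 × 20 maps in the excerpt). The following is a generic SOM training sketch in NumPy, not the specific implementation behind Table 7; the grid size, epoch count, and learning-rate schedule shown are illustrative defaults.

```python
import numpy as np

def train_som(X, rows, cols, epochs=100, lr0=0.5, sigma0=None, seed=0):
    """Train a rectangular Kohonen map; returns the codebook of shape (rows*cols, dim)."""
    rng = np.random.default_rng(seed)
    n_units = rows * cols
    W = rng.uniform(X.min(0), X.max(0), (n_units, X.shape[1]))
    # grid coordinates of each unit, used by the Gaussian neighbourhood function
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    sigma0 = sigma0 if sigma0 is not None else max(rows, cols) / 2
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)                 # linearly decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3    # shrinking neighbourhood radius
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(1))  # best-matching unit
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
    return W

def bmu_map(X, W):
    """Index of the best-matching unit for each observation."""
    return np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
```

Tracing a class-separating curve on the trained map, as the excerpt describes, amounts to inspecting which grid units the observations of each class map to via `bmu_map`.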

### Table 1. Packet description for the Kohonen Self-Organizing Map implementation.

2002

"... In PAGE 3: ... After having read the sensor values, each unit compares those values with its internal values, stored in the randomly initialised prototype vectors, and calculates the Euclidean distance between the two vectors. A packet is then created and broadcast across the network with the elements listed in Table 1. The timestamp is provided to eliminate outdated packets.... ..."

Cited by 9
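The excerpt above describes each unit computing the Euclidean distance between its sensor reading and its prototype vector, then broadcasting a timestamped packet. The actual packet fields are the ones listed in Table 1 of the paper, which is not reproduced here; the `SOMPacket` fields below (`unit_id`, `distance`, `timestamp`) and the `max_age` cutoff are hypothetical stand-ins used only to illustrate the flow.

```python
import math
import time
from dataclasses import dataclass

@dataclass
class SOMPacket:
    # Hypothetical fields; the real layout is defined in Table 1 of the paper.
    unit_id: int
    distance: float
    timestamp: float

def euclidean_distance(sensor, prototype):
    """Distance between the current sensor reading and a unit's prototype vector."""
    return math.sqrt(sum((s - p) ** 2 for s, p in zip(sensor, prototype)))

def make_packet(unit_id, sensor, prototype, now=None):
    """Build the packet a unit broadcasts after comparing its reading to its prototype."""
    return SOMPacket(unit_id, euclidean_distance(sensor, prototype),
                     now if now is not None else time.time())

def is_outdated(packet, now, max_age=1.0):
    # the timestamp lets receivers discard stale packets, as the excerpt describes
    return now - packet.timestamp > max_age
```

A receiving unit would compare the `distance` fields of all fresh packets to decide which unit wins the competition for the current input.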