### Table 1. Packet description for the Kohonen Self-Organizing Map implementation.

2002

"... In PAGE 3: ... After reading the sensor values, each unit compares them with its internal values, stored in the randomly initialised prototype vectors, and calculates the Euclidean distance between the two vectors. A packet is then created and broadcast across the network with the elements listed in Table 1. The timestamp is provided to eliminate outdated packets.... ..."
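The distance-and-broadcast step described in the excerpt can be sketched as follows. The packet field names and the use of a wall-clock timestamp are illustrative assumptions, not the paper's exact packet layout from Table 1.

```python
import math
import time

def euclidean_distance(sensor_values, prototype):
    """Euclidean distance between a sensor reading and a unit's prototype vector."""
    return math.sqrt(sum((s - p) ** 2 for s, p in zip(sensor_values, prototype)))

def build_packet(unit_id, sensor_values, prototype):
    """Assemble a broadcast packet (field names are hypothetical, for illustration)."""
    return {
        "unit_id": unit_id,
        "distance": euclidean_distance(sensor_values, prototype),
        "timestamp": time.time(),  # lets receivers discard outdated packets
    }
```

A receiver would compare the timestamp against the newest packet already seen from that unit and drop anything older.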

Cited by 9

### Table 7: Kohonen self-organizing feature map of the iris data.

in Input Data Coding in Multivariate Data Analysis: Techniques and Practice in Correspondence Analysis

"... In PAGE 11: ..., 1997). Table 7 shows a Kohonen map of the original Fisher iris data. The user can trace a curve separating observation sequence numbers less than or equal to 50 (class 1), from 51 to 100 (class 2), and 101 and above (class 3).... In PAGE 11: ... The zero values indicate map nodes or units with no assigned observations. The map of Table 7, as for Table 8, has 20 × 20 units. The number of epochs used in training was 133 in both cases.... In PAGE 11: ... version of the Fisher data. The user in this case can demarcate class 1 observations. Classes 2 and 3 are more confused, even if large contiguous "islands" can be found for both of these classes. The result shown in Table 8 is degraded compared to Table 7, though not badly so, insofar as sub-classes of classes 2 and 3 are found in adjacent and contiguous areas.... ..."
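A minimal sketch of how a map like Table 7 can be read: each observation is assigned to its best-matching unit on the grid, and units with no assigned observations show a zero count. The function names and plain-list grid layout here are assumptions for illustration, not the cited software.

```python
def best_matching_unit(x, weights):
    """Index of the unit whose weight vector is closest (squared Euclidean) to x."""
    best, best_d = 0, float("inf")
    for i, w in enumerate(weights):
        d = sum((a - b) ** 2 for a, b in zip(x, w))
        if d < best_d:
            best, best_d = i, d
    return best

def assignment_map(data, weights, rows, cols):
    """rows x cols grid of observation counts; zeros mark units with no observations."""
    counts = [[0] * cols for _ in range(rows)]
    for x in data:
        i = best_matching_unit(x, weights)
        counts[i // cols][i % cols] += 1  # unit i sits at grid cell (i // cols, i % cols)
    return counts
```

On a trained 20 × 20 map this table is exactly what the excerpt describes: contiguous regions of same-class observations separated by low-count or zero cells.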

### Table 1: Mean square error for neuron weights and standard deviation for probability density estimates (p̂).

6 Conclusions. A new integrally distributed self-organizing learning algorithm for the class of neural networks introduced by Kohonen [1] was presented. The algorithm converges to an equiprobable topology-preserving map for arbitrarily distributed input signals. It is shown that Kohonen's algorithm converges to a locally affine self-organizing map. Simulation results agree with theoretical predictions.

"... In PAGE 5: ... As expected, the results of the three algorithms are fairly similar for the case of a uniformly distributed input signal. Table 1 contains the mean square error for the neuron weights and the standard deviation of the probability density estimate vector, p̂, for both simulations. It should be noted that the improvement in performance comes at an increase in computational cost.... ..."
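For context, a single training step of the classical Kohonen algorithm that the excerpt compares against can be sketched as below, using a Gaussian neighborhood around the best-matching unit. This is the standard SOM update rule, not the paper's distributed variant; decaying learning-rate and neighborhood-width schedules are omitted.

```python
import math

def som_step(x, weights, grid, lr, sigma):
    """One Kohonen update: move every unit toward input x, weighted by a
    Gaussian neighborhood centered on the best-matching unit (BMU)."""
    bmu = min(range(len(weights)),
              key=lambda i: sum((a - b) ** 2 for a, b in zip(x, weights[i])))
    br, bc = grid[bmu]  # BMU's (row, col) position on the map lattice
    for i, (r, c) in enumerate(grid):
        h = math.exp(-((r - br) ** 2 + (c - bc) ** 2) / (2 * sigma ** 2))
        weights[i] = [w + lr * h * (a - w) for w, a in zip(weights[i], x)]
    return weights
```

Units close to the BMU on the lattice move strongly toward the input; distant units barely move, which is what produces the topology-preserving ordering.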

### Table 1: Categories of models of visual cortical maps, and their abbreviations as used in this article. Two versions of the self-organizing map model were investigated: SOM-h (high-dimensional weight vectors) and SOM-l (low-dimensional feature vectors).

1995

"... In PAGE 8: ... Increasingly detailed comparisons between model and experimental data will be included along with each point. To ease comparisons, we group models into categories based on similarities in goals or implementation (Table 1). Structural and spectral models attempt to characterize map patterns using schematic drawings or concise equations.... In PAGE 26: ... The results of our comparison between model predictions and experimental data obtained from the upper layers of macaque striate cortex are summarized in Table 2. References to articles on each model are given in Table 1. Many of the models are also briefly described in the appendix.... ..."

Cited by 64

### Table 2: Mean NNLs and cosines (in parentheses) on testing data for models with self-organization.

"... In PAGE 4: ...785), which makes it worse than most models using self-organization (cf. Table 2). We acknowledge that in principle, the SRN may be able to achieve better performance by varying different parameters: e.... ..."

### Table 1. Algorithms for self-organizing modeling (see [19] for this classification). Columns: Data Mining functions | Algorithm.

"... In PAGE 3: ... In a wider sense, the spectrum of self-organizing modeling contains regression-based methods, rule-based methods, symbolic modeling and nonparametric model selection methods. Table 1 shows some data mining functions and the more appropriate SOM algorithms for addressing these functions (FRI: fuzzy rule induction using GMDH; AC: Analog Complexing).... ..."

### Table 1: Comparison between ADCM, K-means, Hierarchical clustering and Self-Organizing Maps

"... In PAGE 7: ... Some clusters of ADCM were merged or partitioned when using K-means, and some cluster radii changed significantly. The comparison between the results of ADCM and K-means is listed in Table 1. Note that the average noise and radius of the ADCM result are smaller than those of K-means.... ..."

### Table 1. Algorithms for self-organizing modeling

"... In PAGE 2: ... nonparametric models. Known nonparametric model selection methods include Analog Complexing (AC), which selects nonparametric prediction models from a given data set representing one or more patterns of a trajectory of past behavior that are analogous to a chosen reference pattern, and Objective Cluster Analysis (OCA). Table 1 shows some data mining functions and the more appropriate self-organizing modeling algorithms for addressing these functions.... ..."

### Table I: Algorithms for self-organizing modeling

### Table III: Comparison of the motif sets from the self-organizing neural network method and the MEME method. NOS = number of samples in the motif set.

Cited by 1