### Table 1. Packet description for the Kohonen Self-Organizing Map implementation.

2002

"... In PAGE 3: ... After having read the sensor values, each unit compares those values with its internal values, stored in the randomly initialised prototype vectors, and calculates the Euclidean distance between both vectors. A packet is then created and broadcast across the network with the elements as they are listed in Table 1. The timestamp is provided to eliminate outdated packets.... ..."
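The matching step described in the excerpt can be sketched as follows; the packet fields (unit id, distance, timestamp) are illustrative assumptions, since the paper's actual Table 1 layout is not reproduced here.

```python
import math
import time

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def make_packet(unit_id, sensors, prototype):
    """Build a broadcast packet: the unit's id, its distance to the current
    sensor reading, and a timestamp so receivers can discard outdated
    packets (field names are hypothetical, not the paper's)."""
    return {
        "unit": unit_id,
        "distance": euclidean(sensors, prototype),
        "timestamp": time.time(),
    }
```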

Cited by 9

### Table 7: Kohonen self-organizing feature map of the iris data.

in Input Data Coding in Multivariate Data Analysis: Techniques and Practice in Correspondence Analysis

"... In PAGE 11: ..., 1997). Table 7 shows a Kohonen map of the original Fisher iris data. The user can trace a curve separating observation sequence numbers less than or equal to 50 (class 1), from 51 to 100 (class 2), and above 100 (class 3).... In PAGE 11: ... The zero values indicate map nodes or units with no assigned observations. The map of Table 7, as for Table 8, has 20 × 20 units. The number of epochs used in training was in both cases 133.... In PAGE 11: ... version of the Fisher data. The user in this case can demarcate class 1 observations. Classes 2 and 3 are more confused, even if large contiguous "islands" can be found for both of these classes. The result shown in Table 8 is degraded compared to Table 7. It is not badly degraded, though, insofar as sub-classes of classes 2 and 3 are found in adjacent and contiguous areas.... ..."
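A map like the one in the excerpt (a 20 × 20 grid trained for 133 epochs) could be produced by standard Kohonen training, sketched minimally below with NumPy; the learning-rate and neighbourhood-decay schedules and the random initialization are assumptions, not the paper's settings.

```python
import numpy as np

def train_som(data, grid=(20, 20), epochs=133, lr0=0.5, sigma0=None, seed=0):
    """Minimal Kohonen SOM sketch. Grid size and epoch count follow the
    excerpt; all decay schedules here are assumptions."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    # grid coordinates of every unit, used by the neighbourhood function
    coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols),
                                   indexing="ij"))
    sigma0 = sigma0 or max(rows, cols) / 2.0
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)              # linearly decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1.0  # shrinking neighbourhood radius
        for x in data:
            d = np.linalg.norm(w - x, axis=2)              # distance to units
            bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)  # pull neighbourhood toward x
    return w
```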

### Table 1. Performance Benchmarks for the Self-organizing Superimposition (SOS) and the SPE Algorithms on the Four Molecules (See Fig. 2) Under Test.

"... In PAGE 5: ... These values serve as indicators of the quality of the generated conformations. The average values for these deviations over the 10,000 generated conformations, along with the computing times, are provided in Table 1 for the four molecules under test. One can see from the table that in all cases the SOS algorithm took less time and generated conformations of higher quality (smaller bond and angle deviations) than the SPE algorithm [9] did, indicating that the former achieved a faster convergence rate.... In PAGE 5: ... Some randomly chosen 3D conformations generated by the SOS algorithm are shown in Figure 2, which can be visually confirmed to have sensible geometries. Although Table 1 indicates that the generated conformations still exhibit some deviations with respect to the ideal (reference) geometric parameters, the deviations are not severe if one considers the consistency of these reference parameters over different sets of rules. For example, in some rule-based parameter sets widely used in conformational sampling programs [13], including SPE [9], all C27 bonds between two carbon atoms have a length of 1.... ..."
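The bond-deviation quality indicator mentioned in the excerpt is an average over all bonds of the gap between a measured and a reference bond length. The sketch below assumes coordinates given as a list of 3D points and bonds as index pairs, an illustrative layout rather than the paper's.

```python
import math

def mean_bond_deviation(coords, bonds, ref_lengths):
    """Average absolute deviation of bond lengths in a 3D conformation
    from their reference values. `coords` is a sequence of (x, y, z)
    tuples; `bonds` pairs atom indices with entries of `ref_lengths`
    (this data layout is an assumption, not the paper's)."""
    devs = [abs(math.dist(coords[i], coords[j]) - ref)
            for (i, j), ref in zip(bonds, ref_lengths)]
    return sum(devs) / len(devs)
```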

### Table 1: Examples for the evaluation of a self-organizing map: the mean values of four different variables at the typical signal (sg) and background (bg) nodes and their ratios

"... In PAGE 4: ... Other hints are the occurrence of the minimum or maximum value of one variable at one of the selected nodes, or a low difference between the minimum or the maximum and the mean values. Table 1 gives an example of such an analysis for four variables of the self-organizing map mentioned before. A second and easier method was to look separately at the distribution of... ..."
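The tabulated quantities, per-variable means at the typical signal and background nodes and their ratios, can be computed as below; the node labelling and data layout are assumptions for illustration.

```python
import numpy as np

def node_means(data, assignments, node):
    """Mean of each variable over the observations mapped to one node."""
    return data[assignments == node].mean(axis=0)

def signal_background_ratio(data, assignments, sig_node, bg_node):
    """Ratio of per-variable means at the typical signal and background
    nodes, as tabulated in the excerpt (node ids here are hypothetical)."""
    return node_means(data, assignments, sig_node) / \
           node_means(data, assignments, bg_node)
```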

### Table 1: Comparison between ADCM, K-means, Hierarchical clustering and Self-Organizing Maps

"... In PAGE 7: ... Some ADCM clusters merged or were partitioned when using K-means, and some cluster radii changed significantly. The comparison between the results of ADCM and K-means is listed in Table 1. Note that the average noise and radius of the ADCM result are smaller than those of K-means.... ..."

### Table 1. Algorithms for self-organizing modeling (see [19] for this classification). Column headings: Data Mining functions, Algorithm.

"... In PAGE 3: ... In a wider sense, the spectrum of self-organizing modeling contains regression-based methods, rule-based methods, symbolic modeling, and nonparametric model selection methods. Table 1 shows some data mining functions and the more appropriate SOM algorithms for addressing these functions (FRI: fuzzy rule induction using GMDH; AC: Analog Complexing).... ..."

### Table 1. Algorithms for self-organizing modeling

"... In PAGE 2: ... nonparametric models. Known nonparametric model selection methods include Analog Complexing (AC), which selects nonparametric prediction models from a given data set representing one or more patterns of a trajectory of past behavior that are analogous to a chosen reference pattern, and Objective Cluster Analysis (OCA). Table 1 shows some data mining functions and the more appropriate self-organizing modeling algorithms for addressing these functions.... ..."

### Table 5. Area and Critical Path Delay Overheads for Self-organizing-List-Based Encoder and Decoder. Columns: Area (library cells), Delay (ns).

2001

"... In PAGE 5: ... Each encoding scheme incurs some area and delay overhead. Table 5 compares the area (number of library cells) and the delay (ns) of encoders and decoders that are based on the MTF and TR techniques with those based on other techniques. The designs were synthesized using Synopsys Design Compiler on a 0.... ..."
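Move-to-front (MTF) is the canonical self-organizing-list scheme named in the excerpt: each symbol is coded by its current position in a list that is then reordered so recently seen symbols get small indices. The sketch below shows the generic MTF transform, not the paper's hardware bus encoder.

```python
def mtf_encode(symbols, alphabet):
    """Move-to-front encoding over a self-organizing list."""
    table = list(alphabet)
    out = []
    for s in symbols:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))  # self-organizing step: move to front
    return out

def mtf_decode(codes, alphabet):
    """Inverse transform: replay the same list reordering on the decoder side."""
    table = list(alphabet)
    out = []
    for i in codes:
        s = table[i]
        out.append(s)
        table.insert(0, table.pop(i))
    return out
```

Because encoder and decoder apply identical list updates, they stay in sync without transmitting the list itself, which is what makes the scheme attractive for low-overhead bus encoding.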

Cited by 11

### Table 1: Categories of models of visual cortical maps, and their abbreviations as used in this article. Two versions of the self-organizing map model were investigated: SOM-h (high-dimensional weight vectors) and SOM-l (low-dimensional feature vectors).

1995

"... In PAGE 8: ... Increasingly detailed comparisons between model and experimental data will be included along with each point. To ease comparisons, we group models into categories based on similarities in goals or implementation (Table 1). Structural and spectral models attempt to characterize map patterns using schematic drawings or concise equations.... In PAGE 26: ... The results of our comparison between model predictions and experimental data obtained from the upper layers of macaque striate cortex are summarized in Table 2. References to articles on each model are given in Table 1. Many of the models are also briefly described in the appendix.... ..."

Cited by 64

### Table 1: Mean square error for neuron weights and standard deviation for probability density estimates

A new integrally distributed self-organizing learning algorithm for the class of neural networks introduced by Kohonen [1] was presented. The algorithm converges to an equiprobable topology-preserving map for arbitrarily distributed input signals. It is shown that Kohonen's algorithm converges to a locally affine self-organizing map. Simulation results agree with theoretical predictions.

"... In PAGE 5: ... As expected, the results of the three algorithms are fairly similar for the case of a uniformly distributed input signal. Table 1 contains the mean square error for the neuron weights and the standard deviation of the probability density estimate vector, p̂, for both simulations. It should be noted that the improvement in performance comes at an increase in computational cost.... ..."
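The first quantity in the table, the mean square error of the neuron weights, can be sketched as a comparison against a reference map; the choice of reference (e.g. the equiprobable map the algorithm converges to) is an assumption here.

```python
import numpy as np

def weight_mse(weights, reference):
    """Mean square error between trained neuron weights and a reference
    map (the reference used here is an assumption, not the paper's)."""
    w = np.asarray(weights, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float(np.mean((w - r) ** 2))
```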