### Table 7: Kohonen self-organizing feature map of the iris data.

in Input Data Coding in Multivariate Data Analysis: Techniques and Practice in Correspondence Analysis

"... In PAGE 11: ..., 1997). Table 7 shows a Kohonen map of the original Fisher iris data. The user can trace a curve separating observation sequence numbers less than or equal to 50 (class 1), from 51 to 100 (class 2), and 101 and above (class 3).... In PAGE 11: ... The zero values indicate map nodes or units with no assigned observations. The map of Table 7, like that of Table 8, has 20 × 20 units. The number of epochs used in training was 133 in both cases.... In PAGE 11: ...ersion of the Fisher data. The user in this case can demarcate class 1 observations. Classes 2 and 3 are more confused, even if large contiguous "islands" can be found for both of these classes. The result shown in Table 8 is degraded compared to Table 7. It is not badly degraded, though, insofar as sub-classes of classes 2 and 3 are found in adjacent and contiguous areas.... ..."
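The kind of map the excerpt describes (a 20 × 20 grid trained for 133 epochs, with each observation assigned to its closest unit) can be sketched with a minimal generic Kohonen SOM. This is not the cited implementation; the learning-rate and neighborhood decay schedules, and the function names, are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(20, 20), epochs=133, lr0=0.5, sigma0=10.0, seed=0):
    """Train a minimal rectangular Kohonen SOM; returns weights of shape (rows, cols, dim)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    # Unit grid coordinates, used by the Gaussian neighborhood function.
    yy, xx = np.mgrid[0:rows, 0:cols]
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit: the unit whose weight vector is closest to the input.
            d = np.linalg.norm(w - x, axis=2)
            wi, wj = np.unravel_index(np.argmin(d), d.shape)
            # Linearly decay the learning rate and neighborhood radius over training.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = max(sigma0 * (1.0 - frac), 0.5)
            h = np.exp(-((yy - wi) ** 2 + (xx - wj) ** 2) / (2.0 * sigma ** 2))
            w += lr * h[:, :, None] * (x - w)
            step += 1
    return w

def bmu(w, x):
    """Map one observation to the (row, col) of its best-matching unit."""
    return np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=2)), w.shape[:2])
```

Running each of the 150 iris observations through `bmu` after training and writing its sequence number into the winning cell would reproduce the kind of class-region layout the table shows.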

### Table 1. Algorithms for self-organizing modeling

"... In PAGE 2: ... nonparametric models Known nonparametric model selection methods include Analog Complexing (AC), which selects nonparametric prediction models from a given data set representing one or more patterns of a trajectory of past behavior that are analogous to a chosen reference pattern, and Objective Cluster Analysis (OCA). Table 1 shows some data mining functions and the more appropriate self-organizing modeling algorithms for addressing these functions.... ..."

### Table 1. Comparison of Self-Organizing Map and Rigid Map. A distance metric between poses and a learning rate parameterize the update; the neighborhood function ensures that the nodes are updated in proportion to their distance from the current winner w.

### Table 1. Algorithms for self-organizing modeling (see [19] for this classification); columns: Data Mining function, Algorithm

"... In PAGE 3: ... In a wider sense, the spectrum of self-organizing modeling contains regression-based methods, rule-based methods, symbolic modeling, and nonparametric model selection methods. Table 1 shows some data mining functions and the more appropriate SOM algorithms for addressing these functions (FRI: fuzzy rule induction using GMDH; AC: Analog Complexing).... ..."

### Table 6. Robustness properties of distance functions used in self-organizing maps quantified in terms of the values of V (the results shown here are averaged over 100 experiments)

"... In PAGE 25: ... We regard it as a function of the standard deviation of the noise added to the data:

V(σ) = Σ_{k=1}^{N} |i(k) − i'(k)| + Σ_{k=1}^{N} |j(k) − j'(k)|

where (i'(k), j'(k)) is the map location of the distorted (noisy) version of the k-th data point. It becomes evident from Table 6 that the Hamming distance yields a high level of robustness: the values of V go up at a lower rate than they do for the two other distance functions. 5 Building information granules of software measures In the previous section, we showed that the clusters grown through the merging of similar entries of the map consist of a certain number of software modules and exhibit a certain diversity in the values of the software measures.... ..."
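The quantity V, as described here, sums how far each data point's winning-unit location (i(k), j(k)) drifts to (i'(k), j'(k)) once noise distorts the data. A minimal sketch of that computation (the function name is my own):

```python
import numpy as np

def robustness_V(loc_clean, loc_noisy):
    """V = sum over all data points of |i - i'| + |j - j'|, where each row of
    loc_clean / loc_noisy holds a point's (i, j) map location before / after
    noise is added. Smaller V (and slower growth of V with the noise level)
    means a more robust distance function."""
    loc_clean = np.asarray(loc_clean, dtype=float)
    loc_noisy = np.asarray(loc_noisy, dtype=float)
    return float(np.abs(loc_clean - loc_noisy).sum())
```

Averaging `robustness_V` over repeated noisy trials at each noise standard deviation gives the kind of per-distance-function curves the table summarizes.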

Cited by 1

### Table 1. Packet description for the Kohonen Self-Organizing Map implementation.

2002

"... In PAGE 3: ... After having read the sensor values, each unit compares those values with its internal values, stored in the randomly initialised prototype vectors, and calculates the Euclidean distance between both vectors. A packet is then created and broadcast across the network with the elements listed in Table 1. The timestamp is provided to eliminate outdated packets.... ..."
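The distance-then-broadcast step might be sketched as below. The excerpt does not list the actual packet fields from Table 1, so the layout here (node id, timestamp, distance) and the staleness threshold are hypothetical.

```python
import math
import struct
import time

# Hypothetical packet layout; the paper's Table 1 defines the real fields.
# node id (uint16), timestamp (float64), distance to prototype (float32).
PACKET_FMT = "!Hdf"

def make_packet(node_id, sensor, prototype, now=None):
    """Compute the Euclidean distance between the sensor reading and the unit's
    prototype vector, then pack it with a timestamp for broadcast."""
    dist = math.sqrt(sum((s - p) ** 2 for s, p in zip(sensor, prototype)))
    ts = time.time() if now is None else now
    return struct.pack(PACKET_FMT, node_id, ts, dist)

def parse_packet(data, max_age=1.0, now=None):
    """Unpack a packet; return None when the timestamp marks it as outdated."""
    node_id, ts, dist = struct.unpack(PACKET_FMT, data)
    t = time.time() if now is None else now
    if t - ts > max_age:
        return None  # stale packet, discarded
    return node_id, ts, dist
```

Fixed-size binary packing keeps the broadcast payload small, which matters on the resource-constrained units the paper targets.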

Cited by 9

### Table 1. Performance Benchmarks for the Self-Organizing Superimposition (SOS) and the SPE [9] Algorithms on the Four Molecules (See Fig. 2) Under Test.

"... In PAGE 5: ... These values serve as indicators of the quality of the generated conformations. The average values for these deviations over the 10,000 generated conformations, along with the computing times, are provided in Table 1 for the four molecules under test. One can see from the table that in all cases the SOS algorithm took less time and generated conformations of higher quality (smaller bond and angle deviations) than the SPE algorithm [9] did, indicating that the former achieved a faster convergence rate.... In PAGE 5: ... Some randomly chosen 3D conformations generated by the SOS algorithm are shown in Figure 2 and can be visually confirmed to have sensible geometries. Although Table 1 indicates that the generated conformations still exhibit some deviations with respect to the ideal (reference) geometric parameters, the deviations are not severe if one considers the consistency of these reference parameters across different sets of rules. For example, in some rule-based parameter sets widely used in conformational sampling programs [13], including SPE [9], all C27 bonds between two carbon atoms have a length of 1.... ..."

### Table 1: Mean square error for neuron weights and standard deviation for probability density estimates. 6 Conclusions A new integrally distributed self-organizing learning algorithm for the class of neural networks introduced by Kohonen [1] was presented. The algorithm converges to an equiprobable topology-preserving map for arbitrarily distributed input signals. It is shown that Kohonen's algorithm converges to a locally affine self-organizing map. Simulation results agree with theoretical predictions.

"... In PAGE 5: ... As expected, the results of the three algorithms are fairly similar for the case of a uniformly distributed input signal. Table 1 contains the mean square error for the neuron weights and the standard deviation of the probability density estimate vector, p̂, for both simulations. It should be noted that the improvement in performance comes at an increase in computational cost.... ..."
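The two quantities tabulated here can be computed straightforwardly; this is a generic sketch (function names and the hit-count reading of p̂ are my assumptions, not the paper's definitions).

```python
import numpy as np

def weight_mse(weights, ideal):
    """Mean square error between the learned neuron weights and their ideal
    (e.g. equiprobable-partition) positions."""
    return float(np.mean((np.asarray(weights) - np.asarray(ideal)) ** 2))

def density_std(hit_counts):
    """Standard deviation of the empirical density estimate p_hat, taken here
    as the fraction of input samples won by each neuron. A value of zero
    corresponds to a perfectly equiprobable map."""
    p_hat = np.asarray(hit_counts, dtype=float)
    p_hat /= p_hat.sum()
    return float(np.std(p_hat))
```

Lower values of both metrics correspond to the better-converged maps the comparison reports.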

### Table 1: Comparison between ADCM, K-means, Hierarchical clustering and Self-Organizing Maps

"... In PAGE 7: ... Some clusters of ADCM were merged or partitioned when using K-means, and some cluster radii changed significantly. A comparison between the results of ADCM and K-means is listed in Table 1. Note that the average noise and radius of the ADCM result are smaller than those of K-means.... ..."

### Table 4: Absolute IC difference before self-organization between sequential (SEQ) and random (RND) initialization.

"... In PAGE 7: ...3.1 Results and Discussion Table 4 distinctly shows an improvement in absolute interference cost (IC) reduction across the wireless mesh region for different node densities. This improvement is obtained by using the proposed random initialization algorithm in comparison to the sequential algorithm, which translates to an improvement in the overall capacity.... ..."