### Table 4 Cluster accuracy and stability on yeast galactose data

2003

"... In PAGE 8: ... It is interesting that the spherical model of the IMM approach produces unstable clusters at both high and low noise levels. Yeast galactose data Table4 a,b show selected results on cluster accuracy and cluster stability on real yeast galactose data. The true mean column in Table 4a refers to clustering the true mean data R34.... In PAGE 8: ... Yeast galactose data Table 4a,b show selected results on cluster accuracy and cluster stability on real yeast galactose data. The true mean column in Table4 a refers to clustering the true mean data R34.8 Genome Biology 2003, Volume 4, Issue 5, Article R34 Yeung et al.... In PAGE 9: ... The highest level of cluster accuracy (adjusted Rand index = 0.968 in Table4 a) was obtained with several algorithms: centroid linkage hierarchical algorithm with Euclidean dis- tance and averaging over the repeated measurements; hier- archical model-based algorithm (MCLUST-HC); complete linkage hierarchical algorithm with SD-weighted distance; and IMM with complete linkage. Clustering with repeated measurements produced more accurate clusters than clus- tering with the estimated true mean data in most cases.... In PAGE 9: ... Clustering with repeated measurements produced more accurate clusters than clus- tering with the estimated true mean data in most cases. Table4 b shows that different clustering approaches lead to different cluster stability with respect to remeasured data. Similar to the results from the completely synthetic data, Euclidean distance tends to produce more stable clusters than correlation (both variability-weighted and average over repeated measurements).... ..."

### Table 2 Cluster accuracy and stability on the completely synthetic data with four repeated measurements at low noise level

2003

"... In PAGE 6: ... The external knowledge is not used in computing cluster stability. Completely synthetic data at low noise level Table2 a,b shows selected results on cluster accuracy and cluster stability on the completely synthetic datasets with four simulated repeated measurements. Table 2a,b show results from average linkage, complete linkage and centroid linkage hierarchical algorithms, k-means, MCLUST-HC (a hierarchical model-based clustering algorithm from MCLUST) and IMM.... In PAGE 6: ... Completely synthetic data at low noise level Table 2a,b shows selected results on cluster accuracy and cluster stability on the completely synthetic datasets with four simulated repeated measurements. Table2 a,b show results from average linkage, complete linkage and centroid linkage hierarchical algorithms, k-means, MCLUST-HC (a hierarchical model-based clustering algorithm from MCLUST) and IMM. Both single linkage and DIANA produce very low-quality and unstable clusters and their adjusted Rand indices are not shown.... ..."

### Table 1 Clustering at the first level

2000

"... In PAGE 11: ... The optimal cluster number is achieved automatically using the introduced method. The clustering results at the first level are shown in Table1 , based upon the calculation of cluster concentration levels, level decrease measures, and relative decrease measures considering different cluster numbers using Eqs. (8) and (9).... In PAGE 11: ... (8) and (9). From Table1 , the optimal cluster number with the minimum value of relative decrease measure is iden- tified as 3. So, the delivery tasks are classified into three clusters at the first level, namely C1, C2, and C3.... ..."

### Table 2. Accuracy (NMI) scores for base and ensemble clustering methods.

"... In PAGE 9: ... We suggest that the applicable of a diago- nal dominance reduction technique, which limits the influence of self-similarity, contributes to this improvement. In addition, the results in Table2 show that in several cases correspondence clustering after prototype reduction (COR-RED) performed better than clustering on the full kernel matrix. We suggest that the use of neighbourhood centroids as prototypes allows the production of a robust partition that may not be easily obtained by clustering on the full dataset using... In PAGE 10: ...able 2. Accuracy (NMI) scores for base and ensemble clustering methods. ing phase allows this partition to be refined to produce a more accurate final solution. Both KM and AA exhibited considerable instability due to the sensitivity of these algorithms to the choice of initial clusters, which is reflected in the high deviation scores in Table2 . In contrast, the ensemble methods tend to be far more robust, frequently producing identical or highly similar partitions.... ..."

### Table 1. New cluster distances.


"... In PAGE 1: ...ean proper motion and parallax values, i.e. when the eld H59 of the Hipparcos Catalogue was equal to G, O, V or X (see ESA 1997). The number of stars selected in each cluster is given in Table1 . It varies between 6 and 24.... In PAGE 2: ... From these mean parallaxes , and associated standard errors, the distance and distance moduli are also indicated, together with a 1 variation. In the right part of Table1 , the dis- tance moduli, colour excesses, and ages, quoted by Lyng a (1987), and metallicities from Lyng a (1987), Piatti et al. (1995) or Claria amp; Piatti (1996) are also indicated.... In PAGE 2: ... The higher part of the Figure 2 shows the superposition of the 5 cluster sequences in the (MV ; (B ? V )0) diagram. The lower part repro- duces the sequences of Praesepe and NGC 2516 with the error bars on absolute magnitudes derived from Hipparcos data (see Table1 ). The cluster sequences... ..."

### Table 1. New cluster distances

"... In PAGE 1: ...ean proper motion and parallax values, i.e. when the eld H59 of the Hipparcos Catalogue was equal to G, O, V or X (see ESA 1997). The number of stars selected in each cluster is given in Table1 . It varies between 6 and 24.... In PAGE 2: ... From these mean parallaxes , and associated standard errors, the distance and distance moduli are also indicated, together with a 1 variation. In the right part of Table1 , the dis- tance moduli, colour excesses, and ages, quoted by Lyng a (1987), and metallicities from Lyng a (1987), Piatti et al. (1995) or Claria amp; Piatti (1996) are also indicated.... In PAGE 3: ... The higher part of the Figure 2 shows the superposition of the 5 cluster sequences in the (MV ; (B ? V )0) diagram. The lower part reproduces the sequences of Praesepe and NGC 2516 with the error bars on absolute magnitudes derived from Hip- parcos data (see Table1 ). The cluster sequences sep- arate into two groups: Praesepe and NGC 6475 se- quences are about 0.... ..."

### Table 5. Performances of the ensembles generated by the design method based on classifier clustering. Different diversity measures were used as evaluation functions to guide the search. The evaluation function used is indicated within brackets. The sizes of the selected ensembles are reported.

"... In PAGE 21: ... Table5 shows results obtained by our method based on classi er clus- tering (Section 4.4).... ..."

### Table 3: The new memberships after clusters ω2 and ω4 are selected.

"... In PAGE 13: ...o the ones used during the irregular hyper-tetrahedron transformation (Section 3.3). Consider the membership matrix in Table 2. For example, if we want to visualize only a subset of clusters, !2 and !4, the membership values (in bold) in Table 2 are flrst extracted, obtaining Table3 . Table 4 is obtained after the normalization is applied to Table 3.... ..."

### Table 1: Clustering ensemble and consensus solution

"... In PAGE 7: ... Correspondence problem is emphasized by different label systems used by the partitions. Table1 shows the expected values of latent variables after 6 iterations of the EM algorithm and the resulting consensus clustering. In fact, stable combination appears even after the third iteration, and it corresponds to the true underlying structure of the data.... In PAGE 10: ... Figure 3 shows the error as a function of k for different consensus functions for the galaxy data. It is also interesting to note that, as expected, the average error of consensus clustering was lower than average error of the k-means clusterings in the ensemble ( Table1 ) when k is chosen to be equal to the true number of clusters. Moreover, the clustering error obtained by EM and MCLA algorithms with k=4 for Biochemistry data was the same as found by the advanced supervised classifiers applied to this dataset [28].... ..."

### Table 1: Clustering ensemble and consensus solution

2005

"... In PAGE 10: ... Correspondence problem is emphasized by different label systems used by the partitions. Table1 shows the expected values of latent variables after 6 iterations of the EM algorithm and the resulting consensus clustering. In fact, a stable combination appears as early as the third iteration, and it corresponds to the true underlying structure of the data.... ..."

Cited by 8