### Table 2: Average KL divergence between topics for TOT vs. LDA on three datasets. TOT finds more distinct topics.

"... In PAGE 8: ... Distances between topics can also be measured numerically. Table 2 shows the average distance of word distributions between all pairs of topics, as measured by KL divergence. In all three datasets, the TOT topics are more distinct from each other.... ..."
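The pairwise-distance computation described in this snippet can be sketched as follows. This is a toy illustration, not the paper's code or data: the topic distributions and the smoothing constant `eps` (used to keep KL finite) are assumptions.

```python
import numpy as np

# Sketch of the distinctness measure described above (toy data, not the
# paper's topics): average KL divergence over all ordered pairs of
# topic-word distributions. A higher average indicates more distinct topics.

def kl(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, smoothed to avoid log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def avg_pairwise_kl(topics):
    """Mean KL divergence over all ordered pairs of topic distributions."""
    k = len(topics)
    total = sum(kl(topics[i], topics[j])
                for i in range(k) for j in range(k) if i != j)
    return total / (k * (k - 1))

# Three toy "topics" over a 4-word vocabulary.
topics = [[0.7, 0.1, 0.1, 0.1],
          [0.1, 0.7, 0.1, 0.1],
          [0.25, 0.25, 0.25, 0.25]]
print(avg_pairwise_kl(topics))
```

Averaging over ordered pairs sidesteps KL's asymmetry without committing to a symmetrized variant.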

### Table 2 Cluster accuracy and stability on the completely synthetic data with four repeated measurements at low noise level

2003

"... In PAGE 6: ... The external knowledge is not used in computing cluster stability. Completely synthetic data at low noise level Table 2a,b show selected results on cluster accuracy and cluster stability on the completely synthetic datasets with four simulated repeated measurements. Table 2a,b show results from average linkage, complete linkage and centroid linkage hierarchical algorithms, k-means, MCLUST-HC (a hierarchical model-based clustering algorithm from MCLUST) and IMM. Both single linkage and DIANA produce very low-quality and unstable clusters and their adjusted Rand indices are not shown.... ..."
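The cluster-accuracy score used throughout these snippets, the adjusted Rand index, can be computed from the contingency table of the two partitions with the standard Hubert–Arabie formula. This is a stdlib-only sketch, not code from the paper:

```python
from collections import Counter
from math import comb

# Adjusted Rand index: 1.0 for a perfect match with the reference
# partition, approximately 0 for chance-level agreement.

def adjusted_rand_index(labels_true, labels_pred):
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:          # degenerate partitions
        return 1.0
    return (sum_ij - expected) / (max_index - expected)

print(adjusted_rand_index([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0 (invariant to label names)
```

The chance correction is what makes the index comparable across different numbers of clusters, which matters when ranking the algorithms in Table 2.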

### Table 4 Cluster accuracy and stability on yeast galactose data

2003

"... In PAGE 8: ... It is interesting that the spherical model of the IMM approach produces unstable clusters at both high and low noise levels. Yeast galactose data Table 4a,b show selected results on cluster accuracy and cluster stability on real yeast galactose data. The true mean column in Table 4a refers to clustering the true mean data.... In PAGE 9: ... The highest level of cluster accuracy (adjusted Rand index = 0.968 in Table 4a) was obtained with several algorithms: centroid linkage hierarchical algorithm with Euclidean distance and averaging over the repeated measurements; hierarchical model-based algorithm (MCLUST-HC); complete linkage hierarchical algorithm with SD-weighted distance; and IMM with complete linkage. Clustering with repeated measurements produced more accurate clusters than clustering with the estimated true mean data in most cases. Table 4b shows that different clustering approaches lead to different cluster stability with respect to remeasured data. Similar to the results from the completely synthetic data, Euclidean distance tends to produce more stable clusters than correlation (both variability-weighted and average over repeated measurements).... ..."

### Table 2. Stabilization traffic (bytes per second) predicted by analytical model.

2005

"... In PAGE 12: ... Based on this analytical model, we computed the membership traffic for networks ranging from 10 to 10,000 nodes, where the replication factor is 2. These values are shown in Table 2. To validate our analytical model, we compared the calculated stabilization traffic with the traffic we measured in our 16-node cluster (shown in Figure 8 of the previous section).... ..."

### Table 12. Validation of the instrument with the stability measurement

2003

"... In PAGE 24: ... occasions, then the results are correlated. High positive correlation indicates good reliability. The data obtained from these evaluations were analyzed using the coefficient of correlation. A positive correlation was obtained in the two products (Table 12), which indicates that the Evaluator factor did not lead to different evaluations for the results of the product evaluation. Having ascertained the validity of the data obtained by the evaluation, each of the products was analyzed.... ..."

Cited by 7

### Table 1: The stability of the diametrical clustering algorithm is indicated by the low standard deviation of the total squared correlation coefficient for clusters produced by Phase I of the algorithm.

"... In PAGE 5: ... Stability: To measure the stability of the algorithm we computed the standard deviation of the total squared correlation coefficient (see (3)) over 20 runs. Table 1 shows that the standard deviation of the squared correlation coefficient is small compared to its mean value and hence our algorithm is quite stable. The standard deviations of HAve and SAve (defined later) values (not shown here) are also small on all... ..."
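The stability check described in this snippet, running a randomized algorithm repeatedly and comparing the standard deviation of its objective to the mean, can be sketched as follows. The diametrical clustering objective is replaced here by a toy randomly initialized 2-means objective; the data and all parameters are assumptions for illustration.

```python
import numpy as np

# Stability as std-vs-mean of an objective over repeated randomized runs.
# A small std relative to the mean indicates a stable algorithm.

def kmeans_objective(X, k, rng, n_iter=20):
    """One randomly initialized Lloyd-style run; returns final within-cluster SSE."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs.
X = np.concatenate([rng.normal(0, 0.3, (50, 2)),
                    rng.normal(10, 0.3, (50, 2))])
runs = [kmeans_objective(X, k=2, rng=rng) for _ in range(20)]
print(np.mean(runs), np.std(runs))  # std should be small relative to the mean
```

Each run differs only in its random initialization, so the spread of the 20 objective values isolates the algorithm's sensitivity to initialization, which is exactly what the table's standard deviations report for diametrical clustering.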

### Table 8: Stability radii.

2000

"... In PAGE 23: ... This measures the distance from the given system to the nearest unstable system, and as we shall see, is not simply the stability margin. In Table 8, we order the systems from largest stability margin to lowest, and then include the stability radius in the last column. In general, stability radius decreases with stability margin.... ..."
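The snippet's distinction between stability margin and stability radius can be illustrated numerically. For a Hurwitz-stable matrix A, the margin is the distance of the rightmost eigenvalue from the imaginary axis, while the (unstructured, complex) stability radius is min over real w of the smallest singular value of A - iwI. This grid-search sketch and its non-normal example matrix are illustrative assumptions, not the paper's systems:

```python
import numpy as np

# For non-normal systems the margin and the radius can differ by orders
# of magnitude, which is why the table reports both.

def stability_margin(A):
    """Distance of the rightmost eigenvalue from the imaginary axis."""
    return -np.max(np.linalg.eigvals(A).real)

def stability_radius(A, w_max=50.0, n_grid=2001):
    """Grid-search approximation of min_w sigma_min(A - i*w*I)."""
    n = A.shape[0]
    best = np.inf
    for w in np.linspace(-w_max, w_max, n_grid):
        s = np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False)
        best = min(best, s[-1])
    return best

# A non-normal stable matrix: large margin, tiny radius.
A = np.array([[-1.0, 100.0],
              [0.0, -2.0]])
print(stability_margin(A))  # 1.0
print(stability_radius(A))  # far smaller than the margin
```

Here a perturbation of norm roughly 0.02 suffices to destabilize A even though its eigenvalues sit a full unit into the left half-plane, matching the snippet's point that the radius "is not simply the stability margin".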

Cited by 7

### Table 3. Stability measures (prec. 0.01) of the 5-partition, and their p-values (%).

"... In PAGE 6: ... These values indicate that the four clusters are stable: all the stability measures are high and assessed as being significant under H0 by low p-values. Each stability measure in Table 3 was computed with a precision of at least 0.01, which required running the batch K-means method on N = 140 samples of the artificial data set.... In PAGE 6: ... presence of an outlier between these two clusters (see Fig. 1). The partition into 4 clusters is also identified as optimal by the index CH, but the index KL suggests that k = 6 is the optimal number of clusters. Table 3 presents the stability measures of the 5-partition. Note that with a p-value that is less than 4.... ..."

### Table 1 Confusion matrices for best clustering solutions, k=5. This shows a clear tendency for similar documents from the same topic to cluster together.


2005

"... In PAGE 21: ... the second. NMI ranges for ICA and LSA overlap, indicating no clear winner. As an additional validation measure, looking at confusion matrices and most frequent terms in each cluster can give a good sense of what a cluster is about. Table 1 shows the confusion matrices for the best clusterings at k equal to 5 for both the WEB-JC and WEB-JV collections, and Table 2 shows the ten most frequent terms in each of those clusters. From these tables, clear associations can be seen between the dominant topic in most clusters and the frequent terms for that cluster.... ..."
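The confusion-matrix validation described here cross-tabulates cluster assignments against known topic labels. This stdlib-only sketch uses toy labels, not the WEB-JC or WEB-JV collections:

```python
from collections import Counter

# Rows = topic labels, columns = cluster ids, cells = document counts.
# A strong block structure (most of each row's mass in one column)
# indicates that documents from the same topic cluster together.

def confusion_matrix(topics, clusters):
    topic_ids = sorted(set(topics))
    cluster_ids = sorted(set(clusters))
    counts = Counter(zip(topics, clusters))
    return [[counts[(t, c)] for c in cluster_ids] for t in topic_ids]

topics   = ["sports", "sports", "sports", "news", "news", "tech"]
clusters = [0, 0, 1, 1, 1, 2]
for row in confusion_matrix(topics, clusters):
    print(row)
```

Pairing such a matrix with each cluster's most frequent terms, as the snippet describes, lets a reader name the dominant topic of each cluster by eye.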