### Table 1: Clustering Criterion Functions.

2002

"... In PAGE 3: ... For those partitional clustering algorithms, the clustering problem can be stated as computing a clustering solution such that the value of a particular criterion function is optimized. In this paper we use six different clustering criterion functions that are defined in Table 1 and were recently compared and analyzed in a study presented in [45]. These functions optimize various aspects of intra-cluster similarity, inter-cluster dissimilarity, and their combinations, and represent some of the most widely-used criterion functions for document clustering.... ..."

Cited by 69
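The snippet above refers to criterion functions that optimize intra-cluster similarity, but the listing does not reproduce their definitions. As an illustration only (a hedged sketch of one classic internal criterion, not necessarily one of the six used in the cited paper), the following sums, over all clusters, the size-normalized pairwise cosine similarity of the documents in each cluster:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def internal_criterion(docs, labels):
    """Illustrative internal criterion: for each cluster S_r of size n_r,
    add (1/n_r) * sum of pairwise cosine similarities over S_r x S_r.
    A clustering that maximizes this value has tight, homogeneous clusters."""
    clusters = {}
    for vec, lab in zip(docs, labels):
        clusters.setdefault(lab, []).append(vec)
    total = 0.0
    for members in clusters.values():
        n = len(members)
        pairwise = sum(cosine(u, v) for u in members for v in members)
        total += pairwise / n
    return total
```

A partitional algorithm would then search over cluster assignments for the one that maximizes (or, for dissimilarity-based criteria, minimizes) such a function.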

### Table 2. Fly functional clusters

in The Computational Analysis of Scientific Literature to Define and Recognize Gene Expression Clusters

2003

"... In PAGE 4: ... references and a mean of 29.8 references. Defining cluster boundaries by maximizing NDPG weighted average selects 525 non-overlapping nodes as the final clusters. Many of the defined clusters correspond to well-defined biological functions such as photoreceptor genes, protein degradation, protein synthesis, muscle function, citric acid cycle and proton transport (Table 2). Some of these... ..."

### Table 2: Functional Clustering Algorithms

"... In PAGE 35: ...5 Architectural features The main features of FOSART are summarized in Table 2. 14 Conclusions Comparison of clustering models, synthesized in Table 2, shows that SOM, GNG and FOSART develop a soft competitive adaptation strategy, i.e.... ..."

### Table 2 Query-cluster similarity functions

2003

"... In PAGE 7: ... To calculate the query-cluster similarity, we define the distance between the query image and clusters. We use four methods to compute the distance, as shown in Table 2: distance to a... ..."
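The excerpt is truncated before naming the four distance methods, so only the general idea is recoverable. As one hedged example of a query-cluster distance (an assumption for illustration, not necessarily one of the cited paper's four methods), the Euclidean distance from a query vector to a cluster's centroid can be computed as:

```python
import math

def centroid(points):
    """Component-wise mean of a non-empty list of equal-length vectors."""
    n = len(points)
    dim = len(points[0])
    return [sum(p[i] for p in points) / n for i in range(dim)]

def query_cluster_distance(query, cluster_points):
    """One possible query-cluster measure: Euclidean distance from the
    query vector to the cluster centroid (a sketch, not the paper's method)."""
    c = centroid(cluster_points)
    return math.sqrt(sum((q - ci) ** 2 for q, ci in zip(query, c)))
```

Other common variants replace the centroid with the nearest or farthest cluster member, or average the query's distance over all members.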

### Table IV: The Functional Classes of Cluster #20 (columns: Functional Class, # of Participating Proteins)

### Table 4 Cluster accuracy and stability on yeast galactose data

in Genome Biology 2003, Volume 4, Issue 5, Article R34 (Yeung et al.)

2003

"... In PAGE 8: ... It is interesting that the spherical model of the IMM approach produces unstable clusters at both high and low noise levels. Yeast galactose data Table 4a,b show selected results on cluster accuracy and cluster stability on real yeast galactose data. The true mean column in Table 4a refers to clustering the true mean data.... In PAGE 9: ... The highest level of cluster accuracy (adjusted Rand index = 0.968 in Table 4a) was obtained with several algorithms: centroid linkage hierarchical algorithm with Euclidean distance and averaging over the repeated measurements; hierarchical model-based algorithm (MCLUST-HC); complete linkage hierarchical algorithm with SD-weighted distance; and IMM with complete linkage. Clustering with repeated measurements produced more accurate clusters than clustering with the estimated true mean data in most cases.... In PAGE 9: ... Clustering with repeated measurements produced more accurate clusters than clustering with the estimated true mean data in most cases. Table 4b shows that different clustering approaches lead to different cluster stability with respect to remeasured data. Similar to the results from the completely synthetic data, Euclidean distance tends to produce more stable clusters than correlation (both variability-weighted and average over repeated measurements).... ..."
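The cluster-accuracy figures quoted above (e.g. 0.968) are adjusted Rand index values, which compare a computed partition against a reference partition, corrected for chance agreement. A self-contained sketch of the standard adjusted Rand index computation (the general formula, not code from the cited paper):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand index between two labelings of the same items:
    1.0 for identical partitions (up to label permutation),
    ~0.0 for agreement expected by chance."""
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    row_sums = Counter(labels_true)
    col_sums = Counter(labels_pred)
    # Sum of C(n_ij, 2) over the contingency table, and marginal sums.
    index = sum(comb(c, 2) for c in contingency.values())
    sum_rows = sum(comb(c, 2) for c in row_sums.values())
    sum_cols = sum(comb(c, 2) for c in col_sums.values())
    expected = sum_rows * sum_cols / comb(n, 2)
    max_index = (sum_rows + sum_cols) / 2
    if max_index == expected:  # degenerate case: single cluster or all singletons
        return 1.0
    return (index - expected) / (max_index - expected)
```

Because the index is permutation-invariant, relabeling the clusters of either partition does not change its value, which is what makes it suitable for comparing independent clustering runs.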

### Table 1: Number of clusters in some benchmark functions.

"... In PAGE 4: ... We have implemented a classical algorithm for computing the transitive closure [6] and tried it on benchmark functions to see how many disjoint clusters we obtain for different values of k. Table 1 shows the results. Columns 2 and 3 give the number of cubes in the covers for the on- and off-set of the benchmark functions, respectively, computed by the two-level AND-OR minimizer Espresso [7].... In PAGE 4: ... If clustering is added as a preprocessing step (with distance k = 0), then the objects are equivalence classes of the on-set of the function, determined by the algorithm for computing the transitive closure. As shown in Table 1, clustering greatly reduces the number of objects to consider. In the next section we demonstrate that this leads to a considerable reduction in the run-time of the AOXMIN-MV algorithm.... ..."
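The excerpt describes grouping objects into equivalence classes under the transitive closure of a pairwise relation. The cited paper's algorithm [6] is not reproduced here; one common way to sketch the same idea is with union-find, where directly related pairs are merged and transitivity follows from the shared roots (the `related` predicate below is a placeholder for a domain-specific test such as cube distance ≤ k):

```python
def transitive_closure_clusters(items, related):
    """Partition `items` into equivalence classes under the transitive
    closure of the symmetric predicate `related(a, b)`.
    A union-find sketch, not the algorithm of the cited paper."""
    parent = list(range(len(items)))

    def find(i):
        # Find the root of i with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if related(items[i], items[j]):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri  # merge the two classes

    clusters = {}
    for i, item in enumerate(items):
        clusters.setdefault(find(i), []).append(item)
    return list(clusters.values())
```

For example, with integers and the relation "differ by at most 1", the items 1, 2, 3 end up in one class (1 and 3 are linked only transitively through 2), while 10 and 11 form another.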