### Table 5. Comparison study of subspace SDP with preference and subspace extended eigenvalue algorithm.

### Table 4: Comparison with model-based clustering using the EM algorithm.

1997

Cited by 17

### Table 1. Clustering matrix for 6 cities with the EM algorithm

### TABLE II Appropriate numbers of clusters obtained by the original and the extended RPCL algorithms

### Table 3: Comparison of three clustering algorithms: the hybrid approach (Hybrid), HAC, and EM with random initialization averaged over 5 runs (EM5)

2005

"... In PAGE 5: ... Because EM5 and HAC require the number of clusters k as an input parame- ter, for both of them we have used the number of clusters k detected by the corresponding instance of the hybrid system. Table3 lists the results for the three algorithms and the ve test collections. The hybrid approach clearly outper- forms the other two algorithms in all the test collections.... ..."

Cited by 6

### Table 2: Clustering results for different clustering methods. Clustering accuracy is the evaluation metric for datasets L5 and M5; normalized mutual information is used for Pendigit and Ribosome. Methods compared: Soft Cut, Normalized Cut (Kmeans), Normalized Cut (EM)

2005

"... In PAGE 6: ... 4.2 Experiment (I): Effectiveness of The Soft Cut Algorithm The clustering results of both the soft cut algorithm and the normalized cut algorithm are summarized in Table2 . In addition to the Kmeans algorithm, we also apply the EM clus- tering algorithm to the normalized cut algorithm.... In PAGE 7: ... This can be explained by the fact that the Kmeans algorithm uses binary cluster membership and therefore is likely to be trapped by local optimums. As indicated in Table2 , if we replace the Kmeans algoirthm with the EM algorithm in the normalized cut algorithm, the variance in clustering results is generally reduced but at the price of degradation in the performance of clustering. Based on the above observation, we conclude that the soft cut algorithm appears to be effective and robust for multi-class clustering.... In PAGE 7: ... This result indicates that the choice of numbers of eigenvectors can have a significant impact on the performance of clustering. Second, comparing the results in Table 3 to the results in Table2 , we see that the soft cut algorithm is still able to outperform the normalized cut algorithm even with the optimal number of eigenvectors. In general, since spectral clustering is originally designed for binary-class classification, it requires an extra step when it is extended to multi-class clustering problems.... ..."

Cited by 5
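The excerpt above attributes Kmeans' tendency to get trapped in local optima to its binary cluster membership, in contrast to EM's soft membership. A minimal stdlib sketch of that distinction (my own illustration, not the paper's soft cut algorithm; the 1-D toy data and function names are assumptions):

```python
import math

def gaussian_pdf(x, mu, var):
    """Density of a 1-D Gaussian with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def hard_assignment(x, means):
    """Kmeans-style binary membership: the nearest mean wins outright."""
    return min(range(len(means)), key=lambda k: abs(x - means[k]))

def soft_assignment(x, means, variances, weights):
    """EM-style responsibilities: every cluster receives a probability."""
    joint = [w * gaussian_pdf(x, m, v)
             for w, m, v in zip(weights, means, variances)]
    total = sum(joint)
    return [j / total for j in joint]

means, variances, weights = [0.0, 4.0], [1.0, 1.0], [0.5, 0.5]
x = 2.0  # a point exactly midway between the two cluster centres

print(hard_assignment(x, means))                       # commits to one cluster
print(soft_assignment(x, means, variances, weights))   # hedges: close to [0.5, 0.5]
```

The ambiguous midpoint is where the two behaviours diverge: the hard rule commits fully to one side (and such early commitments are what lets Kmeans get stuck), whereas the soft rule keeps both options open and lets later iterations resolve the ambiguity.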

### Table 1. Clusters in the three datasets are identified by Expectation Maximization (EM).

"... In PAGE 4: ... We let the algorithm to determine the optimal number of clusters. The descriptions of three datasets are summarized in Table1 . The size of a dataset means the number of citing records.... In PAGE 4: ...ecords. The network size is the number of article nodes in a corresponding co-citation network. In this article, all networks refer to merged Pathfinder networks. Table1 . Three datasets tested.... In PAGE 5: ....1.1. Social Network Analysis (1992-2004) Details of identified clusters are summarized in Table1 . Clusters are numbered as 0, 1, 2, and so on.... ..."


### Table 8: Clustering error rate of EM algorithm as a function of the number of missing labels for the large datasets

2005

"... In PAGE 26: ... The number of missing labels in each partition was varied between 10% to 50% of the total number of patterns. The main results averaged over 10 independent runs are reported in Table8 for Galaxy and Biochemistry datasets for various values of H and k. Also, a typical dependence of error on the number of patterns with missing data is shown for Iris data on Figure 6 (H=5, k=3).... ..."

Cited by 8
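Table 8's experiment runs EM when only some labels are missing. A common way to handle this (a sketch under that assumption, not necessarily the paper's exact scheme; the toy data and helper names are mine) is to clamp the E-step for labeled points to a one-hot responsibility and compute posteriors only for the unlabeled ones:

```python
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def e_step(points, labels, means, variances, weights):
    """E-step with partial supervision: a known label fixes the point's
    responsibility to one-hot; label None gets the usual posterior."""
    resp = []
    for x, y in zip(points, labels):
        if y is not None:
            r = [1.0 if k == y else 0.0 for k in range(len(means))]
        else:
            joint = [w * gauss(x, m, v)
                     for w, m, v in zip(weights, means, variances)]
            s = sum(joint)
            r = [j / s for j in joint]
        resp.append(r)
    return resp

def m_step(points, resp):
    """Re-estimate means and weights from the (possibly clamped) responsibilities."""
    k = len(resp[0])
    n_k = [sum(r[j] for r in resp) for j in range(k)]
    means = [sum(r[j] * x for r, x in zip(resp, points)) / n_k[j]
             for j in range(k)]
    weights = [n / len(points) for n in n_k]
    return means, weights

points = [-1.2, -0.8, 0.9, 1.1]
labels = [0, None, None, 1]          # 50% of the labels missing
means, variances, weights = [-1.0, 1.0], [1.0, 1.0], [0.5, 0.5]

resp = e_step(points, labels, means, variances, weights)
means, weights = m_step(points, resp)
```

The labeled points anchor the components during the M-step, which is why, as in Table 8, the error rate typically grows as the fraction of missing labels increases.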

### Table 1: An EM algorithm for MoG-based ICA

"... In PAGE 5: ... The EM algorithm for the mixture model in (7) is essentially a simple clustering algorithm with a complexity that grows linearly with respect to the number of sources. It can be implemented in exactly the same manner as the full MoG model ( Table1 ) with a restricted set of allowable indices. However, given the simplicity of (7), we are also able to make a further algorithmic improvement that speeds up convergence using an extension of EM called alternating expectation conditional maximisation (AECM) [22].... ..."