### Table 1. Results for Feasibility Problems for a Given k (partitional clustering) and Unspecified k (hierarchical clustering)

2005

"... In PAGE 2: ... Our recent work [4] explored the computational complexity (difficulty) of the feasibility problem: Given a value of k, does there exist at least one clustering solution that satisfies all the constraints and has k clusters? Though it is easy to see that there is no feasible solution for the three cannot-link constraints CL(a,b), CL(b,c), CL(a,c) for k < 3, the general feasibility problem for cannot-link constraints is NP-complete by a reduction from the graph coloring problem. The complexity results of that work, shown in Table 1 (2nd column), are important for data mining because when problems are shown to be intractable in the worst case, we should avoid them or should not expect to find an exact solution efficiently. We begin this paper by exploring the feasibility of agglomerative hierarchical clustering under the four above-mentioned instance- and cluster-level constraints.... ..."

Cited by 6
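The feasibility question in the excerpt above is equivalent to asking whether the cannot-link graph is k-colorable. A minimal brute-force sketch of that check (a hypothetical helper for illustration, not the authors' algorithm):

```python
from itertools import product

def cl_feasible(points, cannot_links, k):
    # Try every assignment of points to k clusters; the instance is
    # feasible iff some assignment separates every cannot-link pair.
    # This is exactly k-colorability of the cannot-link graph, which is
    # why the general problem is NP-complete.
    for assignment in product(range(k), repeat=len(points)):
        label = dict(zip(points, assignment))
        if all(label[a] != label[b] for a, b in cannot_links):
            return True
    return False

# The triangle CL(a,b), CL(b,c), CL(a,c) from the excerpt: no feasible
# clustering exists for k < 3.
cl = [("a", "b"), ("b", "c"), ("a", "c")]
print(cl_feasible(["a", "b", "c"], cl, 2))  # False
print(cl_feasible(["a", "b", "c"], cl, 3))  # True
```

The exhaustive search is exponential in the number of points, consistent with the hardness result quoted above; it is only meant to make the reduction concrete.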

### Table 6. Incorporating the Recency Weights (with a maximum weight of ten)

Cited by 1

### Table 11: Result for D1 When Maximum Weights Are Estimated.

"... In PAGE 26: ... Among 6,234 queries used in our experiments, 1,941 are single-term queries. Table 11 shows the experimental results for database D1 when the maximum normalized weights are not explicitly obtained. (Instead, it is assumed that for each term, the normalized weights of the term in the set of documents containing the term satisfy a normal distribution, and therefore the maximum normalized weight is estimated to be the 99.9 percentile based on its average weight and its standard deviation.) Comparing the results in Table 2 and those in Table 11, it is clear that the use of maximum normalized weights can indeed improve the estimation accuracy substantially. Nevertheless, even when estimated maximum normalized weights are used, the results based on the subrange-based approach are still much better than those based on the high-correlation assumption and those obtained by our previous method [18]. ..."

### Table XIX: Prediction result on SPX for the adjusted interaction approach (interaction cutoff set at 0.8 for the result without maximum weighted graph matching)

### Table 3: For each row, G has the (k; ; )-partition on the left if and only if G or its connected components belong to the class on the right.

1998

"... In PAGE 12: ... The problems with cutoff value 1 either have the property that a trivial partition with V1 = V and Vi = ∅, i = 2, ..., k is a k-partition (see Fact 4), or they have = {0} and are easy by Fact 5. For the problems with cutoff value 3 or 4, the properties that must be checked to decide whether a graph has a partition of size smaller than the cutoff value are summarized in Table 3. We explain the first two of these problems and their solutions further, whereas we leave the rest to be studied in the same way by the reader.... ..."

Cited by 17

### Table 5. The running time of QOMA (in seconds) using the complete K-partite graph and the sparse graph for different values of d and W on the V1-R1 dataset

"... In PAGE 6: ... Performance evaluation: Our second experiment set evaluates the running time of QOMA. Table 5 lists the running time of QOMA for the complete and the sparse K-partite graph algorithms for varying values of W. Experimental results show that QOMA runs faster for small W.... ..."

Cited by 1
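One structural reason a sparse K-partite graph can be cheaper to process than a complete one is the edge count: in a complete K-partite graph, every pair of vertices in different parts is joined. A small sketch of that count (illustrative only; this is not QOMA's algorithm, and the excerpt does not specify how d and W relate to the graph):

```python
def complete_k_partite_edges(part_sizes):
    # Edge count of a complete K-partite graph: all vertex pairs minus
    # the pairs that fall inside the same part (each part is an
    # independent set, so it contributes no edges).
    n = sum(part_sizes)
    within = sum(s * (s - 1) // 2 for s in part_sizes)
    return n * (n - 1) // 2 - within

# Four parts of 10 vertices each: 600 cross-part edges out of 780 pairs.
print(complete_k_partite_edges([10, 10, 10, 10]))  # 600
```

The count grows quadratically with the part sizes, which is why pruning edges to obtain a sparse graph can pay off in running time.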

### Table 7: Incorporating the recency weights (with a maximum weight = 10) Phrase Attributes Baseline Recency Final

### Table 4: Incorporating the recency weights (with a maximum weight of 10) Phrase Attributes Baseline Recency Constituent

### Table 1: k-partitioning of the unit square

1997

Cited by 8