### Table 1: An outline of the maximal subgraph mining algorithm

2004

"... In PAGE 3: ... This tree structure follows the recursive procedure we present in Table 2 which can be used to explore the search space for a given graph. Before we proceed to details about mining maximal frequent subgraphs, we outline the enumeration scheme discussed so far in Table1 and Table 2. Our strategy is quite straightforward: we first find all frequent trees; trees are expanded to cyclic graphs by searching their search spaces; and maximal frequent subgraphs are constructed from frequent ones.... In PAGE 5: ... Interested readers might verify that themselves. Table 3 and Table 4 integrate these optimizations into the basic enumerate technique we presented in Table1 and Table 2. Algorithm MaxSubgraph-Expansion(T) begin 1.... ..."

Cited by 21
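The three-phase strategy quoted above ends by constructing maximal frequent subgraphs from the set of frequent ones. A minimal sketch of that final filtering step, assuming subgraphs are modeled simply as frozensets of edges (a deliberate simplification of the paper's representation; `maximal_frequent` is a hypothetical name, not the paper's API):

```python
def maximal_frequent(frequent):
    """Keep only the subgraphs (modeled as frozensets of edges) that
    have no proper superset anywhere in the frequent collection."""
    return [g for g in frequent if not any(g < h for h in frequent)]


# The single edge (1, 2) is subsumed by the frequent path 1-2-3,
# so only the path survives the maximality filter.
frequent = [frozenset({(1, 2)}), frozenset({(1, 2), (2, 3)})]
maximal = maximal_frequent(frequent)
```

Real miners avoid this quadratic superset scan with specialized indexes, but the set-containment test captures what "maximal" means here.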

### Table 1: Runtime on three types of circuit structures and sub-graphs of i10.

"... In PAGE 4: ... This ratio grows almost linearly, indicating that the complexity of the sparse LP linear programming algorithm is quadratic to the size of circuit. Table1 depicts the runtime of the two algorithm on sev- eral circuit structures: k-nary trees, the two DAGs in Fig- ure 2 and 4-input subnetworks from i10.blif.... In PAGE 5: ... DAG0 and DAG1 have 2 fanouts. From Table1 , in all structures, the DP algorithm is at least 16 times faster than LP, and with more unknown nodes to be placed, this speedup is even more reflecting a quadratic complexity of LP. Surprisingly, about half of the random samples of DAG0 and DAG1 and 99.... ..."

### Table 1. Performance guarantees for finding spanning trees in an arbitrary graph on n nodes. Asterisks indicate results obtained in this paper. ε > 0 is a fixed accuracy parameter. The diagonal entries in the table follow as a corollary of a general result (Theorem 12.8) which is proved using a parametric search algorithm. The entry for (Degree, Degree, Spanning tree) follows by combining Theorem 12.8 with the O(log n)-approximation algorithm for the degree problem in [RM+93]. In [RM+93] they actually provide an O(log n)-approximation algorithm for the weighted degree problem. (The weighted degree of a subgraph is defined as the maximum over all nodes of

"... In PAGE 3: ... Given the framework, it remains to reason and ll in the appropriate polynomial time subroutine that is applicable for the corresponding pair of objectives. Table1 contains the performance guarantees of our approximation algorithms for nding span- ning trees, S, under di erent pairs of minimization objectives, A and B. For each problem cataloged in the table, two di erent costs are speci ed on the edges of the undirected graph: the rst objective is computed using the rst cost function and the second objective, using the second cost function.... In PAGE 3: ... For example the entry in row A, column B, denotes the performance guarantee for the problem of minimizing objective B with a budget on the objective A. All the results in Table1 extend to nding Steiner trees with at most a constant factor worsening in the performance ratios. All the results in the table extend to nding Steiner trees with at most a constant factor worsening in the performance ratios (Exercise!).... In PAGE 4: ... There he provides an approximation algorithm for the (Degree, Diameter, Spanning tree) problem with performance guarantee (O(log2 n); O(log n))1. The (Diameter, Total cost, Spanning tree) entry in Table1 corresponds to the diameter- constrained minimum spanning tree problem introduced earlier. It is known that this problem is NP-hard even in the special case where the two cost functions are identical [HL+89].... ..."

### Table 2: Algorithm for Subgraph Sampling

2001

"... In PAGE 8: ... Sampling entire subgraphs preserves the association between each core object and all the peripheral objects neces- sary for accurate calculation of attributes. Table2 lists a generic algorithm for subgraph sampling. The algorithm first assigns sub- graphs to prospective samples, and then incrementally converts prospective assignments to permanent assignments only if the subgraphs are separated from subgraphs already assigned to samples.... In PAGE 10: ...The algorithm for subgraph sampling ( Table2 ) depends on the predicate separate(Si,Sj) which indicates whether two subgraphs consist of disjoint sets of objects. We differenti- ate among three criteria for determining subgraph separation.... ..."

Cited by 1
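The prospective-then-permanent assignment loop described in the excerpt can be sketched as a greedy pass, assuming `separate(Si, Sj)` means the two subgraphs share no objects (only one of the three separation criteria the paper mentions; the function and parameter names below are hypothetical):

```python
def sample_subgraphs(prospective, max_samples):
    """Greedy subgraph sampling sketch: accept each prospective subgraph
    (a set of object ids) only if it is separate from -- here, disjoint
    with -- every subgraph already permanently assigned to the sample."""
    accepted = []
    for s in prospective:
        if len(accepted) == max_samples:
            break
        if all(s.isdisjoint(t) for t in accepted):
            accepted.append(s)
    return accepted
```

The disjointness test is what prevents a peripheral object from being shared across samples, which would reintroduce the dependence the sampling scheme is trying to avoid.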

### Table 6: Summary of search tree sizes for the problems considered (columns: Problem, Trivial, Best known result, Our method)

2004

"... In PAGE 30: ...Figure 6: Worst-case branching number depending on size of considered subgraphs 4.3 Summary Focusing on the worst-case branching numbers computed for various graph modi cation problems, we give an overview on our results in Table6 : We compare the worst-case branching numbers corresponding to a trivial branch- ing, the best so far known result, and the search tree algorithm size bound computed by our method. In Fig.... ..."

Cited by 9
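The branching numbers compared in this table come from branching vectors: a vector (a1, ..., ak) means one node of the search tree spawns k children whose parameters drop by a1, ..., ak, and the branching number is the unique root x >= 1 of x^(-a1) + ... + x^(-ak) = 1. A small bisection solver (an illustrative sketch, not the paper's method) makes such bounds easy to reproduce for simple vectors:

```python
def branching_number(vector, tol=1e-9):
    """Root x >= 1 of sum(x**-a for a in vector) == 1 via bisection.
    The sum is strictly decreasing in x, so bisection is safe; the
    root lies in (1, k + 1] for a vector of length k."""
    lo, hi = 1.0, float(len(vector)) + 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sum(mid ** -a for a in vector) > 1.0:
            lo = mid  # sum still too large: root is to the right
        else:
            hi = mid
    return hi
```

For example, the trivial binary branching (1, 1) gives 2, i.e. an O(2^k) search tree, while (1, 2) gives the golden ratio, about 1.618.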

### Table 2. Propagation Costs in GenSpace Subgraph.

2007

"... In PAGE 9: ... Comparison of running time between optimization and linear methods Running time (Sec) Cases # of variables # of constraints Optimization linear Case 1 365 19 67 lt;1 Case 2 1095 3 1984 lt;1 To show the efficiency of the SelectHiddenNodes algorithm, we present two cases: (1) mark all nodes in the lowest 5 levels (out of 19 levels) as uninteresting and (2) mark all the nodes in the lowest 5 levels or with specific date values or specific temperature values as uninteresting. The results are shown in Table2 . The Storage column lists the storage cost in thousands of records.... ..."

### Table 1. The number of subgraphs encountered by our algorithm with and without symmetry-breaking (including multiple encounters for the version without symmetry- breaking). The improvement factor is exactly the average number of automorphisms of subgraphs of the associated size.

"... In PAGE 4: ... We introduce a technique that avoids spending time finding a subgraph more than once due to its symmetries. This technique improves the speed of our method by a factor exponential in the size of the query subgraph ( Table1 ). Moreover, since each instance is discovered exactly once, our algorithm can write instances to disk as they are found, greatly improving memory usage.... ..."

### Tables I and III compare the execution time of the algorithm on the

1991