### Table 1 Graph-Based Process Configuration Measures

2002

"... In PAGE 17: ... A sample of graph-based process configuration measures and values is presented in Table 1. The measurements listed under the Process A label are obtained from the example representation presented in Figure 4 above.... In PAGE 22: ... This latter inference represents the kind obtainable only through a ratio scale. [Figure 7: Processes as Standard Sequences] To generalize, we can apply the extensive-measurement procedure to the other graph-based measures defined in Table 1 as well. For instance, returning to the two directed graphs presented in Figure 7, say we measure a graph-based representation for Process A and obtain a measured value of four for process size.... In PAGE 23: ... As above, we note this is exactly the kind of analysis used to determine the measure mass, used extensively in the physical sciences, which supports a ratio scale. And again, this extensive-measurement approach can be applied to any of the graph-based measures defined in Table 1, in addition to other measures based on like graph-theoretic concepts (e.g.... ..."
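The extensive-measurement idea in this excerpt — that concatenating two processes should add their size measurements, the same additivity that gives mass a ratio scale — can be sketched in a few lines. The graphs and the node-count definition of size below are illustrative assumptions, not the paper's exact formalization:

```python
# Minimal sketch (not the paper's code): a process is a directed graph of
# task nodes, and "process size" is measured as the number of distinct
# nodes. Concatenating two disjoint processes adds their sizes, which is
# the extensive-measurement property that supports a ratio scale.

def process_size(edges):
    """Size of a process graph = number of distinct task nodes."""
    nodes = set()
    for u, v in edges:
        nodes.add(u)
        nodes.add(v)
    return len(nodes)

# Process A from the running example: A -> B -> C -> D (size 4)
process_a = [("A", "B"), ("B", "C"), ("C", "D")]
# Process C: A -> B -> C (size 3)
process_c = [("A", "B"), ("B", "C")]

assert process_size(process_a) == 4
assert process_size(process_c) == 3
```

With disjoint node labels, the size of the concatenated process is the sum of the two sizes — the "standard sequences" comparison the excerpt describes.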

### Table 1: Results for text summarization using TextRank sentence extraction. Graph-based ranking al-

"... In PAGE 3: ... We evaluate the summaries produced by TextRank using each of the three graph-based ranking algorithms described in Section 2. Table 1 shows the results obtained with each algorithm, when using graphs that are: (a) undirected, (b) directed forward, or (c) directed backward. For a comparative evaluation, Table 2 shows the results obtained on this data set by the top 5 (out of 15) performing systems participating in the single document summarization task at DUC 2002 (DUC, 2002).... In PAGE 4: ... 5 Related Work Sentence extraction is considered to be an important first step for automatic text summarization. As a consequence, there is a large body of work on algorithms. Notice that rows two and four in Table 1 are in fact redundant, since the hub (weakness) variations of the HITS (Positional) algorithms can be derived from their authority (power) counterparts by reversing the edge orientation in the graphs. Only seven edges are incident with vertex 15, less than e.... ..."
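The graph-based ranking step described above can be approximated with a PageRank-style power iteration over a sentence similarity graph. This is a hedged sketch only: the toy graph, damping factor, and iteration count are illustrative assumptions, not the TextRank paper's exact configuration or similarity measure:

```python
# Sketch of TextRank-style sentence ranking: sentences are vertices, edges
# connect similar sentences, and a PageRank-style iteration scores vertices.
# The most highly ranked sentences are extracted as the summary.

def pagerank(neighbors, damping=0.85, iters=50):
    """neighbors: dict vertex -> list of adjacent vertices (undirected graph)."""
    n = len(neighbors)
    score = {v: 1.0 / n for v in neighbors}
    for _ in range(iters):
        new = {}
        for v in neighbors:
            # Each neighbor u distributes its score evenly over its edges.
            rank = sum(score[u] / len(neighbors[u]) for u in neighbors[v])
            new[v] = (1 - damping) / n + damping * rank
        score = new
    return score

# Toy undirected sentence graph: sentence 0 overlaps with sentences 1 and 2.
graph = {0: [1, 2], 1: [0], 2: [0]}
scores = pagerank(graph)
# The best-connected sentence receives the highest score.
assert max(scores, key=scores.get) == 0
```

The "directed forward" and "directed backward" variants in the table correspond to orienting the edges by sentence order before running the same iteration.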

### Table 1. Comparison over GraphBase directed graphs.

2005

"... In PAGE 4: ... The first set comes from [8]. The graphs are characterized by the probability (eta = 0.01 is noted r001 in Table 1) that an edge is present between two distinct nodes n and n'. Those graphs were used to evaluate vflib algorithm performance [7].... In PAGE 4: ... Experiments show that the CSP approach for subgraph matching solves more problems within the time limit than C++ specialized checking-based methods [7]. Tables 1 and 2 show the percentage of instances solved within a time limit of 5 minutes, for directed and undirected instances. The single specialized propagator MCPA for forbidden edges is more efficient than the version with two propagators.... ..."

Cited by 5
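The CSP formulation of subgraph matching that this excerpt evaluates can be sketched as plain backtracking search: pattern vertices are variables, target vertices are the domain values, and constraints enforce injectivity and edge preservation. This is an illustrative baseline only — it omits the MCPA propagator and the filtering that make the paper's approach competitive:

```python
# Sketch (assumed, not the paper's implementation): directed subgraph
# monomorphism as a CSP solved by naive backtracking. Each pattern vertex
# is a variable; constraints require distinct images (all-different) and
# that every pattern edge maps onto a target edge.

def subgraph_match(pattern_edges, target_edges, n_pattern, n_target):
    target = set(target_edges)

    def consistent(assign, v, t):
        for u, tu in assign.items():
            if tu == t:                              # injectivity violated
                return False
            if (u, v) in pattern_edges and (tu, t) not in target:
                return False                         # edge u->v not preserved
            if (v, u) in pattern_edges and (t, tu) not in target:
                return False                         # edge v->u not preserved
        return True

    def search(assign):
        if len(assign) == n_pattern:
            return dict(assign)
        v = len(assign)                              # next pattern vertex
        for t in range(n_target):
            if consistent(assign, v, t):
                assign[v] = t
                result = search(assign)
                if result is not None:
                    return result
                del assign[v]
        return None

    return search({})

# Directed triangle pattern found inside a 4-vertex target graph.
pattern = {(0, 1), (1, 2), (2, 0)}
target = [(0, 1), (1, 2), (2, 0), (2, 3)]
assert subgraph_match(pattern, target, 3, 4) is not None
```

A constraint propagator such as the MCPA mentioned above would prune domains after each assignment instead of testing consistency only at assignment time.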

### Table 1. Graph-based modeling approaches.

2007

"... In PAGE 4: ... A transition links any two nodes (task or coordinator) and is represented by a directed arc. Table 1 provides a list of representative graph-based modeling approaches.... ..."

Cited by 1

### Table 1: Comparison over GraphBase undirected graphs. All solutions 5 min.

2004

"... In PAGE 9: ... Edges Nodes Assign: golf222 464/2810 206/1020 150/946; golf322 1290/7629 548/2681 423/2559; steiner5 333/5898 157/2080 102/1977; steiner6 6215/65830 2226/22321 2035/21980. Table 1: Size of the graphs obtained for social golfers and Steiner triplets with/without pruning. Of course, care needs to be taken when inconsistent assignments have been used to build the graph.... In PAGE 17: ... 5.2 Dimacs graph coloring benchmarks. Table 1 shows the results of the methods on some graph coloring benchmarks of Dimacs. It gives the number of nodes of the search tree and the CPU time for each method.... In PAGE 17: ... Table 1: Dimacs graph coloring benchmarks; the method succeeds in solving 9 benchmarks among the 22 proposed. For space reasons, we report here the results on the most relevant Dimacs problems to compare DSATUR and our method, but it is important to inform the reader that all the other DIMACS problems which are solved by DSATUR are also solved by SFC-weak-dom with comparable performance.... In PAGE 23: ... We would therefore expect the search for lex-inspired early backtracks to be expensive, with not many useful domain deletions being returned. This expectation is realised in our results given in Table 1. The better results for GAP-SBDD are in part because GAP-SBDD has special support for problems with Boolean variables.... In PAGE 24: ... Table 1: GAP-SBDD vs GAPLex. Problem class: BIBDs modelled as binary matrices.... In PAGE 31: ... Solving Psym can still be longer than solving P because filtering thanks to the constraints of Crest also prunes the search tree. For d = 2: d^n = 2^n and T(n, m = d) = O(2^n/√n); for d = 3: 3^n and O(3^n/n); for d = n: n^n and O(√(2πn)·(n/e)^n). Table 1: Comparison between the size of the search space of a CSP and the orbit size of a canonical solution in the case of variable symmetries.
For instance, the line d = 3 shows that if the complexity of computing the canonical solutions of Psym is lower than O(3^n/n) and the number of these canonical solutions is lower than n, then the overall complexity of solving P is reduced.... In PAGE 39: ...) are posted, and finally the full variable symmetry (FVS) that breaks all variable symmetries. Results are shown in Tables 1 and 2. In those runs, the preprocessing time has not been considered.... In PAGE 46: ... In our example, those are the values 1, 2, and 4. Figure 2 (B) shows the data structure after another variable has been instantiated by adding (X4, 2) to UNBIASED BIASED 15 15 30 VPC AllDiff GCC AllDiff GCC AllDiff GCC 2 100 100 100 100 100 98-100 3 100 100 100 100 100 52-100 4 100 98-100 100 92 84-96 14-80 5 100 100 88 66 52-82 0-54 6 100 98-100 68 18-32 26-76 0-50 7 94 96-100 26 4-18 6-40 0-50 8 90 88-94 18 0-6 0-16 0-34 9 84 92 0 0-2 0 0-32 10 48 58 0 0 0 0-22 11 16 14 0 0 0 0-22 12 4 0 0 0 0 0 Table 1: Percentages of feasible solutions in the different benchmark sets for different numbers of values per constraint (VPC). We give ranges where even the best algorithm hit the time limit of 600 seconds.... In PAGE 47: ... The number of variables per constraint is fixed at 12 while the number of values per constraint runs from 2 to 12, thus giving us a range of differently constrained instances. Table 1 shows the percentage of feasible instances out of 50 randomly generated ones. In addition, we vary the constraint over all variables and values (GCC or AllDifferent), and we select variables either uniformly or in a biased fashion, while values are always selected uniformly.... In PAGE 62: ... For this reason, symmetries are broken using SBDS in [15]. We present results in Table 1 and Table 2.... In PAGE 63: ... Table 1: Results for computing all solutions for graceful graphs. Graph SBDS DLC SOL BT sec. SOL BT sec.... ..."

### Table 1: Computational Effort for Landmark Graph-Based Registration

2003

"... In PAGE 6: ... 3 Summary of Computational Effort. Each of the processing steps requires worst-case effort that has a polynomial bound. The effort described in Table 1 assumes: N×M range images and F landmarks. D is the mean degree of landmark graphs and Q is the total number of edges.... In PAGE 9: ... This involves a post-processing step, where the graph V0 is grown in size after each new image is aligned. This is an O(F) operation, see Table 1. Growing V0 in an on-line fashion would permit extended regions of surface data to be incorporated into a single contiguous data set.... ..."

Cited by 2

### Table 1: Precision and recall for graph-based sequence data labeling, individual data labeling, and random baseline, for fine-grained and coarse-grained sense distinctions.

"... In PAGE 6: ... e.g. the most frequent sense provided by WordNet), and therefore they are both fully unsupervised. Table 1 shows precision and recall figures for a ... Given a sequence of words, the original Lesk algorithm attempts to identify the combination of word senses that maximizes the redundancy (overlap) across all corresponding definitions. The algorithm was later improved through a method for simulated annealing (Cowie et al.... ..."
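The Lesk algorithm referenced in this excerpt (maximizing definition overlap) is easy to sketch in its simplified per-word form. The toy sense inventory below is invented for illustration; the actual systems use WordNet definitions:

```python
# Simplified Lesk sketch: for each target word, pick the sense whose
# dictionary definition shares the most words with the surrounding context.
# Sense names and definitions here are hypothetical stand-ins for WordNet.

def lesk(context_words, sense_definitions):
    """sense_definitions: dict sense_name -> definition string."""
    context = set(context_words)
    best, best_overlap = None, -1
    for sense, definition in sense_definitions.items():
        overlap = len(context & set(definition.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

senses_of_bank = {
    "bank.n.01": "financial institution that accepts deposits",
    "bank.n.02": "sloping land beside a river",
}
context = ["deposits", "money", "institution"]
assert lesk(context, senses_of_bank) == "bank.n.01"
```

The original (non-simplified) Lesk instead scores whole combinations of senses across the sequence, which is the search problem the simulated-annealing improvement addresses.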

### Table 4: Comparison over GraphBase undirected graphs for variable and value symmetries.

2004

"... In PAGE 39: ... For variable and value symmetries, a total of 233 undirected random instances were treated. We evaluated variable and value symmetries separately and then together in Table 4. This table shows that, as expected, value symmetries and variable symmetries each increase the number of solved instances.... In PAGE 48: ... 5 The Impact of Restarts. After seeing that symmetry breaking is beneficial even at the tremendous costs that SSB filtering incurs, we are curious to see how restarts affect the landscape. We show the performance of the restarted algorithms in Figure 5 on the benchmark sets with 15 variables and in Table 4 on the benchmark sets with 30 variables. The comparison of Figure 5 with Figure 4 shows that the algorithm that is unaware of symmetries can benefit greatly from restarts.... In PAGE 48: ... The fact that this was not visible in Figure 4 is due to the time limit (that we needed to impose to conduct our experiments within reasonable time) which artificially decreases the variance of the slow algorithm NO. Table 4 also shows very clearly that NR performs far better than NO. We get a similar picture when comparing the performance of SO and SR.... In PAGE 48: ... When we perform full SSB filtering, we see that restarts do not help on the benchmark with 15 variables and values. Only as we tackle large and very hard problems does FR start to outperform FO, as can be seen in Table 4 when we consider instances with a global GCC. This leads to a surprising conclusion: just breaking value symmetry in combination with restarts is in many cases competitive with full SSB! Compare FR and SR on global AllDifferent instances in Figure 5, or on global GCC instances in Table 4, for example. Of course, SSB is still the clear winner in the critical region on our large benchmark set with 30 variables, but the good performance of restarted sibling-filtering is still astonishing.... In PAGE 50: ...5K - 42 112 - 90 90 - 349K 12 X - 0 X - 0 X - 0 X - 0 X - 0 X - 0 Table 3: Times per choice point in microseconds and number of search nodes (averages over 50 instances per data point). AllDifferent GCC One-Shot Restarted One-Shot Restarted FO SO NO FR SR NR FO SO NO FR SR NR 2 100 100 98 100 100 100 46 62 86 78 100 100 3 100 100 90 100 100 100 36 36 78 46 100 100 4 86 98 76 76 92 100 41 40 55 41 88 47 5 60 72 40 56 56 88 30 30 24 28 54 24 6 52 23 4 54 18 51 14 10 2 12 8 10 7 61 41 2 65 20 9 14 4 2 10 12 6 8 80 84 0 82 52 0 4 0 0 14 0 0 9 96 100 0 98 76 0 24 2 0 34 0 0 10 100 100 36 100 100 0 26 10 0 76 2 82 11 100 100 96 100 100 100 52 50 48 70 100 100 12 100 100 100 100 100 100 100 100 100 100 100 100 Table 4: Histogram for the benchmark sets with 30 variables and values, 12 variables per constraint, and biased variable selection. The first column gives the number of values per constraint, the numbers in the table the percentage of instances... ..."