### Table 1: zChaff on pebbling formulas. z denotes out of

2003

"... In PAGE 7: ... There exist CNF formulas on which clause learning using the FirstNewCut scheme and no restarts provides exponentially smaller proofs than regular resolution. 6 Experimental Results Table 1 reports the performance of variants of zChaff on grid pebbling formulas. We conducted experiments on a 1600 MHz Linux machine with the memory limit set to 512 MB.... In PAGE 7: ..., 2003], this branching sequence can be efficiently generated by a combination of breadth-first and depth-first traversals of the original pebbling graph, even for more general classes of pebbling formulas. As shown in Table 1, one needs both clause learning and a good branching sequence to efficiently solve large problem instances. Of course, pebbling graphs, which correspond to problems involving precedence of tasks, represent a narrow domain of applicability.... ..."

Cited by 11
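The snippet above mentions generating a branching sequence by combining breadth-first and depth-first traversals of the pebbling graph. The following is a minimal illustrative sketch of that general idea only, not the cited paper's actual algorithm; the function name, graph representation, and tie-breaking rule are all assumptions:

```python
from collections import deque

def branching_sequence(graph, sources):
    """Illustrative sketch (NOT the algorithm from the cited paper):
    combine a breadth-first layering with a depth-first visitation to
    produce a node ordering for a DAG-shaped pebbling graph.
    `graph` maps each node to its successors; `sources` are the roots."""
    # Breadth-first pass: assign each node its layer (distance from a source).
    layer = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in layer:
                layer[v] = layer[u] + 1
                queue.append(v)
    # Depth-first pass: emit nodes in DFS preorder, visiting successors
    # in order of increasing BFS layer (an arbitrary illustrative choice).
    order, seen = [], set()
    def dfs(u):
        if u in seen:
            return
        seen.add(u)
        order.append(u)
        for v in sorted(graph.get(u, []), key=lambda w: layer[w]):
            dfs(v)
    for s in sources:
        dfs(s)
    return order
```

The resulting ordering could then be fed to a solver as a static branching sequence, which is the role the excerpt describes.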

### TABLE II GRAPH DIAMETER FOR N = 10^6

in Graph-Theoretic Analysis of Structured Peer-to-Peer Systems: Routing Distances and Fault Resilience

2003

Cited by 68

### Table 9 Characteristics of various graph partitioning algorithms.

1998

"... In PAGE 27: ... In the absence of extensive data, we could not have done any better anyway. In Table 9 we show three different variations of spectral partitioning [45, 47, 26, 2], the multilevel partitioning described in this paper, the levelized nested dissection [11], the KL partition [31], the coordinate nested dissection (CND) [23], two variations of the inertial partition [38, 25], and two variants of geometric partitioning [37, 36, 15]. For each graph partitioning algorithm, Table 9 shows a number of characteristics.... In PAGE 27: ... In Table 9 we show three different variations of spectral partitioning [45, 47, 26, 2], the multilevel partitioning described in this paper, the levelized nested dissection [11], the KL partition [31], the coordinate nested dissection (CND) [23], two variations of the inertial partition [38, 25], and two variants of geometric partitioning [37, 36, 15]. For each graph partitioning algorithm, Table 9 shows a number of characteristics. The first column shows the number of trials that are often performed for each partitioning algorithm.... In PAGE 27: ... Others only require the set of vertices and edges connecting them. The third column of Table 9 shows the relative quality of the partitions produced by the various schemes. Each additional circle corresponds to roughly a 10% improvement in the edge-cut.... In PAGE 29: ... First, it captures global structure through the process of coarsening [28], and, second, it captures global structure during the initial graph partitioning by performing multiple trials. The sixth column of Table 9 shows the relative time required by different graph partitioning schemes. CND, inertial, and geometric partitioning with one trial require a relatively small amount of time.... In PAGE 29: ... On the other hand, multilevel graph partitioning requires a moderate amount of time and produces partitions of very high quality.
The degree of parallelizability of different schemes differs significantly and is depicted by a number of triangles in the seventh column of Table 9. One triangle means that the scheme is largely sequential, two triangles means that the scheme can exploit a moderate amount of parallelism, and three triangles means that the scheme can be parallelized quite effectively.... ..."

Cited by 495
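The excerpt attributes the multilevel scheme's quality to coarsening, which collapses matched edges to build a hierarchy of smaller graphs. Below is a minimal sketch of a single coarsening step using a random maximal edge matching; this is an illustration of the general technique only (production partitioners such as METIS use heavy-edge matching and further refinements), and the function name and graph representation are assumptions:

```python
import random

def coarsen_once(edges, nodes, seed=0):
    """One coarsening step via a random maximal edge matching:
    each matched pair of endpoints is merged into one coarse node.
    Illustrative sketch only, not a production coarsening scheme."""
    rng = random.Random(seed)
    shuffled = list(edges)
    rng.shuffle(shuffled)
    merge = {}                        # fine node -> coarse representative
    for u, v in shuffled:
        if u not in merge and v not in merge:
            merge[u] = merge[v] = u   # collapse edge (u, v)
    for n in nodes:
        merge.setdefault(n, n)        # unmatched nodes map to themselves
    coarse_edges = {(merge[u], merge[v]) for u, v in edges
                    if merge[u] != merge[v]}
    coarse_nodes = set(merge.values())
    return coarse_nodes, coarse_edges
```

Applying this step repeatedly yields the small graph on which an initial partition is computed and then projected back up through the hierarchy.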

### Table 2: Timing (ms) comparison between two graph building algorithms.

"... In PAGE 14: ...Table 2: Timing (ms) comparison between two graph building algorithms. Table 2 gives the timing comparison of the two algorithms. The timing is measured by compiling the programs on a SUN sparc (SUN4m) workstation with 96 MB memory and running Sun OS 4.... In PAGE 14: ...rograms on a SUN sparc (SUN4m) workstation with 96 MB memory and running Sun OS 4.1.3. From Table 2, we can see that the ancestor-sets based algorithm is much faster than the marking based one. On average, the ancestor-sets based algorithm is 43% faster than the marking based algorithm.... ..."
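The snippet compares an ancestor-sets based algorithm against a marking based one. As a rough illustration of what computing ancestor sets for a DAG involves (a sketch of the general idea only; the cited paper's data structures and algorithm details are not reproduced here), one topological sweep suffices:

```python
from collections import deque

def ancestor_sets(graph):
    """Compute, for each node of a DAG, the set of its ancestors.
    `graph` maps node -> list of successors. Illustrative sketch only."""
    # Build predecessor lists and in-degrees for a Kahn-style sweep.
    preds = {n: [] for n in graph}
    indeg = {n: 0 for n in graph}
    for u, succs in graph.items():
        for v in succs:
            preds[v].append(u)
            indeg[v] += 1
    queue = deque(n for n in graph if indeg[n] == 0)
    anc = {n: set() for n in graph}
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            # u's ancestors, plus u itself, are ancestors of v.
            anc[v] |= anc[u] | {u}
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return anc
```

Each node's set is finalized before any of its successors is processed, so a single pass over the edges is enough.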

### Table 1: Comparison of diameters

"... In PAGE 11: ... All processors have four links available for interconnection, except the two processors connected to the system controller which have only three links available. Table 1 shows the diameters of the AMPBest, AMPWorst and AMPDFS configurations, along with those of the AMP configurations generated by the GA (labelled AMPGA).... ..."
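The diameter figures compared in this excerpt (and in TABLE II above) can be computed for an unweighted interconnection topology by running a breadth-first search from every node; a minimal sketch, assuming a connected graph given as an adjacency mapping:

```python
from collections import deque

def diameter(adj):
    """Graph diameter of a connected, unweighted graph via BFS from
    every vertex. `adj` maps node -> iterable of neighbours."""
    best = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best
```

This brute-force approach is O(V·E); for the large N considered in these papers, the authors rely on analytic bounds rather than exhaustive search.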