### Table 6: Graph partitions of random graphs generated by cutting the hypercube and grid embeddings

"... In PAGE 16: ... We find that the bisection widths for hypercube embeddings are about the same for all hyperplanes, whereas for grid embeddings the two partitions dividing the grid in half vertically and horizontally give the best partitions. Table 6 shows how the Mob hypercube and grid embedding algorithms perform as graph-partitioning algorithms. The data for random graphs on the performance of the Mob graph-partitioning algorithm and the KL graph-partitioning algorithm is taken from our study of local search graph-partitioning heuristics in [19, 21].... In PAGE 17: ...Table 6 by the percentage of all edges that cross the cut between A and B. We found that 16-to-1 grid and hypercube embeddings with our Mob-based heuristics produced bisection widths comparable to those for the Mob heuristic for graph partitioning.... In PAGE 17: ... The performance of the Mob embedding algorithms interpreted as graph-partitioning algorithms is remarkable, considering that Mob is optimizing the "wrong" cost function. While the data in Table 6 cannot show conclusively how good the Mob embedding algorithms are, the existence of a better graph-embedding algorithm would also imply the existence of a better graph-partitioning algorithm.... ..."
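The snippet's idea of reading an embedding as a partition can be made concrete: fixing one address bit of a hypercube host node corresponds to cutting the cube with a hyperplane, and the quality of the induced partition is the percentage of edges crossing the cut, as reported in the table. A minimal sketch, with function names and the identity embedding chosen here for illustration (they are not from the paper):

```python
import random

def hyperplane_partition(embedding, bit):
    """Split the guest vertices of a hypercube embedding by one address bit.

    `embedding` maps each guest vertex to a hypercube node label (an int);
    fixing `bit` corresponds to cutting the hypercube with one hyperplane.
    """
    A = {v for v, node in embedding.items() if (node >> bit) & 1 == 0}
    B = set(embedding) - A
    return A, B

def cut_fraction(edges, A):
    """Percentage of all edges that cross the cut between A and B."""
    crossing = sum(1 for u, v in edges if (u in A) != (v in A))
    return 100.0 * crossing / len(edges)

# Toy example: a random graph on 8 vertices embedded 1-to-1 in a 3-cube.
random.seed(0)
verts = list(range(8))
edges = [(u, v) for u in verts for v in verts if u < v and random.random() < 0.5]
embedding = {v: v for v in verts}       # identity embedding, for illustration
A, B = hyperplane_partition(embedding, bit=0)
print(len(A), len(B), round(cut_fraction(edges, A), 1))
```

Each of the log n address bits yields one hyperplane cut, which is what allows the same embedding to be evaluated against several bisections.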

### Table 1 Various matrices used in evaluating the multilevel graph partitioning and sparse matrix ordering algorithm.

1998

"... In PAGE 13: ... Experimental results: graph partitioning. We evaluated the performance of the multilevel graph partitioning algorithm on a wide range of graphs arising in different application domains. The characteristics of these matrices are described in Table 1. All the experiments were performed on an SGI Challenge with 1.... In PAGE 18: ... The refinement policies that we evaluate are (a) KL(1), (b) KL, (c) BKL(1), (d) BKL, and (e) the combination of BKL and BKL(1) (BKL(*,1)). The result of these refinement policies for computing a 32-way partition of graphs corresponding to some of the matrices in Table 1 is shown in Table 5. These partitions were produced by using HEM during coarsening and the GGGP algorithm for initially partitioning the coarser graph.... In PAGE 20: ... Note that MSB is a significantly different scheme from the multilevel scheme that uses spectral bisection to partition the graph at the coarsest level. We used the MSB algorithm in the Chaco [25] graph partitioning package to produce partitions for some of the matrices in Table 1 and compared the results with the partitions produced by our multilevel algorithm, which uses HEM during the coarsening phase, GGGP during the partitioning phase, and BKL(*,1) during the uncoarsening phase. Figure 3 shows the relative performance of our multilevel algorithm compared with MSB.... In PAGE 26: ... Therefore, when the factorization is performed in parallel, the better utilization of the processors can cause the ratio of the runtime of parallel factorization algorithms ordered using MMD and that using MLND to be substantially higher than the ratio of their respective operation counts. The MMD algorithm is usually two to three times faster than MLND for ordering the matrices in Table 1. However, efforts to parallelize the MMD algorithm have had no success [14].... ..."
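The HEM (heavy-edge matching) coarsening step mentioned in the snippet can be sketched as follows: visit vertices in random order, match each unmatched vertex with the unmatched neighbour across its heaviest incident edge, then collapse matched pairs into supervertices, summing parallel edge weights. This is a minimal illustration of the idea, not the paper's implementation:

```python
import random

def heavy_edge_matching(adj, seed=0):
    """One HEM pass: visit vertices in random order and match each with the
    unmatched neighbour joined by the heaviest edge (ties broken by label).

    `adj[u][v]` is the weight of edge (u, v), stored symmetrically.
    """
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)
    match = {}
    for u in order:
        if u in match:
            continue
        candidates = [(w, v) for v, w in adj[u].items() if v not in match]
        if candidates:
            _, v = max(candidates)
            match[u], match[v] = v, u
        else:
            match[u] = u                # no free neighbour: u survives alone
    return match

def contract(adj, match):
    """Collapse matched pairs into supervertices, summing parallel edges."""
    rep = {u: min(u, match[u]) for u in adj}     # representative of each pair
    coarse = {}
    for u in adj:
        for v, w in adj[u].items():
            a, b = rep[u], rep[v]
            if a != b and u < v:        # count each undirected edge once
                coarse.setdefault(a, {})[b] = coarse.get(a, {}).get(b, 0) + w
                coarse.setdefault(b, {})[a] = coarse.get(b, {}).get(a, 0) + w
    return coarse
```

Matching across heavy edges hides as much edge weight as possible inside supervertices, which is why the edge-cut of a partition found on the coarse graph remains small when projected back.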

Cited by 495

### Table 4. Test Graphs.

"... In PAGE 18: ... Fig. 2. Values of objective function for BCSPWR03 using subspace SDP with preference. We also compared our new SDP method for graph partitioning with preferences with the extended eigenvalue algorithm described in [27]. Other test case graphs that we used are summarized in Table 4. The comparison study in Table 5 shows that both methods are competitive when the number of constraints increases in our subspace SDP for graph partitioning with preferences.... ..."

### Table 2 The greedy graph partitioning algorithm Greedy-graph-partitioning

2002

"... In PAGE 14: ...1). The greedy graph partitioning algorithm used in our case is outlined in Table 2, whereas its proof of correctness is given in Appendix B.2.... In PAGE 15: ...Table 2) a single node is added to the G1 partition, according to values of the selection criterion (SC) parameter, which is evaluated by the formula given in Table 2. The node with the highest SC is selected and added to partition G1.... In PAGE 15: ...according to values of the selection criterion (SC) parameter, which is evaluated by the formula given in Table 2. The node with the highest SC is selected and added to partition G1.... ..."
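The greedy scheme described in the snippet can be sketched in a few lines. The paper's actual SC formula is in its Table 2, which the snippet does not reproduce; as a stand-in, the sketch below uses the classic gain (neighbours already inside G1 minus neighbours outside), so the criterion here is an assumption, not the paper's:

```python
def greedy_partition(adj, k):
    """Grow partition G1 one node at a time: repeatedly add the node with the
    highest selection criterion (SC), until G1 holds k nodes.

    `adj` maps each node to the set of its neighbours. The SC used here,
        SC(v) = |neighbours of v in G1| - |neighbours of v outside G1|,
    is a common greedy choice standing in for the paper's formula: it keeps
    the growing partition well connected and the cut small.
    """
    g1 = {max(adj, key=lambda v: len(adj[v]))}   # seed with a high-degree node
    while len(g1) < k:
        def sc(v):
            inside = sum(1 for u in adj[v] if u in g1)
            return inside - (len(adj[v]) - inside)
        g1.add(max((v for v in adj if v not in g1), key=sc))
    return g1, set(adj) - g1

# Two triangles joined by a single bridge edge: the greedy growth keeps
# each triangle intact and cuts only the bridge.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
g1, g2 = greedy_partition(adj, 3)
print(sorted(g1), sorted(g2))
```

Each iteration scans all remaining nodes, so this naive version runs in O(k·n·d) time; the point is the selection loop, not efficiency.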

### Table 9 Characteristics of various graph partitioning algorithms.

1998

"... In PAGE 27: ... In the absence of extensive data, we could not have done any better anyway. In Table 9 we show three different variations of spectral partitioning [45, 47, 26, 2], the multilevel partitioning described in this paper, the levelized nested dissection [11], the KL partition [31], the coordinate nested dissection (CND) [23], two variations of the inertial partition [38, 25], and two variants of geometric partitioning [37, 36, 15]. For each graph partitioning algorithm, Table 9 shows a number of characteristics.... In PAGE 27: ... In Table 9 we show three different variations of spectral partitioning [45, 47, 26, 2], the multilevel partitioning described in this paper, the levelized nested dissection [11], the KL partition [31], the coordinate nested dissection (CND) [23], two variations of the inertial partition [38, 25], and two variants of geometric partitioning [37, 36, 15]. For each graph partitioning algorithm, Table 9 shows a number of characteristics. The first column shows the number of trials that are often performed for each partitioning algorithm.... In PAGE 27: ... Others only require the set of vertices and edges connecting them. The third column of Table 9 shows the relative quality of the partitions produced by the various schemes. Each additional circle corresponds to roughly a 10% improvement in the edge-cut.... In PAGE 29: ... First, it captures global structure through the process of coarsening [28], and, second, it captures global structure during the initial graph partitioning by performing multiple trials. The sixth column of Table 9 shows the relative time required by different graph partitioning schemes. CND, inertial, and geometric partitioning with one trial require a relatively small amount of time.... In PAGE 29: ... On the other hand, multilevel graph partitioning requires a moderate amount of time and produces partitions of very high quality. The degree of parallelizability of different schemes differs significantly and is depicted by a number of triangles in the seventh column of Table 9. One triangle means that the scheme is largely sequential, two triangles mean that the scheme can exploit a moderate amount of parallelism, and three triangles mean that the scheme can be parallelized quite effectively.... ..."

Cited by 495

### Table 9: Characteristics of various graph partitioning algorithms.

1998

"... In PAGE 21: ...not available. For the sake of simplicity, we have chosen to represent each property in terms of a small discrete scale. In the absence of extensive data, we could not have done any better anyway. In Table 9 we show three different variations of spectral partitioning [47, 46, 26, 2], the multilevel partitioning described in this paper, the levelized nested dissection [11], the Kernighan-Lin partition [31], the coordinate nested... In PAGE 22: ...Table 9: Characteristics of various graph partitioning algorithms. For each graph partitioning algorithm, Table 9 shows a number of characteristics. The first column shows the number of trials that are often performed for each partitioning algorithm.... In PAGE 22: ... Others only require the set of vertices and edges connecting them. The third column of Table 9 shows the relative quality of the partitions produced by the various schemes. Each additional circle corresponds to roughly a 10% improvement in the edge-cut.... In PAGE 23: ... First, it captures global structure through the process of coarsening [27], and second, it captures global structure during the initial graph partitioning by performing multiple trials. The sixth column of Table 9 shows the relative time required by different graph partitioning schemes. CND, inertial, and geometric partitioning with one trial require a relatively small amount of time.... In PAGE 23: ... On the other hand, multilevel graph partitioning requires a moderate amount of time and produces partitions of very high quality. The degree of parallelizability of different schemes differs significantly and is depicted by a number of triangles in the seventh column of Table 9. One triangle means that the scheme is largely sequential, two triangles mean that the scheme can exploit a moderate amount of parallelism, and three triangles mean that the scheme can be parallelized quite effectively.... ..."

Cited by 495

### Table 4, the triple (m; n; k) indicates that the homomorphism from Z

"... In PAGE 9: ... DINNEEN, GEOFFREY PRITCHARD, AND MARK C. WILSON. Table 4. Data for Cayley graphs of semidirect products of cyclic groups.... ..."

### Table 1 summarizes the recursion scheme for the partition function. The next section will extend the recursion scheme to the computation of the base pair probability.

1997

"... In PAGE 40: ...

$$
\begin{aligned}
Q^{B}_{ij} &= e^{-\mathcal{H}(i,j)/kT}
  + \sum_{k=i+1}^{j-m-2} \; \sum_{\substack{l=k+m+1 \\ u \le u_{\max}}}^{j-1} Q^{B}_{kl}\, e^{-\mathcal{I}(i,j;k,l)/kT}
  + \sum_{k=i+1}^{j-m-2} Q^{M}_{i+1,k-1}\, Q^{M1}_{k,j-1}\, e^{-M_C/kT} \\
Q^{M1}_{ij} &= \sum_{l=i+m+1}^{j} Q^{B}_{il}\, e^{-[M_I + M_B\,(j-l)]/kT} \\
Q^{M}_{ij} &= \sum_{k=i+m+1}^{j-m-1} Q^{M}_{i,k-1}\, Q^{M1}_{kj}
  + \sum_{k=i}^{j-m-1} Q^{M1}_{kj}\, e^{-M_B\,(k-i)/kT} \\
Q^{A}_{ij} &= \sum_{l=i+m+1}^{j} Q^{B}_{il} \\
Q_{ij} &= 1 + Q^{A}_{ij} + \sum_{k=i+1}^{j-m-1} Q_{i,k-1}\, Q^{A}_{kj}
\end{aligned}
$$

Table 1: Recursion for the calculation of the partition function. Calligraphic symbols denote energy parameters for different loop types: hairpin loops $\mathcal{H}(i,j)$; interior loops, bulges, and stacks $\mathcal{I}(i,j;k,l)$. The multi-loop energy is modeled by the linear ansatz $M = M_C + M_I \cdot \text{degree} + M_B \cdot \text{unpaired}$; the interior-loop sum is restricted to loops of size $u \le u_{\max}$.... ..."
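The shape of these recursions is easiest to see in code. The sketch below keeps only the dynamic-programming skeleton: a single flat pairing energy stands in for the loop terms H(i,j), I(i,j;k,l) and the multi-loop parameters M_C, M_I, M_B, and the function name is illustrative. With a pairing energy of zero the recursion simply counts secondary structures:

```python
import math

def partition_function(seq, m=3, kT=0.6, pair_energy=-2.0):
    """Stripped-down McCaskill-style recursion over subsequences seq[i..j].

    Q[i][j] sums Boltzmann weights over all secondary structures of
    seq[i..j]; Qb[i][j] (the Q^B of the table) restricts to structures in
    which seq[i] pairs with seq[j]. A flat per-pair energy replaces the
    full loop-dependent energy model, so this is a sketch of the DP shape.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    w = math.exp(-pair_energy / kT)      # Boltzmann weight of one base pair
    Qb = [[0.0] * n for _ in range(n)]
    Q = [[0.0] * n for _ in range(n)]

    def q(i, j):                         # empty intervals contribute 1
        return 1.0 if i > j else Q[i][j]

    for span in range(n):                # fill by increasing interval length
        for i in range(n - span):
            j = i + span
            if (seq[i], seq[j]) in pairs and j - i > m:
                Qb[i][j] = w * q(i + 1, j - 1)
            # either seq[j] is unpaired, or it pairs with some k in [i, j)
            Q[i][j] = q(i, j - 1) + sum(q(i, k - 1) * Qb[k][j]
                                        for k in range(i, j))
    return Q[0][n - 1] if n else 1.0

# "GAAAC" admits exactly one base pair (G..C closing a hairpin of 3),
# so with pair_energy = 0 the partition function counts 2 structures:
# the empty structure and the single hairpin.
print(partition_function("GAAAC", pair_energy=0.0))  # -> 2.0
```

Like the full recursion in the table, the work is O(n^2) states with an O(n) sum per state (O(n^4) for the interior-loop sum in the full model, usually capped via u_max); base pair probabilities then follow from an outside pass over the same arrays.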