### Table 1: Comparative performance results for different rule combinations. The abbreviations are: MP for max-product, SP for sum-product, and C1, C2 for the first and second constraint-sets described in Section 5.

"... In PAGE 11: ... MP, the additional 54 constraints, defined in the previous section, by C1, and the additional set of nine permutation constraints by C2. Table 1 shows the percentage of Sudoku puzzles that can be completely solved using various combinations of the constraints. Table 1 also shows, for the failure cases, the average number of cells that were revealed until we arrived at a stopping-set.... In PAGE 11: ... Table 1 shows the percentage of Sudoku puzzles that can be completely solved using various combinations of the constraints. Table 1 also shows, for the failure cases, the average number of cells that were revealed until we arrived at a stopping-set. The last column presents the average processing time for solving a single Sudoku puzzle, measured in milliseconds.... In PAGE 11: ... It can be seen that when using the decision rules derived from the factor-graph representation together with the first rule-set described in this section, the contribution of the second rule-set is negligible. The second part of Table 1 shows the performance results of... In PAGE 11: ... the sum-product algorithm and the results of applying the sum-product algorithm on stopping-sets obtained from several combinations of decision rules. Table 1 shows that the best strategy in terms of performance (and also in terms of computational complexity compared with the sum-product) is applying the sum-product algorithm on the stopping-set obtained by the (extended version of) max-product. The short cycles that appear in the Sudoku factor-graph cause the sum-product algorithm to quickly amplify incorrect assignments.... ..."
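The rule-based solving loop described in the snippet (apply decision rules until no cell is forced, at which point the remaining blanks form a "stopping-set") can be illustrated with a much simpler stand-in. The sketch below uses plain naked-singles constraint propagation, not the paper's max-product decision rules or its C1/C2 constraint sets; `peers` and `propagate` are hypothetical names introduced here for illustration:

```python
def peers(r, c):
    """Cells sharing a row, column, or 3x3 box with (r, c)."""
    ps = set()
    for k in range(9):
        ps.add((r, k))
        ps.add((k, c))
    br, bc = 3 * (r // 3), 3 * (c // 3)
    for i in range(br, br + 3):
        for j in range(bc, bc + 3):
            ps.add((i, j))
    ps.discard((r, c))
    return ps

def propagate(grid):
    """Naked-singles propagation on a 9x9 grid (0 = blank): remove the
    values of solved peers from each cell's candidate set until no cell
    is forced.  The cells still undecided form the 'stopping set'."""
    cand = {(r, c): ({grid[r][c]} if grid[r][c] else set(range(1, 10)))
            for r in range(9) for c in range(9)}
    changed = True
    while changed:
        changed = False
        for (r, c), s in cand.items():
            if len(s) > 1:
                # Eliminate values already fixed in some peer cell.
                s -= {next(iter(cand[p])) for p in peers(r, c)
                      if len(cand[p]) == 1}
                if len(s) == 1:
                    changed = True
    stopping_set = [cell for cell, s in cand.items() if len(s) > 1]
    solved = {cell: s.pop() for cell, s in cand.items() if len(s) == 1}
    return solved, stopping_set
```

Easy puzzles are fully solved by this loop alone; harder ones halt with a non-empty stopping set, which is exactly where the paper hands control over to sum-product on the residual problem.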

### Table 2. Performance comparison between graph cuts and topology cuts with the same initialization and parameter values. (Column headers: Image, Size, Graph Cuts, Topology Cuts.)

in Topology Cuts: A Novel Min-Cut/Max-Flow Algorithm for Topology Preserving Segmentation in N-D Images

"... In PAGE 8: ... Our algorithm gives a result (Figure 8 (c)) that faithfully conforms to the initialization, which is more meaningful than that of the standard graph cuts algorithm (Figure 8 (b)). Table 2 shows the comparison between the graph cuts implementation [5] and our topology cuts algorithm.... ..."
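Topology cuts builds on standard min-cut/max-flow machinery. As a hedged illustration of that underlying machinery only (not the paper's topology-preserving variant), here is a minimal Edmonds-Karp max-flow with min-cut recovery; `max_flow_min_cut` is an illustrative name, and the capacity-matrix representation is an assumption made for brevity:

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max-flow; returns (flow value, source side of a
    minimum cut).  cap[u][v] is the capacity of the edge u -> v."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]

    def bfs():
        # Shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    total = 0
    while (parent := bfs()) is not None:
        # Bottleneck residual capacity along the s-t path.
        v, bottleneck = t, float('inf')
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        # Push the bottleneck amount along the path.
        v = t
        while parent[v] is not None:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    # Min cut: vertices still reachable from s in the residual graph.
    reachable, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in reachable and cap[u][v] - flow[u][v] > 0:
                reachable.add(v)
                q.append(v)
    return total, reachable
```

In segmentation, `s` and `t` model the object and background terminals and the cut partitions pixels; the topology-preserving extension constrains which cuts are admissible.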

### Table 1. Comparison of graph cut and progressive cut in speed

"... In PAGE 6: ... Another notable strength of the new algorithm is that it provides faster visual feedback. Since the eroded graph is generally much smaller than the graph on the whole image, the computational cost of the optimization process is greatly reduced, as demonstrated by Table 1 in Sec.... In PAGE 7: ... In Sec. 2.3.5, we have mentioned that our algorithm saves much time compared with the existing graph cut methods [1,2], due to the eroded graph. To demonstrate this strength, we compare the time cost of our algorithm with that of the existing graph cut method in Table 1, using Indian girl, man, pity dog, sleepy dog, bride, and little girl as the test images. The running time is tested on a PC of P4-3.... ..."

### Table 1. Average denoising performance of various inference techniques and models on 10 test images

2006

"... In PAGE 10: ... We find that the model proposed here substantially outperforms the model from [4] using the suggested parameters, both visually and quantitatively. As detailed in Table 1, the PSNR of the learned model is better by more than 5dB. Figure 4 shows one of the 10 test images, in which we can see that the denoising results from the learned model show characteristic piecewise constant patches, whereas the results from the hand-defined model are overly smooth in many places.... In PAGE 12: ... Since this approximation is possible for both max-product and sum-product BP, we report results for both algorithms. Table 1 compares both algorithms to a selection of pairwise MRFs (always with 50% update probability). We can see that the higher-order model outperforms the pairwise priors by about 0.... ..."

Cited by 4
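The denoising comparisons above are reported in PSNR. For reference, PSNR between a ground-truth image and a denoised estimate is just a logarithmic rescaling of the mean squared error; a minimal sketch (the `psnr` helper and its flat-list image representation are illustrative assumptions, not the paper's code):

```python
import math

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given here as flat lists of pixel intensities."""
    mse = sum((r - e) ** 2
              for r, e in zip(reference, estimate)) / len(reference)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

On this scale, the reported gain of more than 5 dB corresponds to roughly a 10^0.5 ≈ 3.2× reduction in mean squared error.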

### Table 1: max cut bounds, 3 specific graphs

1995

"... In PAGE 4: ... 0.87856 and also computes an upper bound which does not exceed the optimal value by more than a factor of 1/0.87856. In Table 1, we compare the performance of KL, SA, PO, and their algorithm, GW, on two random graphs with 500 vertices and edge probabilities 0.05 and 0.... In PAGE 4: ...of 0.05 and 0.5 respectively. Table 1 contains the ratios c/u, where c is the value of the cut achieved by the corresponding algorithm, and u is the upper bound... ..."

Cited by 8
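KL in the comparison above stands for a local-search heuristic. As a hedged sketch of that family only (not the GW semidefinite-programming rounding that achieves the 0.87856 guarantee), a single-vertex-flip local search for max cut might look like the following; `local_search_max_cut` is an illustrative name:

```python
def local_search_max_cut(n, edges):
    """Greedy single-vertex-flip local search for max cut.  Starts from
    the all-zeros side assignment and flips any vertex whose move to the
    other side increases the cut, until no improving flip exists."""
    side = [0] * n

    def gain(v):
        # Cut change from flipping v: incident edges to the other side
        # become uncut (-1 each); edges to the same side become cut (+1).
        g = 0
        for u, w in edges:
            if v in (u, w):
                other = w if u == v else u
                g += 1 if side[other] == side[v] else -1
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
    cut = sum(1 for u, w in edges if side[u] != side[w])
    return cut, side
```

Any local optimum of this flip neighborhood cuts at least half of all edges, which is why such heuristics are natural baselines against the GW bound in ratios like c/u.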

### Table 2: Graph cut timings for the flower garden sequence (all in minutes:seconds). Note that in each case, the graph cut algorithm is iterated four times for convergence.

2001

"... In PAGE 24: ... Note that in each case, the graph cut algorithm is iterated four times for convergence. A table comparing the timings for hierarchical graph cuts with different extents of overloading is shown in Table 2. L-to-1 refers to a coarse level representing L original levels.... ..."

Cited by 64

### Table 6: Graph partitions of random graphs generated by cutting the hypercube and grid embeddings

"... In PAGE 16: ... We find that the bisection widths for hypercube embeddings are about the same for all hyperplanes whereas for grid embeddings, the two partitions dividing the grid in half vertically and horizontally give the best partitions. Table 6 shows how the Mob hypercube and grid embedding algorithms perform as graph-partitioning algorithms. The data for random graphs on the performance of the Mob graph-partitioning algorithm and the KL graph-partitioning algorithm is taken from our study of local search graph-partitioning heuristics in [19,21].... In PAGE 17: ... Table 6 by the percentage of all edges that cross the cut between A and B. We found that 16-to-1 grid and hypercube embeddings with our Mob-based heuristics produced bisection widths comparable to those for the Mob heuristic for graph-partitioning.... In PAGE 17: ... The performance of the Mob embedding algorithms interpreted as graph-partitioning algorithms is remarkable, considering that Mob is optimizing the "wrong" cost function. While the data in Table 6 cannot show conclusively how good the Mob embedding algorithms are, the existence of a better graph-embedding algorithm would also imply the existence of a better graph-partitioning algorithm.... ..."
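The observation that all coordinate hyperplanes give about the same bisection width for hypercube embeddings can be checked directly: in the d-dimensional hypercube, the edges crossing the bit-i hyperplane are exactly those that flip bit i, so every hyperplane cuts 2^(d-1) of the d·2^(d-1) edges. A small sketch (function names are illustrative, not from the paper):

```python
def hypercube_edges(d):
    """Edges of the d-dimensional hypercube: pairs of vertex labels
    differing in exactly one bit (each edge listed once)."""
    return [(v, v ^ (1 << i)) for v in range(2 ** d)
            for i in range(d) if v < v ^ (1 << i)]

def hyperplane_cut_width(d, bit):
    """Bisection width when the hypercube is split by the value of one
    coordinate bit: count edges whose endpoints differ on that bit."""
    edges = hypercube_edges(d)
    return sum(1 for u, v in edges
               if (u >> bit) & 1 != (v >> bit) & 1)
```

For d = 4 every hyperplane cuts 8 of the 32 edges, i.e. a 1/d fraction regardless of which coordinate is chosen, matching the "about the same for all hyperplanes" remark.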

### Table 9: Characteristics of various graph partitioning algorithms.

1998

"... In PAGE 21: ...ot available. For the sake of simplicity, we have chosen to represent each property in terms of a small discrete scale. In the absence of extensive data, we could not have done any better anyway. In Table 9 we show three different variations of spectral partitioning [47, 46, 26, 2], the multilevel partitioning described in this paper, the levelized nested dissection [11], the Kernighan-Lin partition [31], the coordinate nested... In PAGE 22: ... Table 9: Characteristics of various graph partitioning algorithms. For each graph partitioning algorithm, Table 9 shows a number of characteristics. The first column shows the number of trials that are often performed for each partitioning algorithm.... In PAGE 22: ... Others only require the set of vertices and edges connecting them. The third column of Table 9 shows the relative quality of the partitions produced by the various schemes. Each additional circle corresponds to roughly a 10% improvement in the edge-cut.... In PAGE 23: ... First, it captures global structure through the process of coarsening [27], and second, it captures global structure during the initial graph partitioning by performing multiple trials. The sixth column of Table 9 shows the relative time required by the different graph partitioning schemes. CND, inertial, and geometric partitioning with one trial require a relatively small amount of time.... In PAGE 23: ... On the other hand, multilevel graph partitioning requires a moderate amount of time, and produces partitions of very high quality. The degree of parallelizability of the different schemes differs significantly and is depicted by a number of triangles in the seventh column of Table 9. One triangle means that the scheme is largely sequential, two triangles means that the scheme can exploit a moderate amount of parallelism, and three triangles means that the scheme can be parallelized quite effectively.... ..."

Cited by 495
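Since the Kernighan-Lin partition appears in the comparison, a heavily simplified sketch of its pairwise-swap refinement may help fix ideas. Real Kernighan-Lin also accepts temporarily worsening swaps within a pass and rolls back to the best prefix; `kl_refine` below is a greedy simplification with illustrative names, not the algorithm from [31]:

```python
def edge_cut(edges, part):
    """Number of edges whose endpoints lie on different sides."""
    return sum(1 for u, v in edges if part[u] != part[v])

def kl_refine(n, edges, part):
    """Greedy balanced refinement: repeatedly swap the vertex pair (one
    from each side) that most reduces the edge-cut, until no swap helps."""
    part = list(part)
    improved = True
    while improved:
        improved = False
        best = None
        base = edge_cut(edges, part)
        for a in range(n):
            for b in range(n):
                if part[a] == 0 and part[b] == 1:
                    # Tentatively swap a and b, score, then undo.
                    part[a], part[b] = 1, 0
                    c = edge_cut(edges, part)
                    part[a], part[b] = 0, 1
                    if c < base and (best is None or c < best[0]):
                        best = (c, a, b)
        if best is not None:
            _, a, b = best
            part[a], part[b] = 1, 0
            improved = True
    return part
```

Because every move swaps one vertex from each side, the bisection stays balanced; quality is measured by the same edge-cut metric that the table's "circles" column summarizes.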
