### Table 1: Algorithm: Semidefinite Embedding (SDE).

2005

Cited by 2

### Table 1. Comparison of results between grids with and without diagonals. New results

1994

"... In PAGE 2: ... For two-dimensional n n meshes without diagonals 1-1 problems have been studied for more than twenty years. The so far fastest solutions for 1-1 problems and for h-h problems with small h 9 are summarized in Table1 . In that table we also present our new results on grids with diagonals and compare them with those for grids without diagonals.... ..."

Cited by 11

### Table 12: Constraint Programming results.

1998

"... In PAGE 27: ... It can be seen that usually only a few seconds are needed by the Tabu code to find a solution a solution as good as that found by the LP heuristic. For purposes of comparison, in Table12 we reproduce the best results obtained by the Constraint Programming algorithm of Heipcke and Colombani ([16]) as reported in [6]. The columns LB and UB represent respectively the lower and upper bounds produced by the algorithm while column Cr.... ..."

Cited by 10

### Table I. Basic terms and notation for linear (LP), semidefinite (SDP), and conic programming. Term LP SDP Conic Notation

2005

Cited by 11
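The caption above aligns terminology across the three problem classes. For reference, the standard primal forms (standard textbook notation, not reproduced from the cited table) are:

```latex
\begin{align*}
\text{LP:}    &\quad \min_{x \in \mathbb{R}^n} \; c^\top x
              &\text{s.t.}\;& Ax = b,\; x \ge 0 \\
\text{SDP:}   &\quad \min_{X \in \mathbb{S}^n} \; \langle C, X \rangle
              &\text{s.t.}\;& \langle A_i, X \rangle = b_i,\; X \succeq 0 \\
\text{Conic:} &\quad \min_{x} \; \langle c, x \rangle
              &\text{s.t.}\;& Ax = b,\; x \in K
\end{align*}
```

Here $K$ is a closed convex cone; LP is the special case where $K$ is the nonnegative orthant, and SDP the case where $K$ is the cone of positive semidefinite matrices.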

### Table 3: Bounds from Semidefinite Programming. (Intel Pentium III, 933 MHz)

2004

### Table 2. Comparative compression results (percentages)

1987

"... In PAGE 7: ... Welch,12 on the Ziv-Lempel compression algorithm. This variation is labelled LZW in Table2 . We actually tried out two versions of the LZW program.... In PAGE 8: ... For the results shown in Table 2, k was chosen to be 4. The version of DMC used in Table2 started with a braid- structured initial model and was not subjected to any memory- size constraints. Furthermore, we set the parameters that control the cloning of states in the Markov model to values that give good results.... In PAGE 8: ... We compared the five different compression programs on several data files. The resulting compression factors are shown in Table2 . A compression factor is computed as the ratio between the size of the encoded (compressed) file and the size of the original file.... In PAGE 9: ... Both the LZW and CW methods are, for practical purposes, byte-oriented. It is indeed possible to implement bit-oriented versions of LZW and CW but, in the case of LZW (as seen in Table2 ), the results are poor. This is because the learning period for LZW becomes much longer, too long for LZW to achieve reasonable compression on typical files.... In PAGE 9: ...bserved figures in the range of 2.2 to 2.6 bits. Of the compression algorithms compared in Table2 , the LZW algorithm is by far the fastest, while the CW method is the slowest. In terms of storage requirements, the adaptive Huffman algorithm uses the least amount of storage and the CW method normally uses the most.... ..."

Cited by 69
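The snippet above defines a compression factor as the ratio between the compressed file size and the original file size (smaller is better). A minimal sketch of that measurement, using the standard-library `zlib` codec as a stand-in for the dictionary-based methods compared in the paper (the sample input is invented for illustration):

```python
import zlib

def compression_factor(data: bytes) -> float:
    """Ratio of compressed size to original size, per the paper's definition."""
    compressed = zlib.compress(data)
    return len(compressed) / len(data)

# Highly repetitive input compresses well, so the factor is well below 1.
sample = b"abracadabra " * 100
factor = compression_factor(sample)
assert 0.0 < factor < 1.0
```

Note that `zlib` implements an LZ77/Huffman scheme, which is related in spirit to LZW but is not one of the five programs (LZW, CW, DMC, adaptive Huffman, etc.) actually benchmarked in Table 2.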

### Table 2: Changes of the Cameroon-B52 problem size

1995

"... Our next experiment demonstrates the advantages of eliminating inactive constraints during the optimization process. Results collected in Table 2 monitor changes in the size of the real problem to be solved for one of the largest linear programs in our collection. For subsequent iterations of the primal-dual algorithm it thus reports: the number of LP constraints M, the number of non-zero elements of the still-active part of A, the number of non-zero elements in the adjacency structure AA^T, the number of off-diagonal elements of the Cholesky factor L, and the number of millions of flops (floating-point operations) required to compute the Cholesky decomposition of the AA^T matrix. This last value may be viewed as the approximate cost of a single primal-dual iteration (it is supposed that Cholesky decomposition takes 60-70% of the time of every iteration). It easily follows from the analysis of Table 2 that the primal-dual logarithmic barrier method is able to eliminate a remarkable fraction of LP constraints before reaching the optimum, which is not possible in a simplex method. Thus, if the linear program has a considerable number of inequality constraints and there is reason to suppose that many of them will be inactive at the optimum, then it is advisable to apply the logarithmic barrier method to solve it.... ..."

Cited by 4
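The Cameroon-B52 snippet estimates per-iteration cost from the flop count of the Cholesky factorization of AA^T. A standard estimate for that count is the sum over columns j of nnz(L[:, j])², where L is the Cholesky factor; for a fully dense factor this recovers the usual O(n³/3) figure. A sketch of that count on a small dense example (matrix names and sizes are illustrative, not from the paper):

```python
import numpy as np

def cholesky_flop_count(L: np.ndarray) -> int:
    """Estimate factorization flops as sum over columns j of nnz(L[:, j])**2."""
    nnz_per_col = (L != 0).sum(axis=0)
    return int((nnz_per_col ** 2).sum())

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))   # illustrative 5x8 constraint matrix
M = A @ A.T                       # AA^T is symmetric positive definite
L = np.linalg.cholesky(M)         # lower-triangular Cholesky factor
flops = cholesky_flop_count(L)
# Dense 5x5 lower triangle: columns have 5, 4, 3, 2, 1 nonzeros.
assert flops == 25 + 16 + 9 + 4 + 1
```

In the sparse setting the paper describes, dropping inactive constraints shrinks A, which reduces both the nonzeros of AA^T and the fill of L, and hence this flop estimate, directly.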