### Table I. Comparison of tightness of convex relaxations for original and reformulated problems

2006

Cited by 5

### Table 1. Comparisons of the relaxations for example 3.

2002

"... In PAGE 14: ... For this example, only one cutting plane yields the same tightness of the relaxation as the convex hull. The numerical results are shown in Table 1. Note that the big-M relaxation yields the lowest objective value relative to the optimal solution, 4.... In PAGE 14: ... As shown in Figure 15, the cutting plane is a facet of the convex hull. From Table 1 it can be seen that the big-M relaxation with a cutting plane yields a relaxation competitive with the convex hull. Cutting planes in x-y space: example 2 Let us revisit example 2.... In PAGE 26: ...List of Tables Table 1. Comparisons of the relaxations for example 3.... ..."
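The snippet contrasts the big-M relaxation with the convex hull, and notes that a cutting plane can close the gap. A minimal sketch of this effect (a toy semicontinuous instance chosen for illustration, not the paper's example): take the disjunction "x = 0 or 2 ≤ x ≤ 10" and maximize x − 5y, where y is the relaxed binary indicator.

```python
from scipy.optimize import linprog

# Maximize x - 5*y  <=>  minimize -x + 5*y, with variables z = [x, y].
c = [-1.0, 5.0]
bounds = [(0, 10), (0, 1)]      # 0 <= x <= 10, relaxed 0 <= y <= 1

# Big-M formulation of "x = 0 or 2 <= x <= 10" with M = 100:
#   x <= M*y   and   x >= 2*y
A_bigM = [[1.0, -100.0],        # x - 100*y <= 0
          [-1.0,   2.0]]        # 2*y - x   <= 0
r_bigM = linprog(c, A_ub=A_bigM, b_ub=[0, 0], bounds=bounds)

# Convex-hull formulation replaces M with the true upper bound 10:
#   x <= 10*y  and  x >= 2*y
A_hull = [[1.0, -10.0],
          [-1.0,  2.0]]
r_hull = linprog(c, A_ub=A_hull, b_ub=[0, 0], bounds=bounds)

print(round(-r_bigM.fun, 2))   # big-M relaxation bound: 9.5 (weak)
print(round(-r_hull.fun, 2))   # hull relaxation bound:  5.0 (= MIP optimum)
```

Adding the valid inequality x ≤ 10y as a cutting plane to the big-M model recovers the hull bound here, mirroring the snippet's observation that a single cut can make big-M competitive with the convex hull.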

### Table 2.13.1. Use of a relaxed block algorithm within sparse multifrontal QR factorization to enable more Level 3 BLAS operations. Times for factorization (in seconds) are obtained on eight processors of an Alliant FX/80 and on one processor of a Convex C220.

We show in Table 2.13.1 the influence of relaxing the sparsity structure of the frontal matrices on the performance of the QR algorithm. We see that, with relaxation of the nonzero structure, we sometimes obtain a significant decrease in the time to perform the factorization step, even though slightly more floating-point operations are necessary because of the relaxed sparsity structure. This performance improvement comes entirely from the relative increase in the Megaflop rate during QR factorization, due to the greater use of higher-level BLAS kernels. We also see that, although the unblocked version of the code is 60% faster on the Convex than on the Alliant (see column 2, matrices large2 and EXP), higher performance is obtained on the Alliant than on the Convex with the relaxed block algorithm (see column 6). Much of this work is described in detail in [3], and a report on this work is currently in preparation [1].
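The idea above, treating a nearly dense frontal matrix as fully dense so that the factorization runs as one BLAS-3-rich call instead of many small column operations, can be sketched in numpy. This toy sketch (not the authors' Fortran code, and it does not model the sparse data structures) contrasts a single dense factorization of a "relaxed" front against a column-at-a-time Householder sweep; both produce the same triangular factor up to row signs.

```python
import numpy as np

# Toy "frontal matrix": mostly dense with a few structural zeros.
rng = np.random.default_rng(1)
F = rng.standard_normal((8, 5))
F[6:, 3:] = 0.0                  # structural zeros in the sparsity pattern

# Relaxed strategy: ignore the zeros, treat the block as fully dense, and
# factorize with one Level-3-BLAS-rich call (a few wasted flops on zeros).
R_dense = np.linalg.qr(F, mode="r")

# Column-by-column Householder QR: many small Level-2-style updates.
A = F.copy()
m, n = A.shape
for k in range(n):
    x = A[k:, k].copy()
    v = x.copy()
    v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
    v /= np.linalg.norm(v)
    A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])   # rank-1 reflector update
R_col = np.triu(A[:n, :])

# Same triangular factor up to row signs; only the kernel granularity differs.
print(np.allclose(np.abs(R_dense), np.abs(R_col), atol=1e-8))
```

The relaxed version does strictly more arithmetic (it touches the padded zeros), but on real hardware a single blocked GEMM-based factorization typically runs at a much higher Megaflop rate, which is exactly the trade-off the table reports.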

### Table 3: Uniform Model with the convex quadratic valid inequalities (21). n is the number of customer segments, m is the number of products, and v is a label of the problem instance. The column "MIP" is the optimal objective value (4), the column "LP" is the optimal objective value of the LP relaxation of (4), and the column "With Cut" is the optimal objective value of the continuous relaxation of (4) with the convex quadratic inequality (21).

2007

"... In PAGE 29: ... We see that these inequalities are indeed cuts since the optimal solution of the LP violates them in most instances. There are four anomalies in Table 3, namely, the instances (n, m, v) with (10, 40, 5), (10, 60, 1), (10, 60, 4) and (10, 60, 5). For each of these instances, the objective value of the QCP relaxation is strictly less than the optimal objective value of the MIP.... ..."

Cited by 1

### Table 1: Synthetic experiments, comparing fixed-order learning methods (given the correct variable order), with training sample size 50. Loglikelihood loss. Columns: Data Set, Convex, BIC, BDe.

"... In PAGE 8: ... All algorithms were given the correct variable ordering in these synthetic experiments. Table 1 shows the results obtained by the convex relaxation technique versus the greedy search algorithms on a training sample of size 50 drawn from the synthetic Bayesian network models. Here we can see that the convex technique outperforms the greedy heuristic search procedures, using both the BDe and BIC scores.... ..."

### Table 1: Running times for the continuous ASG algorithm and the discretized version. The number of parameter functions that are discontinuous at the same time was varied; however, this was found to make relatively little difference to the running time. 5 Concluding Remarks We have presented an active-set method which, under mild assumptions on the problem's parameters, is capable of finding the exact solution to the continuous-time quadratic cost network flow problem efficiently. Although only a relatively simple example is included here for illustration purposes, the algorithm has been tested extensively on many other large and highly non-trivial problems, and has consistently returned the same efficiency. Other extensions of the algorithm are possible, such as relaxing the strong convexity of the problem to weak convexity, or relaxing the network structure of the problem to address the more general continuous-time monotropic programming problem (Rockafellar 1984). We are also conducting research into using this kind of model for water distribution networks and traffic flow problems.

"... In PAGE 17: ...2 Comparison with Discretization To show that the continuous ASG algorithm is more effective in practice than simply discretizing the problem, we solved a range of randomly generated problems using the two different approaches. The running times in CPU seconds on a DEC Alpha workstation are summarized in Table 1. Problems are classified according to the number of nodes, arcs and atomic intervals.... ..."

### Table 4: UCI data set experiments, comparing methods that learn both structure and order, with training sample size 50. Loglikelihood loss.

"... In PAGE 9: ... Interestingly, the solution quality is close to the fixed-order case, which only benefited slightly from having the correct variable ordering. Table 4 shows the results obtained by the convex relaxation technique versus the greedy search algorithms on the UCI data sets. Here the quality of the outcome is mixed.... ..."

### Table 3: Synthetic experiments, comparing methods that learn both structure and order, with training sample size 50. Loglikelihood loss.

"... In PAGE 9: ... However, other than not imposing a variable ordering, the algorithms were run exactly as described above for the fixed-order case. Table 3 shows the results obtained by the convex relaxation technique versus the greedy search algorithms on the synthetic problems. Here we see a modest advantage for the convex over the greedy search methods.... ..."

### Table 1 is a comparison of bounds obtained from MSDR3 and other relaxation methods applied to instances from QAPLIB [6]. The first column OPT denotes the exact optimal value of the problem instance, while the following columns contain the lower bounds from the relaxation methods: GLB, the Gilmore-Lawler bound [10]; KCCEB, the dual linear programming bound [15]; PB, the projected eigenvalue bound [12]; QPB, the convex quadratic programming bound [1]; SDR1, SDR2, SDR3, the vector-lifting semidefinite relaxation bounds [27] computed by the bundle method [24]; the last column is our MSDR3. All output values are rounded up to the nearest integer. For solving QAP, minimizing trace AXBX^T and minimizing trace BXAX^T are equivalent. But for the relaxation MSDR3, exchanging the roles of A and B results in two different formulations and bounds. In our tests we use both versions and take the larger output as the bound for MSDR3. We then keep the better formulation throughout the branch-and-bound process, so that we do not double the computational work.

2006

"... In PAGE 16: ... We then keep the better formulation throughout the branch-and-bound process, so that we do not double the computational work. From Table 1, we see that the relative performance of the LP-based bounds GLB and KCCEB is unpredictable. On some instances, both are weaker than even the least expensive PB bound.... In PAGE 17: ... (the first table row is truncated in the snippet: "...a. 4887 4965 4621")

| Instance | OPT | GLB | KCCEB | PB | QPB | SDR1 | SDR2 | SDR3 | MSDR3 |
|---|---|---|---|---|---|---|---|---|---|
| Nug30 | 6124 | 4539 | 4785 | 5266 | 5362 | 5413 | 5651 | 5803 | 5446 |
| rou12 | 235528 | 202272 | 223543 | 200024 | 205461 | 208685 | 219018 | 223680 | 207445 |
| rou15 | 354210 | 298548 | 323589 | 296705 | 303487 | 306833 | 320567 | 333287 | 303456 |
| rou20 | 725522 | 599948 | 641425 | 597045 | 607362 | 615549 | 641577 | 663833 | 609102 |
| scr12 | 31410 | 27858 | 29538 | 4727 | 8223 | 11117 | 23844 | 29321 | 18803 |
| scr15 | 51140 | 44737 | 48547 | 10355 | 12401 | 17046 | 41881 | 48836 | 39399 |
| scr20 | 110030 | 86766 | 94489 | 16113 | 23480 | 28535 | 82106 | 94998 | 50548 |
| tai12a | 224416 | 195918 | 220804 | 193124 | 199378 | 203595 | 215241 | 222784 | 202134 |
| tai15a | 388214 | 327501 | 351938 | 325019 | 330205 | 333437 | 349179 | 364761 | 331956 |
| tai17a | 491812 | 412722 | 441501 | 408910 | 415576 | 419619 | 440333 | 451317 | 418356 |
| tai20a | 703482 | 580674 | 616644 | 575831 | 584938 | 591994 | 617630 | 637300 | 587266 |
| tai25a | 1167256 | 962417 | 1005978 | 956657 | 981870 | 974004 | 908248 | 1041337 | 970788 |
| tai30a | 1818146 | 1504688 | 1565313 | 1500407 | 1517829 | 1529135 | 1573580 | 1652186 | 1521368 |
| tho30 | 149936 | 90578 | 99855 | 119254 | 124286 | 125972 | 134368 | 136059 | 122778 |

Table 1: Comparison of bounds for QAPLIB instances... ..."
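The caption above states that minimizing trace AXBX^T and minimizing trace BXAX^T over permutation matrices X are equivalent: substituting X with X^T (also a permutation) and using the cyclic property of the trace maps one objective to the other. A small brute-force numpy check of this claim on random 3×3 instances (illustrative only, not the paper's code):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.integers(0, 10, (n, n)).astype(float)
B = rng.integers(0, 10, (n, n)).astype(float)

def perm_matrices(n):
    """Yield all n-by-n permutation matrices."""
    for p in itertools.permutations(range(n)):
        X = np.zeros((n, n))
        X[np.arange(n), p] = 1.0
        yield X

# The two objectives take the same set of values over all permutations,
# so in particular their minima coincide.
v1 = sorted(np.trace(A @ X @ B @ X.T) for X in perm_matrices(n))
v2 = sorted(np.trace(B @ X @ A @ X.T) for X in perm_matrices(n))
print(np.allclose(v1, v2))
```

Since the values agree only as a set (via the bijection X ↦ X^T), a relaxation such as MSDR3 that is not invariant under this substitution can indeed yield different bounds for the two formulations, which is why the caption reports taking the larger of the two.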

Cited by 2