### Table 4: Results for the cardinality constrained efficient frontier

1998

"... In PAGE 26: ... The results for our heuristic algorithms with K = 10 and εi = 0.01, δi = 1 (i = 1,...,N) are shown in Table 4. In that table we show, for each of our five data sets and each of our three heuristics: (a) the median percentage error, (b) the mean percentage error, (c) the number of (undominated) efficient points, (d) the total computer time in seconds.... In PAGE 26: ... Note that for the pooled column we did not eliminate any dominated solutions. It can be seen from Table 4 that over our five test data sets no one of our heuristic algorithms is uniformly dominant. Although the GA heuristic performs better than the SA heuristic, which in turn performs better than the TS heuristic, the differences are not nearly as marked as they were for the UEF (Table 3).... In PAGE 27: ... Discussion. There are a number of points which can be made with respect to Table 4 and these are discussed in this section.... In PAGE 28: ... In Table 4 we have shown results for five data sets, one particular value of K and one set of values for εi and δi. Plainly for different data sets/values the results will be different. However, we believe that our key point, namely that it is important to use a number of heuristics and to pool their results, is established. As stated above, the percentage errors given in Table 4 are overestimates of the errors associated with each heuristic as they are derived from the UEF, which dominates the CCEF. One point that is important, however, is the distribution of these efficient points along this frontier.... In PAGE 29: ... This is illustrated in Fig. 9 for the DAX data set (which has the highest mean (pooled) error in Table 4) using the pooled results for all three heuristics. In that figure we have plotted the curves for K = 2, 3, 4 and 5 (εi = 0.... ..."
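
The pooling step this excerpt argues for — merge the efficient points returned by the GA, SA and TS heuristics and keep only undominated ones — can be sketched as a simple Pareto filter. The (risk, return) point layout and the heuristic outputs below are hypothetical, not from the paper's data sets:

```python
# Sketch of pooling heuristic results: combine the efficient points
# from several heuristics and discard dominated ones.

def dominates(a, b):
    """a dominates b if a has no higher risk and no lower return,
    and the two points are not identical."""
    return a[0] <= b[0] and a[1] >= b[1] and a != b

def pool_undominated(*frontiers):
    """Merge several (risk, return) frontiers, drop dominated points."""
    pooled = [p for frontier in frontiers for p in frontier]
    return sorted({p for p in pooled
                   if not any(dominates(q, p) for q in pooled)})

ga = [(0.10, 0.08), (0.20, 0.12)]
sa = [(0.10, 0.07), (0.15, 0.11)]
ts = [(0.20, 0.12), (0.30, 0.13)]
print(pool_undominated(ga, sa, ts))
```

Here (0.10, 0.07) is dropped because another heuristic found the same risk with a higher return, which is exactly why pooling helps when no single heuristic dominates.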

### Table 8: DIMACS comparisons: cardinalities

1999

"... In PAGE 15: ... We also show (in parentheses under the name of the instance) the cardinality of the best solution known (followed by a star when optimal). In Table 8 and Table 9 we compare our results with a selection of the best algorithms presented at the DIMACS challenge 1993. We chose the algorithms of Goldberg and Rivenburgh [16] (denoted by GR), of Soriano and Gendreau [13] (SG), and of Balas and Niehaus [3] (BN).... In PAGE 15: ... Finally, in addition to this selection of algorithms from the DIMACS Challenge, we also considered the algorithm of Orlandani and Protasi [23] (OP). For each algorithm and each instance of the MSS problem, we present the best solution found (Table 8) and the running times (Table 9). Approximate speed ratios among the different computers used in the experiments are available for (GR), (SG) and (OP); the corresponding times reported in Table 9 are normalized.... ..."

Cited by 4

### TABLE 4: Performance of the SM5.42R/INDO/S2 Model by Solute Functional Class for Solutes Composed of H, C, N, O, F, S, Cl, Br, and I

1999

### Table 3: Run times (CPU seconds) for pathwise coordinate optimization applied to fused lasso (FLSA) problems with a large number of parameters n, averaged over different values of the regularization parameters λ1, λ2.

2007

"... In PAGE 24: ... THE TWO-DIMENSIONAL FUSED LASSO. In Table 3 we show the run times for pathwise coordinate optimization for larger values of n. As in the previous table, these are the averages of run times for the entire path of solutions for a given λ1, and hence are conservative.... ..."
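
For context, the λ1 penalty in the FLSA objective acts as elementwise ℓ1 shrinkage, and the pathwise-coordinate literature notes it can be applied by soft-thresholding a fused solution. A minimal sketch of the operator, with hypothetical coefficient values:

```python
def soft_threshold(x, lam):
    """L1 shrinkage S(x, lam) = sign(x) * max(|x| - lam, 0)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Shrink a fitted coefficient vector toward zero with lambda1 = 0.5.
beta = [1.2, -0.3, 0.8, -2.0]
print([soft_threshold(b, 0.5) for b in beta])
```

Coefficients smaller in magnitude than λ1 are set exactly to zero, which is what produces sparse solution paths.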

Cited by 2

### Table 2. LP results: max cardinality case.

2004

"... In PAGE 12: ... but when maximizing the total length a greater reduction seems to be obtained than for the max cardinality case. There are several cases (see, for example, problems Oct2 and Oct3 in Table 2, and Oct0 and Oct3 in Table 3) where a considerable improvement in the quality of the LP solution is observed (number of fractional variables reduced by a factor of 20 for problem Oct0). Moreover, in one case (problem Oct2 in Table 3) the optimal integer solution is obtained by solving the LP formulation.... ..."

### Table 8. Complexity classes of all exact minimal ESOPs of 3 variables (#ALLBF = 256)

"... In PAGE 13: ... Evidently, in the set of such Boolean functions that require the largest number of cubes in their shortest ESOP, there is a subset of functions that belong to the set of functions with the largest number of cubes in its SNF. See for example Table 8. There are 66 functions of three variables that require three cubes in their shortest ESOP.... In PAGE 15: ... These tables confirm that the maximal number of cubes in an exact minimal ESOP occurs in most complex functions. However, Table 8... ..."
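
An ESOP (exclusive-or sum of products) represents a Boolean function as the XOR of product terms ("cubes"); the table counts how many cubes a shortest such representation needs. A minimal evaluator, using a hypothetical cube encoding (dict of required literal values, absent variables are don't-cares):

```python
from itertools import product

def eval_cube(cube, assignment):
    """A cube is a product term: every listed literal must match."""
    return all(assignment[v] == bit for v, bit in cube.items())

def eval_esop(cubes, assignment):
    """An ESOP is the exclusive-or of its cubes."""
    result = False
    for cube in cubes:
        result ^= eval_cube(cube, assignment)
    return result

# x1 XOR x2 written as an ESOP with two single-literal cubes.
esop = [{"x1": True}, {"x2": True}]
for x1, x2 in product([False, True], repeat=2):
    assert eval_esop(esop, {"x1": x1, "x2": x2}) == (x1 ^ x2)
print("x1 XOR x2 needs", len(esop), "cubes")
```

The same function would need two products plus negated literals in ordinary sum-of-products form, which is why ESOP cube counts are a distinct complexity measure.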

### Table 8: A comparison for the bound-constrained problems

1995

"... In PAGE 12: ... is made by the user); Δ(0) = α ‖g(0)‖2, where α = ‖g(0)‖2² / ((g(0))ᵀ B(0) g(0)) (the distance to the unconstrained Cauchy point, as suggested by Powell in [13]), except when the quadratic model is indefinite, in which case we omitted the test. The detailed results are summarized in Tables 6 and 7 for the 64 unconstrained problems (possibly including some fixed variables), and in Table 8 for the 13 bound-constrained problems (see ITRR, LAN and CAU, respectively). For each case, the number of major iterations ("#its") and the CPU times in seconds ("time") are reported.... In PAGE 15: ... This proves the necessity of a sophisticated selection procedure for the initial radii Δ(0)_i, one that allows a swift recovery from a bad initial value of the corresponding ratio. We conclude this analysis by commenting on the negative results of Algorithm ITRR on problem TQUARTIC (see Table 7), when comparing with CAU, and on problem LINVERSE (see Table 8), especially when comparing with LAN. For problem TQUARTIC (a quartic), the ITRR computed by both LANCELOT and Algorithm ITRR is quite small and prevents rapid progress to the solution.... ..."
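
The initial-radius rule quoted in this excerpt sets Δ(0) to the distance from the starting point to the unconstrained Cauchy point of the quadratic model, i.e. (‖g‖2² / gᵀBg) ‖g‖2. A small pure-Python sketch under that reading (dense B assumed, names hypothetical):

```python
import math

def cauchy_radius(g, B):
    """Distance to the unconstrained Cauchy point of the model
    m(s) = g^T s + 0.5 s^T B s, i.e. alpha * ||g||_2 with
    alpha = ||g||_2^2 / (g^T B g).  Requires positive curvature
    along -g; the excerpt omits the rule when the model is indefinite."""
    n = len(g)
    gBg = sum(g[i] * sum(B[i][j] * g[j] for j in range(n)) for i in range(n))
    if gBg <= 0.0:
        raise ValueError("indefinite model along -g: rule does not apply")
    gnorm2 = sum(x * x for x in g)
    return (gnorm2 / gBg) * math.sqrt(gnorm2)

# Identity Hessian: the Cauchy point is the full steepest-descent step,
# so the initial radius equals ||g||_2.
print(cauchy_radius([3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]]))
```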

Cited by 12

### Table 4. Standard box constrained benchmark problems to be minimized.

2005

"... In PAGE 23: ... The ZDT experiments clearly show that the selection based on the hypervolume leads to better results in terms of the measured indicators, because here s-MO-CMA significantly outperforms c-MO-CMA. On the common benchmark problems (Table 4), the NSGA-II is superior to the c-MO-CMA. The ε-indicator values are significantly better on ZDT1, ZDT4, and ZDT6, and the hypervolume indicator values additionally on ZDT3.... ..."
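
For a two-objective minimization problem like the ZDT suite, the hypervolume indicator driving selection here is just the area dominated by the front up to a reference point. A minimal sketch (front and reference values hypothetical):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D minimization front, bounded by the
    reference point ref.  Assumes the points are mutually non-dominated
    and better than ref in both objectives."""
    area = 0.0
    prev_f1 = ref[0]
    # Sweep from the largest f1 down; each point contributes a rectangle.
    for f1, f2 in sorted(front, reverse=True):
        area += (prev_f1 - f1) * (ref[1] - f2)
        prev_f1 = f1
    return area

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))
```

Hypervolume-based selection ranks an individual by how much it adds to this area, which rewards both convergence and spread along the front.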

### TABLE 6: Performance of the SM5.42R/INDO/S2 Model by Solvent Functional Class for Solutes Composed of H, C, N, O, F, S, Cl, Br, and I

1999

### Table 2: Constrained, Closed Queueing Network

1994

"... In PAGE 8: ... We also ran the optimization code starting from θi = 1; the number of iterations was larger, as would be expected. We also experimented with a constrained version of the same problem, in which we minimized 400 T(θ)⁻¹ subject to the three inequality constraints θ1 ≤ 15, θ4 ≤ 14, Σ_{i=1}^{4} θi ≤ 50. The results are shown in Table 2, in which the quantities shown are the same as those shown in Table 1 except that z is the objective function value reduced by a different integer part (29 instead of 73). The starting point and the tolerance ACC for these computations were the same as for those of Table 1.... In PAGE 9: ... For example, in Table 1 the solutions produced using a total of 2 million and 80 million service completions are not very different (2nd and 7th lines of the table). Similarly, in the first and third lines of Table 2 one sees that an increase from 100,000 to 9 million service completions produced relatively little change. This suggests that even a fairly small computational effort may produce a solution accurate enough for practical purposes.... ..."
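
Reading the excerpt's garbled constraints as θ1 ≤ 15, θ4 ≤ 14 and Σ_{i=1}^{4} θi ≤ 50, the constrained experiment minimizes 400/T(θ). In the paper T(θ) is estimated by simulating the closed queueing network; the stand-in throughput below is purely hypothetical, so only the shape of the constrained search is illustrative:

```python
from itertools import product

def throughput(theta):
    """Hypothetical stand-in for the simulated throughput T(theta);
    in the paper this value comes from a queueing-network simulation."""
    return sum(t ** 0.5 for t in theta)  # concave, increasing in each rate

def feasible(theta):
    t1, _, _, t4 = theta
    return t1 <= 15 and t4 <= 14 and sum(theta) <= 50

# Crude exhaustive search over integer parameter vectors, just to show
# the shape of the constrained problem min 400 / T(theta).
best = min((400.0 / throughput(th), th)
           for th in product(range(1, 16), repeat=4) if feasible(th))
print(best)
```

With a concave stand-in, the budget constraint Σθi ≤ 50 is active at the optimum, mirroring the usual behaviour of such resource-allocation formulations.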

Cited by 10