### Table 2: Results on unsolved problems YN and SWV

"... In PAGE 7: ... The experiments with a stopping criterion according to (4) provided good approximations of the (previous) upper bounds within a relatively short run-time. Table2 shows our approximation within about 1% for the YN problems and 5% for the SWV benchmark problems, respectively. In the table, LB denotes the lower and UB the upper bounds taken from [6, 8, 20].... In PAGE 12: ... The algorithm returns simply to that temperature where the best solution so far has been found. The tempering strategy was used in runs that have been denoted in Table2 by the index t !1.... ..."

### Table 4. Results on still unsolved problems YN and SWV

"... In PAGE 13: ... We have chosen the second cooling schedule to attack the larger 20 20 benchmark problems YN1 till YN4 and the 50 10 problems SWV11, SWV12, SWV13, and SWV15. Table4 displays that we improved and approached the known upper bounds (the values from the OR-Library, (? The results reported are the best ones obtained after 5 runs of the algorithm and the corresponding computation time is the average time over these ve runs.... ..."

### Table 1: Results in the ROB and GRP domain (previously unsolved problems), 1998

"... In PAGE 20: ... The orderings used to tackle the target problems are equal to the orderings used to solve their related source problems. Table1 gives an overview on the results we obtained in the GRP and ROB domain. For each penalty function and the consistency values ef 2 f0:7; 0:8; 0:9g we depict the number of successful proof runs and the accumulated run-time (in seconds) counting failures with 600 seconds.... ..."

### Table 8: Improved objective values of the MIPLIB 2003 unsolved problems.

"... In PAGE 20: ...o far the best known solution for stp3d is 500.736. 7 Reducing the gap of all open problems Using the next generation of Xpress heuristics, based on local search procedures built on top of Xpress 2006B, improved solutions were found for all of the remaining (seven) open problems from MIPLIB 2003. Table8 provides these improvements along with the previously best know objectives, to our best knowledge. Before this study, the best know objectives of the open problems were mostly taken from the discussion list of the MIPLIB 2003 website [5].... In PAGE 20: ... Several recent papers ([2, 10, 11, 20, 27]) referencing MIPLIB 2003 were also considered to define the best objectives. From Table8 it can be seen that the new local search heuristics were capable of consid- erably improving the best know solutions. The most notable achievement occurred with the ds problem, which consists of a 58.... ..."

### Table 10: Success and unsolved problems in the automatic parallelization of the Perfect Benchmarks

"... In PAGE 28: ... The sole purpose of this section is to give some evidence that the automation of the hand transformations outlined in this paper is a feasible goal, and to point out problems that may be hard to solve. Table10 lists the Perfect Benchmarks and indicates, for each program, whether Polaris is able to recognize all the signi#0Ccant parallel loops in Tables 3 through 9. As shown in Table 10, Polaris can recognize all signi#0Ccant parallel loops in about half of the programs.... In PAGE 28: ... For details on these issues, the reader is referred to the description of the individual codes in Section 3. Table10 shows that signi#0Ccant progress in automatic parallelization is possible with the techniques described in this paper. Only two of the Perfect Benchmarks could be parallelized... ..."

### Table 4: All values are CPU-milliseconds; parentheses indicate unsolved problems.

"... In PAGE 12: ... As an example, consider again the hypothetical results of Tables 1 through 3. The inherent difficulty of Problem 5 (as measured by the cost of the eventual non-learning solution in Table4 ) exceeds the total difficulty of Problems 1 through 4, yet Problem 5 is excluded from the analyses. Thus evenan average CPU seconds per successful solution performance measure unfairly shows an increase in problem solving time for EBL.... ..."

### Table 5: All values are CPU-milliseconds; parentheses indicate unsolved problems.

"... In PAGE 8: ...The differing values in Table5 arise from the use of three different indexing algorithms. The data for EBL1 assume constant-time indexing, for EBL2 a logarithmic time (in the number of database entries) indexing scheme, and for EBL3 (our original problem solver) a linear-time indexing strategy.... ..."

### Table 2: All values are CPU-milliseconds; parentheses indicate unsolved problems.

"... In PAGE 7: ...Critical Look performance degradation for the EBL system. Table2 yields the same figures, while Table 3 yields 1.5 and 2.... ..."