### Table 2: Worst-case complexity measures

"... In PAGE 26: ... The question mark in the row for ray classification indicates that it was not possible to analyze preprocessing time and memory requirements separately, since the ray classification data structure is built in a "lazy evaluation" fashion during the tracing phase rather than in a separate preprocessing phase. Table 2 surveys the worst-case complexity of these algorithms in order to allow comparison. It illustrates that the heuristic algorithms perform much better in the average case than in the worst case. ... ..."

### Table 5: System tolerant to worst-case failure

"... In PAGE 5: ... improvement with the addition of a larger number of redundant components. Table 5 illustrates the case of designing systems with better worst-case performance where an increase in schedule latency is allowed. The area savings are lower than in Tables 3 and 4, but the rules of thumb for semi-optimal implementations clearly work, since nearly equal reliability is obtained compared to the highly redundant fixed-logic implementation. ... ..."

### Table 1: Worst-case and average times, in ms

"... In PAGE 8: ... The lines cross as we near 100% of time and the slowest transactions; the concurrent collectors generally have smaller worst-case times than the throughput collectors. Table 1 shows the worst-case transaction times. The Pauseless algorithm's worst-case transaction time of 26 ms is over 45 times better than that of the next JVM, BEA's parallel collector. ... In PAGE 10: ... As expected, the parallel collectors all do worse, with the bulk of time spent in pauses ranging from 150 ms to several seconds. Table 1 also shows the ratio of worst-case transaction time to worst-case reported pause time. Note that JBB transactions are highly regular, doing a fixed amount of work per transaction. ... ..."

### Table 2: Worst-case complexities to establish AC.

"... In PAGE 3: ... Proposition 2: The worst-case time complexity of AC3rm is O(ed² + Σ_{C,X,a} c(C,X,a) · s(C,X,a)). Table 2 indicates the overall worst-case complexities to establish arc consistency with the algorithms AC3, AC3rm, AC2001, and AC3.2. ... In PAGE 4: ..., 2004]. By taking into account Proposition 3 and Table 2, we obtain the results given in Table 4. It appears that, for the longest branch, when > d², MAC3 and MAC3rm have a better worst-case time complexity than other MAC algorithms based on optimal AC algorithms, since we know that, for any branch, due to incrementality, MAC3 and MAC3rm are O(ed³). ... ..."
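The bounds quoted in this excerpt can be made concrete with a small sketch. This is purely illustrative: the instance sizes `e` and `d` are made up, and AC3rm's extra summation term is omitted; only the asymptotic forms O(ed³), O(ed²), and O(ed³) per branch come from the excerpt.

```python
# Illustrative evaluation of the worst-case AC bounds quoted above.
# e = number of constraints, d = maximum domain size (both hypothetical).

def ac3_bound(e: int, d: int) -> int:
    """AC3: O(e * d^3) worst-case time."""
    return e * d ** 3

def ac2001_bound(e: int, d: int) -> int:
    """AC2001 / AC3.2: O(e * d^2) worst-case time (optimal)."""
    return e * d ** 2

def mac3_branch_bound(e: int, d: int) -> int:
    """MAC3 / MAC3rm: O(e * d^3) for any branch, due to incrementality."""
    return e * d ** 3

if __name__ == "__main__":
    e, d = 500, 20
    print(ac3_bound(e, d))     # 4,000,000 operations in the worst case
    print(ac2001_bound(e, d))  # 200,000
```

The point of the comparison in the excerpt is that the non-optimal O(ed³) bound of MAC3/MAC3rm can still win over a long branch, because it is amortized across the whole branch rather than paid per node.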

### Table 10: Impact of on worst-case return

2007

"... In PAGE 23: ... As we can see from the figure, the rate of growth of the objective value is moderate, and appears to slow down for larger values. [Figure 1: Value of robust optimization problem as a function of parameter estimation error] In Table 10 we describe a different type of experiment involving this parameter. For each data set in the table, we first solve the robust optimization problem with the parameter set to 0, save its solution, and then compute the worst-case behavior of that vector for positive values of the parameter. ... ..."

### Table 1: Worst-case comparison

"... In PAGE 10: ... Comparison of Expansion Approaches. We compared the number of unifications which take place using each of these methods for various numbers of disjunctions (all disjunctions having two disjuncts). One can see from Table 1 that the worst-case score for the full expansion method is far worse than that of the other methods. It is not a practical method. ... ..."
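The impracticality of full expansion can be seen from a back-of-the-envelope count. This sketch is not from the paper; it only illustrates the standard observation that fully expanding n two-way disjunctions multiplies out to 2ⁿ cases.

```python
# Hypothetical illustration: full expansion of n disjunctions, each with
# k disjuncts, must consider every combination of choices, i.e. k**n
# expanded forms, before any unification work can be shared.

def full_expansion_count(n_disjunctions: int, disjuncts: int = 2) -> int:
    return disjuncts ** n_disjunctions

print(full_expansion_count(10))  # 1024 cases for just 10 disjunctions
```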

### Table 1: Worst-case Complexity of Some Or-parallel Schemes (M operations)

"... In PAGE 23: ... The upper bound, Õ(∛N), even if far from the lower bound, is of great importance, as it suggests that current implementation mechanisms are more complex than necessary, and that better implementation schemes are feasible. Furthermore, the proposed scheme achieves a worst-case time complexity that is better than all the or-parallel schemes reported in the literature (some of which have also been implemented); Table 1 compares the worst-case time complexity of performing a sequence of M operations on an N-node tree for some of the most well-known schemes for or-parallelism [11]. The scheme we present handles the OP without the use of the alias operation. ... ..."

### Table 4: Worst-case complexities to run MAC. Time complexity is given for a branch involving refutations.

2006

"... In PAGE 22: ... All the problems in the data set consist of 50 activities, while the number of precedence constraints varies. Table 4 shows the specification of the problems used in our experiment and the best solutions obtained. Note that the solutions obtained by our approach (Precedence) are optimal. ... In PAGE 23: ... Table 4. Min-cutset problems. ... In PAGE 46: ... Unfortunately, the optimality of the algorithm given in the paper does not hold (Likitvivatanavong, Personal Communication). By taking into account Proposition 5 and Table 2, we obtain the results given in Table 4. It appears that, for the longest branch, when > d², MAC3 and MAC3r(m) have a better worst-case time complexity than other MAC algorithms based on optimal AC algorithms, since we know that, for any branch, due to incrementality, MAC3 and MAC3r(m) are O(ed³). ... ..."

### Table 5: Performance guarantees in terms of worst-case factors from optimality for the critical-path heuristic (hcp) and the decision tree heuristic (hdt), for ranges of basic block sizes and various issue widths.

"... In PAGE 10: ... there were between 2.9 and 7.8 basic blocks where the decision tree heuristic found a better schedule than the critical-path heuristic. Table 5 shows performance guarantees in terms of the worst-case factor from optimality, a measure of the robustness of a heuristic. For each basic block, we calculated the ratio of the length of the schedule found by the heuristic over the length of the optimal schedule. ... ..."
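The robustness measure described in this excerpt is simple to compute once optimal schedule lengths are known. The schedule lengths below are made up for illustration; only the ratio definition comes from the text.

```python
# Worst-case factor from optimality: the maximum, over all basic blocks,
# of (heuristic schedule length) / (optimal schedule length).

def worst_case_factor(heuristic_lengths, optimal_lengths):
    return max(h / o for h, o in zip(heuristic_lengths, optimal_lengths))

hcp = [10, 12, 15, 9]  # hypothetical critical-path heuristic lengths
opt = [10, 11, 12, 9]  # hypothetical optimal schedule lengths
print(worst_case_factor(hcp, opt))  # 1.25 (third block: 15/12)
```

A factor of 1.0 means the heuristic matched the optimal schedule on every block; the table in the paper reports this maximum over ranges of block sizes and issue widths.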

### Table 1: Theoretical worst-case I/O costs of the four algorithms under evaluation.

"... In PAGE 6: ... 2.3 Theoretical Comparisons. In Table 1 we summarize the theoretical worst-case I/O costs of the four algorithms under evaluation, listed in order of increasing I/O cost. We make the following theoretical comparisons of the algorithms. ... In PAGE 12: ... When we run Distribution on data-long of 1.5×10⁶ segments with various values of M_used (see Fig. 6), in theory we would expect that using more main memory results in better performance, according to the I/O cost bound O((N/B) log_{M/B}(N/B) + K/B) (see Table 1), but the experiments show that using 4 MB gives the best performance (average running time 47.64 minutes), and using 20 MB gives a significantly worse performance (average running time 271 ... In PAGE 14: ... Figures 7–9 show the corresponding values of (a) average running times, (b) exact numbers of requested I/Os, and (c) average numbers of page faults (rounded to the nearest integer) for the four algorithms running on the three data sets. We refer to Table 1 for the theoretical properties of the four algorithms, and to Table 3 for the properties of the three data sets. Our experimental results show that while the performance of the three variations of plane sweep depends heavily on the average number of vertical overlaps, the performance of distribution sweep is both steady and efficient. ... In PAGE 14: ... (Figs. 7–9), but as input size grows, the performance becomes considerably worse, and by N = 10⁶ its running times are already out of comparison. Recall from Table 1 the algorithmic difference between 234-Tree-Core and 234-Tree. This shows that internal sorting assuming an infinite-size virtual memory performs much worse than external sorting when I/O becomes an issue. ... In PAGE 14: ... (Fig. 7). Excluding 234-Tree-Core, the performance of the remaining algorithms is the opposite of what we would expect from Table 1: 234-Tree always runs the fastest, B-Tree second, and Distribution third. This is because the average number of vertical overlaps is only (1/4)√N (see Table 3), and thus up to 2 ... In PAGE 18: ... Therefore, actual experiments running on real machines to measure the actual running times are necessary when we want to evaluate the practical I/O performance of the algorithms. For data set data-long, the I/O issue begins to play an important role (recall from Table 3 that the average number of vertical overlaps is N/8), and the performance of the four algorithms is consistent with their theoretical properties from Table 1: Distribution the fastest, B-Tree second, 234-Tree third, followed by 234-Tree-Core (see Fig. 8). ... In PAGE 18: ... For data set data-rec, the average number of vertical overlaps is N/4.8 (see Table 3), and thus the I/O issue becomes even more important. The performance of the four algorithms is again consistent with their theoretical properties from Table 1, and their running times differ significantly (see Fig. 9). ... In PAGE 18: ... (Figs. 7(b), 8(b), and 9(b)). Recall that the I/O cost bounds for Distribution and B-Tree are O((N/B) log_{M/B}(N/B) + K/B) and O(N log_B(N/B) + K/B), respectively (see Table 1). With the parameter M_used of the main memory size set to 4 MB, the two logarithmic terms in these bounds are almost the same, and it is the 1/B term that makes the difference significant. ... ..."
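The two I/O bounds quoted in this excerpt can be compared numerically. The parameter values below are hypothetical, not the paper's experimental settings; the sketch only shows that once the logarithmic factors coincide, the N/B versus N leading term dominates.

```python
import math

# Distribution sweep: O((N/B) * log_{M/B}(N/B) + K/B)
def distribution_io(N, K, B, M):
    return (N / B) * math.log(N / B, M / B) + K / B

# B-tree based plane sweep: O(N * log_B(N/B) + K/B)
def btree_io(N, K, B, M):
    return N * math.log(N / B, B) + K / B

# Hypothetical parameters: N items, K reported intersections,
# block size B, and M items of main memory.
N, K, B, M = 10**6, 10**5, 100, 10**4
print(distribution_io(N, K, B, M))  # roughly 2.1e4
print(btree_io(N, K, B, M))         # roughly 2.0e6
```

With these (made-up) values both logarithms evaluate to about 2, so the two bounds differ almost entirely by the factor of 1/B in the leading term, which is the point made at the end of the excerpt.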