### Table 5 describes the sizes of small, medium, and large objects for main objects, and the sizes of small and medium objects for in-line objects. All requests are 350 bytes in size. Table 6 describes the selected workload types.

2004

"... In PAGE 12: ... Table 5... ..."

Cited by 1

### Table 9: The key for associating regression techniques with the lines on the graphs in Figure 2. This table is laid out in blocks, similar to the graphs in the figure. Each block in the table lists the regression techniques associated with the lines of the corresponding graph for the case of small sample size.

1997

"... In PAGE 23: ... Again, to simplify the graph labels, the MISE has been multiplied by 10,000. Table 9 provides a key for associating regression techniques with the lines on the graphs in Figure 2: the regression techniques are ordered in the table for each graph according to their position when sample size is small. For example, the graph lines in the middle graph of Figure 2, from better to worse MISE (i.... ..."

Cited by 1
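The entry above ranks regression techniques, per graph, by their MISE (mean integrated squared error) at small sample sizes. As a minimal sketch of how such a ranking can be produced by simulation: the true curve, noise level, and the polynomial-degree "techniques" below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                       # assumed true regression curve (illustrative)
    return np.sin(2 * np.pi * x)

def fit_poly(x, y, degree):     # one candidate "technique": polynomial least squares
    coeffs = np.polyfit(x, y, degree)
    return lambda t: np.polyval(coeffs, t)

def mise(degree, n=30, reps=200):
    # Average the integrated squared error over many simulated samples;
    # the integral over [0, 1] is approximated by a mean over a fine grid.
    grid = np.linspace(0.0, 1.0, 201)
    total = 0.0
    for _ in range(reps):
        x = rng.uniform(0.0, 1.0, n)
        y = f(x) + rng.normal(0.0, 0.3, n)
        fhat = fit_poly(x, y, degree)
        total += np.mean((fhat(grid) - f(grid)) ** 2)
    return total / reps

# Ordering the candidate techniques (here: polynomial degrees 1..5)
# by estimated MISE, as the table's per-graph key does.
ranking = sorted(range(1, 6), key=mise)
```

For a one-period sine curve, low-degree fits carry large bias, so the small-MISE end of `ranking` favors the higher degrees.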

### Table 10: The key for associating regression techniques with the lines on the graphs in Figure 3. This table is laid out in blocks, similar to the graphs in the figure. Each block in the table lists the regression techniques associated with the lines of the corresponding graph for the case of small sample size.

1997

"... In PAGE 23: ... As before, all MISE values are multiplied by 10,000. Table 10 provides a key for associating regression techniques with the lines on the graphs in Figure 3: the regression techniques are ordered in the table for each graph according to their position when sample size is small. For example, the graph lines in the top left of Figure 3, from better to worse MISE (i.... ..."

Cited by 1

### Table 5 summarizes changes in code size between the reference implementation and the revised code. These line counts include makefiles, input decks, and so on, but these are relatively small. Table 5: sPPM Lines of Code

2005

"... In PAGE 63: ...4 63.0 Table 5: Statistics for Number of File Includes Release Mean Std. Dev.... In PAGE 63: ... All of the metrics had stable summary statistics over all the releases, except for one metric. Table 5 gives summary statistics for the metric Number of File Includes over all the versions studied.... ..."

### TABLE 1: CASE-CLUSTER SAMPLE - ORGANIZATIONAL CHARACTERISTICS

### Table 1). The dotted lines give the average gains without the use of problem sizes, and the solid lines are for the gains with the regression. The graphs show that the use of sizes usually, though not always, leads to a small improvement. The apparent advantage of the regression in delay's learning is mostly due to the choice of low time bounds for problems 9 and 10, which cannot be solved in feasible time. This luck in setting low bounds for two hard problems is not statistically significant. If the algorithm does not use problem sizes, it hits the time bounds of 16.9 and 14.0 on these problems (see Figure 5) and falls behind in its per-problem gain.

1997

"... In PAGE 3: ... The application of a method to a problem gives one of three outcomes: it may solve the problem; it may terminate with failure, after exhausting the available search space without finding a solution; or we may interrupt it, if it reaches some pre-set time bound without termination. In Table 1, we give the results of solving thirty transportation problems, by each of the three methods; we denote successes by s, failures by f, and hitting the time bound by b.... In PAGE 4: ...1 s 5.4 f 4 Table 1: Performance of apply, delay, and alpine on thirty transportation problems. Note that these data are only for illustrating the selection problem, and not for a general comparison of these search techniques; their relative performance may be very different in other domains.... In PAGE 4: ... Also note that the selection technique does not rely on specific properties of prodigy; it is equally applicable to selection among multiple methods in any AI system. Although each method outperforms the others on at least one problem (see Table 1), a glance at the data reveals that apply's performance in this domain is probably the best among the three. We use statistical analysis to confirm this intuitive conclusion, and show how to choose a time bound for the selected method.... In PAGE 12: ...pply's estimate of the maximal-gain bound, after solving all problems, is 9.6. It differs from the 11.6 bound, found from Table 1, because the use of bounds that ensure a near-maximal gain has prevented sufficient exploration. delay's total gain is 115.... In PAGE 12: ...elay's total gain is 115.7, or 3.9 per problem. If we used the data in Table 1 to find the optimal bound, which is 6.2, and solved all problems with this bound, we would earn 5.... In PAGE 12: ...3 per problem. The estimate based on Table 1 gives the bound 11.0, which would result in earning 12.... In PAGE 14: ...0. In this experiment, we first use the thirty problems from Table 1 and then sixty additional transportation problems. The horizontal axis shows the number of a problem, and the vertical axis is the running time; we mark successes by circles and failures by pluses.... In PAGE 19: ... We denote the number of sample problems by n, the problem sizes by size1, ..., sizen, and the corresponding running times by time1, ..., timen. In Figure 12, we give the results of regressing the success times for the transportation problems from Table 1. The top three graphs show the polynomial dependency, whereas the bottom graphs are for the exponential dependency.... In PAGE 20: ... We also allow the user to set a regression slope, which is useful when the past data are not sufficient for an accurate estimate. If the user specifies a slope, the system uses her value in the regression; however, it compares the user's value with the regression estimate of Table 11, determines the statistical significance of the difference, and gives a warning if the user's estimate is off with high probability. Note that the least-square regression and related t-test make quite strong assumptions... ..."

Cited by 2
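The snippet above regresses success times against problem sizes under two models, polynomial and exponential. A minimal sketch of that regression step, assuming invented sizes and running times (the real data come from the paper's Table 1): a simple least-squares fit in log-log coordinates for the polynomial model, time ≈ c·size^a, and in semi-log coordinates for the exponential model, time ≈ c·e^(a·size).

```python
import math

def least_squares(xs, ys):
    # Ordinary least-squares slope and intercept for y = slope*x + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

sizes = [2, 4, 6, 8, 10]              # hypothetical problem sizes
times = [1.1, 4.2, 9.5, 16.8, 26.0]   # hypothetical success times

# Polynomial model: log(time) = a * log(size) + log(c)
a_poly, _ = least_squares([math.log(s) for s in sizes],
                          [math.log(t) for t in times])

# Exponential model: log(time) = a * size + log(c)
a_exp, _ = least_squares(sizes, [math.log(t) for t in times])
```

With these made-up times (roughly proportional to size squared), the log-log slope `a_poly` comes out near 2, which is how such a fit distinguishes a polynomial growth rate from an exponential one.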

### Table 1 summarizes some of our results for arrangements of up to 10,000 lines. It can be seen that the distance from the original arrangement to the resulting one is very small, even when the approximating arrangement is one tenth the size of the given arrangement. The exact distance depends on the type of distribution used to generate the lines. Moreover, as expected, as we allow more compression, the quality of the arrangement degrades and the distance error grows.

2001

"... In PAGE 5: ... Table 1: Error Results Application: Computing the Discrepancy Supersampling is one of the most common approaches to the anti-aliasing problem in computer graphics. Since it has been shown that uniform sampling can lead to aliasing artifacts, a common approach has been to make use of the theory of discrepancy [39].... ..."

Cited by 4

### Table 2. Data cache miss rates for a 64-byte cache size and an 8-byte line size

"... In PAGE 8: ... In our simulations we assumed small cache sizes intended for application-specific inexpensive engines. For comparison reasons we have included some simulation figures for a very small cache (64 bytes) in Table 2, for a line size of 8 bytes and an MPEG stream containing only 34 frames, with a 150 x 100 pixel image size. We could have artificially created a bigger/smaller pointer application or increased/decreased the number of frames in the MPEG stream, or the frame size, but we feel confident that the established pattern in performance readings for this application remains the same.... ..."
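The entry above reports miss rates for a 64-byte cache with 8-byte lines, i.e. eight lines in total. As a minimal sketch of the bookkeeping behind such a measurement, assuming a direct-mapped organization and an invented address trace (the study itself drove the cache with an MPEG stream):

```python
CACHE_SIZE = 64
LINE_SIZE = 8
NUM_LINES = CACHE_SIZE // LINE_SIZE   # 8 lines

def miss_rate(addresses):
    # Direct-mapped cache: one tag slot per line; None marks an invalid line.
    lines = [None] * NUM_LINES
    misses = 0
    for addr in addresses:
        block = addr // LINE_SIZE     # which 8-byte block of memory
        index = block % NUM_LINES     # which cache line it maps to
        tag = block // NUM_LINES      # identifies the block within that line
        if lines[index] != tag:       # miss: cold start or conflict eviction
            misses += 1
            lines[index] = tag
    return misses / len(addresses)

# A sequential scan of 128 bytes misses once per new 8-byte line:
rate = miss_rate(range(128))          # 16 misses / 128 refs = 0.125
```

Two addresses 64 bytes apart map to the same line, so a trace that alternates between them (e.g. `[0, 64, 0, 64]`) misses on every reference, which is the kind of conflict behavior a small cache exposes.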