### Table 1. Average reward realized for the 10-armed bandit problem during the last 1000 trials

### Table 1. Lower and Upper Bounds

"... In PAGE 5: ... The box bound and axis bound cnn are not really generalized by the cnn problem, because with respect to this problem their server is constrained; however, since they somehow constitute simpler problems, a dotted line is shown in the figure. Table 1 reports the known lower and upper bounds on the competitive ratio of the problems, together with the algorithm used to prove the upper bound. 4 Open problems As mentioned above, recently the cnn problem was proved to admit a competitive algorithm [17].... ..."

### Table 1a. Conservative confidence bounds on volume for the synthetic phantom (columns: confidence threshold, lower bound (%), upper bound (%), width (%))

"... In PAGE 8: ... (15) V 99% is exaggerated, and the width of the confidence bounds in this case is unreasonably high. The values in Table 1 should be compared to the width of the confidence bounds using the simplex mesh (36.70%) and tGB patches. The additional smoothness of surface representations is bound to reduce the uncertainty area, when compared to the one based on voxels.... In PAGE 8: ...1. Several confidence thresholds were used, and the results are shown in Table 1. Note that these confidence bounds come from a distribution for the volume, from which bounds and other useful information can be derived.... ..."

### Table 1. The true gradient of the expected return and its MC and BQ estimates for two versions of the simple bandit problem corresponding to two different reward functions.

"... In PAGE 7: ... As a result, the probability of a path is also Gaussian with the same mean and variance: Pr(ξ) = π(a|x) = N(0, 1). The score function of the path ξ = a and the Fisher information matrix G are computed as follows: ∇ log Pr(ξ) = (a, a² − 1)ᵀ; G = diag(1, 2). Table 1 shows the exact gradient of the expected return and its MC and BQ estimates (using 10 and 100 samples) for two versions of the simple bandit problem corresponding to two different reward functions, r(a) = a and r(a) = a². The average over 10⁴ runs of the MC and BQ estimates and their standard deviations are reported in Table 1. The gradient is analytically computable in this problem and is reported as "Exact" in Table 1 for comparison purposes. As shown in Table 1, the BQ estimate has much lower standard deviation than the MC estimate for both small and large sample sizes.... ..."
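The Monte Carlo (likelihood-ratio) gradient estimate the excerpt describes can be sketched as below. The policy N(0, 1), the score (a, a² − 1)ᵀ, and the reward functions r(a) = a and r(a) = a² come from the excerpt; the function names and sample counts are illustrative, not the paper's code.

```python
import numpy as np

def mc_gradient(reward, n_samples, rng):
    """MC (likelihood-ratio) estimate of the policy gradient for the
    one-step Gaussian bandit sketched in the excerpt (assumed setup)."""
    # Sample actions from the policy a ~ N(0, 1)
    a = rng.standard_normal(n_samples)
    # Score function of the path xi = a w.r.t. (mean, std): (a, a^2 - 1)
    score = np.stack([a, a**2 - 1.0])          # shape (2, n_samples)
    # Average reward-weighted score over the samples
    return (score * reward(a)).mean(axis=1)

rng = np.random.default_rng(0)
g_linear = mc_gradient(lambda a: a,    100, rng)   # exact gradient: (1, 0)
g_square = mc_gradient(lambda a: a**2, 100, rng)   # exact gradient: (0, 2)
```

With 100 samples the estimate is noisy around the exact values (1, 0) and (0, 2); increasing `n_samples` shrinks the standard deviation, which is the comparison the table makes against the lower-variance BQ estimator.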

### Table 1: The main algorithm used for our budgeted multi-armed bandit sampling problem

### Table 1: Best known upper bounds for basic graph theoretic problems.

"... In PAGE 3: ... I/O-efficient graph algorithms have been considered by a number of authors [5, 6, 13, 29, 23, 17, 27, 2, 1, 26, 20, 25, 11]. Table 1 reviews the best known algorithms for basic graph theoretic problems on general undirected graphs. For directed graphs the best known algorithms for breadth-first search (BFS) and depth-first search (DFS) use O((V + E/B) log₂(V/B) + sort(E)) I/Os [11].... ..."
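As a back-of-the-envelope aid, the I/O bounds quoted in the excerpt can be evaluated numerically. sort(N) = (N/B)·log_{M/B}(N/B) and scan(N) = ⌈N/B⌉ are the standard external-memory primitives (N elements, memory size M, block size B); the helper names below are mine, and constant factors are ignored as in the O-notation.

```python
import math

def scan_ios(n, B):
    # scan(N) = ceil(N / B) I/Os: one linear pass over N elements
    return math.ceil(n / B)

def sort_ios(n, M, B):
    # sort(N) = (N/B) * log_{M/B}(N/B) I/Os: external-memory sorting bound
    return (n / B) * math.log(n / B, M / B)

def bfs_dfs_directed_ios(V, E, M, B):
    # The directed BFS/DFS bound quoted in the excerpt:
    # O((V + E/B) * log2(V/B) + sort(E)) I/Os
    return (V + E / B) * math.log2(V / B) + sort_ios(E, M, B)
```

For instance, with B = 100 and M = 10⁴, sorting 10⁶ edges costs about (10⁴)·log₁₀₀(10⁴) = 2·10⁴ I/Os, versus 10⁴ I/Os for a single scan, which is why sort(E) terms rather than V·E-style terms dominate these bounds.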

### Table 5: Comparison of upper bound obtained

"... In PAGE 36: ...36 Example 1 (continued) Example 1 is solved with the final proposed algorithm shown in Figure 10, as well as with the initial algorithm proposed in Figure 8, and the gaps between the bounds are compared. Table 5 shows that when sub-problem (PAI) is not included in the algorithm for an aggregation into 6 aggregate periods of length 4 each, we obtain an upper bound of $10.... ..."

### Table 3: Quality of upper bounds on the random problem sets.

"... In PAGE 8: ... While the results in this table may suggest that CRS-All is the clear winner on this benchmark set, a slightly different picture emerges when we also look at the quality of the solutions found by the various methods on problems that are not solved to optimality. Table 3 shows the sum of the upper bounds for each set of problems and for each algorithm, as well as the geometric mean of the ratio (abbreviated GMR) obtained by dividing the computed cost by the best known upper bound. All runs were limited to 20 minutes of CPU time.... ..."
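The GMR statistic the excerpt defines (geometric mean of computed cost over best known upper bound) can be computed as follows; the function and argument names are mine, not the paper's.

```python
import math

def geometric_mean_ratio(costs, best_bounds):
    """GMR = (prod(cost_i / best_i))^(1/n), computed in log space
    so that many large or small ratios do not overflow/underflow."""
    ratios = [c / b for c, b in zip(costs, best_bounds)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# An algorithm matching the best known bound on every instance scores 1.0;
# values above 1.0 measure how far its solutions are from the best known.
gmr = geometric_mean_ratio([2.0, 8.0], [1.0, 2.0])  # ratios 2 and 4 -> sqrt(8)
```

The geometric (rather than arithmetic) mean keeps a single badly solved instance from dominating the aggregate, which is why it is the usual choice for ratio-based benchmark summaries.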

### Table 1. Best known upper bounds for basic graph theoretic problems.

2000

"... In PAGE 3: ... I/O-efficient graph algorithms have been considered by a number of authors [1, 2, 5, 6, 10, 12, 16, 19, 22, 24-26, 29]. Table 1 reviews the best known algorithms for basic graph theoretic problems on general undirected graphs. For directed graphs the best known algorithms for breadth-first search (BFS) and depth-first search (DFS) use O((V + scan(E)) log₂(V/B) + sort(E)) I/Os [10].... ..."

Cited by 13