### Table 2.1: Reinforcement Learning for Spoken Dialogue Management (in a Nutshell). The first group of references uses MDPs and the second group uses POMDPs.

2005

### Table 4: Human Genome Application

"... In PAGE 8: ... Similar to the Prolog-OBDC Interface, it is difficult to imagine a method for producing shorter, more readable code than that above. Table 4 provides the results for a combined DCS and DS method using a query of the form a(X),b(Y),c(X,Y). The speed-ups are normalized by the absolute performance of one processing node.... ..."
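The normalization described in this snippet is simply the ratio of single-node runtime to parallel runtime. A minimal sketch (the timings below are hypothetical; only the ratio definition comes from the snippet):

```python
def normalized_speedup(t_one_node: float, t_parallel: float) -> float:
    """Speed-up normalized by the absolute performance of one
    processing node: single-node runtime over parallel runtime."""
    return t_one_node / t_parallel

# Hypothetical timings (seconds) for a query of the form a(X),b(Y),c(X,Y)
print(normalized_speedup(120.0, 30.0))  # → 4.0
```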


### Table 8. Theoretical speed-up of the Jacobi method, varying the number of processes and processors.

1999

"... In PAGE 12: ... Because the most costly part of the execution is the updating of the matrix, and nondiagonal blocks contain twice as many elements to be nullified as diagonal blocks, the load of nondiagonal blocks can be considered twice the load of diagonal blocks. Table 8 shows the theoretical speed-up of the method when logical meshes of 3, 6 or 10 processes are assigned to a network, varying the number of processors in the network from 2 to 10. Higher theoretical speed-up is obtained by increasing the number of processes.... ..."

Cited by 1
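The theoretical speed-up in this setting can be computed as the total load divided by the load of the busiest processor. A sketch under stated assumptions: the paper's actual process-to-processor assignment is not given in the snippet, so round-robin assignment is assumed here, with diagonal blocks weighted 1 and nondiagonal blocks weighted 2 as described above:

```python
def theoretical_speedup(loads: list[float], num_processors: int) -> float:
    """Theoretical speed-up: total (sequential) load divided by the
    maximum per-processor load, assuming processes are assigned to
    processors round-robin. The real assignment scheme may differ."""
    per_proc = [0.0] * num_processors
    for i, load in enumerate(loads):
        per_proc[i % num_processors] += load
    return sum(loads) / max(per_proc)

# A hypothetical logical mesh of 6 processes: 3 diagonal blocks (load 1)
# and 3 nondiagonal blocks (load 2), run on 2, 6 and 10 processors.
loads = [1.0, 1.0, 1.0, 2.0, 2.0, 2.0]
for p in (2, 6, 10):
    print(p, theoretical_speedup(loads, p))
```

With more processors than processes the speed-up plateaus, since the busiest processor still carries one whole nondiagonal block.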

### Table 1: Speed in Megaflops for 50 Iterations of the Iterative Techniques

1997

"... In PAGE 4: ... A test problem was taken, generated by discretizing a three-dimensional elliptic partial differential equation by the standard 7-point central difference scheme over a three-dimensional rectangular grid, with 100 unknowns in each direction (m = 100, n = 1,000,000). The observed computational speeds for several machines (1 processor in each case) are given in Table 1, and we see that the actual speeds are often modest compared with the peak speeds. The reason for this is mainly that only a few flops can be carried out per data that has to be moved from memory to, e.... ..."

Cited by 3
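The kind of measurement reported in this table can be reproduced in miniature. Below is a minimal sketch that times Jacobi sweeps of a 7-point stencil on a 3-D grid and reports Mflop/s; the grid size, flop count per point (6 adds and 1 divide), and the Jacobi update itself are illustrative assumptions, not the paper's exact solvers:

```python
import time
import numpy as np

def jacobi_mflops(m: int = 50, iters: int = 50) -> float:
    """Rough Mflop/s estimate for Jacobi sweeps of the 7-point central
    difference stencil on an m^3 grid. The cited problem uses m = 100;
    a smaller default keeps this sketch quick."""
    u = np.zeros((m + 2, m + 2, m + 2))
    u[0, :, :] = 1.0  # an arbitrary boundary condition for illustration
    start = time.perf_counter()
    for _ in range(iters):
        # NumPy evaluates the right-hand side fully before assigning,
        # so this is a true (simultaneous-update) Jacobi sweep.
        u[1:-1, 1:-1, 1:-1] = (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
                               u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
                               u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) / 6.0
    elapsed = time.perf_counter() - start
    return 7 * m**3 * iters / elapsed / 1e6  # ~7 flops per interior point

print(f"{jacobi_mflops():.1f} Mflop/s")
```

Even this toy version shows the snippet's point: measured rates fall well short of peak because each update moves far more data from memory than it computes flops on.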


### Table 2. The speed-ups obtained on an nCUBE-2 parallel computer with four DCS nodes. The speed-ups are normalized by the absolute performance of one processing node. Only numbers of the processing nodes are

"... In PAGE 4: ... Each quadrant represents a CPU combination: quadrant (1,1) is 1 compute CPU, quadrant (1,8) is 1 compute CPU and 8 I/O CPUs, quadrant (100,1) is 100 compute CPUs and 1 I/O CPU, and quadrant (100,8) is 100 compute CPUs and 8 I/O CPUs. Table 2 provides the results for a combined DCS and DS method using a query of the form... ..."


### Table 1: Results for non-Markov decision problem

1996

"... In PAGE 7: ... This is observed in the tabled results. The average payoff per trial for five algorithms is shown in Table 1 (each row represents a separate run of 10,000 trials for each algorithm using a different seed for the pseudo-random number generator). For QL, Q-Trace and P-Trace a simple "Boltzmann distribution" stochastic action selector (SAS) as described in Lin [9] was used with temperature T set to 1.... ..."

Cited by 5
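The Boltzmann-distribution action selector mentioned in the snippet picks action i with probability exp(Q_i/T) / Σ_j exp(Q_j/T). A minimal sketch (the Q-values in the usage line are hypothetical, and the max-subtraction is a standard numerical-stability detail, not something the snippet specifies):

```python
import math
import random

def boltzmann_select(q_values, temperature=1.0, rng=None):
    """Boltzmann (softmax) stochastic action selector: action i is
    drawn with probability exp(Q_i/T) / sum_j exp(Q_j/T)."""
    rng = rng or random.Random()
    mx = max(q_values)  # subtract the max before exp for stability
    weights = [math.exp((q - mx) / temperature) for q in q_values]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(q_values) - 1  # guard against floating-point round-off

# Hypothetical Q-values: the higher-valued action is picked more often,
# but the lower-valued one still gets explored.
print(boltzmann_select([2.0, 0.0], temperature=1.0))
```

At T = 1, as used in the cited run, two actions with Q-values 2.0 and 0.0 are chosen with probabilities of roughly 0.88 and 0.12; raising T flattens the distribution toward uniform exploration.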

### Table 7: Partially Compressed Decision Table Rules

1997

"... In PAGE 25: ... As an example of the difference, consider a cost minimization problem where the knowledge source is a decision table. For illustration purposes, let us use the simple decision table depicted in Table 7. A joint approach such as [MM78] can use this decision table to find the optimal solution.... In PAGE 25: ... A separate approach requires that the decision table reveal all possible compressed rules. Table 7 is only partially compressed because it is missing a compressed rule: I1=F, I2=F, I3=- (dash). The search space is larger for partially compressed decision tables because missing or implied rules must be discovered in the search process, but the knowledge source is easier to generate.... ..."

Cited by 4
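The snippet's point, that a partially compressed table leaves some input combinations to be discovered during search, can be illustrated with a small sketch. The two rules below are hypothetical, constructed so that their gap is exactly the missing I1=F, I2=F, I3=- rule the snippet describes (with "-" as the don't-care mark):

```python
from itertools import product

def expand(rule, inputs):
    """Expand one compressed rule ('-' is a don't-care) into the set
    of fully specified input combinations it covers."""
    options = [("T", "F") if rule[i] == "-" else (rule[i],) for i in inputs]
    return set(product(*options))

def uncovered(rules, inputs):
    """Input combinations not covered by any rule in the table --
    these are what a search over a partially compressed table must
    discover as missing or implied rules."""
    covered = set().union(*(expand(r, inputs) for r in rules)) if rules else set()
    return set(product(("T", "F"), repeat=len(inputs))) - covered

# A hypothetical partially compressed decision table over I1, I2, I3.
inputs = ["I1", "I2", "I3"]
rules = [{"I1": "T", "I2": "-", "I3": "-"},
         {"I1": "F", "I2": "T", "I3": "-"}]
print(uncovered(rules, inputs))  # → the gap left by the missing rule
```

Here the uncovered set is {(F,F,T), (F,F,F)}: precisely the combinations the missing compressed rule I1=F, I2=F, I3=- would supply, which is why the separate approach faces a larger search space than the joint one.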