### Table 2: Results for Minimax Problems

1997

"... In PAGE 40: ... Whether FFSQP-AL (0) or FFSQP-NL (1) is used is indicated in column "B". Results obtained on selected minimax problems are summarized in Table 2. Problems bard, davd2, f&r, hettich, and wats are from [11]; cb2, cb3, r-s, wong and colv are from [12; Examples 5.... In PAGE 40: ... The gradients of all the functions were computed by finite difference approximation, except for polk1 through polk4, for which gradients were computed analytically. In Table 2, the meaning of columns B, nparam, nineqn, ncallf, ncallg, iter, d0norm and eps is as in Table 1 (but ncallf is the total number of evaluations of scalar objective functions). nf is the number of objective functions in the max, and objmax is the final value. [Footnote 5: The results listed in the tables were actually obtained using CFSQP, the C version.]... ..."

Cited by 11
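
The minimax problems in this entry have the form min over x of max_i f_i(x), with nf objective functions and final value objmax. As a hedged illustration only: the standard way SQP-type codes handle this is the epigraph reformulation min t subject to f_i(x) <= t. The sketch below uses SciPy's SLSQP (an SQP method, not FSQP/CFSQP itself), and the function name `solve_minimax` and the toy objectives are ours, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def solve_minimax(fs, x0):
    """Solve min_x max_i f_i(x) via the epigraph reformulation
    min_{x,t} t  s.t.  f_i(x) <= t, using SciPy's SLSQP solver
    (an SQP method in the same family as FSQP, but not FSQP itself)."""
    # Augment the variable vector with the bound t, started at max_i f_i(x0).
    z0 = np.append(x0, max(f(x0) for f in fs))
    # SLSQP inequality constraints require fun(z) >= 0, i.e. t - f_i(x) >= 0.
    cons = [{'type': 'ineq', 'fun': (lambda z, f=f: z[-1] - f(z[:-1]))}
            for f in fs]
    res = minimize(lambda z: z[-1], z0, constraints=cons, method='SLSQP')
    return res.x[:-1], res.x[-1]  # minimizer x and final max value (objmax)
```

For example, with f_1(x) = (x - 1)^2 and f_2(x) = (x + 1)^2 the two objectives balance at x = 0 with maximum value 1.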

### Table 1: Minimax Information States

1999

"... In PAGE 13: ... The first step is to generate all information states of interest using (15), beginning with s0 = [0; 0]. The result is shown in Table 1. Next, we use the dynamic programming equations (20), (21) to determine the value function and the optimal control for each information state of interest.... ..."

Cited by 2
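
The excerpt describes the general pattern of minimax dynamic programming over a finite set of information states: enumerate the reachable states, then back up a value function and optimal control. A generic sketch of that backup is below; the paper's recursions (15), (20), (21) are specific to its system, and every name here (`minimax_dp`, `step`, `terminal_cost`) is hypothetical.

```python
def minimax_dp(states, controls, disturbances, step, terminal_cost, horizon):
    """Finite-horizon minimax dynamic programming over a finite state set:
    V_k(s) = min_u max_w V_{k+1}(step(s, u, w)).
    A generic sketch, not the paper's information-state recursion."""
    V = {s: terminal_cost(s) for s in states}   # terminal value function
    policy = {}
    for _ in range(horizon):
        V_next = {}
        for s in states:
            best = None
            for u in controls:
                # Worst-case successor value over the disturbance w.
                worst = max(V[step(s, u, w)] for w in disturbances)
                if best is None or worst < best:
                    best, policy[s] = worst, u
            V_next[s] = best
        V = V_next
    return V, policy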

### Table 3. Algorithm: Minimax-Q and Nash-Q. The difference between the algorithms is in the Value function and the Q values. Minimax-Q uses the linear programming solution for zero-sum games and Nash-Q uses the quadratic programming solution for general-sum games. Also, the Q values in Nash-Q are actually a vector of expected rewards, one entry for each player.

2000

"... In PAGE 4: ....2.1 MINIMAX-Q Littman (1994) extended the traditional Q-Learning algorithm for MDPs to zero-sum stochastic games. The algorithm is shown in Table 3. The notion of a Q function is extended to maintain the value of joint actions, and the backup operation computes the value of states differently.... In PAGE 5: ....2.2 NASH-Q Hu & Wellman (1998) extended the Minimax-Q algorithm to general-sum games. The algorithm is structurally identical and is also shown in Table 3. The extension requires that each agent maintain Q values for all the other agents.... ..."

Cited by 17
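
As the caption notes, Minimax-Q's backup computes the value of a state as the value of a zero-sum matrix game over joint actions, solved by linear programming. A minimal sketch of that one step follows (not Littman's full algorithm; the function name `minimax_value` and the variable layout are ours), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def minimax_value(Q):
    """Value and maximin strategy of a zero-sum matrix game Q[a, o]
    (row player maximises): max_pi min_o sum_a pi_a * Q[a, o].
    This is the LP solved inside Minimax-Q's backup step."""
    n_a, n_o = Q.shape
    # Decision variables x = (pi_1, ..., pi_na, v); minimise -v == maximise v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # For every opponent action o:  v - sum_a pi_a * Q[a, o] <= 0.
    A_ub = np.hstack([-Q.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    # The strategy pi must be a probability distribution.
    A_eq = np.ones((1, n_a + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]  # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]  # game value v, mixed strategy pi
```

For matching pennies, Q = [[1, -1], [-1, 1]], this yields value 0 with the uniform strategy (0.5, 0.5).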

### Table 2: Numerical values for the approximate quadratic and cubic models; restricted minimax densities.

"... In PAGE 7: ... Explicit descriptions of the densities are in Table 2. See Figure 2 for plots in the quadratic and cubic cases, with values of the constants in Table 2. As noted previously by Studden (1977) for variance-minimising designs, and Wiens (2000) for MSE-minimising designs, the Q- and D-optimal designs are very similar.... ..."

### Table 2: Results for Minimax Problems with CFSQP

1997

"... In PAGE 63: ...FSQP-AL (0) or FSQP-NL (1) is used is indicated in column "B". Results obtained on selected minimax problems are summarized in Table 2. Problems bard, davd2, f&r, hettich, and wats are from [16]; cb2, cb3, r-s, wong and colv are from [17; Examples 5.... In PAGE 63: ... The gradients of all the functions were computed by finite difference approximation, except for polk1 through polk4, for which gradients were computed analytically. In Table 2, the meaning of columns B, nparam, nineqn, ncallf, ncallg, iter, d0norm and eps is as in Table 1 (but ncallf is the total number of evaluations of scalar objective functions). nf is the number of objective functions in the max, and objmax is the final value of the max of the objective functions.... ..."

Cited by 33

### Table 1. Radial basis functions and Fourier transforms

1999

"... In PAGE 2: ...2) is symmetric and positive definite on (P^d_m)^⊥. Table 1 shows some conditionally positive definite functions with their minimal orders m. Any functional ∈ (P^d_m)^⊥ of the form (2.... In PAGE 9: ...5) that makes the integral well-defined near zero. Table 1 shows the functions Φ̂ for various choices of Φ. As a referee correctly pointed out, the assumption (4.... ..."

Cited by 24
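
The setting behind this entry is that a conditionally positive definite radial basis function of minimal order m is only positive definite on the functionals annihilating polynomials of degree below m, so interpolation appends a polynomial part and constrains the RBF coefficients accordingly. A sketch for the thin-plate spline φ(r) = r² log r in d = 2 (minimal order m = 2, hence a linear polynomial) is below; the function names are ours and the code is a generic illustration, not from the cited paper.

```python
import numpy as np

def tps_fit(X, y):
    """Interpolate scattered 2-D data with the thin-plate spline
    phi(r) = r^2 log r, conditionally positive definite of minimal
    order m = 2: a linear polynomial is appended, and the side
    conditions P^T c = 0 keep the RBF coefficients in (P^2_2)-perp."""
    n = X.shape[0]
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    with np.errstate(divide='ignore', invalid='ignore'):
        A = np.where(r > 0, r * r * np.log(r), 0.0)  # phi(0) = 0
    P = np.hstack([np.ones((n, 1)), X])      # 1, x, y: basis of linear polys
    K = np.block([[A, P], [P.T, np.zeros((3, 3))]])
    return np.linalg.solve(K, np.concatenate([y, np.zeros(3)]))

def tps_eval(X, coef, Xq):
    """Evaluate the fitted interpolant at query points Xq."""
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    with np.errstate(divide='ignore', invalid='ignore'):
        A = np.where(r > 0, r * r * np.log(r), 0.0)
    Pq = np.hstack([np.ones((Xq.shape[0], 1)), Xq])
    return A @ coef[:X.shape[0]] + Pq @ coef[X.shape[0]:]
```

With distinct, non-collinear nodes the saddle-point system is nonsingular, and the interpolant reproduces the data exactly at the nodes.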

### Table 1: Radial basis functions and Fourier transforms

"... In PAGE 4: ...2) is symmetric and positive definite on (IP^d_m)^⊥. Table 1 shows some conditionally positive definite functions with their minimal orders m. Any functional ∈ (IP^d_m)^⊥ of the form (2.... In PAGE 11: ...5) that makes the integral well-defined near zero. Table 1 shows the functions Φ̂ for various choices of Φ. As a referee correctly pointed out, the assumption (4.... ..."