### Table 3 indicates the parameters that must be initialized before proceeding with the iterative analysis to solve for the lateral forces.

"... In PAGE 4: ... Table 3 indicates the parameters that must be initialized before proceeding with the iterative analysis to solve for the lateral forces. ..."

Table 3: Initialization of Lateral Force Loop

| Parameter | Description | Init. Value |
|-----------|-------------|-------------|
| FyRF | Right front wheel load | 0 |
| FyLR | Left rear wheel load | 0 |
| FyRR | Right rear wheel load | 0 |
| FyF | Lateral force front axle | 0 |
| FyR | Lateral force rear axle | 0 |
| Fy | Lateral force | 0 |
| uOld | Velocity last iteration | 0 |
| MaxAlphaF | Max front slip angle | 0 |
| MaxAlphaR | Max rear slip angle | ... |
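A hypothetical sketch of the initialize-then-iterate pattern that Table 3 sets up. The actual vehicle force model is not in the snippet, so the `update` rule, tolerance, and iteration cap below are stand-ins:

```python
# Hypothetical sketch of the initialization-and-iterate pattern the table
# describes; the paper's actual force/velocity model is not shown here.
def solve_lateral_forces(update, tol=1e-6, max_iter=100):
    """Fixed-point loop: start all lateral forces at zero and iterate
    until the velocity u stops changing between iterations."""
    state = {"FyRF": 0.0, "FyLR": 0.0, "FyRR": 0.0,
             "FyF": 0.0, "FyR": 0.0, "Fy": 0.0, "u": 0.0}
    u_old = 0.0  # "uOld" in Table 3: velocity from the last iteration
    for _ in range(max_iter):
        state = update(state)  # caller-supplied force/velocity model
        if abs(state["u"] - u_old) < tol:
            break
        u_old = state["u"]
    return state

# Toy update rule (pure illustration): u relaxes toward a fixed point at 10.
result = solve_lateral_forces(lambda s: {**s, "u": 0.5 * s["u"] + 5.0})
```

With the toy contraction above, the loop terminates once successive velocities agree to within `tol`, which is exactly the role the `uOld` parameter plays in the table.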

### Table 1 Convergence history for the first example. As can be seen from Table 1, the convergence of the Gauss-Seidel sweeps is initially very rapid and then slows down. In Figure 4, we can see that the mesh changes substantially during the first four sweeps, and much less in later iterations. In this example, after 12 iterations, neither E nor the mesh changes very much.

1997

"... In PAGE 11: ... Each time new second derivatives were computed, we evaluated the functional $E_2 = \sum_{t \in T} \frac{s(t)}{4\sqrt{3}\,q(t)}$, (18) where s(t) is defined in (9) and q(t) is defined in (1). In Table 1, we summarize the convergence history for 40 iterations in terms of E, and in Figure 1 we show the initial mesh and the smoothed meshes after 4, 8, and 12... ..."

Cited by 50
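The Gauss-Seidel sweeps this entry refers to can be illustrated with a generic sketch. The small diagonally dominant linear system below is made up for illustration; it is not the paper's mesh-smoothing functional:

```python
import numpy as np

def gauss_seidel(A, b, x0, sweeps=40):
    """Plain Gauss-Seidel sweeps for A x = b; also records residual norms
    so the rapid-then-slowing convergence pattern can be observed."""
    x = x0.astype(float).copy()
    n = len(b)
    residuals = []
    for _ in range(sweeps):
        for i in range(n):
            # Use already-updated entries x[:i] within the same sweep.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        residuals.append(np.linalg.norm(b - A @ x))
    return x, residuals

# Made-up diagonally dominant test system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, res = gauss_seidel(A, b, np.zeros(3))
```

As in the convergence history the snippet describes, the residual drops sharply in the first few sweeps and far more slowly afterwards.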

### Table 2: Iterations and cpu times to achieve various percent reductions in imbalance.

1994

"... In PAGE 16: ... The convergence history plots given in Figures 10-12 show a sharp drop in the imbalance within the first few iterations and slower drops in later iterations. Table 2 further presents the performance... ..."

Cited by 17

### Table 1. Evolution strategy compared to simulated annealing. First runs with standard parameterizations discussed later on. Best value from 5,000 iterations. The object variables xi were initialized as shown in Eq. 4.
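As a rough illustration of the evolution-strategy side of this comparison, here is a minimal (1+1)-ES sketch. The objective, starting point, step-size rule, and seed are all assumptions for illustration; they are not the paper's standard parameterization or its Eq. 4 initialization:

```python
import random

def sphere(x):
    """Simple benchmark objective: sum of squares, minimum at the origin."""
    return sum(v * v for v in x)

def one_plus_one_es(f, x, sigma=1.0, iterations=5000, seed=0):
    """Minimal (1+1)-evolution strategy: mutate, keep the child if it is
    no worse, and adapt the step size in a crude 1/5-success-rule style."""
    rng = random.Random(seed)
    best = f(x)
    for _ in range(iterations):
        child = [v + rng.gauss(0.0, sigma) for v in x]
        fc = f(child)
        if fc <= best:
            x, best = child, fc
            sigma *= 1.1   # success: widen the search
        else:
            sigma *= 0.98  # failure: narrow it
    return x, best

x_best, f_best = one_plus_one_es(sphere, [5.0] * 3)
```

The "best value from 5,000 iterations" reported in the table corresponds to `f_best` here, recorded after the fixed iteration budget.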

### Table 2. Refactorings applied to JHotDraw at each iteration.

2005

"... In PAGE 8: ...2%) are applicable without any need for OO transformation. Table 2 shows the number of each refactoring applied during each iteration. One reason for deferring a refactoring to a later iteration is the hope that another refactoring will enable it without the need for OO transformation.... ..."

Cited by 10


### Table 2 shows that for a larger (but fixed) bandwidth the algorithm also stops converging. However, this happens at a later stage (around iteration 9 instead of iteration 5). In this case, a decrease in the reduction of the residual is just starting in the last few iterations. A phenomenon similar to this was described in [17] in the context of quasi-interpolation with Gaussian kernels, and referred to as approximate approximation by the authors. The fact that the ℓ2 errors are on the order of machine precision is no reason for concern, since the absolute value of the error is still on the order of 10^-7, and the ℓ2 errors are computed via (1/n) Σ |error|^2.
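The last point can be checked numerically. The error vector below is made up to match the orders of magnitude quoted in the snippet, not taken from the paper's data:

```python
import numpy as np

# Hypothetical pointwise errors on the order of 1e-7, as in the snippet.
n = 1000
error = np.full(n, 1e-7)

# The snippet's "l2 error": (1/n) * sum(|error|^2).  Because the square is
# never undone, absolute errors of ~1e-7 report as ~1e-14 -- near machine
# precision for doubles -- even though nothing extraordinary has happened.
l2_error = np.sum(np.abs(error) ** 2) / n
```

So an ℓ2 error near machine precision here simply reflects squaring, exactly as the authors argue.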

in Algorithms Defined by Nash Iteration: Some Implementations via Multilevel Collocation and Smoothing

"... In PAGE 16: ... Table 2: Newton iteration without smoothing. Constant (but larger) bandwidth.... ..."

### Table 3: LER from multiple MMIE iterations.

2001

"... In PAGE 5: ...7, which proved to be approximately optimal for later iterations as well. Table 3 compares the results of N_MMIE and C_MMIE in multiple iterations. Table 3: LER from multiple MMIE iterations.... In PAGE 5: ... Table 3 shows that N_MMIE reaches the best performance at the third iteration, which reduces LER by 21.6% (relative) with respect to MLE.... ..."

Cited by 4

### Table 5 shows, for each test problem, the number of standard updates and the amount of time that VI1 took when non-LP point-based update was used (together with the standard backup operator). Comparing the statistics with those for point-based update (Tables 1 and 2), we see that the number of standard updates is increased on all test problems, and the amount of time is also increased except for the first three problems. Here are the plausible reasons. First, it is clear that non-LP point-based update does not improve a set of vectors as much as point-based update. Consequently, it is less effective in reducing the number of standard updates. Second, although it does not solve linear programs, non-LP point-based update produces extraneous vectors. This means that it might need to deal with a large number of vectors at later iterations and hence might not be as efficient as point-based update after all.

2001

"... In PAGE 16: ... Table 5: Number of Standard DP Updates and Time That VI1 Took When Non-LP Point-Based Update is Used. Extraneous vectors can be pruned.... In PAGE 16: ... In that paper, we have also explored the combination of non-LP point-based update with the MPI backup operator. Once again, the results were not as good as those in Table 5. The reason is that the MPI backup operator further compromises the quality of point-based update.... ..."

Cited by 40

### Table 6.1 shows the computational costs for verifying the circuit with different values of r, that is, the cost of a complete verification of the circuit when the dealer gets at most r cards. The time is shown in seconds. Note that the most cards a dealer can get in a multi-pack game is 11 (2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1), and, moreover, in many versions of blackjack a hand of five cards beats all hands except an ace and a face card. As can be seen, the cost does increase as r increases, but the costs are quite reasonable. We expect the performance to be slightly worse than linear because as r increases, the time the circuit must be exercised increases linearly, but later iterations tend to take longer than earlier ones. The verification was completed on an SGI R4400 Indy. In general, our methodology allows us to trade off computation and human costs appropriately. In this example, verification can be done by trajectory evaluation alone, and no human intervention is necessary. As we shall see in subsequent examples, sometimes we need to reduce the computational costs by having more human intervention.

1997

Cited by 26