### Table 14: Preconditioned convergence factors (pcf) for multi-level solution of the highly indefinite Helmholtz equation. The coarse-grid equation is solved approximately by ten Kaczmarz sweeps. CGS acceleration is used.

1994

Cited by 5
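
The caption above relies on approximate coarse-grid solves by Kaczmarz sweeps. A minimal sketch of one such sweep follows; the small random test system and the sweep count are illustrative, not the paper's Helmholtz discretization:

```python
import numpy as np

def kaczmarz_sweep(A, x, b):
    """One cyclic Kaczmarz sweep: project x onto the hyperplane
    defined by each row of A in turn."""
    for i in range(A.shape[0]):
        ai = A[i]
        x = x + (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Demo on a small, well-conditioned system (illustration only).
rng = np.random.default_rng(0)
A = np.eye(5) + 0.1 * rng.standard_normal((5, 5))
b = A @ np.ones(5)
x = np.zeros(5)
for _ in range(10):          # "ten Kaczmarz sweeps", as in the caption
    x = kaczmarz_sweep(A, x, b)
print(np.linalg.norm(A @ x - b))
```

For a consistent system, each projection never increases the error norm, which is what makes a fixed, small number of sweeps usable as an approximate coarse-grid solver inside a preconditioner.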

### Table 2: Results for solution acceleration using multi-level grids for a 48,000-point NACA0012 airfoil. The Euler solution is computed by implicit GMRES at a Mach number of 1.7. The algorithm switches to the next finer grid once the residual has dropped to 10^-12 on the coarse grid level. A speedup of 1.68 is obtained by this technique.
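
The grid-switching strategy in this caption (converge on a coarse grid, then reuse the result as the initial guess on the next finer one) can be sketched on a toy 1D Poisson problem. The discretization, interpolation, and tolerances below are illustrative assumptions, not the paper's Euler/GMRES setup:

```python
import numpy as np

def jacobi_until(u, f, h, tol, max_it=200_000):
    """Jacobi relaxation for -u'' = f on a uniform grid until the
    residual norm drops below tol -- the grid-switch criterion."""
    for _ in range(max_it):
        r = f - (2 * u - np.roll(u, 1) - np.roll(u, -1)) / h**2
        r[0] = r[-1] = 0.0                  # homogeneous Dirichlet ends
        if np.linalg.norm(r) <= tol:
            break
        u = u + 0.5 * h**2 * r              # Jacobi update (diagonal 2/h^2)
    return u

# Coarse-to-fine continuation: converge on the coarse grid first, then
# prolongate and finish on the fine grid.
n = 17
xs = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * xs)           # exact solution: sin(pi x)
u_coarse = jacobi_until(np.zeros(9), f[::2], 1 / 8, 1e-8)
u = np.interp(xs, xs[::2], u_coarse)        # prolongate to fine grid
u = jacobi_until(u, f, 1 / 16, 1e-8)
print(np.max(np.abs(u - np.sin(np.pi * xs))))   # ~ discretization error
```

The speedup comes from the fine-grid solve starting from an already-smooth coarse-grid iterate rather than from a zero guess.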

### Table 2. Statistics of the multi-level preconditioner

"... In PAGE 5: ... In the table, the 5th and 6th columns indicate the total number of Newton iterations and Krylov iterations used in the Newton loop and by the GMRES solver, respectively, before convergence of the simulation is reached. The performance of the proposed multi-level preconditioner is summarized in Table 2 on the same set of designs, where the total number of Krylov iterations corresponds to that used by the top-level FGMRES solver. Unlike the previous experiments, we have adopted a multi-level structure where the largest sub-problem size on the next level is approximately one fourth of that on the current level. ..."

### Table 4. Multi-Level Threshold Results

"... In PAGE 7: ... Multi-Level Threshold Results. Threshold Level: 4, 3, 2, 1; Value (in meters): 30, 18, 8, 2 (Table 5. Multi-Level Threshold Values). Table 4 shows the number of PDUs generated and the average error in AOI and SR when our multi-level threshold dead reckoning algorithm is used. The threshold values used in different levels are listed in Table 5. ... In PAGE 7: ... The threshold values used in different levels are listed in Table 5. It can be seen from Table 4 that there is a great reduction in the average error in SR, compared to the average error in AOI. In our algorithm, if entity A is in entity B's SR, a minimum threshold will be used in the dead reckoning so that B will receive A's update packets most frequently. ..."
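
The excerpt's level-dependent thresholding can be sketched directly. The threshold values are those quoted from Table 5; the region geometry (SR/AOI radii) and the mapping of distances to levels 2-4 are illustrative assumptions, since the excerpt does not give them:

```python
# Threshold values (meters) per level, as quoted from Table 5 above.
THRESHOLDS = {4: 30.0, 3: 18.0, 2: 8.0, 1: 2.0}

def threshold_for(distance, sr_radius, aoi_radius):
    """Pick a dead-reckoning threshold from the receiver's distance.
    Inside the sensing region (SR) the minimum threshold applies, so
    updates flow most frequently; the rest of the area of interest
    (AOI) is split across the coarser levels (assumed mapping)."""
    if distance <= sr_radius:
        return THRESHOLDS[1]                  # minimum threshold in SR
    if distance <= aoi_radius:
        band = (aoi_radius - sr_radius) / 3   # levels 2..4 share the AOI
        level = 2 + min(2, int((distance - sr_radius) // band))
        return THRESHOLDS[level]
    return float("inf")                       # outside AOI: never update

def needs_update(true_pos, dr_pos, threshold):
    """Generate a PDU only when the dead-reckoned position has drifted
    from the true position by more than the threshold."""
    return abs(true_pos - dr_pos) > threshold
```

Distant receivers tolerate large dead-reckoning error, so coarser thresholds cut PDU traffic without hurting the accuracy seen inside the SR.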

### Table 5: MULTI-LEVEL MODELS

1998

"... In PAGE 28: ... exist with more time periods. To date, they are far from being solved. Computation on Multi-Level Instances: Results for the ML-G instances are presented in Table 5. The results in Table 5 show that, at least on these simple academic models, bc-prod typically dominates bc-opt and mp-opt. This is due to the automatic conversion to an echelon stock formulation in combination with the path inequalities. ..."

Cited by 4

### Table 1: The number of iterations and the CPU time in seconds when a single-level Jacobi method was applied on h = 1/32 and accelerated by the multi-level RNM procedure.

1996

"... In PAGE 11: ... The number of underlying relaxation sweeps is denoted by a fixed parameter. For values of 4, 6, and 8, we recorded in Table 1 the number of iterations and the required CPU time in seconds for the approximate solution to converge to the required tolerance. Column 1 denotes the level (or levels) on which the RNM parameter (l) was evaluated; a value of 0 means no RNM procedure was applied; 1 + 2 means the RNM procedure was applied on (1) and (2), and so on. ... In PAGE 12: ... The word "stagnation" means that the iteration process stalled. We note from Table 1 that the application of the RNM procedure on (1) approximately halved the iteration count, and the acceleration rate seems independent of the number of underlying iterations. Because we applied RNM after every batch of underlying relaxation sweeps, this implementation was shown to be efficient in the sense that the CPU times were also reduced. ..."
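
The excerpt describes accelerating a single-level Jacobi iteration by recomputing an RNM parameter after each batch of sweeps. A hedged sketch follows, under the assumption that the parameter is a residual-norm-minimizing step length along the accumulated correction; the paper's actual RNM procedure and its multi-level evaluation are not reproduced here:

```python
import numpy as np

def rnm_accelerated_jacobi(A, b, x, sweeps=4, outer=50):
    """Jacobi sweeps with a periodic extrapolation: every `sweeps`
    iterations, rescale the accumulated correction by the step length
    that minimizes the residual norm along it (assumed reading of RNM)."""
    D = np.diag(A)
    for _ in range(outer):
        x_old = x.copy()
        for _ in range(sweeps):             # underlying relaxation sweeps
            x = x + (b - A @ x) / D
        d = x - x_old                       # accumulated correction
        Ad = A @ d
        if Ad @ Ad == 0.0:                  # already converged exactly
            break
        r = b - A @ x_old
        alpha = (r @ Ad) / (Ad @ Ad)        # argmin_a ||b - A(x_old + a d)||
        x = x_old + alpha * d
    return x

# Demo on a small 1D Laplacian (illustrative, not the paper's h = 1/32 case).
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = rnm_accelerated_jacobi(A, b, np.zeros(n))
print(np.linalg.norm(b - A @ x))
```

Since the step length alpha = 1 recovers plain Jacobi, the minimizing alpha can only reduce the residual further, which is consistent with the excerpt's observation that the iteration count roughly halves at negligible extra cost per batch.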

### Table 1: Evaluations of cost functionals for the complete solution and for the one-shot method combined with the multi-level method. We can see, in Figure 11, the successive shapes for the one-shot method combined with multi-level parametrization with 3/7/15 parameters. There is convergence to the desired shape after 400 "optimization" iterations (1000 sec. CPU). Figure 12-a shows the two-grids-ideal method (where we have alternated with 7 and 15 parameters), which consists in making one optimization iteration on the fine level and solving the problem completely on the coarse level. We still use V-cycles. We see in Figure 12-b that the method becomes "ideal" when we relax the coarse level with at least 450 iterations. It implies that at the ...

1993

"... In PAGE 21: ... For the one-shot method, we have used V-cycles alternating on the 3 previous levels. There is a gain of 35 in terms of cost-functional evaluations! In Table 1, we can see the difference between the complete solution and the one-shot method combined with multi-level parametrization. [Figure 8: Comparison between one-level One-Shot and Complete Resolution; log of cost functional vs. work units (cost-functional evaluations), for complete and simultaneous resolution with 7 and 15 parameters.] ..."