### TABLE 2. Iteration counts for preconditioned Krylov iteration on elliptic problems

1999

Cited by 3

### Table 1: Iteration count scaling of Schwarz-preconditioned Krylov methods

1999

"... In PAGE 4: ...o nonsymmetric problems, e.g., GMRES. Krylov-Schwarz iterative methods typically converge in a number of iterations that scales as the square-root of the condition number of the Schwarz-preconditioned system. Table 1 lists the expected number of iterations to achieve a given reduction ratio in the residual norm. (Here we gloss over unresolved issues in 2-norm and operator-norm convergence definitions, but see [4].... In PAGE 5: ... global reduction time of C P^{1/d}. Assume no coarse-grid solve. For simplicity, we neglect the cost of neighbor-only communication relative to arithmetic and global reductions. The first line of Table 2 shows the estimated execution time per iteration in the left column and the overall execution time (factoring in the number of iterations for 1-level additive Schwarz from Table 1) in the right column. All of the work terms (matrix-vector multiplies, subdomain preconditioner sweeps or incomplete factorizations, DAXPYs, and local summations of inner product computations) are contained in A, and, since it is given in units of time, A also reflects per-processor floating-point performance, including local memory system effects.... ..."

Cited by 17

### Table 2: Execution time scaling of Schwarz-preconditioned Krylov methods (columns: time per iteration, overall time)

1999

"... In PAGE 5: ... global reduction time of C P^{1/d}. Assume no coarse-grid solve. For simplicity, we neglect the cost of neighbor-only communication relative to arithmetic and global reductions. The first line of Table 2 shows the estimated execution time per iteration in the left column and the overall execution time (factoring in the number of iterations for 1-level additive Schwarz from Table 1) in the right column. All of the work terms (matrix-vector multiplies, subdomain preconditioner sweeps or incomplete factorizations, DAXPYs, and local summations of inner product computations) are contained in A, and, since it is given in units of time, A also reflects per-processor floating-point performance, including local memory system effects.... In PAGE 5: ... A fuller model would contain a term of the form B(N/P)^{2/3}. The second line of Table 2 shows the optimal number of processors to employ on a problem of size N, based on the parallel complexity in the first line. The work term falls in P and the communication term rises; setting the P-derivative of their sum to zero yields the P that minimizes overall execution time.... ..."

Cited by 17
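The derivative argument sketched in the excerpt above can be checked numerically. Assuming the per-iteration model it describes, T(P) = A·N/P + C·P^{1/d} (work term falling in P, global-reduction term rising), setting dT/dP = 0 gives P_opt = (A·N·d/C)^{d/(d+1)}. The constants below are invented for illustration, not taken from the cited paper:

```python
# Hypothetical cost model from the snippet: T(P) = A*N/P + C*P**(1/d).
# A: per-point work time, C: global-reduction constant, d: spatial dimension.
# All numeric values here are made-up placeholders.

def t_per_iter(P, A, C, N, d):
    return A * N / P + C * P ** (1.0 / d)

def p_opt(A, C, N, d):
    # Solve dT/dP = -A*N/P**2 + (C/d)*P**(1/d - 1) = 0 for P.
    return (A * N * d / C) ** (d / (d + 1.0))

A, C, N, d = 1e-8, 1e-4, 10**7, 3
popt = p_opt(A, C, N, d)

# Cross-check the closed form against a brute-force scan over integer P.
best = min(range(1, 5000), key=lambda P: t_per_iter(P, A, C, N, d))
```

The scan and the closed form agree to within the grid spacing, confirming that the optimum balances the falling work term against the rising reduction term.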

### Table 2 Schwarz methods combined with preconditioned Newton-Krylov matrix-free methods.

1998

"... In PAGE 3: ... The treatment of the boundary conditions is implicit and the CFL number is equal to 100. In Table 2, we present the iteration count and CPU time (in seconds) for steady transonic flow at convergence using Schwarz algorithms in combination with Newton-Krylov matrix-free methods. The treatment of the boundary conditions is also implicit and the starting CFL number is 30.... ..."

Cited by 3

### Table 1: Iteration count scaling of Schwarz-preconditioned Krylov methods

| Preconditioning | Iteration Count in 2D | Iteration Count in 3D |
| --- | --- | --- |
| Point Jacobi | O(N^{1/2}) | O(N^{1/3}) |
| Subdomain Jacobi | ... | ... |

1999

"... In PAGE 4: ...eneralizations to nonsymmetric problems, e.g., GMRES. Krylov-Schwarz iterative methods typically converge in a number of iterations that scales as the square-root of the condition number of the Schwarz-preconditioned system. Table 1 lists the expected number of iterations to achieve a given reduction ratio in the residual norm. (Here we gloss over unresolved issues in 2-norm and operator-norm convergence definitions, but see [CZ98].... In PAGE 5: ...P^{1/d}. Assume no coarse-grid solve. For simplicity, we neglect the cost of neighbor-only communication relative to arithmetic and global reductions. The first line of Table 2 shows the estimated execution time per iteration in the left column and the overall execution time (factoring in the number of iterations for 1-level additive Schwarz from Table 1) in the right column. All of the work terms (matrix-vector multiplies, subdomain preconditioner sweeps or incomplete factorizations, DAXPYs, and local summations of inner product computations) are contained in A, and, since it is given in units of time, A also reflects per-processor floating-point performance, including local memory system effects.... ..."

Cited by 17
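The O(N^{1/2}) entry for point Jacobi in 2D follows from κ(A) = O(h^{-2}) = O(N) for second-order elliptic discretizations, combined with the √κ iteration bound quoted in the snippet. For the standard 5-point Laplacian on an m×m grid the condition number is known in closed form, κ = cot²(πh/2) with h = 1/(m+1), so √κ roughly doubles whenever m doubles (i.e., whenever N = m² quadruples). A quick check of that scaling:

```python
import math

# Eigenvalues of the 5-point Laplacian on an m-by-m grid are
# 4*sin^2(i*pi*h/2) + 4*sin^2(j*pi*h/2), i, j = 1..m, with h = 1/(m+1),
# so kappa = lambda_max/lambda_min = cot(pi*h/2)**2 exactly.

def sqrt_kappa(m):
    h = 1.0 / (m + 1)
    return 1.0 / math.tan(math.pi * h / 2.0)  # sqrt of the condition number

# sqrt(kappa) should roughly double when the grid dimension m doubles,
# i.e., grow like N**(1/2) in 2D (N = m**2).
ratios = [sqrt_kappa(2 * m) / sqrt_kappa(m) for m in (64, 128, 256)]
```

The same argument with h = N^{-1/3} in 3D gives the O(N^{1/3}) column.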

### Table 8: Integration statistics for EB, ROS2 and IRKC, with full Newton solver if needed, and both direct and iterative linear solvers.

2007

"... In PAGE 116: ... The 16 reactive gas phase species satisfy a gas phase reaction system of 25 reactions. The already successful transient results [3] with the direct linear solver are substantially accelerated with Krylov solvers; see Table 8. Note that application of Krylov subspace methods causes a slight increase in the number of function evaluations and Newton iterations/Jacobian evaluations for EB.... ..."

### Table 1: Tridiagonal preconditioning for 1 (example 1) (columns: iteration, approximate eigenvalue, linear systems)

### Table 7: Number of matrix-vector products without preconditioning.

2000

"... In PAGE 13: ... Finally, it is worth pointing out that, for 27 out of the 31 matrices tested previously, none of the three Krylov subspace methods without a preconditioner converged within 300 matrix-vector products (MVP). Table 7 lists the number of MVPs for the 4 matrices that were solved by the unpreconditioned iterative solvers. Our test results show that these Krylov subspace methods are much less efficient and in most cases are useless for solving the given problems without preconditioning.... ..."

Cited by 24
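The gap the snippet reports can be reproduced on a deliberately small toy problem (this is a self-contained sketch, not code from the cited paper): conjugate gradients on an SPD tridiagonal system whose diagonal spans five orders of magnitude. Plain CG must resolve five well-separated eigenvalue clusters, while Jacobi (diagonal) preconditioning collapses them near 1 and converges in a handful of iterations:

```python
# Toy comparison of unpreconditioned vs. Jacobi-preconditioned CG.
# Matrix: symmetric tridiagonal, diagonal 10**(i % 5), off-diagonal -1e-3
# (weak coupling keeps it strictly diagonally dominant, hence SPD).

def matvec(diag, off, x):
    n = len(x)
    y = [diag[i] * x[i] for i in range(n)]
    for i in range(n - 1):
        y[i] += off * x[i + 1]
        y[i + 1] += off * x[i]
    return y

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def pcg(diag, off, b, minv, tol=1e-8, maxit=200):
    """Preconditioned CG; minv is the diagonal of M^-1. Returns (x, iters)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # r0 = b - A*0
    z = [minv[i] * r[i] for i in range(n)]    # z0 = M^-1 r0
    p = z[:]
    rz = dot(r, z)
    bnorm = dot(b, b) ** 0.5
    for it in range(1, maxit + 1):
        ap = matvec(diag, off, p)
        alpha = rz / dot(p, ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 <= tol * bnorm:
            return x, it
        z = [minv[i] * r[i] for i in range(n)]
        rz, rz_old = dot(r, z), rz
        beta = rz / rz_old
        p = [z[i] + beta * p[i] for i in range(n)]
    return x, maxit

n = 100
diag = [10.0 ** (i % 5) for i in range(n)]    # diagonal spans 1 .. 1e4
off = -1e-3
b = [1.0] * n
x_p, it_plain = pcg(diag, off, b, [1.0] * n)              # identity "preconditioner"
x_j, it_jacobi = pcg(diag, off, b, [1.0 / d for d in diag])  # Jacobi preconditioner
```

On harder matrices, such as those in the cited tests, the unpreconditioned counts blow up far more dramatically, to the point of non-convergence.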

### Table 1: Regular grid: asymptotic convergence rates for a linear multigrid iteration with a V(1,1)-cycle and resulting number of inner iterations

Note that due to (5.10) the multigrid iterates are contained in Vh and that the problem (3.8) is elliptic in Vh. Thus, the multigrid iteration can be accelerated by embedding it into a cg-iteration, i.e., a cg-iteration preconditioned by a multigrid V-cycle is performed. There is another advantage. It is often difficult to find the optimal damping factor. In particular, when the discretization with the slightly distorted grid in Figure 1 is used, the Jacobi smoother diag(A) is not convergent without appropriate damping. Thus, the cg-method is used here for computing ...

1999

Cited by 30

### Table 1: The average cost per Krylov dimension (columns: method; computational cost in MV, AXPY, DOT; memory requirement)

"... In PAGE 22: ... The costs of BiCGstab(ℓ) variants. Table 1 gives the average cost to increase the Krylov subspace dimension by one for the separate parts of the algorithms. For the Bi-CG part we have counted the long vector operations for this part (in the schemes 1 and 3 the operations before the horizontal line); in the POL part we also took into account the vector updates needed for the linear combination to form r_{k+ℓ}, ...... In PAGE 22: ... All scalar operations have been neglected. "Power basis", "Orthogonal basis", and "Stabilized matrix" refer to the implementations of BiCGstab(ℓ) as discussed in 5, 6, and 7, respectively. The maximum number of long vectors that have to be stored (including b) is listed in the last column of Table 1 in the rows for the POL part (since the required memory space is not additive (as is the computational cost) we have not listed the required space per part). In contrast to the other approaches, the orthogonal basis approach requires a few multiplications by A^T (ℓ−1 for the complete iteration process).... ..."

Cited by 1
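The kind of accounting this table performs analytically can also be done mechanically, by wrapping the long-vector kernels (MV, AXPY, DOT) with counters. The sketch below does this for plain CG rather than for the BiCGstab(ℓ) variants in the cited table, and all names are invented for illustration; it confirms the textbook per-iteration cost of CG, namely 1 MV, 3 AXPY, and 2 DOT per Krylov dimension:

```python
# Count long-vector kernel invocations per Krylov dimension for plain CG.
counts = {"MV": 0, "AXPY": 0, "DOT": 0}

def mv(A, x):                 # dense matrix-vector product
    counts["MV"] += 1
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def axpy(a, x, y):            # y + a*x, as a new vector
    counts["AXPY"] += 1
    return [yi + a * xi for xi, yi in zip(x, y)]

def dot(x, y):
    counts["DOT"] += 1
    return sum(xi * yi for xi, yi in zip(x, y))

def cg(A, b, iters):
    x = [0.0] * len(b)
    r = b[:]                  # r0 = b - A*0, no MV needed
    p = r[:]
    rr = dot(r, r)            # one setup DOT
    for _ in range(iters):    # per iteration: 1 MV, 2 DOT, 3 AXPY
        ap = mv(A, p)
        alpha = rr / dot(p, ap)
        x = axpy(alpha, p, x)
        r = axpy(-alpha, ap, r)
        rr_new = dot(r, r)
        p = axpy(rr_new / rr, p, r)   # p = r + beta*p
        rr = rr_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
cg(A, [1.0, 2.0, 3.0], 3)
```

The same instrumentation, applied to each part of a BiCGstab(ℓ) implementation, would reproduce the per-part entries the snippet describes.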