### Table 6.1: Software to solve sparse linear systems using direct methods.

1996

### Table 1. Computation times for solving the linear relaxation.

"... In PAGE 3: ... Computation times for solving the linear programming relaxation by different algorithms available with the CPLEX 8.0 package are given in Table 1. The problem is highly degenerate and requires the use of perturbations, leading to large computation times.... In PAGE 3: ... The interior-point algorithm was the best solution strategy. We can also see in Table 1 that the new formulation considerably reduced the computation time of the interior-point algorithm, making possible an efficient implementation of the cutting-plane algorithm. Results obtained with algorithms B&B-ANFP and B&C-ANFP are given in Table 2.... ..."
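The excerpt above compares simplex-type and interior-point LP solvers. As an illustrative sketch only (not the paper's CPLEX 8.0 setup or its problem instance), SciPy's `linprog` exposes both strategies through its HiGHS backend; the toy LP below is made up:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP (made up, not the paper's instance):
# minimize -x1 - 2*x2  s.t.  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x >= 0.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0], [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

# Dual simplex vs. interior point, both via SciPy's HiGHS backend.
res_simplex = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")
res_ipm = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")

print(res_simplex.fun, res_ipm.fun)  # both strategies reach the same optimum
```

On degenerate problems like the one described above, the two methods can differ sharply in running time even though they agree on the optimal value.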

### Table 2: Performance comparison of different iteration methods without scaling the linear systems (Δt = 0.01).

"... In PAGE 14: ...2 Performance comparison of iterative methods Since iterative methods are usually needed in high-resolution 3D simulations, we examine the performance of a few iterative techniques discussed in Section 4 for solving the large sparse linear systems arising from the discretized 3D microscale heat transport equation with Δt = 0.01. In Table 2, we give the average number of iterations per linear system solution (per time step) and the CPU time in seconds for the entire simulation for Gauss-Seidel, SOR with optimal overrelaxation parameter, CG, and PCG. In this set of tests, the linear systems were not scaled to have a unit diagonal.... In PAGE 15: ... iterative methods, we also list the total CPU time in seconds to make the comparisons more informative. We remark that the cost of constructing the incomplete Cholesky preconditioner, which is negligible as we will see later, is not counted in the total CPU time for PCG reported in Table 2. For reference purposes, the values of the (tested) optimal relaxation parameter for SOR are ω = 1.58 for N = 11, ω = 1.75 for N = 21, ω = 1.82 for N = 31, and ω = 1.86 for N = 41.... In PAGE 15: ... The data in Table 2 indicate that neither the Gauss-Seidel nor the SOR method is very scalable with respect to problem size for solving the discretized 3D microscale heat transport equation. The CPU timings of both Gauss-Seidel and SOR are very large for large values of N.... In PAGE 16: ... Since the coefficient matrix is strongly diagonally dominant, the scaling makes the magnitudes of all off-diagonal entries less than unity. For the values of N corresponding to Table 2, the average numbers of CG iterations with matrix scaling are 1.91 for N = 11, 3.27 for N = 21, 4.47 for N = 31, and 5.93 for N = 41. We do see that scaling the matrix entries may improve the CG convergence rate (N = 21 and N = 41), deteriorate it (N = 31), or not affect it at all (N = 11).... ..."
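The excerpt notes that symmetric diagonal scaling to a unit diagonal may or may not help CG. A minimal sketch of that experiment, assuming a made-up strongly diagonally dominant tridiagonal system in place of the paper's 3D heat-transport discretization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Made-up strongly diagonally dominant system (a stand-in for the
# discretized 3D microscale heat transport equation).
n = 50
main = 4.0 + 0.1 * np.arange(n)    # non-constant diagonal, so scaling changes A
off = -np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")
b = np.ones(n)

# Symmetric diagonal scaling: D^{-1/2} A D^{-1/2} has a unit diagonal.
Dinv_sqrt = sp.diags(1.0 / np.sqrt(A.diagonal()))
A_scaled = Dinv_sqrt @ A @ Dinv_sqrt

# Count CG iterations with and without scaling.
iters = {"unscaled": 0, "scaled": 0}

def count(key):
    def cb(_):
        iters[key] += 1
    return cb

x, info = cg(A, b, callback=count("unscaled"))
y, info2 = cg(A_scaled, Dinv_sqrt @ b, callback=count("scaled"))
x_scaled = Dinv_sqrt @ y           # map the scaled solution back to the original unknowns

print(iters)                       # counts may improve, worsen, or stay the same
```

As in the excerpt, the iteration counts for the scaled and unscaled systems need not differ much when the matrix is already strongly diagonally dominant.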

### Table 1: Large sparse sets of equations

1991

"... In PAGE 5: ... Substantial performance improvements can be made fairly easily, even without using assembly language. Table 1 describes the linear systems that were used in testing the algorithms. Data sets A through J were kindly provided by A.... In PAGE 10: ... coefficients. The largest of the systems in Table 1 had b ≲ 10^7, so this did not cause any problems on the machine that was used.... ..."

Cited by 46

### Table 5: The break-even point for using M for a fixed value of q, expressed in number of PCGLS iterations (interior-point iteration = 2).

"... likely to use fewer flops than the PDN method if q and the maximum number of iterations per step are carefully chosen. For most problems we do not get close to these break-even numbers, since we terminate after fewer iterations (except for the czprob and scsd8 problems). The results in Table 6 show that Mixed PDN is a promising method for solving large sparse linear programming problems. The CPU times for this MATLAB implementation are lower for the mixed method than for the method that uses only a direct solver for the linear system. The mixed-method approach can be extended to the predictor-corrector method. ..."

1999

Cited by 2
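The mixed strategy in the last entry chooses between a direct factorization and an iterative solve for the linear system at each interior-point step. The sketch below is not the paper's PCGLS method with its preconditioner M; it only illustrates the two solver choices on a normal-equations system A D Aᵀ y = r of the kind that arises at a primal-dual Newton (PDN) step, with SciPy's `spsolve` and plain `cg` as stand-ins and all data made up:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spsolve

# Normal-equations system A D A^T y = r of the kind solved at each
# primal-dual Newton step; D changes at every interior-point iteration.
rng = np.random.default_rng(0)
m, n = 40, 120
A = sp.random(m, n, density=0.1, random_state=0, format="csr") \
    + sp.eye(m, n, format="csr")           # identity block ensures full row rank
d = rng.uniform(0.5, 2.0, size=n)          # positive scaling from the current iterate
ADA = (A @ sp.diags(d) @ A.T).tocsc()      # symmetric positive definite

r = rng.standard_normal(m)
y_direct = spsolve(ADA, r)                 # direct sparse factorization
y_iter, info = cg(ADA, r)                  # iterative solve (stand-in for PCGLS)
```

The trade-off the break-even table quantifies: the direct solve pays a factorization cost every iteration, while the iterative solve's cost depends on how many CG-type iterations are allowed per step.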