### Table 1. Runtime in seconds for 100,000 random points in dimension d: pivoting (solid line) and move-to-front (dotted line) (left). Runtime in seconds on regular d-simplex in dimension d (right).

1999

"... In PAGE 12: ... I have tested the algorithm on random point sets up to dimension 30 to evaluate the speed of the method, in particular with respect to the relation between the pivoting and the move-to-front variant. Table 1 (left) shows the respective runtimes for 100,000 points randomly chosen in the d-dimensional unit cube, in logarithmic scale (averaged over 100 runs). All runtimes (excluding the time for generating and storing the points) have been obtained on a SUN Ultra-Sparc II (248 MHz), compiling with the GNU C++ compiler g++ Version 2.... In PAGE 12: ... In this case, the number of support points is d. Table 1 (right) shows... ..."

Cited by 15

### Table 2 presents a summary of the features of ATL currently supported by the compiler and some features that could be implemented as future extensions. Stars indicate the supported features. An explanation of some of the features is given in the numbered list after the table. (Table columns: ATL feature | Current version | Future extensions.)

"... In PAGE 26: ... Table 2: ATL features summary. (1) Such reference helpers could be used to optimize source model decoration: instead of explicitly linking A to B and B to A, only one direction would have to be initialized. For instance, instead of: helper context A def: b : B = B.... ..."

### Table 9: Algorithmic Version of LF

1993

"... In PAGE 35: ... context Γ, respectively. The rules of derivation for these assertions appear in Table 9. These rules make use of a function NF(U) which yields the normal form of an expression U with respect to the leftmost-outermost reduction strategy. Several of the rules given in Table 9 make use of NF in the conclusion of the rule. We temporarily adopt the convention that such a rule does not apply unless the required normal form exists, for it will be a direct consequence of the soundness theorem given below that the normal forms in question will always exist.... ..."

Cited by 571
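
The excerpt above defines NF(U) as the normal form of U under the leftmost-outermost (normal-order) reduction strategy. A minimal sketch of that strategy for untyped lambda terms follows; the Python term representation and helper names are illustrative, not from the paper, and substitution is only capture-avoiding under the stated assumption:

```python
from dataclasses import dataclass

# Minimal untyped lambda-term language; NF() computes the normal form
# under the leftmost-outermost (normal-order) strategy, when it exists.

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def subst(t, x, s):
    """Substitution t[x := s]; assumes all bound names are distinct."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return t if t.param == x else Lam(t.param, subst(t.body, x, s))
    return App(subst(t.fn, x, s), subst(t.arg, x, s))

def step(t):
    """One leftmost-outermost reduction step, or None if t is normal."""
    if isinstance(t, App):
        if isinstance(t.fn, Lam):                  # outermost redex first
            return subst(t.fn.body, t.fn.param, t.arg)
        r = step(t.fn)                             # then leftmost subterm
        if r is not None:
            return App(r, t.arg)
        r = step(t.arg)
        if r is not None:
            return App(t.fn, r)
    elif isinstance(t, Lam):
        r = step(t.body)
        if r is not None:
            return Lam(t.param, r)
    return None

def NF(t, fuel=1000):
    """Iterate step() until a normal form is reached (may not terminate)."""
    for _ in range(fuel):
        r = step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError("no normal form found within fuel limit")

# (\x. x) y  reduces to  y
print(NF(App(Lam("x", Var("x")), Var("y"))))  # Var(name='y')
```

The fuel limit stands in for the paper's convention that a rule simply does not apply when the required normal form fails to exist.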

### Table 2.4: Summary of Computational Results Using Lower Bound Based Branching Rules. From the tables we make the following observations: it is in general too costly to perform 10 dual simplex pivots on all fractional variables; strong branching can be highly effective on some problems, but the effectiveness is greatly affected by the ability to select a suitable subset of variables on which to perform a number of dual simplex pivots.

1999

Cited by 33

### Table 2 The numerical behavior of the PARDISO solver (default options) with static pivoting and complete block diagonal supernode pivoting. #ref indicates the number of steps of iterative refinement, Err the error, Berr the backward error, and #piv the number of perturbed pivots. A * after the matrix name indicates that supernode pivoting is necessary in PARDISO to obtain convergence, and CGS indicates that a conjugate gradient squared algorithm has been used to improve the solution. A failure marker indicates that the method was unable to factor a matrix satisfactorily.

2004

"... In PAGE 7: ... 3.1 Numerical accuracy The numerical behavior of the complete block diagonal supernode pivoting method in PARDISO is illustrated in Table 2. In all cases, an artificial right-hand side b was used in the runs, so that the system Ax = b had a known solution.... In PAGE 7: ... The iterative refinement was stopped when the componentwise relative backward error, Berr = max_i |b − A x̃|_i / (|A| |x̃| + |b|)_i [5], was close to machine precision or when Berr did not converge at least by a factor of 2 during one iteration. Table 2 shows the number of steps of iterative refinement or conjugate gradient squared iterations for static pivoting and complete block diagonal supernode pivoting in PARDISO with default options. The true error, Err = ‖x − x̃‖ / ‖x‖, the backward error, Berr, and the numbers of perturbed pivots are also reported.... In PAGE 7: ... Some comments on the results in Table 2 are in order. First, one of the ground rules for the experiments was that all input parameters of PARDISO were fixed and not modified to accommodate the demands of individual matrices.... ..."

Cited by 25
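
The stopping rule quoted above (iterate refinement until the componentwise backward error Berr is near machine precision, or until it stops halving) can be sketched as follows. This is a dense NumPy illustration, not PARDISO's implementation; the repeated `np.linalg.solve` stands in for reusing one sparse factorization:

```python
import numpy as np

# Componentwise relative backward error from the excerpt:
#     Berr = max_i |b - A x|_i / (|A| |x| + |b|)_i

def berr(A, x, b):
    r = np.abs(b - A @ x)
    return np.max(r / (np.abs(A) @ np.abs(x) + np.abs(b)))

def refine(A, b, max_iter=10):
    """Iterative refinement with the quoted stopping test (dense sketch)."""
    x = np.linalg.solve(A, b)
    prev = berr(A, x, b)
    for _ in range(max_iter):
        if prev <= 10 * np.finfo(float).eps:
            break                              # close to machine precision
        x_new = x + np.linalg.solve(A, b - A @ x)
        cur = berr(A, x_new, b)
        if cur > prev / 2:                     # stopped halving: keep old x
            break
        x, prev = x_new, cur
    return x, prev

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = A @ np.ones(2)       # artificial right-hand side with known solution
x, e = refine(A, b)
```

The artificial right-hand side mirrors the experimental setup in the excerpt: b is constructed so that the true solution is known and Err can be measured directly.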

### Table 1. Average computation time and average solution values when comparing best vs. first improvement pivoting rules on all 200-job single-processor benchmark problems using three different initialisation schemes.

2003

"... In PAGE 5: ... Both best- and first-improvement pivoting rules, as well as several other variations, were considered in some preliminary experiments. First improvement returned slightly better solutions than best improvement, but it was significantly slower than the best-improvement algorithm once the local search had been optimised (see Table 1 for sample results on single-processor instances). Since in any ILS algorithm local search has to be applied very frequently, we decided to use the significantly faster best-improvement local search.... ..."

Cited by 1
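
The best- vs. first-improvement distinction discussed above can be illustrated on a toy single-machine sequencing objective. The instance data, swap neighbourhood, and weighted-completion-time cost here are hypothetical stand-ins for the paper's benchmarks:

```python
import random

# Best improvement scans the entire swap neighbourhood and applies the
# best improving move; first improvement applies the first improving
# move it encounters. Objective: total weighted completion time of a
# job sequence on one machine (illustrative, not the paper's instances).

def cost(seq, p, w):
    t, total = 0, 0
    for j in seq:
        t += p[j]                 # completion time of job j
        total += w[j] * t
    return total

def local_search(seq, p, w, rule="best"):
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        best_move, best_delta = None, 0
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                cand = list(seq)
                cand[i], cand[j] = cand[j], cand[i]
                delta = cost(cand, p, w) - cost(seq, p, w)
                if delta < best_delta:
                    best_move, best_delta = (i, j), delta
                    if rule == "first":       # take the first improvement
                        break
            if rule == "first" and best_move:
                break
        if best_move:
            i, j = best_move
            seq[i], seq[j] = seq[j], seq[i]
            improved = True
    return seq
```

Both rules terminate in a local optimum of the swap neighbourhood; they differ in how much of the neighbourhood is evaluated per move, which is the speed trade-off the excerpt reports.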

### Table 4 shows the results on the NETLIB problems after implementing the above improvements. For a typical simplex iteration, the number of communication steps was reduced to three: the boss sends z, the workers send their pivots and corresponding columns, and the boss sends information for the update.

1995

"... In PAGE 11: ... Table 4: Improved results on local area network. Based upon Table 4, the implementation of 1.-6.... ..."

Cited by 5
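
The three communication steps per iteration described above can be mimicked with a toy boss/worker setup. The column data, pricing rule, and thread-based transport are all illustrative placeholders, not the paper's implementation; only the message pattern mirrors the excerpt:

```python
from queue import Queue
from threading import Thread

# Per iteration: (1) the boss broadcasts z, (2) each worker returns its
# best local pivot candidate plus the corresponding column, (3) the boss
# broadcasts the chosen update. Numerics are fake placeholders.

N_WORKERS = 3

def worker(my_cols, inbox, outbox):
    while True:
        z = inbox.get()                        # step 1: receive z
        if z is None:                          # shutdown signal
            return
        # placeholder pricing: smallest "reduced cost" among owned columns
        col_id = min(my_cols, key=lambda c: sum(my_cols[c]) - z)
        outbox.put((col_id, my_cols[col_id]))  # step 2: pivot + column
        _chosen = inbox.get()                  # step 3: receive update info
        # (a real worker would now update its local data for _chosen)

def boss(n_iters=2):
    cols = {i: [float(i), float(i % 2)] for i in range(6)}
    inboxes = [Queue() for _ in range(N_WORKERS)]
    results = Queue()
    threads = []
    for wid in range(N_WORKERS):
        mine = {c: v for c, v in cols.items() if c % N_WORKERS == wid}
        t = Thread(target=worker, args=(mine, inboxes[wid], results))
        t.start()
        threads.append(t)
    chosen = []
    for it in range(n_iters):
        z = float(it)
        for q in inboxes:
            q.put(z)                                        # step 1
        cands = [results.get() for _ in range(N_WORKERS)]   # step 2
        best = min(cands, key=lambda kv: sum(kv[1]) - z)
        for q in inboxes:
            q.put(best[0])                                  # step 3
        chosen.append(best[0])
    for q in inboxes:
        q.put(None)
    for t in threads:
        t.join()
    return chosen
```

Each iteration performs exactly three communication rounds regardless of the number of workers, which is the point of the improvement the excerpt describes.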

### Table 4. (Table columns: dimension | ABS–LU pivoting | ABS–LU.)

"... In PAGE 13: ... However, this algorithm has two advantages in comparison with these two algorithms: i) it uses a smaller number of numerical operations, and ii) it does not require the strong nonsingularity of A. From the results in Table 3 and Table 4 we can conclude that the pivoting algorithm demonstrates an improvement of the accuracy of the solution. There is a further possibility to modify this algorithm.... ..."

### Table 2. LU factorization times on a single CPU (in seconds) for UMFPACK Version 2.2, SuperLU_MT, SPOOLES, SuperLU_DIST, MUMPS, WSMP, and UMFPACK Version 3.0, respectively. The best pre-2000 time is underlined and the overall best time is shown in boldface. The last row shows the approximate smallest pivoting threshold that yielded a residual norm close to machine precision after iterative refinement for each package. FM indicates that a solver ran out of memory, FC indicates an abnormal or no termination, and FN indicates that the numerical results were inaccurate.

"... In PAGE 4: ... A maximum of 2 GB of memory was available to each code. Table 2 shows the LU factorization time taken by each code for the matrices in our test suite.... In PAGE 5: ... Subscripts M, C, and N indicate failure due to running out of memory, abnormal or no termination, and numerically inaccurate results, respectively. One of the ground rules for the experiments reported in Table 2 was that all input parameters that may influence the behavior of a program were fixed and were not modified to accommodate the demands of individual matrices. However, through a series of pre-experiments, we attempted to fix these parameters to values that yielded the best results on average on the target machine.... In PAGE 6: ... easily accessible in the software, such as the various block sizes in SuperLU, were also fixed to values that appeared to be the best on average. Note that some of the failures in the first four columns of Table 2 can be fixed by changing some of the options or parameters in the code. However, as noted above, the options chosen to run the experiments reported in Table 2 are such that they are best for the test suite as a whole. Changing these options may avoid some failures, but cause many more.... In PAGE 6: ... Therefore, such failures are artifacts of the implementation and neither reflect the actual amount of memory needed (if allocated properly) nor that the underlying algorithms are not robust. The best factorization time for each matrix using any solver released before year 2000 is underlined in Table 2 and the overall best factorization time is shown in boldface. Several interesting observations can be made from Table 2. Perhaps the most striking observation in the table pertains to the range of times that different packages available before 1999 would take to factor the same matrix.... In PAGE 6: ... Also noticeable is the marked increase in the reliability and ease of use of the software packages released in 1999 or later. There are 21 failures in the first four columns of Table 2 and only two in the last three columns. MUMPS is clearly the fastest and the most robust amongst the solvers released before 2000.... In PAGE 8: ... Pivoting strategy: threshold pivoting implemented by row exchanges. The performance of the solvers calibrated in Table 2 is greatly affected by the algorithmic features outlined above. We now briefly describe, in order of importance, the relationship between some of these algorithms and the performance characteristics of the solvers that employ these algorithms.... In PAGE 8: ... Moreover, two different column orderings, both equally effective in reducing fill in the symmetric factorization of A^T A, could enjoy very different degrees of success in reducing the fill in the LU factorization of A. There is some evidence of this being a factor in the extreme variations in the factorization times of different solvers for the same matrices in Table 2. The matrices that have a symmetric structure and require very little pivoting, such as nasasrb, raefsky3, rma10, venkat50, and wang4, exhibit relatively less variation in the factorization times of different solvers.... In PAGE 23: ... CONCLUDING REMARKS In this paper, we show that recent sparse solvers have significantly improved the state of the art of the direct solution of general sparse systems. For instance, compare the first four columns of Table 2 with the second-last column of Table 3. This comparison would readily reveal that a state-of-the-art solver running on today's single-user workstation is easily an order of magnitude faster than the best solver-workstation combination available prior to 1999 for solving sparse unsymmetric linear systems.... ..."
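
The excerpt mentions a pivoting strategy of threshold pivoting implemented by row exchanges. A minimal dense sketch of that rule follows; a real sparse code would use the threshold's freedom to pick the sparsest acceptable pivot row, whereas this illustration (with an assumed threshold u and a simple first-acceptable-row choice) just demonstrates the acceptance test:

```python
import numpy as np

# Threshold pivoting: in column k, any pivot a_ik with
#     |a_ik| >= u * max_i |a_ik|,   0 < u <= 1,
# is acceptable; u = 1 recovers ordinary partial pivoting. Sparse solvers
# exploit this freedom to trade stability for sparsity preservation.

def lu_threshold(A, u=0.1):
    """Dense LU with threshold row pivoting; returns (perm, L, U)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n - 1):
        col = np.abs(A[k:, k])
        cmax = col.max()
        if cmax == 0:
            continue                      # nothing to eliminate
        # first row meeting the threshold (a sparse code would pick the
        # acceptable row with the fewest nonzeros instead)
        r = k + int(np.argmax(col >= u * cmax))
        if r != k:
            A[[k, r]] = A[[r, k]]         # row exchange
            perm[[k, r]] = perm[[r, k]]
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return perm, L, U
```

With the row exchanges recorded in `perm`, the factors satisfy A[perm] ≈ L @ U, which is the P A = L U identity the row-exchange formulation produces.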