### Table 2.4: Summary of Computational Results Using Lower Bound Based Branching Rules. From the tables we make the following observations: It is in general too costly to perform 10 dual simplex pivots on all fractional variables. Strong branching can be highly effective on some problems, but the effectiveness is impacted greatly by the ability to select a suitable subset of variables on which to perform a number of dual simplex pivots.

1999

Cited by 33

### Table 6: Comparison of the ε-relaxation algorithm on the CM-2 with network simplex on the CRAY Y-MP.

1991

"... In PAGE 26: ...4 Solving Large Scale Problems We now provide a summary of computational results in order to highlight certain aspects of the algorithm. Table 6 reports the results we have obtained from solving all test problems ranging from thousands of arcs to 1 million arcs. During our experiments we found that PDS problems are relatively more difficult than all the others.... ..."

Cited by 1

### Table 4 shows the results on the NETLIB problems after implementing the above improvements. For a typical simplex iteration, the number of communication steps was reduced to three: the boss sends z, the workers send their pivots and corresponding columns, and the boss sends information for the update.

1995

"... In PAGE 11: ... Table 4: Improved results on local area network. Based upon Table 4, the implementation of 1.-6.... ..."

Cited by 5
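The three-step communication pattern described in this entry can be sketched as a toy single-process simulation. All names below are illustrative assumptions, not taken from the paper: each worker prices its own partition of columns against the pricing vector z and reports its best pivot candidate, and the boss picks the entering column to broadcast back.

```python
# Toy sketch (hypothetical names) of the three-message iteration:
# boss -> workers: pricing vector z; workers -> boss: best local pivot
# candidate plus its column; boss -> workers: update information.

def reduced_cost(col, z):
    """Reduced cost c_j - z . a_j of a column given as (c_j, a_j)."""
    c_j, a_j = col
    return c_j - sum(zi * ai for zi, ai in zip(z, a_j))

def price_columns(z, my_cols):
    """Worker side: return the locally best (most negative) candidate."""
    return min(my_cols, key=lambda col: reduced_cost(col, z))

def boss_iteration(z, worker_partitions):
    """Boss side: gather one candidate per worker, pick the entering
    column, and return it as the 'update information' to broadcast.
    Returns None when no column has negative reduced cost (optimal)."""
    candidates = [price_columns(z, cols) for cols in worker_partitions]
    entering = min(candidates, key=lambda col: reduced_cost(col, z))
    if reduced_cost(entering, z) >= 0:
        return None
    return entering
```

In a real distributed implementation each of the three steps would be an actual broadcast or gather over the network; here the loop over `worker_partitions` stands in for the gather.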

### Table 1. Runtime in seconds for 100,000 random points in dimension d: pivoting (solid line) and move-to-front (dotted line) (left). Runtime in seconds on regular d-simplex in dimension d (right).

1999

"... In PAGE 12: ... I have tested the algorithm on random point sets up to dimension 30 to evaluate the speed of the method, in particular with respect to the relation between the pivoting and the move-to-front variant. Table 1 (left) shows the respective runtimes for 100,000 points randomly chosen in the d-dimensional unit cube, in logarithmic scale (averaged over 100 runs). All runtimes (excluding the time for generating and storing the points) have been obtained on a SUN Ultra-Sparc II (248 MHz), compiling with the GNU C++-Compiler g++ Version 2.... In PAGE 12: ... In this case, the number of support points is d. Table 1 (right) shows... ..."

Cited by 15

### Table IV: Rule Reduction Results by QR with Column Pivoting Method

### Table 1: Comparison of Revised and Standard Forms of the Simplex Method

"... In PAGE 4: ... In this approach Step 3, "pivot," corresponds to the updating of the LU decomposition, and its periodic (usually at most every 100 iterations) reinitialization or refactorization. Table 1 summarizes the qualitative differences between the standard and revised simplex method. ... ..."
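The cadence described in this excerpt, cheap per-pivot updates punctuated by a periodic full refactorization, can be sketched as follows. This is an illustrative toy, not the paper's code: it uses a product-form-of-the-inverse update in place of the LU updating the excerpt refers to, and all class and parameter names are assumptions.

```python
import numpy as np

class BasisFactorization:
    """Toy sketch of the refactorization cadence: a cheap eta-style
    update on each pivot, plus a full refactorization every
    `refactor_every` pivots (the excerpt cites roughly 100)."""

    def __init__(self, B, refactor_every=100):
        self.B = B.astype(float).copy()
        self.Binv = np.linalg.inv(self.B)
        self.refactor_every = refactor_every
        self.updates = 0

    def replace_column(self, j, a):
        """Pivot: column j of the basis becomes a."""
        self.B[:, j] = a
        self.updates += 1
        if self.updates % self.refactor_every == 0:
            # periodic reinitialization: refactorize from scratch
            self.Binv = np.linalg.inv(self.B)
            return
        # product-form update: B' = B F with F = I except column j = u,
        # so inv(B') = inv(F) inv(B), and inv(F) is I with column j = eta
        u = self.Binv @ a
        eta = -u / u[j]
        eta[j] = 1.0 / u[j]
        Finv = np.eye(len(u))
        Finv[:, j] = eta
        self.Binv = Finv @ self.Binv

    def solve(self, b):
        return self.Binv @ b
```

A real implementation would update an LU factorization rather than an explicit inverse; the point of the sketch is only the update-versus-refactorize bookkeeping.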

### Table 2: The numerical behavior of the PARDISO solver (default options) with static pivoting and complete block diagonal supernode pivoting. #ref indicates the number of steps of iterative refinement, Err the error, Berr the backward error, and #piv the number of perturbed pivots. A * after the matrix name indicates that supernode pivoting is necessary in PARDISO to obtain convergence, and CGS indicates that a conjugate gradient squared algorithm has been used to improve the solution. A marked entry indicates a failure of the method to factor a matrix satisfactorily.

2004

"... In PAGE 7: ... 3.1 Numerical accuracy The numerical behavior of the complete block diagonal supernode pivoting method in PARDISO is illustrated in Table 2. In all cases, an artificial right-hand side b was used in the runs, so that the system Ax = b had a known solution x.... In PAGE 7: ... The iterative refinement was stopped when the componentwise relative backward error, Berr = $\max_i \frac{|Ax-b|_i}{(|A|\,|x|+|b|)_i}$ [5], was close to machine precision or when Berr did not converge at least by a factor of 2 during one iteration. Table 2 shows the number of steps of iterative refinement or conjugate gradient squared iterations for static pivoting and complete block diagonal supernode pivoting in PARDISO with default options. The true error Err (measured against the known solution x), the backward error Berr, and the numbers of perturbed pivots are also reported.... In PAGE 7: ... Some comments on the results in Table 2 are in order. First, one of the ground rules for the experiments was that all input parameters of PARDISO were fixed and not modified to accommodate the demands of individual matrices.... ..."

Cited by 25
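The stopping rule quoted in this excerpt, refine until the backward error Berr is close to machine precision, or stop as soon as Berr fails to halve in one iteration, can be sketched as follows. The function names are illustrative, and the `solve` callback standing in for a statically pivoted factorization is an assumption, not PARDISO's actual interface.

```python
import numpy as np

def backward_error(A, x, b):
    # Componentwise relative backward error (Oettli-Prager):
    #   Berr = max_i |b - A x|_i / (|A| |x| + |b|)_i
    r = np.abs(b - A @ x)
    return np.max(r / (np.abs(A) @ np.abs(x) + np.abs(b)))

def iterative_refinement(A, b, solve, max_steps=10):
    """Refine an approximate solver until Berr is near machine
    precision, or stop as soon as Berr fails to halve in one step."""
    x = solve(b)
    berr = backward_error(A, x, b)
    for _ in range(max_steps):
        if berr <= 10 * np.finfo(float).eps:
            break                       # effectively machine precision
        x_new = x + solve(b - A @ x)    # correction from the residual
        berr_new = backward_error(A, x_new, b)
        if berr_new > berr / 2:         # Berr failed to halve: stop
            break
        x, berr = x_new, berr_new
    return x, berr
```

Here a slightly perturbed factorization (as static pivoting produces) still yields fast convergence, because each refinement step shrinks the error by roughly the size of the perturbation.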

### Table 1: Conjunctive Rule Extraction Algorithm.

1994

"... In PAGE 4: ... Subset returns true if all of the instances that are covered by the rule are members of the given class, and false otherwise. Our algorithm for extracting conjunctive rules from trained neural networks is outlined in Table 1. It is an adaptation of the classical algorithm for PAC-learning monotone DNF expressions (Valiant, 1984).... In PAGE 5: ... ei := randomly-select(vi1, ..., vin); calculate the total input s to the output unit; if s ... then return e; impose a random order on all feature values /* consider the values in order */; for each value vij: if changing feature ei's value to vij increases s, then ei := vij; if s ... then return e (a more directed Examples oracle for the general case). Note that the algorithm shown in Table 1 employs a stopping criterion to determine when a set of extracted rules provides a sufficiently good model of a network. There are several reasonable criteria that could be used here.... In PAGE 7: ... Table 3 outlines the algorithm we use to extract M-of-N rules from trained networks. In the same manner as the algorithm presented in Table 1, the first step is to learn a conjunctive rule using the instance supplied by the Examples oracle. The algorithm then makes this conjunction into a trivial M-of-N rule for which M is set to N.... ..."

Cited by 67
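The hill-climbing "more directed Examples oracle" fragment quoted in the PAGE 5 excerpt can be illustrated with a small sketch. The weight table `weights[i][v]` (the contribution of giving feature i the value v) and the threshold test `s >= theta` fill gaps in the quoted pseudocode and are illustrative assumptions, not the paper's exact formulation.

```python
import random

def directed_example(weights, theta, values, rng=random):
    """Hill-climb feature values until the output unit's total
    input s reaches the threshold theta; return None on failure."""
    # ei := randomly-select(vi1, ..., vin)
    e = [rng.choice(vs) for vs in values]
    # calculate the total input s to the output unit
    s = sum(weights[i][v] for i, v in enumerate(e))
    if s >= theta:
        return e
    # impose a random order on all feature values
    order = [(i, v) for i, vs in enumerate(values) for v in vs]
    rng.shuffle(order)
    for i, v in order:                  # consider the values in order
        s_new = s - weights[i][e[i]] + weights[i][v]
        if s_new > s:                   # changing feature i to v increases s
            e[i], s = v, s_new
            if s >= theta:
                return e
    return None                         # no instance reached the threshold
```

With independent per-feature weights, the single greedy pass always finds the maximizing instance, which is why the excerpt describes it as a more directed replacement for the purely random Examples oracle.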