### TABLE 2. Steps for computing optimal partitions with Rmax = 4

1997

Cited by 2

### Table 3.1 Computation times for optimization using a quadratic RCLF with a constant P matrix.

### Table 3. Runtime for the automatically vectorized and optimized vectorial codes for the covariance matrix computation stage

### Table 3. Computational parameters for matrix-vector multiplication with superedges

1998

"... In PAGE 7: ... One alternative to optimize the balance of flops/ia is to reorder the nodal points in such a way that memory contention is minimized [7]. Improvements in the flops and ia parameters are shown in Table 3 for several superedge groupings. However, observe that these gains are not for the whole mesh, since in several unstructured meshes only part of the edges can be grouped into these superedge arrangements [1, 9].... In PAGE 7: ...As one can see in Table 3, the balance of flops/ia increases with the edge grouping, indicating great advantages over the simple edge. However, cache size and register availability must be observed in order to best fit flops and ia operations to the architecture, taking into account code simplicity and maintenance.... ..."
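The flops-per-indirect-access (flops/ia) gain from superedge grouping can be sketched with a toy example. This is a hypothetical illustration, not the paper's code: edges that share nodes are grouped into triangle "superedges" so that nodal values loaded once serve three edge contributions, raising flops per indirect memory access.

```python
# Hypothetical sketch: edge-based matrix-vector product, first one edge
# at a time, then with edges grouped into triangle superedges.
# All names and the flop/ia accounting are illustrative assumptions.

def simple_edge_matvec(edges, coeffs, u):
    """One edge at a time: 2 indirect loads + 2 indirect stores per edge."""
    r = [0.0] * len(u)
    ia = flops = 0
    for (i, j), a in zip(edges, coeffs):
        ui, uj = u[i], u[j]      # 2 indirect loads
        r[i] += a * uj           # 2 mults + 2 adds per edge
        r[j] += a * ui
        ia += 4                  # 2 loads + 2 stores
        flops += 4
    return r, flops / ia

def superedge_matvec(triangles, coeffs, u):
    """Three edges of a triangle at once: 3 nodal loads serve 3 edges."""
    r = [0.0] * len(u)
    ia = flops = 0
    for (i, j, k), (aij, ajk, aki) in zip(triangles, coeffs):
        ui, uj, uk = u[i], u[j], u[k]   # 3 indirect loads cover 3 edges
        r[i] += aij * uj + aki * uk
        r[j] += aij * ui + ajk * uk
        r[k] += ajk * uj + aki * ui
        ia += 6                         # 3 loads + 3 stores
        flops += 12
    return r, flops / ia
```

On this toy accounting the simple-edge loop gives flops/ia = 1.0 while the triangle superedge gives 2.0, which is the kind of improvement the table reports; as the snippet notes, the gain applies only to the portion of an unstructured mesh that can actually be grouped.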

Cited by 1

### Table 5.10: Performance of the reference and optimized sparse matrix-vector multiplication routines on the Intel Pentium 4 computer.

### Table 1: Computational Optimization

in Nomenclature

"... In PAGE 18: ...5 sec, which showed the most variation, are presented here. Table 1 summarizes the findings of this optimization process. The baseline (first line of the table) against which the flux calculation is judged is a detailed spectrum case with na =1500, nppl =15, nv =10, and crit =0.... In PAGE 18: ...The prediction of the radiative flux is affected by less than 5 percent for each set of parameters, except for the two occurrences shown in Table 1 at 1637.5 sec.... In PAGE 19: ... When accuracy is more important than saving CPU time, it is easy to change these parameters to improve the result. Excitation Calculation The last column in Table 1 shows the fraction of CPU time used by the radiation calculation.... ..."

### Table 5: Cost Matrix

in ABSTRACT

"... In PAGE 7: ...9 0 Table 4 is the matrix representing the degree of misclassification derived from the fault criticality ranking based on the strategies discussed in Section 3. The final cost matrix calculated from the degree of misclassification using Equation 3 is shown in Table 5, with m=0.9 and S=100.... In PAGE 8: ... Evaluating the performance of the three NN classifiers is necessary for determining the optimal NN structure. Given the confusion matrices of the classifiers and the cost matrix (Table 5) estimated from the fault criticality, we can compute the misclassification costs for each classifier based on Equation 2 in Section 3. It is noted that prior to computing the misclassification costs, the cell values of the confusion matrices are normalized by dividing each cell in a row of the confusion matrix by the total number of the test samples in that row.... ..."
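The cost computation the snippet describes can be sketched as follows. This is a hedged reconstruction, not the paper's Equation 2: each row of the confusion matrix is normalized by its row total (as the snippet states), and the normalized rates are weighted by the corresponding cost-matrix entries and summed.

```python
# Hypothetical sketch of the misclassification-cost computation:
# row-normalize the confusion matrix, weight by the cost matrix, sum.
# The exact form of the paper's Equation 2 is assumed, not quoted.

def misclassification_cost(confusion, cost):
    """confusion[i][j]: samples of class i classified as j;
    cost[i][j]: penalty for that misclassification (0 on the diagonal)."""
    total = 0.0
    for conf_row, cost_row in zip(confusion, cost):
        row_sum = sum(conf_row)          # test samples of this class
        for n, c in zip(conf_row, cost_row):
            total += (n / row_sum) * c   # normalized rate times penalty
    return total
```

With this measure, a classifier that confuses critical faults with benign ones accumulates a higher cost than one making the same number of errors in cheaper cells, which is the point of ranking classifiers by cost rather than raw accuracy.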

### TABLE 2. First five largest and smallest computed Ritz values of the Hessian matrix and the corresponding relative residuals. The Hessian is evaluated at the computed optimal point.

### Table 4 Performance of Optimized Matrix Multiply (sec.)

1991

"... In PAGE 10: ... We also chose write-shared because it supports multiple writers and fine-grained sharing. The execution times for the unoptimized version of Matrix Multiply (see Table 4) and SOR, for the previous problem sizes and for 16 processors, are presented in Table 6. For Matrix Multiply, the use of result and read only sped up the time required to load the input matrices and later purge the output matrix back to the root node and resulted in a 4.... ..."

Cited by 558