### Table 5.5: OP Count of QR Algorithm, n × n Matrix, Householder Vector of Length k.

### Table 7: Number of nonzero elements in the representation of Q qr sqr sqr(L) Matrix Q

1999

"... In PAGE 24: ... This means that the non-unique part, Q2, the last m − n columns of Q, is more dense from qr than from sqr. The factor in Table 7 is defined as nnz(HF) / (nnz(R)(1 + 2(m − n)/n)). The factor is close to one (one half of the diagonal is neglected) when storing the orthogonal factor by the Householder vectors of a dense m × n matrix. This suggests that when the factor is less than one the multifrontal method has stored the orthogonal factor in a "more" efficient way than if the matrix was full.... ..."

Cited by 3

### Table 5: Model of the SDC algorithm with Newton iteration 20 matrix QR 2 Householder Total inversions applications

"... In PAGE 11: ... Each of these building-block models was validated against the performance data shown in Figures 2 and 4. In Table 5, the predicted running time of each of the steps of the algorithm is displayed. Summing the times in Table 5 yields: Total time = 45 n^3 / p + (160 + 23 lg p) n + (90 + 40 lg p) n^2 / sqrt(p). (7) Using the measured machine parameters given in Table 6 with equation (7) yields the predicted times on the CM-5 (Table 3) and the Intel Delta system (Table 7). As Table 7 shows, our model underestimates the actual time on the Delta by no more than 30% for the problem and machine sizes listed.... In PAGE 13: ... The BLACS use protocol 2, and the communication pattern most closely resembles the "shift" timings. ... is from Table 8 and ... is from Table 5... ..."
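The total-time model in equation (7) of the excerpt is easy to evaluate directly. The sketch below uses the coefficients exactly as quoted; the time unit is whatever the paper's measured machine parameters imply, so only the functional form is meaningful here:

```python
import math

def predicted_time(n, p):
    """Total predicted time from equation (7) in the excerpt:
    45 n^3 / p + (160 + 23 lg p) n + (90 + 40 lg p) n^2 / sqrt(p),
    for problem size n on p processors. Coefficients are the
    machine-dependent constants quoted in the excerpt."""
    lgp = math.log2(p)
    return (45 * n**3 / p
            + (160 + 23 * lgp) * n
            + (90 + 40 * lgp) * n**2 / math.sqrt(p))

# On one processor (lg p = 0) the model collapses to 45 n^3 + 160 n + 90 n^2.
print(predicted_time(2, 1))  # 45*8 + 160*2 + 90*4 = 1040.0
```

The cubic term divided by p dominates for large n, which is why adding processors pays off until the logarithmic communication terms take over.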

### Table 1: Comparison of p-QR, p-Kmeans, and K-means for two-way clustering Newsgroups p-QR p-Kmeans K-means

2001

"... In PAGE 6: ... Example 1. In this example, we look at binary clustering. We choose 50 random document vectors each from two newsgroups. We tested 100 runs for each pair of newsgroups, and list the means and standard deviations in Table 1. The two clustering algorithms p-QR and p-Kmeans are comparable to each other, and both... ..."

Cited by 63


### Table 1: Average Reconstruction Errors

1996

"... In PAGE 12: ... The RMSE captures the performance of each reconstruction in a single number. Since the RMSE values in Table 1 have the same units as the entries in corresponding rows in Table 2, we can think of RMSE as an average deviation from the truth in the same physical units as the field being measured. Overall, the RMSEs are less than 18% of the average magnitude of their corresponding true fields and less than 9% of the maximum magnitude.... In PAGE 12: ... Percent magnitude error or angular deviation is another indicator of the performance in reconstructing vector fields; these measures have the advantage of separating out quantities which may be of particular interest in a given application. From Table 1 we see that all field reconstructions in this simulation (ê, b̂, and q̂) have vectors that are on average too short; interestingly, reconstructing from the total field tended to reduce this error. The angular error in these same reconstructions, however, shows the exact reverse effect, i.... In PAGE 13: ... Thus, except for this anomalous region, we can conclude that performance is improved by taking more angles and moving the imaging planes closer together. In particular, for the M and T pair closest to that of our previous full reconstruction simulation (M = 54 and T = 0.08), the particular error here is 2.22 × 10^−2, about three times smaller than the RMSE of 6.574 × 10^−2 in Table 1. From Table 3, we see that it is possible to reduce this particular error by almost a factor of 10 (to 0.24 × 10^−2 when M = 3221 and T = 0.04), which by analogy would reduce the average error to below 2% of the average field value.... ..."
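The RMSE measure described in the excerpt — an average deviation from the truth in the field's own physical units — can be sketched in a few lines. The toy 1-D "field" below is purely illustrative, not data from the cited paper:

```python
import numpy as np

def rmse(truth, recon):
    """Root-mean-square error between a reconstructed field and the
    true field, in the same physical units as the field itself."""
    truth = np.asarray(truth, dtype=float)
    recon = np.asarray(recon, dtype=float)
    return float(np.sqrt(np.mean((recon - truth) ** 2)))

# Toy example: every sample off by 0.1 gives an RMSE of about 0.1.
print(rmse([1.0, 2.0, 3.0], [1.1, 2.1, 3.1]))  # ≈ 0.1
```

Because every error is squared before averaging, a uniform offset reproduces itself exactly, while a single large outlier is penalized more heavily than in a mean absolute error.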

Cited by 9

### Table 3: Pseudorandom Test Length for the Murphy Chip CUT Name Inputs Exhaustive Super Exhaustive

"... In PAGE 4: ... The polynomial is x^24 + x^7 + x^2 + x + 1. Table 3 lists the length of the pseudo-random patterns. The test is exhaustive for the four 24-input CUT designs and is super-exhaustive for 6SQ, which has only 12 inputs.... ..."
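A pattern generator built from the feedback polynomial x^24 + x^7 + x^2 + x + 1 can be sketched as a Fibonacci LFSR. The paper's actual hardware generator may be wired differently (e.g. a Galois-form register), so this is only an illustration of how such a polynomial produces pseudo-random test patterns:

```python
def lfsr_patterns(seed, taps=(24, 7, 2, 1), width=24, count=8):
    """Fibonacci LFSR sketch for the feedback polynomial
    x^24 + x^7 + x^2 + x + 1: XOR the tapped bits and shift the result
    in. Returns the first `count` register states as test patterns."""
    state = seed & ((1 << width) - 1)
    patterns = []
    for _ in range(count):
        patterns.append(state)
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1  # tap at bit position t
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return patterns

print(lfsr_patterns(seed=1))  # first states: 1, 3, 6, 13, ...
```

With a primitive polynomial of degree 24, a nonzero seed cycles through all 2^24 − 1 nonzero states, which is what makes an exhaustive test of a 24-input CUT feasible from a single shift register.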

### Table 5: QR crossover points.

1996

"... In PAGE 27: ... As another test of the accuracy of the model, we computed crossover points for the row-wrap algorithm on both the SP2 and the Paragon from the model and compared them to points interpolated from the data shown in Figures 19 and 20. The results are summarized in Table 5. Despite its simplicity, the model does provide a good basis for estimating the crossover point for this problem.... ..."

Cited by 4

### Table 2: Numerical tests on QR

"... In PAGE 10: ... In Figures 1 to 4, the perturbed datum is the matrix A. We can see that the condition number estimation follows exactly the theoretical results gathered in Table 2 and Table 3. The estimated condition numbers in Figures 1 and 3 are about the square of those read from Figures 2 and 4.... ..."

### Table 1: Comparison of p-QR, p-Kmeans, and K-means for two-way clustering

in Abstract

"... In PAGE 7: ... Example 1. In this example, we look at binary clustering. We choose 50 random document vectors each from two newsgroups. We tested 100 runs for each pair of newsgroups, and list the means and standard deviations in Table 1. The two clustering algorithms p-QR and p-Kmeans are comparable to each other, and both are better and sometimes substantially better than K-means.... ..."