### Table 2: Comparison of factored and norm-minimization sparse approximate inverse.

1998

"... In PAGE 15: ...3 of [21]). The test results are reported in Table2 . For convenience, we also copied the iteration counts and the corresponding sparsity ratios from Table 5.... In PAGE 15: ... For the same reason, we refrain ourself from declaring which one of them is better in general. However, the results of Table2 show that the factored sparse approximate inverse preconditioner performed comparably to the norm minimization based sparse approximate inverse preconditioner. The matrices that were di cult to solve by the factored sparse approximate inverse, e.... ..."

Cited by 15
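The norm-minimization preconditioner compared in the excerpt above builds each column of the approximate inverse by a small least-squares solve. A minimal sketch of that idea, assuming the common choice of the sparsity pattern of A itself (the paper's actual pattern selection and implementation are not reproduced here):

```python
import numpy as np

def spai_fixed_pattern(A, pattern):
    """Norm-minimization sparse approximate inverse with a fixed pattern:
    for each column j, minimize ||A m_j - e_j||_2 with the nonzeros of
    m_j restricted to the row indices in pattern[j]."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        rows = pattern[j]                      # allowed nonzero positions in column j
        e_j = np.zeros(n)
        e_j[j] = 1.0
        # small dense least-squares problem over the selected columns of A
        m_sub, *_ = np.linalg.lstsq(A[:, rows], e_j, rcond=None)
        M[rows, j] = m_sub
    return M

# toy usage: take the pattern of A itself as the pattern of M
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
pattern = [np.nonzero(A[:, j])[0] for j in range(A.shape[0])]
M = spai_fixed_pattern(A, pattern)
```

Because each column is an independent least-squares problem, this construction parallelizes naturally, which is one reason SPAI-type methods are compared against factored approaches in these papers.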

### Table 3 Number of nonzeros in the approximate inverse

1997

"... In PAGE 9: ... This is exceptional because of the special near tridiagonal structure of A. Table3 shows the number of nonzeros for each preconditioner. The wavelet based preconditioner requires much less amount of memory than SPAI does.... ..."

Cited by 28

### Table 1: Relation between the sparsity patterns of the coefficient matrices and the approximate inverses. We use A and M to denote the sets of nonzero positions in the corresponding matrices.

2004

"... In PAGE 5: ... Therefore, the partitions on the coefficient matrices are expected to be effective for the preconditioners. To justify this reasoning, we show the relation between the sparsity patterns of the coefficient matrices and the approximate inverses in Table1 . As seen in the table, the relation be- tween the sparsity patterns of the coefficient and preconditioner matrices varies; 63% of the nonzeros of Zhao1-M are covered by the nonzeros of Zhao1-A, and only 18% of the nonzeros of mark3jac060-M are covered by the nonzeros of mark3jac060-A.... In PAGE 6: ... The maximum percent improvements are obtained for the mark3jac060 matrices in all cases. As seen in Table1 , the Zhao1 matrices have the highest number of common nonzeros, and the mark3jac060 matrices have the least number of common nonzeros. Although xenon1-A covers 61% of xenon1-M (second maximum), the percent improvements achieved for these matrices are quite satisfactory.... In PAGE 7: ... In order to show how the improvements obtained by the proposed method relate to parallel running times, we give the average communication patterns of the partitionings in Tables 13 and 14. As seen from Table1 1, the CR partitioning gives better speedup values than the RC partitioning for all matrices. On the average, CR obtains speedup values of 6.... In PAGE 18: ...Table1 0: Communication patterns for 64-way CC and RR composite and in- dividual hypergraph partitionings for SPAI-matrices witn single constraint. Individual partitioning Simultaneous partitioning Volume Message Volume Message Percent Matrix tot max tot max tot max tot max Gain CC Zhao1-A 11460 244 14.... In PAGE 19: ...Table1 1: Speedups for the BiCGStab method with SPAI-matrices. CR RC Matrix K Time Speedup Time Speedup Zhao1 1 113 1.... In PAGE 20: ...Table1 2: Speedups for the BiCGStab method with AINV-matrices. CRC RCR Matrix K Time Speedup Time Speedup Zhao1 1 133 1.... 
In PAGE 21: ...Table1 3: Communication patterns for 8- and 16-way CR simultaneous and individual partitionings for SPAI-matrices. Simultaneous partitioning Individual partitioning CR C/R Volume Message Volume Message Reorder Matrix tot max tot max tot max tot max Volume K = 8 Zhao1-A 4098 746 32.... In PAGE 23: ...Table1 5: Communication patterns for 8- and 16-way RCR simultaneous and individual partitionings for AINV-matrices. Simultaneous partitioning Individual partitioning RCR R/C/R Volume Message Volume Message Reorder Matrix tot max tot max tot max tot max Volume K = 8 Zhao1-A 4325 771 29.... In PAGE 24: ...Table1 6: Communication patterns for 8- and 16-way CRC simultaneous and individual partitionings for AINV-matrices. Simultaneous partitioning Individual partitioning RCR R/C/R Volume Message Volume Message Reorder Matrix tot max tot max tot max tot max Volume K = 8 Zhao1-A 5395 910 36.... In PAGE 25: ...Table1 7: Average CR partitioning times for the SPAI-matrices in seconds. K Matrix Individual partitioning Simultaneous partitioning Ratio A M 8 Zhao 2.... In PAGE 26: ...Table1 8: Average RC partitioning times for the SPAI-matrices in seconds. K Matrix Individual partitioning Simultaneous partitioning Ratio A M 8 Zhao 2.... In PAGE 27: ...Table1 9: Average CRC partitioning times for the AINV-matrices in seconds. K Matrix Individual partitioning Simultaneous partitioning Ratio A Z W 8 Zhao 2.... ..."
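The coverage percentages quoted in this excerpt (63% for Zhao1, 18% for mark3jac060) measure how much of the preconditioner's sparsity pattern lies on the coefficient matrix's pattern. A small sketch of that statistic on dense toy arrays (the paper works with large sparse matrices, so this is illustrative only):

```python
import numpy as np

def pattern_coverage(A, M):
    """Fraction of nonzero positions of the preconditioner M that are
    also nonzero positions of the coefficient matrix A -- the coverage
    statistic quoted in the excerpt."""
    pos_A = set(zip(*np.nonzero(A)))
    pos_M = list(zip(*np.nonzero(M)))
    return sum(p in pos_A for p in pos_M) / len(pos_M)

# toy check: M has two nonzeros, one of which lies on A's pattern
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
M = np.array([[3.0, 4.0],
              [0.0, 0.0]])
coverage = pattern_coverage(A, M)
```

High coverage means a partition computed on A can be reused for M with little extra communication, which is the motivation for the simultaneous partitionings in the tables above.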

### TABLE I APPROXIMATE INVERSES AND OVERALL TRANSFER FUNCTIONS FOR EACH APPROXIMATE INVERSE METHOD.

### TABLE II APPROXIMATE INVERSES AND OVERALL TRANSFER FUNCTIONS FOR THE FIRST-ORDER EXAMPLE FOR EACH APPROXIMATE INVERSE METHOD.

### Table 1: Relative mean time between "data inaccessible" failures

...a second failure (after a failure in a specific disk) would cause unavailability of data. Note that if the number of disks is very large this assumption may not hold and we may have to consider multiple failures with more than 2 failed disks, but in general this is a reasonable approximation. Table 1 shows the approximate mean time between "data inaccessible" failures relative to a constant K that can be computed as a function of the mean time to failure of a disk and the mean time to repair a disk and is the same for all of the systems. The time to data inaccessibility is inversely proportional to the number of disks for which a second failure would cause inaccessibility of some portion of data. 2.3 Decentralized disk scheduling: Figure 6 illustrates the major factor that we have to deal with in the clustered version of the object server. The number of clients is expected to be large (e.g., hundreds) and the number of storage nodes to be up to a few tens. Clients are not expected to directly access storage nodes for several reasons. One is that they should not have to be aware of the details of the replication and load balancing. Also there are potentially...

1998

"... In PAGE 10: ... The overappling clusters also has somewhat better probability of surviving a second disk failure without loss of data availability. 1 For reference Table1 shows the relative mean time between quot;data inaccessible quot; failures (failures where a fraction of data is unaccessible) for all cases. We assume that the mean repair time of disks is much smaller than the mean time between disk failures in the system.... ..."

Cited by 4
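The caption's model reduces to a single proportionality: mean time to data inaccessibility is K divided by the number of disks whose second failure would lose data. A minimal sketch, with K left abstract exactly as the caption leaves it (the function of MTTF and MTTR is not given in the excerpt):

```python
def relative_mttdi(k, n_critical):
    """Relative mean time between 'data inaccessible' failures under the
    model in the excerpt: proportional to a system-wide constant K (a
    function of disk MTTF and MTTR, left abstract here) and inversely
    proportional to n_critical, the number of disks whose second failure
    would make some portion of data inaccessible."""
    return k / n_critical

# e.g. a mirrored pair exposes 1 critical partner disk after a failure,
# while a fully declustered layout over 10 disks exposes 9
mirrored = relative_mttdi(1.0, 1)
declustered = relative_mttdi(1.0, 9)
```

This is why layouts that confine each disk's replicas to a small cluster survive longer on average than fully declustered ones, at the cost of less balanced rebuild load.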

### Table 8: WEST0067, Approximate pseudo-inverse, no dropping.

1994

"... In PAGE 31: ... One inner iteration was used, with a scaled identity initial guess to approximate the inverse of ATA + I. Table8 shows the results with no dropping. As clearly seen, the ultimate quality of the preconditioner decreases as increases.... ..."

Cited by 26
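One common way to realize an "inner iteration from a scaled identity initial guess" for inverting a shifted normal matrix is a Newton-Schulz step; the excerpt does not name the paper's actual scheme, so the following is only a plausible reading under that assumption:

```python
import numpy as np

def newton_schulz_inverse(B, iters=1):
    """A few Newton-Schulz steps toward B^{-1} starting from a scaled
    identity -- one plausible reading of the 'inner iteration with a
    scaled identity initial guess' in the excerpt; the paper's exact
    scheme is an assumption here, not reproduced from the source."""
    n = B.shape[0]
    X = np.eye(n) / np.linalg.norm(B, 1)     # scaled identity start (safe for SPD B)
    for _ in range(iters):
        X = X @ (2.0 * np.eye(n) - B @ X)    # X_{k+1} = X_k (2I - B X_k)
    return X

# pseudo-inverse-style preconditioner: M ~ (A^T A)^{-1} A^T
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
B = A.T @ A                                  # no shift in this toy example
X = newton_schulz_inverse(B, iters=10)
M = X @ A.T
```

With enough iterations on a well-conditioned B, X B approaches the identity; with only one iteration (as in the excerpt) the result is a cheap, rough approximation whose quality degrades as the shift grows, consistent with the behavior reported for Table 8.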

### Table 2: Results for approximate inverse preconditioners SPAI and AINV.

1998

"... In PAGE 5: ...015 0.964 In Table2 we present the results of test runs with Bi-CGSTAB preconditioned with the approximate inverse preconditioners SPAI and AINV. For each of the two methods we give the number of iterations for convergence (Its), the set-up time for the preconditioner (P-time), the time for the preconditioned iterations (It-time).... ..."

Cited by 14