### Table 1 Low-rank criteria (in the case of parallel solving of (I + G)s = t).

1995

"... In PAGE 8: ... Restrictions on m that are necessary to satisfy criterion (b) can be derived from the estimates of t_G given earlier in this section. These results for parallel factoring of I + G and parallel (I + G)-solve are summarized in Table 1. Table 2 presents analogously obtained results for sequential LU-factoring and (I + G)-solve. ..."

Cited by 1

### Table 1 Low-rank criteria when (I + G)s = t is solved in parallel.

1996

"... In PAGE 13: ... If n > p^3, which is a reasonable assumption for most parallel architectures and practical problem sizes, only the second values in the formulas (16, 18) with the min operator apply. Table 1 summarizes the order estimates for parallel factorization of, and solution with, I + G, and Table 2 presents results for the sequential case. Suppose that A is the matrix for a discretized PDE on a cubic domain divided into cubic subdomains, and QC (or QL + QR) is composed of the original off-diagonal blocks of A. ..."

Cited by 2

### Table 2 Low-rank criteria (in the case of sequential solving of (I + G)s = t).

1996

"... In PAGE 32: ... These results for parallel factoring of I + G and parallel (I + G)-solve are summarized in Table 1. Table 2 presents analogously obtained results for sequential LU-factoring and (I + G)-solve. If the symbol o in the restrictions on m (57, 58) is replaced with O, then instead of (53) we will have t_B = O(t_D + t_m). (59) ..."

Cited by 5

### Table 1 Low-rank criteria (in the case of parallel solving of (I + G)s = t).

1996

"... In PAGE 32: ... Table 1 Low-rank criteria (in the case of parallel solving of (I + G)s = t). These results for parallel factoring of I + G and parallel (I + G)-solve are summarized in Table 1. Table 2 presents analogously obtained results for sequential LU-factoring and (I + G)-solve. ..."

Cited by 5

### Table 1: Image reconstruction method summary showing the estimate error, E_n, for the low-rank, keyhole, and adaptive framework methods.

"... In PAGE 2: ... Table 1: Image reconstruction method summary showing the estimate error, E_n, for the low-rank, keyhole, and adaptive framework methods. Theoretically Optimal Inputs For each case given in Table 1, the minimization of E_n requires a subspace identification. Specifically, the minimization of E_n is analogous to the determination of the right singular vectors of a matrix, [1]. ... In PAGE 2: ... Thus, the optimal input vectors are not realizable. However, estimates formed from input vectors implied by Table 1 provide a theoretical bound on the estimate quality for a given image reconstruction method. Furthermore, the fact that A_n changes over the course of the sequence implies that for the best image estimate quality, the inputs X_n must change for each image as well. ..."
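The excerpt above says that minimizing the estimate error E_n reduces to a subspace identification, namely finding the right singular vectors of a data matrix. A minimal numpy sketch of that connection, under assumed data (the matrix `A`, its shape, and the rank `k` are hypothetical stand-ins, not values from the cited paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32))   # hypothetical data matrix (e.g. images as rows)
k = 5                               # number of input vectors / rank of the estimate

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Theoretically optimal rank-k inputs: the top-k right singular vectors of A.
X_opt = Vt[:k].T                    # shape (32, k), orthonormal columns

# Estimate formed from those inputs: project the rows of A onto span(X_opt).
A_est = A @ X_opt @ X_opt.T

# Eckart-Young: the residual equals the energy in the discarded singular
# values, so no other choice of k input vectors gives a smaller error.
err = np.linalg.norm(A - A_est, "fro")
assert np.isclose(err, np.linalg.norm(s[k:]))
```

Since A_n changes over the sequence, its singular vectors (and hence the optimal inputs X_n) change per image, which is the paper's point about non-realizable optimal inputs.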

### Table 4. Low Rank Approximation by Partition

2006

Cited by 5

### Table 2: Inexact Newton method. A^T K A is a low-rank downdate of A^T A.

2000

"... In PAGE 17: ... Outer will refer to inexact Newton (IRLS) iterations and inner to PCGLS iterations. The maximum number of downdates is set to 20 for all the results in Tables 2, 3, 4, and 5. Thus the actual number of downdates q is less than or equal to 20. ... In PAGE 18: ... The results in Table 2 for the Talwar function show that we terminate by the maximum number of PCGLS iterations allowed, t = 40, at all iterations, for stocfor2 when A^T A and A^T K A are used as preconditioners, and for maros when A^T A is used as a preconditioner. This suggests that we should increase q and/or t so that the linear system can be solved to high accuracy. ... In PAGE 18: ... This suggests that we should increase q and/or t so that the linear system can be solved to high accuracy. The results in Tables 2 and 3 suggest that the Fair function performs better than the other functions with the inexact Newton method. The results in Tables 4 and 5 show that on average the Fair function does better than the other functions with the IRLS method. ... In PAGE 18: ... The Logistic function does almost as well as the Fair function with the IRLS method. Comparing the results in Tables 2 and 3 with the results in Tables 4 and 5, we see that the Newton method converges faster than the IRLS method, and the low-rank downdates do not lead to a significant decrease in the inexact Newton (IRLS) iterations carried out (outer). Thus ..."

Cited by 4
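The entry above concerns solving weighted normal equations inside an inexact Newton / IRLS loop, using the unweighted A^T A as a preconditioner for the downdated matrix A^T K A. A self-contained sketch of that setup, under assumptions (the sizes, the zeroed weights, and the `pcg` helper are illustrative, not the cited code):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, q = 200, 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Hypothetical IRLS-style weights: q observations fully downweighted.
w = np.ones(m)
w[:q] = 0.0

# Preconditioner: A^T A, factored once and reused.
AtA = A.T @ A
L = np.linalg.cholesky(AtA)
precond = lambda v: np.linalg.solve(L.T, np.linalg.solve(L, v))

# A^T K A = A^T A - A_q^T A_q : a rank-q downdate of A^T A.
Aq = A[:q]
AtKA = AtA - Aq.T @ Aq
assert np.allclose(AtKA, (A * w[:, None]).T @ A)

def pcg(M_solve, H, rhs, tol=1e-10, maxit=200):
    """Preconditioned CG on H x = rhs (relative residual tolerance)."""
    x = np.zeros_like(rhs)
    r = rhs.copy()
    z = M_solve(r)
    p = z.copy()
    for it in range(1, maxit + 1):
        Hp = H @ p
        alpha = (r @ z) / (p @ Hp)
        x += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol * np.linalg.norm(rhs):
            return x, it
        z_new = M_solve(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxit

# Weighted normal equations, preconditioned by the unweighted A^T A.
rhs = A.T @ (w * b)
x, iters = pcg(precond, AtKA, rhs)
```

Because only q weights differ from 1, A^T K A stays close to A^T A, so the once-computed factorization of A^T A remains an effective preconditioner across IRLS iterations.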

### Table 2: Inexact Newton method. A^T K A is a low-rank downdate of A^T A.

"... In PAGE 15: ... Outer will refer to inexact Newton (IRLS) iterations and inner to PCGLS iterations. The maximum number of downdates is set to 20 for all the results in Tables 2, 3, 4, and 5. Thus the actual number of downdates q is less than or equal to 20. ... In PAGE 15: ... in Theorem 3.2. We will base our analysis on the values of outer and inner. The results in Table 2 for the Talwar function show that we terminate by the maximum number of PCGLS iterations allowed, t = 40, for stocfor2 when A^T A and A^T K A are used as preconditioners, and for maros when A^T A is used as a preconditioner. This suggests that we should increase q and/or t. ... In PAGE 15: ... This suggests that we should increase q and/or t. The results in Tables 2 and 3 suggest that the Fair function performs better than the other functions with the inexact Newton method. The results in Tables 4 and 5 show that on average the Fair function does better than the other functions with the IRLS method. ... In PAGE 15: ... The Logistic function does almost as well as the Fair function with the IRLS method. Comparing the results in Tables 2 and 3 with the results in Tables 4 and 5, we see that the Newton method converges faster than the IRLS method, and the low-rank downdates do not lead to a significant decrease in the inexact Newton (IRLS) iterations carried out (outer). Thus it is worthwhile to ..."

### Table 3: Number of iterations varying the dimension of the low-rank update with W = V".

2002

"... In PAGE 6: ... For the experiments shown in Table 2, we use a left preconditioner and the formulation described in Proposition 1, that is W^H = U^H" M_1. Similar results are displayed in Table 3 using the formulation described in Proposition 2, that is with W = V". In this latter case, the cost of the eigencomputation to set up the update is halved because only right eigenvectors need to be computed. ..."

Cited by 10

### Table 6: Number of CG iterations varying the dimension of the low-rank update.

2002

"... In PAGE 9: ... update presented in Proposition 5. As a preconditioner we use IC(t) [17]. We observe a similar improvement for SPD linear systems to what was seen in the previous section. This is illustrated in Table 6, where we show the number of CG iterations as we vary the dimension of the positive semi-definite update. To show that the improvement of the update is not too closely related to the quality of the initial preconditioner, we show, for BCSSTK27 and S1RMQ4M1, the number of iterations for two different thresholds for IC. ..."

Cited by 10