### Table 1: Image reconstruction method summary showing the estimate error, Eₙ, for the low-rank, keyhole, and adaptive framework methods.

"... In PAGE 2: ...Table 1: Image reconstruction method summary show- ing the estimate error, En, for the low-rank, keyhole, and adaptive framework methods. Theoretically Optimal Inputs For each case given in Table1 , the minimization of En requires a subspace identification. Specifically, the min- imization of En is analogous to the determination of the right singular vectors of a matrix, [1].... In PAGE 2: ... Thus, the optimal input vectors are not realizable. However, estimates formed from input vectors implied by Table1 provide a theoretical bound on the estimate quality for a given image reconstruction method. Furthermore, the fact that An changes over the course of the sequence implies that for the best image estimate quality, the inputs Xn must change for each image as well.... ..."

### Table 1 Low-rank criteria when (I + G)s = t is solved in parallel.

1996

"... In PAGE 13: ... If n gt; p3, which is a reasonable assumption for most parallel architectures and practical problem sizes, only the second values in the formulas (16,18) with the min operator apply. Table1 summarizes the order estimates for parallel factorization of and solu- tion with I + G, and Table 2 presents results for the sequential case. Suppose that A is the matrix for a discretized PDE on a cubic domain divided into cubic subdomains, and QC (or QL + QR) is composed of the original o - diagonal blocks of A.... ..."

Cited by 2

### TABLE I Image reconstruction method summary showing the estimate, Âₙ, and the estimate error, Eₙ, for the low-rank, keyhole, and adaptive framework methods.

### Table 4: Number of GMRES(5) iterations varying the threshold for a low-rank update of dimension 5 for the matrix ORSIRR1.

2002

"... In PAGE 7: ... Once this eigencomponent is removed by the rank-one update preconditioner both GMRES(40) and BiCGStab converge. To illustrate that the proposed updates should be used to improve an already e ective precon- ditioner, we report in Table4 the number of iterations when the threshold of ILU(t) is relaxed making the original preconditioner less and less e cient. We see that, in that case, the update will only improve the convergence up to a certain level above which it does not have any e ect.... ..."

Cited by 10
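The mechanism in the excerpt, removing a troublesome small eigencomponent with a low-rank update to the preconditioner, can be sketched in a few lines of numpy. The symmetric test matrix and the particular rank-one formula below are illustrative assumptions, not the cited paper's exact update:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Symmetric test matrix with one isolated tiny eigenvalue,
# the kind of eigencomponent that stalls Krylov solvers.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.linspace(1.0, 2.0, n)
evals[0] = 1e-4
A = Q @ np.diag(evals) @ Q.T

# Rank-one spectral update (sketch): using the offending eigenpair
# (lam, v), build an operator that shifts that eigenvalue to 1
# while leaving the rest of the spectrum untouched.
lam, v = evals[0], Q[:, 0]
Minv = np.eye(n) + (1.0 / lam - 1.0) * np.outer(v, v)

kappa_before = np.linalg.cond(A)
kappa_after = np.linalg.cond(Minv @ A)
# The deflated operator is far better conditioned.
assert kappa_after < kappa_before
```

Here the conditioning drops from roughly 2·10⁴ to about 2, which is why GMRES converges once the eigencomponent is removed; as the excerpt notes, though, such updates only pay off on top of an already effective base preconditioner.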

### Table 4. Low Rank Approximation by Partition

2006

Cited by 5

### Table 2: Inexact Newton method. AᵀKA is a low-rank downdate of AᵀA.

2000

"... In PAGE 17: ... Outer will refer to inexact Newton (IRLS) iterations and inner to PCGLS iterations. The maximum number of down- dates is set to 20 for all the results in Table2 , 3, 4, and 5. Thus the actual downdates q is less than or equal to 20.... In PAGE 18: ...3. in Table2 for the Talwar function show that we terminate by the maximum number of PCGLS iterations allowed, t = 40, at all iterations, for stocfor2 when AT A and AT KA are used as preconditioners, and for maros when AT A is used as a preconditioner. This suggests that we should increase q and or t so that the linear system can be solved to high accuracy.... In PAGE 18: ... This suggests that we should increase q and or t so that the linear system can be solved to high accuracy. The results in Table2 and 3 suggest that the Fair function is performing better than other functions on inexact Newton method. The results in Table 4 and 5 show that on average the Fair function is doing better than other functions on IRLS method.... In PAGE 18: ... The Logistic function is doing almost as good as the Fair function on IRLS method. Comparing the results in Table2 and 3 with the results in Table 4 and 5 we see that Newton method converges faster than IRLS method, and the low-rank downdates do not lead to a signi cant decrease in the inexact Newton (IRLS) iterations carried out (outer). Thus... ..."

Cited by 4
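The weight functions named in the excerpt (Fair, Talwar, Logistic) enter the IRLS loop as residual-dependent weights in a sequence of weighted least-squares solves. A self-contained sketch of IRLS with the Fair function, where the data, the tuning constant c, and the helper name are illustrative assumptions:

```python
import numpy as np

def irls_fair(A, b, c=1.3998, iters=20):
    """Robust least squares via IRLS with the Fair weight w(r) = 1/(1 + |r|/c)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # ordinary LS starting point
    for _ in range(iters):
        r = b - A @ x
        w = 1.0 / (1.0 + np.abs(r) / c)         # Fair weights: soft downweighting
        Aw = A * w[:, None]
        # Weighted normal equations: (A^T W A) x = A^T W b
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(200)
b[:5] += 50.0                                   # gross outliers

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
x_irls = irls_fair(A, b)
# IRLS with the Fair function resists the outliers; plain LS does not.
```

In the cited work the inner weighted solves are done inexactly with PCGLS and a low-rank downdated preconditioner; here a dense direct solve stands in for that inner step.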


### Table 3: Number of iterations varying the dimension of the low-rank update with W = V_ε.

2002

"... In PAGE 6: ... For the experiments shown in Table 2, we use a left preconditioner and the formulation described in Proposition 1 that is W H = UH quot; M1. Similar results are displayed in Table3 using the formulation described in Proposition 2 that is with W = V quot;. In this latter case, the cost for the eigencomputation to setup the update is halved because only right eigenvectors need to be computed.... ..."

Cited by 10