### Table 2: Simple SANs

"... In PAGE 15: ... Table 2: Simple SANs 4.1 Simple SANs Let us first analyse the part of Table 2 showing the comparative cost of each part of the code, according to the division given in Figure 3. Note that the cost of the dot product with the diagonal is small (less than 10%), and this will always be the case for all experiments. ... ..."

Cited by 9

### Table 3: Case Studies – Diagonally Grooved Bearing

"... In PAGE 13: ... The additional flow Q_p through the ungrooved portion (over the pad) is given approximately by Keller (1985) as Q_p = πRc³Δp / (6μd). (6.1) Evaluation of the nine case studies shown in Table 3 indicates that the predicted total flows, calculated from the finite element computations added to that of (6.1), are in good agreement with measured flows over a wide range of operating journal speeds and supply pressures. ... In PAGE 13: ... The discrepancies between the experiments and predictions are obviously affected by measurement uncertainties in fluid viscosity and supply pressure, in addition to the groove cross-sectional geometry, as well as the neglected inertial and entrance effects in the model. In our test setup, supply pressures over the range given in Table 3 have been measured to... In PAGE 16: ... The Muijderman results both over- and under-predict the flow rate, with a typical discrepancy of 24% for the cases considered, as shown in Table 4. For these error comparisons, the same cases are considered as in Table 3, and the simple prediction and the Muijderman predictions have the same computed pad flow as in Table 3 before comparing to experiments. 7 Concluding Remarks Muijderman's results do not adequately account for the groove aspect ratio and do not have the proper groove angle dependence. ... ..."
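The pad-flow estimate in (6.1) is straightforward to evaluate. A minimal sketch; the operating values below (radius, clearance, pressure drop, viscosity, land width) are illustrative assumptions, not values from the paper:

```python
from math import pi


def groove_pad_flow(R, c, dp, mu, d):
    """Keller (1985) estimate of the flow through the ungrooved (pad)
    portion of the bearing: Q_p = pi * R * c**3 * dp / (6 * mu * d)."""
    return pi * R * c**3 * dp / (6.0 * mu * d)


# Illustrative (made-up) values: 25 mm radius, 50 um radial clearance,
# 100 kPa pressure drop, oil viscosity 0.02 Pa.s, 30 mm pad land width.
Q = groove_pad_flow(R=0.025, c=50e-6, dp=1.0e5, mu=0.02, d=0.030)  # m^3/s
```

Note the strong c³ dependence: the measurement uncertainties in clearance and viscosity mentioned in the snippet propagate directly into the predicted flow.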

### Table 4: Estimated speed-ups for the symmetric solver. ... example combining nested dissection and minimum degree. We are experimenting with such reorderings and plan to incorporate some code for this, developed from the RALPAR partitioning package [17], within our analysis phase. This is also the topic of a collaboration with Roman and Pellegrini (LaBRI, Bordeaux) and will not be addressed further in this paper. However, we see that a significant speed-up increase is provided by parallelism of types 2 and 3. Note that our model is very simple and therefore optimistic. It is also interesting to notice that type 2 parallelism is better in the symmetric case than in the unsymmetric case. This is due to the fact that, in the symmetric case, the master process is only in charge of the diagonal block of fully summed variables, whereas in the unsymmetric case the master also computes the off-diagonal block (the block of U factors) of the frontal matrix.

2000

Cited by 67

### Table 4: Diagonally Grooved Bearing – Simplified Analysis

"... In PAGE 16: ... 4 for the shaft velocity typically over-predicts the flow rate by 17 percent. The Muijderman results both over- and under-predict the flow rate, with a typical discrepancy of 24% for the cases considered, as shown in Table 4. For these error comparisons, the same cases are considered as in Table 3, and the simple prediction and the Muijderman predictions have the same computed pad flow as in Table 3 before comparing to experiments. ... ..."

### Table 1 Convergence of GMRES(20) where A = 2D Laplacian. For PDE problems, sufficient decay of inverse entries does not necessarily happen. More likely to occur, however, is piecewise smoothness of the entries in the rows and columns of the matrix. The key observation is that if A corresponds to a differential operator of some elliptic PDE, the inverse would be the corresponding discrete Green's function. (See the appendix for a simple example illustrating this idea.) A similar observation in the case of solving integral equations can be found in [6], and in the case of hyperbolic and parabolic PDEs in [18]. Since A⁻¹ corresponds to the Green's function, we would expect piecewise smooth changes in the inverse entries, with a singularity along the diagonal. In other words, if we treat the inverse matrix as the graph of a function of two variables, then we will ...

1997

"... In PAGE 3: ... We also remark that even if A⁻¹ has decay away from the diagonal (e.g. A comes from the Laplace operator), the rate of decay may not be enough for the approximate inverse to have optimal convergence, in the sense that the number of iterations for convergence is independent of the mesh size. This is verified numerically in Table 1, where SPAI is the sparse approximate inverse given by Grote and Huckle's implementation [21]. The number in brackets is the maximum allowable size of the residual norm of each column. ... ..."

Cited by 28
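The Green's-function picture above is easy to check in a small case. A minimal sketch, assuming the standard 1-D Dirichlet Laplacian rather than the snippet's 2-D one (chosen here because its inverse has a simple closed form; the qualitative structure is the same): the inverse entries vary smoothly along each row, with a kink only at the diagonal, rather than decaying rapidly away from it.

```python
import numpy as np

n = 10
# 1-D discrete Laplacian with Dirichlet boundary conditions (tridiag 2, -1)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Kinv = np.linalg.inv(K)

# Known discrete Green's function (1-based indices i, j):
# (K^-1)[i, j] = min(i, j) * (n + 1 - max(i, j)) / (n + 1)
idx = np.arange(1, n + 1)
G = np.minimum.outer(idx, idx) * (n + 1 - np.maximum.outer(idx, idx)) / (n + 1)

# Inverse matches the Green's function: piecewise linear in each row,
# with a kink on the diagonal and no rapid off-diagonal decay.
assert np.allclose(Kinv, G)
```

Because the far-from-diagonal entries are small but not negligible, a sparse approximate inverse that keeps only near-diagonal entries can fail to give mesh-independent convergence, which is the point the snippet verifies in Table 1.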

### TABLE I STEPS TO OBTAIN THE DIAGONAL MIMO SYSTEM IN CASE OF CSI AT THE TRANSMITTER.

2004

Cited by 3

### Table 1: Steps to obtain the diagonal MIMO system in case of CSI at the transmitter.
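The steps behind these captions are the usual SVD-based diagonalization: with channel state information (CSI) at the transmitter, precoding with the right singular vectors and combining with the conjugate transpose of the left singular vectors turns the MIMO channel into parallel scalar subchannels. A minimal sketch; the 4×4 complex Gaussian channel below is an arbitrary stand-in, not a model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 4
# Random complex channel matrix, assumed perfectly known at the transmitter (CSI)
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))

U, s, Vh = np.linalg.svd(H)  # H = U @ diag(s) @ Vh
V = Vh.conj().T

# Transmitter precodes the symbol vector with V; receiver combines with U^H.
# The effective channel is then diagonal: nt parallel scalar subchannels.
H_eff = U.conj().T @ H @ V

assert np.allclose(H_eff, np.diag(s), atol=1e-10)
```

Each subchannel gain is a singular value of H, which is why water-filling power allocation over the sᵢ is the natural next step when CSI is available.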


### Table 2. Number of function/gradient evaluations when D is given by (19). The matrix D̂ given by (19) has only 12 distinct eigenvalues, and since the inner CG cycle is allowed to perform 20 steps it will completely solve the Newton equations (1) when = 0. Therefore this case may be too simple. In the third experiment we alter D further so that the inner CG iteration is not able to solve the Newton equations. We leave the 5 smallest and 6 largest eigenvalues as in (19), but now split the eigenvalue 1 into 89 eigenvalues contained in the interval [0.6, 9.4]. The new diagonal matrix, which we denote by D, is given by

"... In PAGE 10: ... Thus, if the new diagonal matrix is denoted by D̂ = diag(d̂_1, ..., d̂_n), we have d̂_i = d_i for i = 1, ..., 5; d̂_i = 1 for i = 6, ..., 94; and d̂_i = d_i for i = 95, ..., 100 (19), where D = diag(d_1, ..., d_n) is defined by (16). The results are given in Table 2 and are strikingly different. We have omitted the results for = 0, since they are the same as in Table 1. ... In PAGE 12: ... condition number of 125) L-BFGS and DINEMO are comparable. This is in contrast to Table 2, where DINEMO had a clear advantage in this case. Thus collecting incomplete information during the inner CG iteration is not as beneficial for the new method DINEMO. ..."

### Table 2. Number of function/gradient evaluations when D is given by (4.4). The matrix D̂ given by (4.4) has only 12 distinct eigenvalues, and since the inner CG cycle is allowed to perform 20 steps it will completely solve the Newton equations (1.1) when = 0. Therefore this case may be too simple. In the third experiment we alter D further so that the inner CG iteration is not able to solve the Newton equations. We leave the 5 smallest and 6 largest eigenvalues as in (4.4), but now split the eigenvalue 1 into 89 eigenvalues contained in the interval [0.6, 9.4]. The new diagonal matrix, which we denote by D, is given by

"... In PAGE 10: ... (1.1). The results are given in Table 2 and are strikingly different. We have omitted the results for = 0, since they are the same as in Table 1. ... In PAGE 12: ... indicates that only the stopping test (4.3) was met. Table 3 shows that when D is well conditioned ( = 0.05, which corresponds to a condition number of 125) L-BFGS and DINEMO are comparable. This is in contrast to Table 2, where DINEMO had a clear advantage in this case. Thus collecting incomplete information during the inner CG iteration is not as beneficial for the new method DINEMO. ..."
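The remark that a matrix with only 12 distinct eigenvalues is "too simple" for a 20-step inner CG cycle follows from a standard property: in exact arithmetic, CG terminates in at most as many iterations as the matrix has distinct eigenvalues. A minimal sketch with a hand-rolled CG on a diagonal test matrix (the eigenvalues below are illustrative, not the paper's (4.4)):

```python
import numpy as np


def cg(A, b, tol=1e-8, maxit=100):
    """Plain conjugate gradient for SPD A; returns (x, iterations used)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit


eigs = np.linspace(0.5, 10.0, 12)   # 12 distinct eigenvalues
A = np.diag(np.repeat(eigs, 5))     # n = 60, but still only 12 distinct eigenvalues
b = np.ones(60)
x, iters = cg(A, b)                 # converges in at most 12 iterations
```

Splitting one repeated eigenvalue into 89 values spread over [0.6, 9.4], as the experiment does, destroys this clustering, so a 20-step inner CG can no longer solve the Newton system exactly.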