### Table IV. The prediction accuracy of the five methods (columns: Benchmark, Single Model, Multi-model, Max.)

### Table 3: Accuracy for SP with small-size inputs (columns: largest training input, testing input, single-model, single-model, multi-model, multi-model)

2003

"... In PAGE 8: ... But these methods still require that the percentage of each model remain unchanged across different inputs for each reuse-distance range. Table 3 shows the performance of the four methods on small-size inputs of the SP benchmark (we do not show results for the multi-model method using reference histograms because it is difficult to tune). The results show that the multi-model log-linear scale method is significantly more accurate than the other methods. ... ..."

Cited by 11
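The snippet above is about predicting reuse-distance histograms across input sizes by fitting, per log-scale distance bin, how reuse distance grows with data size. A minimal sketch of that fitting step, assuming three training sizes and a candidate set of scaling patterns (constant, square-root, linear) that are illustrative stand-ins for the paper's model mix, not its exact formulation:

```python
import math

def fit_scaling(sizes, dists):
    """Fit d(s) = a + b*g(s) for each candidate pattern g by ordinary least
    squares over the training runs; return the best-fitting pattern and a
    predictor usable for a larger, unseen input size."""
    candidates = {
        "const": lambda s: 0.0,      # distance independent of input size
        "sqrt": math.sqrt,           # distance grows with sqrt of size
        "linear": lambda s: float(s) # distance grows linearly with size
    }
    best = None
    for name, g in candidates.items():
        xs = [g(s) for s in sizes]
        n = len(xs)
        mx, md = sum(xs) / n, sum(dists) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = 0.0 if sxx == 0 else sum(
            (x - mx) * (d - md) for x, d in zip(xs, dists)) / sxx
        a = md - b * mx
        sse = sum((a + b * x - d) ** 2 for x, d in zip(xs, dists))
        if best is None or sse < best[0]:
            best = (sse, name, a, b, g)
    _, name, a, b, g = best
    return name, lambda s: a + b * g(s)

# Training bins whose average reuse distance follows 2 + 0.5*sqrt(size).
sizes = [1000, 4000, 9000]
dists = [2 + 0.5 * math.sqrt(s) for s in sizes]
pattern, predict = fit_scaling(sizes, dists)
print(pattern, predict(16000))
```

With only the per-bin scaling choice shown, this omits the log-linear binning of the histogram itself; the point is the extrapolation from small training inputs to a larger testing input that the table evaluates.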


### Table V. Accuracy for SP with small-size inputs (columns: largest testing, single-model, single-model, multi-model, multi-model)

### Table 3 Performance of the multi-model preconditioner with a single variable V-cycle step for the diffusion problem. GMRES steps for reduction of the residual by 10⁻⁵ for constant coefficients.

"... In PAGE 13: ... Still, it seems advisable to study the preconditioner obtained by replacing the exact solution with just a single variable V-cycle step. The results in Table 3 show that the number of iteration steps increases only moderately with this simplification. Since the three preconditioners, , exact MMP and V-cycle MMP require dif-... ..."

### Table 5 Performance of the multi-model preconditioner with a single variable V-cycle step for the diffusion problem. GMRES steps for reduction of the residual by 10⁻⁵ for non-constant coefficients. diffusion problem and thus to a smaller number of iteration steps.

"... In PAGE 14: ... A value of = 10 proved sufficient in our experiments. Table 5 shows iteration counts for the GMRES method preconditioned by the variable V-cycle MMP for two cutoff values. As expected, a smaller cutoff value leads to a better approximation by the... ..."
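These tables measure Krylov iterations needed to cut the residual by 10⁻⁵ under a multi-model preconditioner. A hedged sketch of that measurement loop: the code below uses a 1-D constant-coefficient diffusion (Poisson) matrix and a Jacobi preconditioner as stand-ins for the paper's problem and its V-cycle MMP, and preconditioned conjugate gradients in place of GMRES (the stand-in matrix is symmetric positive definite); only the stopping rule, a 10⁻⁵ relative-residual reduction, mirrors the tables.

```python
import numpy as np

def pcg_iterations(A, b, apply_Minv, rtol=1e-5, maxit=500):
    """Preconditioned CG; return (steps, x) once ||r|| <= rtol * ||b||."""
    x = np.zeros_like(b)
    r = b.copy()                 # residual b - A@x with x = 0
    z = apply_Minv(r)            # preconditioned residual
    p = z.copy()
    rz = r @ z
    r0 = np.linalg.norm(b)
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= rtol * r0:
            return k, x
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return maxit, x

# 1-D diffusion stencil [-1, 2, -1] (constant coefficients), n interior points.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
jacobi = lambda r: r / np.diag(A)   # illustrative stand-in for the V-cycle MMP
steps, x = pcg_iterations(A, b, jacobi)
print(steps)
```

Comparing `steps` across preconditioners, as the tables do, is what makes the "increases only moderately" claim quantitative.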

### Table 1: The r.m.s.e. is the root mean square error when predicting v̇ based on u and v; σₑ is the standard deviation of this error. (columns: Full set, Off-equil., On-equil.)

1999

"... In PAGE 5: ... Using global least squares, the prediction performance of the blended multi-model can be improved, mainly due to reduced bias because it is an unbiased identification algorithm, cf. Table 1. In this example we found it difficult to reduce the bias of the multiple model structure without decreasing the overall accuracy due to increased variance. ... ..."

Cited by 18
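The snippet above credits global least squares with reducing the bias of a blended multi-model. A minimal sketch of that identification step, with two hypothetical local linear models blended by normalized Gaussian validity weights; the weights, regime centers, and parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 200)
v = rng.uniform(-1.0, 1.0, 200)

# Normalized Gaussian validity weights for two operating regimes in v.
w1 = np.exp(-((v + 0.5) ** 2) / 0.1)
w2 = np.exp(-((v - 0.5) ** 2) / 0.1)
s = w1 + w2
w1, w2 = w1 / s, w2 / s

# Blended model: vdot = w1*(a1*u + b1*v + c1) + w2*(a2*u + b2*v + c2).
# Global least squares fits all six parameters in a single regression.
Phi = np.column_stack([w1 * u, w1 * v, w1, w2 * u, w2 * v, w2])
theta_true = np.array([0.5, -1.0, 0.2, 0.3, 1.0, -0.1])
vdot = Phi @ theta_true          # synthetic, noise-free measurements of vdot

theta, *_ = np.linalg.lstsq(Phi, vdot, rcond=None)
print(np.max(np.abs(theta - theta_true)))
```

Because every parameter enters one regression against the measured v̇, data generated by the blended structure is recovered without bias; fitting each local model separately on locally weighted data is what tends to introduce the bias the snippet describes when the regimes overlap.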
