### Table 3: Minimum, maximum, L1 and relative execution time for the solution of the one-dimensional advective-diffusion equation of a step profile with Pe = 50 using various schemes.

"... In PAGE 16: ... Therefore, only the L1 norm is provided as a measure of the performance of a numerical scheme when Pe ≠ 1. These have been provided in Table 2 for Pe = 5 and Table 3 for Pe = 50. Although most of the schemes produced results that were indistinguishable from the analytical solution for Pe = 5, this was not the situation for the solution of the advective-diffusion equation for Pe = 50.... ..."

### Table 2: Minimum, maximum, L1 and relative execution time for various schemes for the solution of the one-dimensional advective-diffusion equation of a step profile with Pe = 5.

"... In PAGE 16: ... Therefore, only the L1 norm is provided as a measure of the performance of a numerical scheme when Pe ≠ 1. These have been provided in Table 2 for Pe = 5 and Table 3 for Pe = 50. Although most of the schemes produced results that were indistinguishable from the analytical solution for Pe = 5, this was not the situation for the solution of the advective-diffusion equation for Pe = 50.... In PAGE 16: ... Some of these schemes introduce non-physical results. This is evident from the L1 norm given in Table 2 and Table 3. When Pe = 5 the concentration profile is relatively smooth.... ..."
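The diagnostics named in these captions can be sketched as follows (a minimal version assuming a uniform grid and a discrete L1 norm; `solution_metrics` and the toy overshoot profile are illustrative, not the paper's):

```python
import numpy as np

def solution_metrics(c_num, c_exact, dx):
    """Report the diagnostics used in Tables 1-3: the minimum and maximum
    of the computed profile (values outside the exact range signal
    non-physical oscillations) and the discrete L1 norm of the error."""
    return {
        "c_min": float(c_num.min()),
        "c_max": float(c_num.max()),
        "L1": float(np.sum(np.abs(c_num - c_exact)) * dx),
    }

# Toy check: a step profile with a small spurious overshoot near the jump.
x = np.linspace(0.0, 1.0, 101)
c_exact = np.where(x < 0.5, 1.0, 0.0)
c_num = c_exact + 0.05 * np.exp(-((x - 0.5) / 0.02) ** 2)
m = solution_metrics(c_num, c_exact, x[1] - x[0])
```

Here the overshoot shows up directly as `c_max` exceeding 1, which is the kind of spurious extremum the tables are screening for.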

### Table 1: Minimum, maximum, L1 and relative execution time for various schemes for the solution of the one-dimensional advective-diffusion equation of a step profile with Pe = 1.

"... In PAGE 12: ... The minimum cmin, maximum cmax and L1 norm, defined by (14), were evaluated for all the numerical schemes used in the first example. These are given in Table 1 along with the relative computational time required by each scheme. Only numerical schemes which use some form of flux or slope limiter avoid the generation of spurious extrema.... ..."
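The role of the limiter in that last sentence can be sketched on pure linear advection (a minimal MUSCL-type scheme with the minmod limiter; the discretization and names here are illustrative, not the paper's exact schemes):

```python
import numpy as np

def minmod(a, b):
    # Minmod slope limiter: zero at extrema, smallest-magnitude slope otherwise.
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_muscl_minmod(c, nu, steps):
    """One-dimensional linear advection (velocity > 0) on a periodic grid
    using a MUSCL reconstruction with the minmod limiter, nu = u*dt/dx <= 1.
    The limiter is what keeps the scheme free of spurious extrema."""
    for _ in range(steps):
        dl = c - np.roll(c, 1)           # backward differences
        dr = np.roll(c, -1) - c          # forward differences
        s = minmod(dl, dr)               # limited slope in each cell
        face = c + 0.5 * (1.0 - nu) * s  # value at the right face of each cell
        c = c - nu * (face - np.roll(face, 1))
    return c

x = np.linspace(0.0, 1.0, 200, endpoint=False)
c0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)
c1 = advect_muscl_minmod(c0.copy(), nu=0.5, steps=100)
```

After 100 steps the advected step remains inside the exact range [0, 1] (to round-off) and total mass is conserved, whereas an unlimited second-order scheme would overshoot at the discontinuity.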

### Table 7: Average number of CG iterations on the fine grid for the oscillatory coefficient problem, ε = 0.1, 0.01.

"... Example 5: We show by a one-dimensional Helmholtz equation that the energy minimization principle is not restricted to positive definite second-order elliptic PDEs. The model equation is −u″ − σu = 1, (24) where σ is a positive constant. This operator is indefinite. We use multigrid to solve the linear system A_h u_h = f_h. For this problem, we obtained the coarse-grid basis functions from solving the local PDEs (7), not from the minimization problem (12), since constant functions are not in the ..."
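The indefiniteness claim is easy to check numerically (my notation, assuming the model problem is −u″ − σu = 1 with homogeneous Dirichlet conditions; the value of σ and the grid size are illustrative):

```python
import numpy as np

# Standard second-order discretization of -u'' on (0, 1) with Dirichlet
# conditions: A_h = (1/h^2) tridiag(-1, 2, -1). Its eigenvalues are
# (4/h^2) sin^2(k*pi*h/2), all positive. Subtracting sigma*I shifts them
# down, so once sigma exceeds the smallest Laplacian eigenvalue (~pi^2)
# the matrix has eigenvalues of both signs -- it is indefinite, which is
# why standard multigrid needs the modifications discussed in the text.
n = 63
h = 1.0 / (n + 1)
main = 2.0 / h**2 * np.ones(n)
off = -1.0 / h**2 * np.ones(n - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

sigma = 200.0  # well above pi^2 ~ 9.87, so indefiniteness is guaranteed
eigs = np.linalg.eigvalsh(A - sigma * np.eye(n))
```

The spectrum straddles zero: at least one eigenvalue is negative and the bulk remain positive, so the discrete operator is neither positive nor negative definite.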

1998

Cited by 28

### Table 1: Mean (over 50 data samples) and 95% confidence interval for standardized MSE for the five methods on the three test functions with one-dimensional input. Method Function 1 Function 2 Function 3

2004

"... In PAGE 6: ... For the non-Bayesian neural network, 10, 50, and 3 hidden units were optimal for the three datasets, respectively. Table 1 shows that the nonstationary GP does as well or better than the stationary GP, but that BARS does as well or better than the other methods on all three datasets with one input. Part of the difficulty for the nonstationary GP with the third function, which has the sharp jump, is that our parameterization forces smoothly-varying kernel matrices, which prevents our particular implementation from picking up sharp jumps.... ..."
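The table's entries can be sketched as follows (my reading of "standardized MSE" as MSE divided by the target variance, with a normal-approximation 95% interval over the 50 data samples; the paper's exact definitions may differ):

```python
import numpy as np

def standardized_mse(y_true, y_pred):
    """MSE scaled by the variance of the targets, so a value of 1.0
    corresponds to simply predicting the mean (assumed normalization)."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def mean_and_ci(values, z=1.96):
    """Mean and normal-approximation 95% confidence interval over
    repeated data samples, as in the table's 'mean over 50 data samples'."""
    v = np.asarray(values, dtype=float)
    half = z * v.std(ddof=1) / np.sqrt(v.size)
    return v.mean(), (v.mean() - half, v.mean() + half)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 3.0, 100)
scores = []
for _ in range(50):                       # 50 simulated data samples
    y = np.sin(grid) + 0.1 * rng.standard_normal(grid.size)
    scores.append(standardized_mse(y, np.sin(grid)))  # a "perfect" predictor
m, (lo, hi) = mean_and_ci(scores)
```

With a good predictor the standardized MSE sits well below 1; the interval width reflects sampling variability across the 50 replicate datasets.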

Cited by 4

### Table 2. Error rates for the quadratic and linear classifiers in the one-dimensional space, where the transformed data has been obtained using the FDA, LD and RH methods.

"... In PAGE 6: ... For the linear classifier, again, LD and RH outperformed FDA, and also RH achieved the lowest error rate in six out of ten datasets, outperforming LD. In Table 2, the results for the dimensionality reduction and classification for dimension d = 1 are shown. For the quadratic classifier, we observe that, as in the previous case, LD and RH outperformed FDA, and that the latter did not obtain the lowest error rate in any of the datasets.... ..."
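As a sketch of the FDA baseline in this comparison (Fisher's discriminant direction followed by a midpoint-threshold linear rule on the one-dimensional projection; the synthetic data and helper names are illustrative, and the LD and RH criteria are not implemented here):

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher's discriminant direction w = Sw^{-1} (m1 - m0) for two
    classes; projecting onto w gives the d = 1 representation that the
    FDA column of Table 2 evaluates."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)   # pooled scatter matrix
    return np.linalg.solve(Sw, m1 - m0)

def linear_error_rate(X0, X1, w):
    # Linear classifier in the 1-D space: threshold at the midpoint
    # of the projected class means.
    z0, z1 = X0 @ w, X1 @ w
    t = 0.5 * (z0.mean() + z1.mean())
    errors = np.sum(z0 > t) + np.sum(z1 <= t)
    return errors / (len(z0) + len(z1))

rng = np.random.default_rng(1)
X0 = rng.standard_normal((200, 3))
X1 = rng.standard_normal((200, 3)) + np.array([2.0, 1.0, 0.0])
w = fisher_direction(X0, X1)
err = linear_error_rate(X0, X1, w)
```

For two well-separated Gaussian classes the resubstitution error rate is well under chance; the table compares exactly this kind of error rate across FDA, LD, and RH projections.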

### Table 2: Accuracy results for the one-dimensional Laplace equation with ghost cells defined by constant extrapolation.

Cited by 5

### Table 3: Accuracy results for the one-dimensional Laplace equation with ghost cells defined by linear extrapolation.

Cited by 5