### Table 3: Minimum, maximum, L1 norm and relative execution time for the solution of the one-dimensional advective-diffusion equation of a step profile with Pe = 50 using various schemes.

"... In PAGE 16: ... Therefore, only the L1 norm is provided as a measure of the performance of a numerical scheme when Pe ≠ 1. These have been provided in Table 2 for Pe = 5 and Table 3 for Pe = 50. Although most of the schemes produced results that were indistinguishable from the analytical solution for Pe = 5, this was not the situation for the solution of the advective-diffusion equation for Pe = 50.... ..."

### Table 2: Minimum, maximum, L1 norm and relative execution time for various schemes for the solution of the one-dimensional advective-diffusion equation of a step profile with Pe = 5.

"... In PAGE 16: ... Therefore, only the L1 norm is provided as a measure of the performance of a numerical scheme when Pe ≠ 1. These have been provided in Table 2 for Pe = 5 and Table 3 for Pe = 50. Although most of the schemes produced results that were indistinguishable from the analytical solution for Pe = 5, this was not the situation for the solution of the advective-diffusion equation for Pe = 50.... In PAGE 16: ... Some of these schemes introduce non-physical results. This is evident from the L1 norm given in Table 2 and Table 3. When Pe = 5 the concentration profile is relatively smooth.... ..."
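The excerpts above rank schemes by an L1 error norm at different Péclet numbers. As a hedged illustration only (the paper's equation (14) and its exact test problem are not reproduced in these snippets), the sketch below evaluates a discrete L1 norm against the exact solution of the steady one-dimensional advection-diffusion boundary-value problem, whose boundary layer sharpens as Pe grows; all function names here are my own.

```python
import numpy as np

def exact_solution(x, Pe):
    # Exact solution of c'' = Pe * c' on [0, 1] with c(0) = 0, c(1) = 1:
    # c(x) = (exp(Pe*x) - 1) / (exp(Pe) - 1); the boundary layer at x = 1
    # sharpens as Pe grows.
    return np.expm1(Pe * np.asarray(x, dtype=float)) / np.expm1(Pe)

def l1_norm(c_num, c_exact, dx):
    # Discrete L1 norm of the error (a stand-in for the paper's equation (14))
    return np.sum(np.abs(c_num - c_exact)) * dx

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]

errs = {}
for Pe in (5.0, 50.0):
    # Stand-in "numerical" result: the exact solution sampled on a coarse
    # grid and linearly interpolated back, mimicking discretisation error.
    xc = np.linspace(0.0, 1.0, 21)
    c_num = np.interp(x, xc, exact_solution(xc, Pe))
    errs[Pe] = l1_norm(c_num, exact_solution(x, Pe), dx)
    print(f"Pe = {Pe:5.1f}: L1 error = {errs[Pe]:.2e}")
```

Consistent with the excerpt, the same resolution that handles Pe = 5 well produces a noticeably larger L1 error at Pe = 50, where the profile is much steeper.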

### Table 1: Minimum, maximum, L1 norm and relative execution time for various schemes for the solution of the one-dimensional advective-diffusion equation of a step profile with Pe = 1.

"... In PAGE 12: ... The minimum cmin, maximum cmax and L1 norm, defined by (14), were evaluated for all the numerical schemes used in the first example. These are given in Table 1 along with the relative computational time required by each scheme. Only numerical schemes which use some form of flux or slope limiter avoid the generation of spurious extrema.... ..."
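The excerpt observes that only limiter-based schemes avoid spurious extrema. A minimal sketch of that idea, assuming a MUSCL-type reconstruction with the standard minmod slope limiter (not necessarily one of the paper's schemes): advecting a step profile for one full period keeps the solution within its initial bounds [0, 1], so cmin and cmax report no non-physical over- or undershoot.

```python
import numpy as np

def minmod(a, b):
    # Minmod limiter: smaller-magnitude slope if signs agree, else zero
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_muscl(c, nu, steps):
    # Periodic 1D linear advection (velocity > 0) with CFL number nu in (0, 1]
    for _ in range(steps):
        dl = c - np.roll(c, 1)          # backward difference
        dr = np.roll(c, -1) - c         # forward difference
        s = minmod(dl, dr)              # limited slope
        # Upwind face value; second-order accurate in smooth regions
        face = c + 0.5 * (1.0 - nu) * s
        c = c - nu * (face - np.roll(face, 1))
    return c

n = 100
c0 = np.where((np.arange(n) >= 25) & (np.arange(n) < 50), 1.0, 0.0)
nu = 0.5
c = advect_muscl(c0.copy(), nu, steps=int(n / nu))  # one full period

print("cmin =", c.min(), "cmax =", c.max())
```

Because the scheme is written in conservative flux form, total mass is also preserved to round-off; an unlimited second-order scheme run on the same step would instead produce cmin < 0 and cmax > 1.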

### Table 1: Mean (over 50 data samples) and 95% confidence interval for standardized MSE for the five methods on the three test functions with one-dimensional input. (Columns: Method, Function 1, Function 2, Function 3)

2004

"... In PAGE 6: ... For the non-Bayesian neural network, 10, 50, and 3 hidden units were optimal for the three datasets, respectively. Table 1 shows that the nonstationary GP does as well or better than the stationary GP, but that BARS does as well or better than the other methods on all three datasets with one input. Part of the difficulty for the nonstationary GP with the third function, which has the sharp jump, is that our parameterization forces smoothly-varying kernel matrices, which prevents our particular implementation from picking up sharp jumps.... ..."

Cited by 4
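The excerpt does not define "standardized MSE"; a common convention, assumed in this sketch, divides the MSE by the variance of the test targets, so that always predicting the target mean scores 1.0, and reports a mean with a normal-approximation 95% interval over repeated data samples (the paper averages over 50). All names here are my own.

```python
import numpy as np

def standardized_mse(y_true, y_pred):
    # MSE normalised by the target variance (assumed convention; the
    # paper may standardise slightly differently).
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def mean_and_ci95(scores):
    # Mean and normal-approximation 95% confidence interval over samples
    scores = np.asarray(scores, dtype=float)
    m = scores.mean()
    half = 1.96 * scores.std(ddof=1) / np.sqrt(scores.size)
    return m, (m - half, m + half)

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0.0, 3.0, 200))

# Predicting the mean scores exactly 1.0 under this standardisation
baseline = standardized_mse(y, np.full_like(y, y.mean()))

# 50 noisy "predictions" stand in for 50 repeated data samples
scores = [standardized_mse(y, y + rng.normal(0.0, 0.1, y.size))
          for _ in range(50)]
m, (lo, hi) = mean_and_ci95(scores)
print(f"baseline = {baseline:.3f}, mean SMSE = {m:.3f}, "
      f"95% CI = ({lo:.3f}, {hi:.3f})")
```

A score well below 1.0, as in the table's better methods, means the model explains most of the target variance.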

### Table 2: Accuracy results for the one dimensional Laplace equation with ghost cells defined by constant extrapolation.

Cited by 5

### Table 3: Accuracy results for the one dimensional Laplace equation with ghost cells defined by linear extrapolation.

Cited by 5

### Table 5: Accuracy results for the one dimensional Laplace equation with ghost cells defined by cubic extrapolation.

Cited by 5
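The captions above compare ghost-cell extrapolations of increasing order. A hedged sketch of the first two (constant and linear), assuming a Gibou-Fedkiw-style setup in which a Dirichlet boundary value u_b sits a fraction theta (0 < theta ≤ 1) of a cell beyond the last interior node; the paper's exact formulas may differ, and the function names are my own.

```python
def ghost_constant(u_i, u_b, theta):
    # Constant extrapolation: the ghost node simply takes the boundary
    # value, giving first-order accuracy.
    return u_b

def ghost_linear(u_i, u_b, theta):
    # Linear extrapolation through the interior node (distance 0) and the
    # boundary point (distance theta*dx) out to the ghost node (distance dx):
    # exact for linear profiles, hence second-order accuracy.
    return u_i + (u_b - u_i) / theta

# Sanity check against the exact linear profile u(x) = 2x + 1 with dx = 1:
# interior node at x = 0, boundary at x = theta, ghost node at x = 1.
theta = 0.3
u_i = 1.0            # u(0)
u_b = 2 * theta + 1  # u(theta) = 1.6
print(ghost_constant(u_i, u_b, theta))  # boundary value, not the true u(1)
print(ghost_linear(u_i, u_b, theta))    # recovers the true u(1) = 3.0
```

Cubic extrapolation (Table 5) would fit additional interior nodes in the same spirit to gain further orders of accuracy.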

### Table 2: The one-dimensional categorisation of Stacheldraht

"... In PAGE 145: ... Often it is regarded as only attacking the host computer, the computer it is residing on, but this definition is far from being generally accepted. The new proposed definition, which can be seen in Table 2, is meant to be rather general. There are several categories which are marked with wildcard that might be used to narrow the definition a bit.... In PAGE 148: ... Table 2: The redefined term virus

| Category | Alternative | 0/1/wildcard |
|---|---|---|
| Type | atomic | wildcard |
| Type | combined | wildcard |
| Violates | confidentiality | wildcard |
| Violates | integrity; parasitic | 1 |
| Violates | integrity; non-parasitic | wildcard |
| Violates | availability | wildcard |
| Dur. of effect | temporary | wildcard |

Dur.... ..."