### Table 4. Comparison of the Least Squared Errors

"... In PAGE 17: ...4 Improving the Approximations As enhancements to the RCEM approximation capability, the Artificial Neural Networks (ANN) and the combined RSM+ANN approach proposed in Section 3 are applied to the responses for which the RSM is not sufficient. A comparison of the accuracy of models created by different methods is provided in Table 4. The least squared error (LSE), an overall average variation of the estimated values from the actual values, is the metric for this comparison.... In PAGE 17: ... It is observed that the LSEs are much smaller for the improved approximation models (either the ANN or the combined RSM+ANN model) than for the RSM. Insert Table 4. Comparison of the Least Squared Errors... In PAGE 22: ... The first issue is to verify whether the approximation models are enhanced using the ANN and the combined RSM+ANN techniques. The results shown in Table 4 and the graphs in Figure 7 prove that there is indeed an improvement in the approximation. Further verifications of this approach are done in this section.... In PAGE 30: ... Noise Parameters for the Engine Design Table 3. R-coefficients of the Response Surface Models Table 4. Comparison of the Least Squared Errors Table 5.... ..."
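The comparison metric described in the excerpt can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data values and the two model fits (labeled after the RSM and ANN models the excerpt compares) are made up.

```python
import numpy as np

def least_squared_error(predicted, actual):
    """Average squared deviation of the estimated values from the actual values."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.mean((predicted - actual) ** 2)

# hypothetical responses and two competing approximation models
actual = np.array([1.0, 2.0, 3.0, 4.0])
rsm_fit = np.array([1.2, 1.7, 3.4, 3.8])     # e.g. a response surface model
ann_fit = np.array([1.05, 1.98, 3.1, 3.95])  # e.g. a neural-network model

lse_rsm = least_squared_error(rsm_fit, actual)
lse_ann = least_squared_error(ann_fit, actual)
# the model with the smaller LSE approximates the responses better
```

Comparing the two LSE values side by side for each response is exactly the kind of tabulation Table 4 reports.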

### Table 2. Mean and standard deviation of the true errors of the least-squares adjustment.


"... In PAGE 3: ...s that of minimum norm, i.e., among the infinite m-dimensional vectors (m is the number of rows of the design matrix) satisfying the system of m equations, that with the smallest modulus is chosen. Table 2 shows the results of a typical sphere adjustment run on a set of simulated data. The individual differences used to compute the average values in Table 2 are the true errors, defined for each coordinate of star i as the true Schwarzschild azimuth minus the value derived from the results of the least-squares adjustment; the adjusted value is the least-squares estimate for star i plus the corresponding initial, catalogue, value used in the calculation of the coefficients of the linearized equations. The third column of Table 2 shows that there were few outliers that were removed before calculating the average errors. These are typically stars with a critically low number of connections.... In PAGE 3: ... On the other hand, because of the geometry of the arcs observed by GAIA, the error in one coordinate is expected to be larger than in the other. From the values in Table 2 we derive the... ..."
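The minimum-norm choice described above can be sketched with NumPy: for a rank-deficient system, `np.linalg.lstsq` (computed via the SVD) returns, among all least-squares solutions, the one with the smallest modulus. The tiny matrix below is illustrative only, standing in for the much larger GAIA design matrix.

```python
import numpy as np

# a rank-deficient "design matrix": column 2 duplicates column 1,
# so infinitely many vectors solve the normal equations
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
b = np.array([2.0, 2.0, 3.0])

x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
# among all exact solutions (e.g. [2, 0, 3] or [0, 2, 3]),
# lstsq picks the one of minimum norm: [1, 1, 3]
```

The rank deficiency here (rank 2 for 3 unknowns) is the same situation the excerpt describes: the system does not pin down a unique solution, so the minimum-norm one is chosen.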

### Table 1. √MSE and Bias of Power Estimates for Two-sample t-Statistic with Parametric Bootstrap Critical Values, O = 1000, I = 59

2000

"... In PAGE 12: ... We look at two (O, I) combinations, (O = 1000, I = 59) and (O = 596, I = 99), that have about 59000 computations each. Table 1 reports estimates of the root mean squared error (√MSE) and bias × 1000 of the various power estimates. The standard errors of the estimates are in the range .... In PAGE 12: ...002 for √MSE and around 2 for the bias × 1000. The first and seventh rows (p∞) of Table 1 give results for power estimates based on the true known t percentiles appropriate for normal data. They are labeled p∞ to reflect the fact that resampling with I approaching ∞ would give this result. These of course are unbiased (the nonzero bias results in Table 1 just reflect Monte Carlo variation), and here √MSE could have been calculated simply by √(power(1 − power)/O). For a given O, p∞ represents the best power estimates possible.... In PAGE 12: ... For these raw estimates the (O = 596, I = 99) situation is more efficient in terms of √MSE than (O = 1000, I = 59) for all but the 0.5 setting, because the bias is a large factor except at 0.5. The other estimators in Table 1 are 1. p̂_lin: the simple linear extrapolation method using (5) for the (O = 1000, I = 59) case and (6) for the (O = 596, I = 99) case.... In PAGE 13: ...a;bb) distribution. From Table 1 we see that the linear extrapolation estimators, p̂_lin and p̂_gls, perform the best and very similarly. Their similarity is likely due to the fact (not displayed) that the estimated covariance matrix of the p̂_I used as dependent variables in the regressions has nearly equal diagonal elements and nearly equal off-diagonal elements.... ..."
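The outer/inner Monte Carlo structure the excerpt describes (O outer datasets, I inner bootstrap resamples per dataset) can be sketched as follows. This is a hedged illustration, not the paper's simulation: the sample size, effect size, and the reduced O = 200 are arbitrary choices for speed; only I = 59 with the (I + 1)·α order-statistic critical value follows the setup named in the caption.

```python
import numpy as np

def t_stat(x, y):
    """Pooled-variance two-sample t-statistic."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))

def bootstrap_power(delta=1.0, n=10, O=200, I=59, alpha=0.05, seed=0):
    """Estimate power: O datasets under the alternative, each tested against
    a parametric-bootstrap critical value from I resamples under the null."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(O):
        x = rng.standard_normal(n) + delta
        y = rng.standard_normal(n)
        t_obs = t_stat(x, y)
        # parametric bootstrap under the null: both samples from one fitted normal
        pooled = np.concatenate([x, y])
        mu, sd = pooled.mean(), pooled.std(ddof=1)
        t_boot = np.array([
            t_stat(rng.standard_normal(n) * sd + mu,
                   rng.standard_normal(n) * sd + mu)
            for _ in range(I)
        ])
        # with I = 59 and alpha = 0.05, (I + 1) * alpha = 3, so the critical
        # value is the 3rd largest bootstrap statistic
        k = round((I + 1) * alpha)
        crit = np.sort(t_boot)[-k]
        rejections += t_obs > crit
    return rejections / O
```

Raising I sharpens each critical value (approaching the p∞ rows of Table 1), while raising O shrinks the Monte Carlo error of the power estimate itself; the paper's extrapolation estimators trade between the two at fixed total cost.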

Cited by 3

### Table 1 Convergence rates of the negative norm least-squares method.

### Table 1: Least Squares Monte Carlo for valuing American put options

"... In PAGE 5: ... The standard errors of the simulation estimates are reported as well. Table 1 below shows that we can obtain fairly accurate results using the LSM method. The difference between PDE and LSM is very small.... ..."
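The LSM (Longstaff–Schwartz) approach the excerpt refers to can be sketched as below. The parameters (S0 = 36, K = 40, r = 0.06, σ = 0.2, T = 1) are a standard benchmark case, not necessarily the one tabulated in the paper, and the quadratic regression basis is an illustrative choice.

```python
import numpy as np

def lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=20000, seed=0):
    """Least Squares Monte Carlo price of an American put on a GBM asset."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    # simulate geometric Brownian motion paths at t_1 .. t_n
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z,
                          axis=1)
    S = S0 * np.exp(log_paths)
    # cash flows initialised to the exercise value at maturity
    cash = np.maximum(K - S[:, -1], 0.0)
    # backward induction over the early-exercise dates
    for t in range(n_steps - 2, -1, -1):
        cash *= disc                       # discount one step back
        itm = K - S[:, t] > 0              # regress only in-the-money paths
        if itm.sum() < 10:
            continue
        x = S[itm, t]
        # regress discounted future cash flows on a quadratic in the price
        coeffs = np.polyfit(x, cash[itm], 2)
        continuation = np.polyval(coeffs, x)
        exercise = K - x
        ex_now = exercise > continuation   # exercise where immediate value wins
        idx = np.where(itm)[0][ex_now]
        cash[idx] = exercise[ex_now]
    return disc * cash.mean()              # discount from t_1 back to t_0
```

For these benchmark parameters the estimate lands near 4.5, above the European put value, reflecting the early-exercise premium; as the excerpt notes, such estimates track PDE solutions closely.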

### Table 4: L1 circle reconstruction error norms using the least squares method for the interface normal n.

1998

"... In PAGE 23: ...method is also second-order, as seen in Table 4. While the absolute errors in Table 4 are slightly larger than those of the Swartz method, the reconstruction solutions (Figure 13) are judged to be superior for both the circle and the square.... ..."
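A simplified stand-in for the technique in the caption: the paper fits the interface normal n by least squares from volume-fraction data, whereas this sketch fits a least-squares line through a small stencil of neighbouring points on a unit circle and measures the L1 norm of the normal error against the exact radial normal.

```python
import numpy as np

def ls_normal(points):
    """Least-squares line fit through 2-D points; returns the unit normal,
    i.e. the singular vector with the smallest singular value."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    return n / np.linalg.norm(n)

# sample a unit circle and estimate the normal at each point from a
# three-point stencil of neighbours
m = 200
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
errs = []
for i in range(m):
    stencil = circle[[(i - 1) % m, i, (i + 1) % m]]
    n = ls_normal(stencil)
    true_n = circle[i]              # radial direction on a unit circle
    if np.dot(n, true_n) < 0:       # resolve the sign ambiguity of the fit
        n = -n
    errs.append(np.abs(n - true_n).sum())
l1_error = np.mean(errs)
```

With a symmetric stencil on exact circle points the fit recovers the radial normal to machine precision; on a volume-fraction grid the error would instead shrink at the second-order rate the table documents.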

Cited by 22

### Table 1. Least squares estimates and 95% margins of error for the parameters of the bivariate AR(2) model (44).

2001

"... In PAGE 18: ... With the methods of the preceding sections, the AR(2) parameters were estimated from the simulated time series, the eigendecomposition of the estimated models was computed, and approximate 95% confidence intervals were constructed for the AR parameters and for the eigenmodes, periods, and damping times. Table 1 shows, for each AR parameter B_jk, the median of the least squares parameter estimates B̂_jk and the median of the margins of error belonging to the approximate 95% confidence intervals (41). Included in the table are the absolute values of the 2.... In PAGE 19: ...The simulation results in Table 1 show that the least squares estimates of the AR parameters are biased when the sample size is small [cf. Tjøstheim and Paulsen 1983; Mentz et al.... In PAGE 20: ... The function stands for a real part Re S_jk or an imaginary part Im S_jk of a component S_jk of an eigenmode, or for a period T_k or a damping time τ_k. As in Table 1, the symbols − and + refer to the absolute values of the 2.5th percentile and of the 97.... ..."
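The estimation step the excerpt describes, least-squares fitting of a bivariate AR(2) model to a simulated series, can be sketched as below. The coefficient matrices are illustrative, not the model (44) of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# a stable bivariate AR(2) model (illustrative coefficients)
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[-0.2, 0.0], [0.1, -0.1]])
n = 2000
x = np.zeros((n, 2))
for t in range(2, n):
    x[t] = A1 @ x[t - 1] + A2 @ x[t - 2] + 0.5 * rng.standard_normal(2)

# least-squares estimate: regress x_t on the stacked lags [x_{t-1}, x_{t-2}]
Y = x[2:]
X = np.hstack([x[1:-1], x[:-2]])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # B stacks [A1^T; A2^T]
A1_hat = B[:2].T
A2_hat = B[2:].T
```

Repeating this over many simulated series and taking medians of the estimates B̂_jk, and of the interval half-widths, is what produces a table like Table 1; for short series the estimates exhibit the small-sample bias the excerpt notes.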

Cited by 21

### Table I. Sensitivity Coefficients, Elemental Errors, and Total Uncertainty

Cited by 3


### Table 2. First results with least squares method for Example 4.1.

1996

"... In PAGE 16: ...able 2. First results with least squares method for Example 4.1. In Table 3 we have the corresponding results, when the quasi-Newton algorithm is initialized with the solution of the equation error method in Table 1. Moreover, the computation was terminated when the value of the least squares cost functional J_ls(b) agreed with the minimal value in Table 2 to four decimals (which is strictly less than the interpolation error O(h²)).

| Noise % |        | L1-error | L2-error | QN-iters | CG-iters |
|---------|--------|----------|----------|----------|----------|
| 0       | 1·10⁻⁶ | 3.8·10⁻² | 3.5·10⁻³ | 16       | 584      |
| 3       | 1·10⁻⁵ | 1.5·10⁻¹ | 2.7·10⁻² | 16       | 546      |
| 6       | 5·10⁻⁵ | 2.0·10⁻¹ | 3.7·10⁻² | 25       | 854      |
| 10      | 5·10⁻⁵ | 2.1·10⁻¹ | 4.5·10⁻² | 20       | 731      |
| 20      | 1·10⁻⁴ | 2.3·10⁻¹ | 6.1·10⁻² | 28       | 1119     |

Table 3.... ..."
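The structure of this experiment, minimizing a least-squares cost over a model parameter b with increasingly noisy data, can be sketched as follows. The forward model exp(−bx) is a hypothetical stand-in for the paper's coefficient-identification problem, and plain Gauss–Newton stands in for the quasi-Newton/CG scheme of the table.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
b_true = 2.0
u_true = np.exp(-b_true * x)   # noiseless synthetic data

def estimate_b(data, b0=1.0, iters=100):
    """Gauss-Newton iteration for the scalar least-squares problem
    min_b 0.5 * || exp(-b x) - data ||^2."""
    b = b0
    for _ in range(iters):
        r = np.exp(-b * x) - data      # residual vector
        J = -x * np.exp(-b * x)        # derivative of the model wrt b
        b -= (J @ r) / (J @ J)         # Gauss-Newton step
    return b

for noise in (0.0, 0.03, 0.06):
    data = u_true + noise * rng.standard_normal(x.size)
    b_hat = estimate_b(data)
    # the estimation error grows roughly with the noise level,
    # mirroring the growth of the error columns in the table above
```

With zero noise the iteration recovers b exactly (up to floating point), and the reconstruction error degrades gradually as the noise percentage rises, which is the qualitative pattern the table reports.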

Cited by 1