### Table 4. Comparison of the Least Squared Errors

"... In PAGE 17: ...4 Improving the Approximations As enhancements to the RCEM approximation capability, the Artificial Neural Networks (ANN) and the combined RSM+ANN approach proposed in Section 3 are applied to the responses for which the RSM is not sufficient. A comparison of the accuracy of models created by different methods is provided in Table 4. The least squared error (LSE), an overall average variation of the estimated values from the actual values, is the metric for this comparison.... In PAGE 17: ... It is observed that the LSEs are much lower for the improved approximation models (either the ANN or the combined RSM+ANN model) than for the RSM. Insert Table 4. Comparison of the Least Squared Errors... In PAGE 22: ... The first issue is to verify whether the approximation models are enhanced using the ANN and the combined RSM+ANN techniques. The results shown in Table 4 and the graphs in Figure 7 prove that there is indeed an improvement in the approximation. Further verifications of this approach are done in this section.... In PAGE 30: ... Noise Parameters for the Engine Design Table 3. R-coefficients of the Response Surface Models Table 4. Comparison of the Least Squared Errors Table 5.... ..."
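The least squared error used for these comparisons can be computed directly once a model's predictions are in hand. A minimal sketch in Python, assuming the LSE is the mean squared deviation of predicted from actual responses (the excerpt does not show the paper's exact normalization, and the data below are invented for illustration):

```python
def least_squared_error(actual, predicted):
    # Mean squared deviation of model predictions from actual responses.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical responses and two competing approximation models:
actual = [1.0, 2.1, 2.9, 4.2, 5.1]
rsm_pred = [1.3, 1.8, 3.4, 3.9, 5.6]       # coarser fit
rsm_ann_pred = [1.1, 2.0, 3.0, 4.1, 5.2]   # improved fit

lse_rsm = least_squared_error(actual, rsm_pred)
lse_combined = least_squared_error(actual, rsm_ann_pred)
```

A lower LSE means the model's estimates track the actual responses more closely, which is the sense in which the ANN and combined RSM+ANN models improve on the RSM.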

### Table 3: Summary of equations required for each recursive least-squares or constrained least-squares algorithm

2001

"... In PAGE 13: ... 1. Table 3 shows a summary of the equations required by each algorithm, and Table 4 shows the number of multiplies required for each equation found in Table 3. Table 4 is not absolutely accurate because the number of multiplies required for matrix inver-... In PAGE 13: ... 1. Table 3 shows a summary of the equations required by each algorithm, and Table 4 shows the number of multiplies required for each equation found in Table 3. Table 4 is not absolutely accurate because the number of multiplies required for matrix inver-... ..."
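The excerpt tallies equations and multiplies rather than stating them, but the standard recursive least-squares recursion such counts refer to can be sketched as follows. This is a generic textbook RLS filter in pure Python (forgetting factor `lam`, regularized initial inverse-correlation matrix), not any of the specific algorithm variants tabulated in the paper:

```python
import random

random.seed(0)

def rls_fit(samples, dim, lam=0.99, delta=100.0):
    # Recursive least-squares: update weights w and the inverse correlation
    # matrix P one observation at a time, with forgetting factor lam.
    w = [0.0] * dim
    P = [[delta if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for x, d in samples:
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(x[i] * Px[i] for i in range(dim))
        k = [v / denom for v in Px]                      # gain vector
        err = d - sum(w[i] * x[i] for i in range(dim))   # a priori error
        w = [w[i] + k[i] * err for i in range(dim)]
        # P <- (P - k x^T P) / lam   (rank-one update of the inverse)
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return w

true_w = [2.0, -1.0]
samples = []
for _ in range(500):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    d = true_w[0] * x[0] + true_w[1] * x[1] + random.gauss(0, 0.05)
    samples.append((x, d))
w = rls_fit(samples, 2)
```

Each iteration costs a handful of vector products and one rank-one matrix update; per-equation multiply counts of exactly this kind are what the paper's Table 4 tabulates.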

### Table 1: Operators Used in Least-Squares State Estimation

"... In PAGE 2: ... Depending on the operator in question, it can either be an integer, meaning operator multiplicity, or just a real parameter. Three examples of pseudodifferential operators used in the literature for designing state estimators are given in Table 1. The use of the differential operator P d is classical (e.... ..."

### Table 7. Least Squared Errors for RSM and the Combined RSM+ANN

"... In PAGE 22: ...pproximation. Further verifications of this approach are done in this section. The first issue is whether the combined RSM+ANN technique is better than the actual RSM. In Table 7, the least squared errors (LSEs) for the two models are compared for various responses from the P&W propulsion system design problem. The grid design for 121 experiments is used as the input to test the LSEs.... In PAGE 22: ... This makes the model more accurate, which is a good reason to use this technique. Insert Table 7. Least Squared Errors for RSM and the Combined RSM+ANN The second part of verification of the combined RSM+ANN model is to see how efficient this approach is compared to using the pure ANN models.... In PAGE 30: ... Engine Design Optimization Results Using Type I Robust Design Table 6. Optimization Results Using Type II Robust Design Table 7. Least Squared Errors for RSM and the Combined RSM+ANN Table 8.... ..."

### Table IV. Reduction in the residual error at the fine grid versus filter length

"... Table IV shows the reduction in the residual error at the fine grid versus filter length. The column labeled Normal in this table indicates the reduction in residual error for normal incidence angles. The column labeled Minimax indicates the minimum improvement at all angles, corresponding to the maximum residual error at all times. MATLAB parameters denotes the input parameters used to design the filters for specific lengths. As an example, the MATLAB command corresponding to an FIR filter of size 29 designed by least-squares is firls(28, [0 .2 .4 1], [1 1 0 0]). This table shows that the minimum filter length required to yield improvements for all angles is around 15. ..."
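The cited MATLAB call firls(28, [0 .2 .4 1], [1 1 0 0]) designs a 29-tap linear-phase lowpass filter by least squares: passband up to 0.2 and stopband from 0.4 (in units of the Nyquist frequency), with a don't-care transition band. The same design technique can be sketched from scratch in Python: sample the desired amplitude on the specified bands, then solve the normal equations for the cosine coefficients of a Type-I filter. The helper names are illustrative, not from the paper:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting for the normal equations.
    n = len(A)
    aug = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c]
                                for c in range(r + 1, n))) / aug[r][r]
    return x

def firls_lowpass(numtaps, pass_edge, stop_edge, grid=512):
    # Type-I linear-phase FIR: amplitude A(w) = a0 + sum_k a_k cos(k w).
    half = (numtaps - 1) // 2
    omegas, desired = [], []
    for i in range(grid):
        w = math.pi * i / (grid - 1)
        if w <= pass_edge * math.pi:
            omegas.append(w); desired.append(1.0)
        elif w >= stop_edge * math.pi:
            omegas.append(w); desired.append(0.0)  # transition band: don't care
    # Least squares on the band grid: minimize ||C a - d||^2.
    C = [[math.cos(k * w) for k in range(half + 1)] for w in omegas]
    CtC = [[sum(row[p] * row[q] for row in C) for q in range(half + 1)]
           for p in range(half + 1)]
    Ctd = [sum(row[p] * d for row, d in zip(C, desired))
           for p in range(half + 1)]
    a = solve(CtC, Ctd)
    h = [0.0] * numtaps
    h[half] = a[0]
    for k in range(1, half + 1):
        h[half - k] = h[half + k] = a[k] / 2.0
    return h

h = firls_lowpass(29, 0.2, 0.4)
dc_gain = sum(h)                                          # amplitude at w = 0
nyq_gain = sum(hk * (-1) ** n for n, hk in enumerate(h))  # amplitude at w = pi
```

The resulting symmetric taps give an amplitude response near 1 across the passband and near 0 across the stopband, which is the least-squares fit firls computes in closed form.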

### Table 1. √MSE and Bias of Power Estimates for Two-sample t-Statistic with Parametric Bootstrap Critical Values, O = 1000, I = 59

2000

"... In PAGE 12: ... We look at two (O, I) combinations, (O = 1000, I = 59) and (O = 596, I = 99), that have about 59000 computations each. Table 1 reports estimates of the root mean squared error (√MSE) and bias × 1000 of the various power estimates. The standard errors of the estimates are in the range .... In PAGE 12: ...002 for √MSE and around 2 for the bias × 1000. The first and seventh rows (p∞) of Table 1 give results for power estimates based on the true known t percentiles appropriate for normal data. They are labeled p∞ to reflect the fact that resampling with I approaching ∞ would give this result.... In PAGE 12: ... They are labeled p∞ to reflect the fact that resampling with I approaching ∞ would give this result. These of course are unbiased (the nonzero bias results in Table 1 just reflect Monte Carlo variation), and here √MSE could have been calculated simply by √(power(1 − power)/O). For a given O, p∞ represents the best power estimates possible.... In PAGE 12: ... For these raw estimates the (O = 596, I = 99) situation is more efficient in terms of √MSE than (O = 1000, I = 59) for all but = 0.5 because the bias is a large factor except at = 0.5. The other estimators in Table 1 are 1. p̂_lin: the simple linear extrapolation method using (5) for the (O = 1000, I = 59) case and (6) for the (O = 596, I = 99) case.... In PAGE 13: ...a;bb) distribution. From Table 1 we see that the linear extrapolation estimators, p̂_lin and p̂_gls, perform the best and very similarly. Their similarity is likely due to the fact (not displayed) that the estimated covariance matrix of the p̂_I used as dependent variables in the regressions has nearly equal diagonal elements and nearly equal off-diagonal elements.... ..."
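The estimator structure the excerpt describes — O outer Monte Carlo replications, each using I inner bootstrap samples to set the critical value — can be sketched in plain Python. The design below uses a pooled two-sample t statistic with a parametric (normal) bootstrap under the null; O, I, the per-group sample size, and the mean shift are illustrative choices, not the paper's:

```python
import math
import random

random.seed(1)

def t_stat(x, y):
    # Pooled two-sample t statistic.
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp * (1.0 / nx + 1.0 / ny))

O, I, n, delta, alpha = 400, 59, 10, 1.0, 0.05
hits = 0
for _ in range(O):                     # outer Monte Carlo replications
    x = [random.gauss(delta, 1) for _ in range(n)]   # shifted sample
    y = [random.gauss(0, 1) for _ in range(n)]
    t_obs = abs(t_stat(x, y))
    # Inner loop: parametric bootstrap under H0 (both samples N(0, 1)),
    # empirical (1 - alpha) critical value from I inner statistics.
    inner = sorted(abs(t_stat([random.gauss(0, 1) for _ in range(n)],
                              [random.gauss(0, 1) for _ in range(n)]))
                   for _ in range(I))
    crit = inner[math.ceil((1 - alpha) * I) - 1]
    hits += t_obs > crit
power_hat = hits / O
rmse_ideal = math.sqrt(power_hat * (1 - power_hat) / O)  # binomial sqrt(MSE)
```

Since each outer replication yields a Bernoulli rejection indicator, an idealized estimator with exact critical values has root mean squared error √(power(1 − power)/O), which is the benchmark formula quoted in the excerpt.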

Cited by 3

### Table 8. Least Squared Errors for Varying Number of Hidden Nodes

"... In PAGE 22: ... It is found that the combined RSM+ANN approach is less sensitive to the number of nodes chosen than the pure ANN approach. This is evident in the comparisons of LSEs made in Table 8. The number of... In PAGE 23: ...Insert Table 8. Least Squared Errors for Varying Number of Hidden Nodes We have also confirmed the optimization results obtained from the approximation models by verifying them using the real engine analysis program SOAPP.... In PAGE 30: ... Optimization Results Using Type II Robust Design Table 7. Least Squared Errors for RSM and the Combined RSM+ANN Table 8. Least Squared Errors for Varying Number of Hidden Nodes... ..."

### Table 4: Comparison of stochastic frontier and least-squares estimates

"... In PAGE 4: ... 18 Table 3: Simulated changes in production for selected variables. 19 Table 4: Comparison of stochastic frontier and least-squares estimates 20 Table 5: Estimation results from restricted models. 21 Table 6: Tested restrictions about model specification 22 Table 7: Estimated model parameters from full panel and from balanced panel 23 Figure 1: Area planted to rice and rice production in Bicol for 1978, 1983 and 1994 17 ... ..."

### Table 3: Two-Stage Least-Squares Estimates

"... In PAGE 9: ... We next conduct a 2SLS analysis in which LNDIST, INITIAL, EDU, REF, LPRIV and SPRIV are used as instruments for NEWENT in a first-stage regression, and the fitted value of NEWENT is combined with the remaining variables (IO, DEFENSE and PRICE) in a second-stage growth regression. The estimates we obtain are reported in Table 3. Consider first the NEWENT regression.... In PAGE 9: ... The coefficients on LNDIST and SPRIV are statistically insignificant, but the remaining variables each appear to have a significant relationship with NEWENT, both statistically (at the 5% level) and quantitatively. To characterize quantitative significance, we report the impact on NEWENT of a one-standard-deviation increase in each variable in the last column of Table 3. For example, a one-standard-deviation increase in initial income (representing a 79% increase in the purchasing power of money income per capita in 1993:IV, as reported in the sixth column of Table 3) corresponds with an additional 0.... In PAGE 9: ... To characterize quantitative significance, we report the impact on NEWENT of a one-standard-deviation increase in each variable in the last column of Table 3. For example, a one-standard-deviation increase in initial income (representing a 79% increase in the purchasing power of money income per capita in 1993:IV, as reported in the sixth column of Table 3) corresponds with an additional 0.549 new enterprises per 1000 inhabitants on average across regions.... In PAGE 10: ... A similar observation holds for the reformist voting proxy. The 2SLS estimates reported in Table 3 are of course based on identifying restrictions used to select instruments for NEWENT in the first-stage regression. The restrictions involve the exclusion of the variables used as instruments from the second-stage growth regression.... In PAGE 10: ...586), while the quantitative significance of the remaining variables is negligible by comparison. Thus there is reasonable empirical support for the identifying assumptions upon which the results of Table 3 are based. We conclude our analysis with an assessment of the influence of the outlier regions.... ..."
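The two-stage procedure the excerpt describes — instrument the endogenous regressor, then regress the outcome on its fitted value — can be sketched with simulated data. A minimal one-instrument, one-regressor Python example (the variable names and data-generating process are invented for illustration; the paper's regressors and instruments are not reproduced here):

```python
import random

random.seed(0)
n, beta = 20000, 2.0
z = [random.gauss(0, 1) for _ in range(n)]            # instrument
u = [random.gauss(0, 1) for _ in range(n)]            # unobserved confounder
x = [0.8 * zi + 0.6 * ui + random.gauss(0, 0.3) for zi, ui in zip(z, u)]
y = [beta * xi + ui + random.gauss(0, 0.3) for xi, ui in zip(x, u)]

def ols_slope(a, b):
    # Slope of the OLS regression of b on a (with intercept).
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return (sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
            / sum((ai - ma) ** 2 for ai in a))

naive = ols_slope(x, y)        # biased: x is correlated with the confounder u
pi_hat = ols_slope(z, x)       # stage 1: regress x on the instrument z
x_hat = [pi_hat * zi for zi in z]
two_sls = ols_slope(x_hat, y)  # stage 2: regress y on fitted x
```

The naive OLS slope is pulled away from the true coefficient by the confounder, while the second-stage slope on the fitted values recovers it; this is the logic behind instrumenting NEWENT in the first-stage regression above.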