### Table 1. √MSE and Bias of Power Estimates for Two-Sample t-Statistic with Parametric Bootstrap Critical Values, O = 1000, I = 59

2000

"... In PAGE 12: ... We look at two (O, I) combinations, (O = 1000, I = 59) and (O = 596, I = 99), that have about 59000 computations each. Table 1 reports estimates of the root mean squared error (√MSE) and bias × 1000 of the various power estimates. The standard errors of the estimates are in the range .... In PAGE 12: ...002 for √MSE and around 2 for the bias × 1000. The first and seventh rows (p∞) of Table 1 give results for power estimates based on the true known t percentiles appropriate for normal data. They are labeled p∞ to reflect the fact that resampling with I approaching ∞ would give this result.... In PAGE 12: ... They are labeled p∞ to reflect the fact that resampling with I approaching ∞ would give this result. These of course are unbiased (the nonzero bias results in Table 1 just reflect Monte Carlo variation), and here √MSE could have been calculated simply by √(power(1 − power)/O). For a given O, p∞ represents the best power estimates possible.... In PAGE 12: ... For these raw estimates the (O = 596, I = 99) situation is more efficient in terms of √MSE than (O = 1000, I = 59) for all but = 0.5 because the bias is a large factor except at = 0.5. The other estimators in Table 1 are 1. p̂lin: the simple linear extrapolation method using (5) for the (O = 1000, I = 59) case and (6) for the (O = 596, I = 99) case.... In PAGE 13: ...â, b̂) distribution. From Table 1 we see that the linear extrapolation estimators, p̂lin and p̂gls, perform the best and very similarly. Their similarity is likely due to the fact (not displayed) that the estimated covariance matrix of the p̂I used as dependent variables in the regressions has nearly equal diagonal elements and nearly equal off-diagonal elements.... ..."
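The benchmark row based on the true known t percentiles can be reproduced in outline: estimate power by Monte Carlo with a known critical value, and attach the binomial standard error √(power(1 − power)/O) mentioned in the excerpt. The sketch below is illustrative only; the sample size, effect size, and the hard-coded critical value (≈ t₀.₉₇₅ with 98 df) are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_power(delta, n=50, O=1000, crit=1.984):
    """Monte Carlo power estimate for the two-sample t-test using a known
    critical value (no resampling step), with its binomial standard error."""
    rejections = 0
    for _ in range(O):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(delta, 1.0, n)
        # Pooled-variance two-sample t statistic.
        sp2 = ((n - 1) * x.var(ddof=1) + (n - 1) * y.var(ddof=1)) / (2 * n - 2)
        t = (y.mean() - x.mean()) / np.sqrt(sp2 * 2.0 / n)
        rejections += abs(t) > crit
    p_hat = rejections / O
    se = np.sqrt(p_hat * (1.0 - p_hat) / O)  # sqrt(power(1 - power)/O)
    return p_hat, se
```

With `delta = 0.5` and `n = 50` the estimated power comes out near 0.7, and the standard error is bounded by 0.5/√O ≈ 0.016, matching the order of magnitude quoted in the excerpt.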

Cited by 3

### Table 3: Two-Stage Least-Squares Estimates

"... In PAGE 9: ... We next conduct a 2SLS analysis in which LNDIST, INITIAL, EDU, REF, LPRIV and SPRIV are used as instruments for NEWENT in a first-stage regression, and the fitted value of NEWENT is combined with the remaining variables (IO, DEFENSE and PRICE) in a second-stage growth regression. The estimates we obtain are reported in Table 3. Consider first the NEWENT regression.... In PAGE 9: ... The coefficients on LNDIST and SPRIV are statistically insignificant, but the remaining variables each appear to have a significant relationship with NEWENT, both statistically (at the 5% level) and quantitatively. To characterize quantitative significance, we report the impact on NEWENT of a one-standard-deviation increase in each variable in the last column of Table 3. For example, a one-standard-deviation increase in initial income (representing a 79% increase in the purchasing power of money income per capita in 1993:IV, as reported in the sixth column of Table 3) corresponds with an additional 0.... In PAGE 9: ... To characterize quantitative significance, we report the impact on NEWENT of a one-standard-deviation increase in each variable in the last column of Table 3. For example, a one-standard-deviation increase in initial income (representing a 79% increase in the purchasing power of money income per capita in 1993:IV, as reported in the sixth column of Table 3) corresponds with an additional 0.549 new enterprises per 1000 inhabitants on average across regions.... In PAGE 10: ... A similar observation holds for the reformist voting proxy. The 2SLS estimates reported in Table 3 are of course based on identifying restrictions used to select instruments for NEWENT in the first-stage regression. The restrictions involve the exclusion of the variables used as instruments from the second-stage growth regression.... In PAGE 10: ...586), while the quantitative significance of the remaining variables is negligible by comparison. Thus there is reasonable empirical support for the identifying assumptions upon which the results of Table 3 are based. We conclude our analysis with an assessment of the influence of the outlier regions.... ..."
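The two-stage procedure the excerpt describes (regress the endogenous variable on the instruments, then use its fitted value in the outcome regression) can be sketched with synthetic data. All names, coefficients, and data below are illustrative stand-ins, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic stand-ins: two instruments, one exogenous control, and an
# unobservable u that makes the regressor endogenous.
z = rng.normal(size=(n, 2))            # instruments (e.g. INITIAL, EDU)
price = rng.normal(size=n)             # exogenous control kept in both stages
u = rng.normal(size=n)
newent = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(size=n)
growth = 0.6 * newent + 0.3 * price + u   # "true" NEWENT effect is 0.6

def add_const(*cols):
    return np.column_stack([np.ones(n), *cols])

# Stage 1: regress the endogenous regressor on instruments + exogenous controls.
W = add_const(z, price)
gamma, *_ = np.linalg.lstsq(W, newent, rcond=None)
newent_hat = W @ gamma

# Stage 2: replace the endogenous regressor with its fitted value.
X = add_const(newent_hat, price)
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)  # beta[1]: 2SLS NEWENT effect
```

Because `u` enters both equations, plain OLS of `growth` on `newent` would be biased upward; the instrumented estimate `beta[1]` stays close to the assumed 0.6.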

### Table 1: Comparison of RMS positioning errors for the robot end effector, measured using the test set, for a standard neural network trained by least squares and for a Mixture Density Network.

"... In PAGE 6: ... It is clear that the positioning errors are reduced dramatically compared to the least-squares results shown in Figure 14. A comparison of the RMS positioning errors for the two approaches is given in Table 1, which shows that the MDN gives an order of magnitude reduction in RMS error compared to the least-squares approach.... ..."

### Table 4. Comparison of the Least Squared Errors

"... In PAGE 17: ...4 Improving the Approximations As enhancements to the RCEM approximation capability, the Artificial Neural Networks (ANN) and the combined RSM+ANN approach proposed in Section 3 are applied to the responses for which the RSM is not sufficient. A comparison of the accuracy of models created by different methods is provided in Table 4. The least squared error (LSE), an overall average variation of the estimated values from the actual values, is the metric for this comparison.... In PAGE 17: ... It is observed that the LSEs are much lower for the improved approximation models (either the ANN or the combined RSM+ANN model) than for the RSM. Insert Table 4. Comparison of the Least Squared Errors... In PAGE 22: ... The first issue is to verify whether the approximation models are enhanced using the ANN and the combined RSM+ANN techniques. The results shown in Table 4 and the graphs in Figure 7 prove that there is indeed an improvement in the approximation. Further verifications of this approach are done in this section.... In PAGE 30: ... Noise Parameters for the Engine Design Table 3. R-coefficients of the Response Surface Models Table 4. Comparison of the Least Squared Errors Table 5.... ..."
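The LSE metric as the excerpt defines it (an overall average of squared deviations of estimated values from actual values) is straightforward to compute; the function below is a generic sketch of that definition, not code from the paper, and the sample numbers are made up.

```python
import numpy as np

def least_squared_error(actual, estimated):
    """Overall average squared deviation of the estimated values from the
    actual values -- the LSE metric used to compare approximation models."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return float(np.mean((estimated - actual) ** 2))

# Hypothetical comparison: predictions from two approximation models
# against the same actual responses. Lower LSE means a better fit.
actual = [10.0, 12.0, 9.5, 11.0]
lse_rsm = least_squared_error(actual, [10.8, 11.1, 10.4, 11.9])
lse_rsm_ann = least_squared_error(actual, [10.1, 11.9, 9.6, 11.1])
```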

### Table 5: Nonlinear Least Squares Results

"... In PAGE 1: ... In addition, an F-test indicated that the hypothesis that the two rates are the same could not be rejected at conventional significance levels. The 1,775 observations in Table 5 represent all 1981 through 1987 vintage passenger cars owned by RTECS respondents in July of 1988 for which complete data were available. Model year 1988 new cars are considered separately from the older vehicles in the household stock because an F-test indicated that the two samples could not be pooled.... In PAGE 8: ... If, however, there is no relationship between life cycle fuel expenditures and vehicle price, the capitalization rate would be zero. Based on the regression estimates in Table 5 for the sample of pre-1988 vehicles in household holdings, the estimated mean willingness-to-pay for a one dollar change in life cycle operating costs is $0.39.... In PAGE 11: ...8 million--16 percent higher--indicating that excluding a separate measurement for nonfatal injuries causes the fatality valuation to reflect the value of nonfatal injuries. The third regression result in Table 5 demonstrates the importance of including controls for the driver characteristics in fatal accidents. The mortality risk measure used in the model does not represent a pure measure of automobile-specific risk because driver characteristics are not excised from the rates.... In PAGE 11: ... As defined in Section II, these controls measure the proportion of fatal accidents occurring in each make/model/year vehicle that reflect the characteristic in question. The first column in Table 5 indicates that the proportion of drivers who are young, those who are older, and those wearing seat belts were all statistically significant at the 0.05 level, and alcohol involvement was significant at the 0.... ..."

### Table 2: A non-linear least-squares fit by grid search. a) frequency grid ω_k, k = 1, . . . , M

"... In PAGE 3: ...1 with ω there replaced by ω̂ given by (14). The maximization of (14) may be implemented by iterative methods, or simply by a one-dimensional grid search, as presented in Table 2. The frequency grid may be obtained by peak-picking the DFT of the data, with ω_1 corresponding to the frequency bin to the left of the maximum and ω_M corresponding to the bin on the opposite side.... In PAGE 3: ... As estimator of the initial frequency a DFT-based estimator was used with 4 times zero-padding and peak-finding by triple parabolic interpolation. The grid search for the algorithm in Table 2 was performed (rather arbitrarily) in M = 160 points in the symmetric interval (of length corresponding to twice the frequency resolution of the DFT) around the maximum of the DFT. The initial estimate of the nuisance parameters required for the algorithm in Table 1 was estimated using the 3-parameter fit with ω̂_0 in place of ω.... In PAGE 3: ...ithms for frequencies well inside (0, 0.5). For frequencies near 0 or 0.5, the algorithm in Table 2 outperforms the one in [3]. 4.... ..."
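The coarse-DFT-then-grid-search recipe in the excerpt can be sketched as follows: zero-pad the DFT, pick its peak as the initial frequency, then search a fine grid spanning two DFT bins around that peak. The signal parameters, noise level, and the simple periodogram criterion used here are assumptions for illustration, not the paper's exact cost function.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
f_true = 0.1237                      # cycles/sample, well inside (0, 0.5)
n = np.arange(N)
x = np.cos(2 * np.pi * f_true * n) + 0.1 * rng.normal(size=N)

# Initial estimate: peak of a 4x zero-padded DFT.
pad = 4 * N
X = np.fft.rfft(x, pad)
f0 = int(np.argmax(np.abs(X))) / pad

# Grid search: M points over an interval of width 2/N (twice the DFT
# frequency resolution) centred on the coarse peak; maximise the
# periodogram-style criterion |sum_n x[n] exp(-j 2 pi f n)|.
M = 160
grid = f0 + np.linspace(-1.0 / N, 1.0 / N, M)
crit = [abs(np.sum(x * np.exp(-2j * np.pi * f * n))) for f in grid]
f_hat = grid[int(np.argmax(crit))]
```

The grid spacing, 2/(N(M−1)), is far finer than a DFT bin, so the refined estimate `f_hat` lands much closer to the true frequency than the coarse peak-picked bin.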

### Table 4. Linear Least Squares Coefficients for VOCs Versus NOy

2003

"... In PAGE 6: ... The underestimate of NOy in the urban plume in the model base case is also evident in Figure 6. [28] Figure 7 and Table 4 show the correlation between summed anthropogenic VOCs and NOy from measurements and from the standard model scenario along the aircraft trajectory for 17 July. Summed VOCs have been expressed as propene-equivalent carbon [Chameides et al.... In PAGE 7: ... Peak measured VOC levels are significantly lower than the model maximum values, but this may be the result of the relatively long averaging time (10 min) for the individual measurements. Measured VOCs are higher than model VOCs for equivalent NOy, but the slope between VOCs and NOy (Table 4) is slightly lower in the measurements than in the standard model scenario. [29] Unlike other VOCs, isoprene (Figure 8) does not correlate with NOy.... ..."

Cited by 1

### Table 1: Simulated ISAR(1) model: ordinary least-squares estimates (Constant, Exponential, Reciprocal)

1995

"... In PAGE 14: ... The 4i's are sampled from a Gamma distribution with parameters 2 and 0.5 (giving a mean ticking frequency of 1) and the i's are sampled from a Normal distribution with mean 0 and variance 1. The ordinary least squares estimates together with the AIC values from three different functions are shown in Table 1. Our approach is successful in two aspects: first, the OLS estimates are all very close to the value we sampled from (0.3): (i) constant function (estimate 0.28), (ii) exponential function (â = 0.31, b̂ = 0.23), (iii) reciprocal function (â = 0.29, b̂ = 0.30); second, we are able to select the correct model based on AIC.... In PAGE 14: ... Our approach is successful in two aspects: first, the OLS estimates are all very close to the value we sampled from (0.3): (i) constant function (estimate 0.28), (ii) exponential function (â = 0.31, b̂ = 0.23), (iii) reciprocal function (â = 0.29, b̂ = 0.30); second, we are able to select the correct model based on AIC. More interesting results can be seen from Table 1. The estimated parameter (0.41) from the constant function with data sampled from the exponential function is very close to 0.42, which is the value in (5) with parameters 2 and 0.5.... ..."
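The fit-three-candidate-functions-and-compare-AIC workflow the excerpt describes can be sketched generically: fit constant, exponential, and reciprocal mean functions by OLS and rank them by AIC. The data-generating choice below (an exponential term plus Gaussian noise), the coefficient values, and the Gaussian-likelihood form of AIC are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.gamma(shape=2.0, scale=0.5, size=n)   # Gamma(2, 0.5): mean 1
y = 0.31 + 0.23 * np.exp(-x) + rng.normal(scale=0.05, size=n)

def ols_rss(design, y):
    """OLS fit; returns coefficients and the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta, float(np.sum((y - design @ beta) ** 2))

def aic(rss, n, k):
    # Gaussian-likelihood AIC up to an additive constant: n*log(rss/n) + 2k
    return n * np.log(rss / n) + 2 * k

ones = np.ones(n)
designs = {
    "constant": ones[:, None],
    "exponential": np.column_stack([ones, np.exp(-x)]),
    "reciprocal": np.column_stack([ones, 1.0 / x]),
}
scores = {name: aic(ols_rss(D, y)[1], n, D.shape[1]) for name, D in designs.items()}
best = min(scores, key=scores.get)   # candidate with the lowest AIC wins
```

The extra parameter of the exponential and reciprocal models costs 2 AIC points each, so they are selected only when the added term genuinely reduces the residual sum of squares.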

Cited by 1

### Table 2: Estimated costs based on least-squares fit of model to measurements

1998

"... In PAGE 17: ... Before we analyze the results, we would like to increase our confidence in the model. We do this by comparing the model estimates in Table 2 with direct measurements of some important sources of communication overhead that can easily be classified.... In PAGE 18: ...his is mostly per-packet overhead, although there is some per-byte overhead, e.g. buffer management, and also some per-write TCP state management overhead included. There is a good match with the result in Table 2: 130 versus 99 microseconds.... In PAGE 18: ...6 and 10.4 microseconds/KByte respectively; this accounts for about 60% of the cost in Table 2. For the single-copy stack, an easy to identify per-byte cost is the incremental overhead for pinning, unpinning and mapping pages (Table 3).... In PAGE 18: ...0.4 microseconds/KByte respectively; this accounts for about 60% of the cost in Table 2. For the single-copy stack, an easy to identify per-byte cost is the incremental overhead for pinning, unpinning and mapping pages (Table 3). Our measurements show that it accounts for more than 60% of the per-byte cost (Table 2). Examples of operations that contribute to the per-byte overhead in ways that are hard to quantify are managing mbufs, writing SDMA requests to the adapter, and... In PAGE 19: ...1.4 Discussion The results in Table 2 can be used to explain the difference in efficiency between the single-copy and traditional stack.... ..."
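Separating per-write, per-packet, and per-byte overheads by least-squares fitting a cost model to timing measurements, as the excerpt describes, can be sketched with synthetic timings. All coefficients and workload shapes below are made up for illustration; the point is that varying packet counts and packet sizes independently lets the regression disentangle the cost components.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 200

# Synthetic workload: independent packet counts and packet sizes, so the
# per-packet and per-byte terms are separable by the regression.
packets = rng.integers(1, 50, size=m).astype(float)
size = rng.integers(64, 1461, size=m).astype(float)   # bytes per packet
nbytes = packets * size

# Assumed "true" costs (microseconds): fixed overhead, per-packet, per-byte.
t = 200.0 + 120.0 * packets + 0.05 * nbytes + rng.normal(scale=50.0, size=m)

# Least-squares fit of the cost model t = c0 + c1*packets + c2*bytes.
A = np.column_stack([np.ones(m), packets, nbytes])
(fixed, per_packet, per_byte), *_ = np.linalg.lstsq(A, t, rcond=None)
```

The fitted `per_packet` and `per_byte` coefficients can then be checked against direct microbenchmarks of individual overhead sources, which is exactly the cross-validation step the excerpt performs against Table 2.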

Cited by 3

### Table 7. Least Squared Errors for RSM and the Combined RSM+ANN

"... In PAGE 22: ...pproximation. Further verifications of this approach are done in this section. The first issue is whether the combined RSM+ANN technique is better than the actual RSM. In Table 7, the least squared errors (LSEs) for the two models are compared for various responses from the P&W propulsion system design problem. The grid design for 121 experiments is used as the input to test the LSEs.... In PAGE 22: ... This makes the model more accurate, which is a good reason to use this technique. Insert Table 7. Least Squared Errors for RSM and the Combined RSM+ANN The second part of the verification of the combined RSM+ANN model is to see how efficient this approach is compared to using the pure ANN models.... In PAGE 30: ... Engine Design Optimization Results Using Type I Robust Design Table 6. Optimization Results Using Type II Robust Design Table 7. Least Squared Errors for RSM and the Combined RSM+ANN Table 8.... ..."