### Table 5. Performance of the Varimax and Partial Least Squares Estimators

"... In PAGE 21: ... The means of the 100 squared distances (empirical MSE estimates) for each of the other PCR methods and each of the 32 simulation settings are provided in Table 4. Analogous results for the varimax and PLSR estimators are provided in Table 5. For each of the 32 simulation settings, the OLSR estimator along with the other estimators discussed in Section 6 were ranked according to their empirical MSE values, with lowest ranks corresponding to lowest empirical MSE.... ..."
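The excerpt's empirical MSE — the mean of 100 squared distances between simulated coefficient estimates and the true coefficient vector — and the subsequent ranking of estimators can be sketched in a few lines. The synthetic data and the estimator names/MSE values below are illustrative, not taken from the paper:

```python
import numpy as np

def empirical_mse(estimates, beta_true):
    """Mean squared Euclidean distance between each estimate and the truth."""
    diffs = estimates - beta_true            # shape (n_reps, p)
    return np.mean(np.sum(diffs**2, axis=1))

rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0, 0.5])
# 100 simulated estimates scattered around the true coefficient vector
estimates = beta_true + 0.1 * rng.standard_normal((100, 3))
mse = empirical_mse(estimates, beta_true)

# Rank competing estimators by empirical MSE (lower rank = lower MSE);
# the values here are made up for illustration
mses = {"OLSR": 0.31, "PCR": 0.22, "PLSR": 0.25}
ranking = sorted(mses, key=mses.get)
```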

### Table 3. Principal, Varimax, and Partial Least Squares Components

"... In PAGE 20: ... Methods 3(c) and 3(d) use the first two PLS components to estimate a250. The coefficients of the linear combinations that provide these components are found in the last two lines of Table 3. These PLS components are more difficult to interpret than the principal components or the rotated principal components in this example.... ..."
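The first two PLS components mentioned above are linear combinations of the predictors whose weights maximise covariance with the response. A minimal NIPALS-style sketch on synthetic data (not the paper's implementation):

```python
import numpy as np

def pls_components(X, y, k=2):
    """First k PLS1 weight vectors and score vectors via NIPALS-style deflation."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, T = [], []
    for _ in range(k):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)          # direction of max covariance with y
        t = Xk @ w                         # component scores
        p = Xk.T @ t / (t @ t)             # loadings used for deflation
        Xk = Xk - np.outer(t, p)           # remove the part of X explained by t
        yk = yk - t * (t @ yk) / (t @ t)   # deflate the response as well (PLS1)
        W.append(w)
        T.append(t)
    return np.column_stack(W), np.column_stack(T)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(50)
W, T = pls_components(X, y, k=2)   # columns of W are the component coefficients
```

Deflating X after each component makes successive score vectors orthogonal, which is the property that distinguishes these components from ordinary regressions on single predictors.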

### Table 3: Regression coefficients for the aboveground biomass model and DAIS spectral bands based on generalised least squares.

"... In PAGE 9: ... Figure 2 shows the section of the DAIS image to which the method was applied as a false colour combination. Table 3 shows the regression coefficients for the Peyne area. This method results in a smooth and blurred map, and ignores the spatial variability in the residuals of the regression function.... ..."
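A generalised least squares fit of the kind reported in the table solves β̂ = (XᵀΣ⁻¹X)⁻¹XᵀΣ⁻¹y, where Σ models the residual covariance (spatially correlated or heteroscedastic). A minimal sketch with a hypothetical diagonal Σ and synthetic data:

```python
import numpy as np

def gls(X, y, Sigma):
    """Generalised least squares: solves (X' S^-1 X) beta = X' S^-1 y."""
    Si = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

rng = np.random.default_rng(2)
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # intercept + one band
beta = np.array([2.0, 3.0])
# heteroscedastic noise: residual variance grows along the series
var = np.linspace(0.5, 2.0, n)
y = X @ beta + rng.standard_normal(n) * np.sqrt(var)
beta_hat = gls(X, y, np.diag(var))
```

With Σ = I this reduces to ordinary least squares; a non-trivial Σ down-weights the noisier observations.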

### Table 2: Estimated costs based on least square fit of model to measurements

1998

"... In PAGE 17: ... Before we analyze the results, we would like to increase our confidence in the model. We do this by comparing the model estimates in Table 2 with direct measurements of some important sources of communication overhead that can easily be classified.... In PAGE 18: ... This is mostly per-packet overhead, although there is some per-byte overhead, e.g. buffer management, and also some per-write TCP state management overhead included. There is a good match with the result in Table 2: 130 versus 99 microseconds.... In PAGE 18: ...6 and 10.4 microseconds/KByte respectively; this accounts for about 60% of the cost in Table 2. For the single-copy stack, an easy to identify per-byte cost is the incremental overhead for pinning, unpinning and mapping pages (Table 3). Our measurements show that it accounts for more than 60% of the per-byte cost (Table 2). Examples of operations that contribute to the per-byte overhead in ways that are hard to quantify are managing mbufs, writing SDMA requests to the adapter, and... In PAGE 19: ...1.4 Discussion The results in Table 2 can be used to explain the difference in efficiency between the single-copy and traditional stack.... ..."
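The cost model being fitted here is a two-parameter linear one: total time = per-write overhead + per-byte rate × message size. A sketch of such a least squares fit, using synthetic timings generated from the 130 µs and 10.4 µs/KByte figures quoted in the excerpt (the measurement vector itself is made up):

```python
import numpy as np

# Hypothetical timing measurements: total time for writes of various sizes.
sizes_kb = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
times_us = 130.0 + 10.4 * sizes_kb                 # model form from the excerpt
times_us += np.random.default_rng(3).normal(0, 2.0, sizes_kb.size)  # noise

# Least squares fit of the two-parameter cost model t = a + b * size
A = np.column_stack([np.ones_like(sizes_kb), sizes_kb])
(per_write_us, per_kb_us), *_ = np.linalg.lstsq(A, times_us, rcond=None)
```

Separating the intercept (per-write) from the slope (per-byte) is what lets the paper attribute measured overheads like page pinning to one component or the other.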

Cited by 3

### Table 4. Linear Least Squares Coefficients for VOCs Versus NOy

2003

"... In PAGE 6: ... The underestimate of NOy in the urban plume in the model base case is also evident in Figure 6. [28] Figure 7 and Table 4 show the correlation between summed anthropogenic VOCs and NOy from measurements and from the standard model scenario along the aircraft trajectory for 17 July. Summed VOCs have been expressed as propene-equivalent carbon [Chameides et al.... In PAGE 7: ... Peak measured VOC levels are significantly lower than the model maximum values, but this may be the result of the relatively long averaging time (10 min) for the individual measurements. Measured VOCs are higher than model VOCs for equivalent NOy, but the slope between VOCs and NOy (Table 4) is slightly lower in the measurements than in the standard model scenario. [29] Unlike other VOCs, isoprene (Figure 8) does not correlate with NOy.... ..."
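The slope comparison described above is an ordinary linear least squares fit of summed VOCs against NOy, done separately for the measurements and for the model scenario. A sketch with hypothetical numbers (the concentrations and slopes below are invented for illustration):

```python
import numpy as np

def ls_slope_intercept(x, y):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(4)
noy = rng.uniform(1.0, 20.0, 40)                         # hypothetical NOy (ppbv)
voc_meas = 0.5 + 2.8 * noy + rng.normal(0, 1.0, 40)      # "measurements"
voc_model = 0.2 + 3.1 * noy + rng.normal(0, 1.0, 40)     # "model scenario"
a_m, b_m = ls_slope_intercept(noy, voc_meas)
a_s, b_s = ls_slope_intercept(noy, voc_model)
# In this synthetic setup, as in the excerpt, the measured slope is the lower one.
```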

Cited by 1

### TABLE 2 Predictive performance of competing methods: LM is a main-effects linear model with least squares fitting; LARS is least angle regression with main effects and CV shrinkage selection; LARS two-way Cp is least angle regression with main effects and all two-way interactions, shrinkage selection via Cp; GBM additive and GBM two-way use least squares boosting, the former using main effects only, the latter using main effects and all two-way interactions; MSE is mean square error on a 10% holdout sample; MAD is mean absolute deviation

### Table 4: Boosting with componentwise linear least squares for ozone data with first-order interactions (n = 330, p = 45). Squared prediction error and average number of selected predictor variables using 10-fold cross-validation.

2006

"... In PAGE 16: ... (#) gMDL-sel-L2Boost selects SparseL2Boost as the better method. In summary, while SparseL2Boost is about as good as L2Boosting in terms of predictive accuracy, see Table 4, it yields a sparser model fit, see Tables 4 and 5.... ..."
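Boosting with componentwise linear least squares (L2Boosting) repeatedly fits the current residuals with the single predictor that reduces the squared error most, then adds a shrunken version of that fit; predictors never chosen keep a zero coefficient, which is what produces the variable counts reported in the table. A minimal sketch (synthetic data; the step size and number of steps are illustrative, not the paper's tuned values):

```python
import numpy as np

def l2boost(X, y, n_steps=300, nu=0.1):
    """L2Boosting with a componentwise linear least squares base learner."""
    n, p = X.shape
    beta = np.zeros(p)
    offset = y.mean()
    resid = y - offset
    denom = np.sum(X**2, axis=0)              # per-column normalisers
    for _ in range(n_steps):
        coefs = X.T @ resid / denom           # LS coefficient of resid on each column
        sse = np.sum(resid**2) - coefs**2 * denom   # SSE after each candidate fit
        j = np.argmin(sse)                    # best single predictor this step
        beta[j] += nu * coefs[j]              # shrunken update
        resid = resid - nu * coefs[j] * X[:, j]
    return offset, beta

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 10))
X = X - X.mean(axis=0)
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.5 * rng.standard_normal(200)
offset, beta = l2boost(X, y)
selected = np.nonzero(np.abs(beta) > 1e-8)[0]   # variables the boosting touched
```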

Cited by 2

### Table 1. Least squares estimates and 95% margins of error for the parameters of the bivariate AR(2) model (44).

2001

"... In PAGE 18: ... With the methods of the preceding sections, the AR(2) parameters were estimated from the simulated time series, the eigendecomposition of the estimated models was computed, and approximate 95% confidence intervals were constructed for the AR parameters and for the eigenmodes, periods, and damping times. Table 1 shows, for each AR parameter Bjk, the median of the least squares parameter estimates and the median of the margins of error belonging to the approximate 95% confidence intervals (41). Included in the table are the absolute values of the 2.... In PAGE 19: ... The simulation results in Table 1 show that the least squares estimates of the AR parameters are biased when the sample size is small [cf. Tjøstheim and Paulsen 1983; Mentz et al.... In PAGE 20: ... The function stands for a real part Re Sjk or an imaginary part Im Sjk of a component Sjk of an eigenmode, or for a period Tk or a damping time. As in Table 1, the symbols − and + refer to the absolute values of the 2.5th percentile and of the 97.... ..."
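The estimation step described in the excerpt — least squares fitting of a bivariate AR(2) model followed by an eigendecomposition — can be sketched by regressing x_t on its first two lags and then forming the companion matrix, whose eigenvalues determine the periods and damping times of the eigenmodes. The coefficient matrices below are invented for the simulation:

```python
import numpy as np

def fit_ar2(x):
    """Least squares fit of a multivariate AR(2) model
    x_t = B1 x_{t-1} + B2 x_{t-2} + eps_t; returns (B1, B2)."""
    X = np.hstack([x[1:-1], x[:-2]])       # predictors: lag-1 and lag-2 values
    Y = x[2:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    d = x.shape[1]
    return B[:d].T, B[d:].T

rng = np.random.default_rng(6)
B1 = np.array([[0.5, 0.1], [0.0, 0.4]])    # hypothetical, stable AR(2)
B2 = np.array([[-0.3, 0.0], [0.1, -0.2]])
n = 5000
x = np.zeros((n, 2))
for t in range(2, n):
    x[t] = B1 @ x[t - 1] + B2 @ x[t - 2] + rng.standard_normal(2)

B1_hat, B2_hat = fit_ar2(x)

# Eigendecomposition of the companion matrix gives the eigenmodes; for an
# eigenvalue lam, the period is 2*pi/arg(lam) and the damping time -1/log|lam|.
comp = np.block([[B1_hat, B2_hat], [np.eye(2), np.zeros((2, 2))]])
eigvals = np.linalg.eigvals(comp)
```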

Cited by 21

### Table 2: Categorization of unsupervised frequent pattern mining methods based on time interval data models.

"... In PAGE 8: ... Each group is then converted to a partial order. In Table 2 the properties of the described approaches for pattern discovery in interval series and sequences are listed in order of increasing expressivity of the pattern language and earlier publication. All except the second method work on multivariate interval series and interval sequences.... ..."

### Table 4. Comparison of the Least Squared Errors

"... In PAGE 17: ...4 Improving the Approximations As enhancements to the RCEM approximation capability, the Artificial Neural Networks (ANN) and the combined RSM+ANN approach proposed in Section 3 are applied to the responses for which the RSM is not sufficient. A comparison of the accuracy of models created by the different methods is provided in Table 4. The least squared error (LSE), an overall average variation of the estimated values from the actual values, is the metric for this comparison.... In PAGE 17: ... It is observed that the LSEs are much less for the improved approximation models (either the ANN or the combined RSM+ANN model) compared to that of the RSM. Insert Table 4. Comparison of the Least Squared Errors... In PAGE 22: ... The first issue is to verify whether the approximation models are enhanced using the ANN and the combined RSM+ANN techniques. The results shown in Table 4 and the graphs in Figure 7 prove that there is indeed an improvement in the approximation. Further verifications of this approach are done in this section.... In PAGE 30: ... Noise Parameters for the Engine Design Table 3. R-coefficients of the Response Surface Models Table 4. Comparison of the Least Squared Errors Table 5.... ..."