### Table 1 Neural network architectures

2003

"... In PAGE 6: ...etter as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t) in Table 1, a crude linear extrapolation estimate, i.e.... In PAGE 6: ...avelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table 1 summarizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used.... In PAGE 6: ...sed. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time-lagged vector of the same size as the memory order. Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.... In PAGE 6: ... Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.e.... In PAGE 7: ...ion of these results can be found in Ref. [4]. For further work involving the DRNN neural network resolution scale. From Table 1, we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. Fig.... ..."
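The error measures named in this excerpt are standard time-series metrics. As a hedged illustration only (the paper's exact definitions are not given in the excerpt and may differ), NMSE and directional symmetry can be computed roughly as:

```python
import numpy as np

def nmse(y_true, y_pred):
    # Normalized mean squared error: MSE divided by the variance of the
    # target, so 1.0 corresponds to predicting the target's mean.
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def directional_symmetry(y_true, y_pred):
    # Fraction of steps where the predicted change has the same sign as
    # the actual change (a common definition; the paper's may differ).
    true_dir = np.sign(np.diff(y_true))
    pred_dir = np.sign(np.diff(y_pred))
    return np.mean(true_dir == pred_dir)

y = np.array([1.0, 2.0, 1.5, 2.5, 3.0])
yhat = np.array([1.1, 1.9, 1.7, 2.4, 3.2])
print(nmse(y, yhat))                  # small relative error
print(directional_symmetry(y, yhat))  # every up/down move matched
```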

### Table 1: Neural network estimation results

"... In PAGE 8: ... The decision whether to use the neural net estimation or the analysis tool results can be based on a cost function reflecting the required fidelity and criticality of the results. Table 1 shows four test results of the neural network for the aerodynamic analysis tool. Best results were obtained when the training was done for 1000 cycles with the structure shown in Figure 5 and the learning rate was set to 0.... ..."

### Table 7. Estimation Results of the Neural Network

"... In PAGE 25: ...5. The estimation results of each model are shown in Table 7. In this verification, the data were not adopted as estimation objects, except for those data that were evaluated as the highest and lowest within a set of visual objects.... ..."

### Table 2. Percent MAE for Neural Network Estimation Models on Test Set.

"... In PAGE 9: ...0 Results of Design Models The first results presented are the performance of neural networks for both the standard hold back training/testing procedure and the leave-k-out procedure. Table 2 shows the percent mean absolute error (MAE) for both approaches, where the leave-k-out MAE is the average MAE over all the networks (15 for the Ball, Dilution Ratio and Granule Tests, and 11 for the Drain Line Test). Although it might appear that the hold back training strategy obtains lower error rates than the leave-k-out, this is misleading because the leave-k-out results are for testing on all observations instead of a 30% subset.... ..."
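The excerpt's point is that leave-k-out tests every observation once, whereas a hold-back split tests only a fixed subset. A minimal sketch of the averaged percent-MAE procedure, assuming percent MAE = mean(|error|/|true|)·100 and using a training-mean predictor as a stand-in for the paper's neural networks:

```python
import numpy as np

def percent_mae(y_true, y_pred):
    # Percent mean absolute error (one common definition; the paper's may differ).
    return 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

def leave_k_out_pmae(X, y, k, fit, predict):
    # Hold out each consecutive block of k observations in turn, train on
    # the rest, and average the per-block percent MAE. Every observation
    # is tested exactly once, unlike a single 30% hold-back split.
    errs = []
    for start in range(0, len(y), k):
        test = slice(start, start + k)
        mask = np.ones(len(y), dtype=bool)
        mask[test] = False
        model = fit(X[mask], y[mask])
        errs.append(percent_mae(y[test], predict(model, X[test])))
    return np.mean(errs)

# Stand-in "model": predict the training mean (placeholder for the networks).
fit = lambda X, y: y.mean()
predict = lambda m, X: np.full(len(X), m)

X = np.arange(12, dtype=float).reshape(-1, 1)
y = np.array([2.0, 2.0, 2.0, 2.0, 4.0, 4.0, 4.0, 4.0, 3.0, 3.0, 3.0, 3.0])
print(leave_k_out_pmae(X, y, k=4, fit=fit, predict=predict))  # → 37.5
```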

### Table 2: Performance with the bpa values estimated using neural networks.

### Table 1: Logit and Neural Network Forecasting Performance

"... In PAGE 13: ... 4.2 Forecasts Table 1 gives one view of the comparative forecasting performance of the logit and neural network (NN) models. In the table, we divide the forecasts into 1s (for conflict) and 0s (for peace).... In PAGE 14: ... Furthermore, most of these figures are lower than their fit to the training set, which is as it should be if we expect structure to exist but some real change in the world to occur. Table 1 demonstrates that the neural network model discriminates far better than the logit model by assigning very different probabilities of international conflict to the available dyads. It does not indicate whether either model's probability values are correct except for above and below the 0.... In PAGE 15: ...4 bin. Whereas the left graph of Table 1 evaluates the fit of the two models to the same data, the right graph uses the same technique to evaluate the success of the out-of-sample forecasts. This graph shows a reasonably close correspondence again between the estimated probabilities and observed fractions for both models.... ..."
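The comparison of "estimated probabilities and observed fractions" per bin described in the excerpt is a standard calibration check. A minimal sketch, assuming equal-width probability bins (the paper's exact binning is not given in the excerpt):

```python
import numpy as np

def calibration_bins(probs, outcomes, n_bins=10):
    # For each probability bin, compare the mean predicted probability with
    # the observed fraction of 1s (conflicts). A well-calibrated model's
    # bins lie near the diagonal: predicted probability == observed fraction.
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    out = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            out.append((b, probs[mask].mean(), outcomes[mask].mean()))
    return out

probs = np.array([0.05, 0.1, 0.45, 0.55, 0.9, 0.95])
outcomes = np.array([0, 0, 0, 1, 1, 1])
for b, mean_p, obs_frac in calibration_bins(probs, outcomes):
    print(b, round(mean_p, 2), obs_frac)
```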

### Table 5: Estimated and measured performance for strided loads vs. strided stores.

### Table 3: Average error of the neural network and combining estimators (columns: Estimator, Training Error, Test Error)

"... In PAGE 14: ... Details on the implementation, and results with regression neural networks [56] and combining estimators, which use voting [1] and stacking [9, 61] strategies, can be found in the same work. We summarize these results in Table 3. The training data they have used is a subset of ours, although the test set is the same.... ..."