### Table 3: FTP and TVTP Logistic Models for GDP

"... In PAGE 11: ... In light of this outcome, no similar two-indicator TVTP analysis was undertaken. The sign and significance of the mean growth rates for the Hamilton FTP model in Table 3 indicate classical business cycle behaviour, with the data being classified into positive and negative growth rate regimes, with a mean quarterly growth rate of 0.76% in regime 1 (expansion) and - .... In PAGE 11: ... Note that the model selection criteria AIC and SIC cannot be compared with the linear AR specification as an indication of regime-switching non-linearity because of the non-standard conditions involved (see Hamilton and Perez-Quiros, 1996). The TVTP logistic results are also presented in Table 3. The estimates of the regime-dependent means (m0 + m1 and m0) associated with these models are statistically significant and again indicate classical business cycle behaviour in GDP.... In PAGE 30: ... logarithms. The first difference also performed better at forecasting. 7 As the logistic model for the Treasury Bill yield time-varying recession slope coefficient was insignificant, a variation of this model was tried by fixing this probability to be zero. The resultant SIC for this model was lower than that of the reported model in Table 3, but the forecast MSFE was no better. 8 In these circumstances the statistic delivers 0/0.... ..."
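The excerpt above describes time-varying transition probabilities (TVTP) modelled with a logistic function of an indicator variable. The paper's exact specification is not quoted here, so the following is only a minimal sketch of the standard logistic-link form, with hypothetical coefficient values:

```python
import math

def tvtp_logistic(a, b, z):
    """Time-varying transition probability via a logistic link:
    p_t = 1 / (1 + exp(-(a + b * z_t))),
    where z_t is a leading indicator (e.g., an interest-rate spread).
    Coefficients a and b here are purely illustrative."""
    return 1.0 / (1.0 + math.exp(-(a + b * z)))

# As the indicator rises, the probability of remaining in the
# expansion regime increases monotonically toward 1.
probs = [tvtp_logistic(0.5, 2.0, z) for z in (-1.0, 0.0, 1.0)]
```

The fixed-transition-probability (FTP) Hamilton model is the special case b = 0, where the transition probability no longer depends on the indicator.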

### Table 4. AUC and EC results for over-sampled data and unpruned decision trees.

"... In PAGE 7: ... Moreover, in previous work [16] we showed that over-sampling methods seem to perform better than under-sampling methods, resulting in classifiers with higher AUC values. Table 4 shows the AUC and EC values for two over-sampling methods proposed in the literature: Random over-sampling and Smote [7]. Random over-sampling randomly duplicates examples from the minority class, while Smote introduces artificially generated examples by interpolating between two examples drawn from the minority class that lie close together.... In PAGE 7: ... Table 4 reports results for unpruned trees. Besides our previous comments concerning pruning and class imbalance, whether pruning can lead to a performance improvement for decision trees grown over artificially balanced data sets still seems to be an open question.... In PAGE 7: ...em would prune based on a false assumption, i.e., that the test set distribution matches the training set distribution. The results in Table 4 show that, in general, the best AUC result obtained from an unpruned tree on an over-sampled data set is similar to (less than 1% difference) or higher than those obtained from pruned and unpruned trees grown over the original data sets. Moreover, unpruned trees on over-sampled data sets also tend to produce higher EC values than pruned and unpruned trees grown over the original data sets.... In PAGE 8: ... This occurs when no Tomek links, or just a few of them, are found in the data sets. Table 6 shows a ranking of the AUC and EC results obtained in all experiments for unpruned decision trees, where O indicates the original data set (Table 3), R and S stand respectively for Random and Smote over-sampling (Table 4), and S+E and S+T stand for Smote + ENN and Smote + Tomek (Table 5).
√1 indicates that the method is ranked among the best and √2 among the second best for the corresponding data set.... ..."
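The excerpt contrasts random over-sampling with Smote's interpolation of minority-class pairs. A minimal sketch of the two generation steps (neighbor search is omitted; real Smote interpolates toward one of the k nearest minority neighbors):

```python
import random

def smote_sample(x, neighbor):
    """Generate one synthetic minority example by interpolating between a
    minority example x and a minority-class neighbor, as in Smote.
    A single random gap in [0, 1) is applied to every attribute."""
    gap = random.random()
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]

def random_oversample(minority, k):
    """Random over-sampling: draw k duplicates (with replacement)
    from the minority class."""
    return [random.choice(minority) for _ in range(k)]
```

The synthetic point lies on the line segment between the two inputs, which is what lets Smote spread the minority class into the region between existing examples rather than merely replicating them.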


### Table 4. Feature construction over sample data


### TABLE VII. General characteristics of UCI imbalanced data sets and the amount of over-sampling of the minority class applied for the SMOTE algorithm. Columns: Data set, # Attributes, # Minority Class, # Majority Class, % Over-Sampled.


### Table 2: Time for Sequential Sorting (sec.). Columns: Data Size, Dist., Quick, OverSampling.

2003

"... In PAGE 9: ... the figure. The partition cost alone is about 20% of the total cost. The total cost of distribution-based sorting is 20% less than quicksort for large data sets. The larger the data set, the better our method performs (see Table 2).... In PAGE 13: ... We now show that for non-uniform but well-behaved distributions, PD-based sorting outperforms quicksort because of its better partitioning balance and speed. Table 2 shows the time to sort integers drawn from a normal distribution (m = 3000, d = 1000). We take 64K as the cache size for partitioning.... ..."
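The excerpt compares quicksort against a partition-based sort whose bucket boundaries come from over-sampling. The paper's PD-based variant uses the known distribution to place boundaries; the sketch below shows the generic sample-based idea instead, with illustrative bucket and over-sampling factors:

```python
import random
from bisect import bisect_right

def sample_partition_sort(data, n_buckets=8, oversample=4):
    """Partition-based sort: choose splitters from an over-sampled random
    sample (over-sampling improves partition balance), scatter elements
    into buckets by binary search, then sort each bucket independently.
    A rough sketch of the over-sampling approach, not the paper's code."""
    if len(data) <= 1:
        return list(data)
    sample = sorted(random.sample(data, min(len(data), n_buckets * oversample)))
    step = max(1, len(sample) // n_buckets)
    splitters = sample[step::step][: n_buckets - 1]
    buckets = [[] for _ in range(len(splitters) + 1)]
    for x in data:
        buckets[bisect_right(splitters, x)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))
    return out
```

The payoff described in the excerpt comes from the scatter phase: each bucket is small enough to sort within cache, which is why the method pulls ahead of plain quicksort as the data grows.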

Cited by 1

### Table 5. AUC and EC results for over-sampled data: Smote + ENN and Smote + Tomek links and unpruned decision trees.

"... In PAGE 8: ... In this matter, these data cleaning methods might be understood as an alternative to pruning. Table 5 shows the results of our proposed methods on the same data sets. Comparing these two methods, it can be observed that Smote + Tomek produced higher AUC values for four data sets (Sonar, Pima, German and Haberman), while Smote + ENN is better on two data sets (Bupa and Glass).... In PAGE 8: ... This occurs when no Tomek links, or just a few of them, are found in the data sets. Table 6 shows a ranking of the AUC and EC results obtained in all experiments for unpruned decision trees, where O indicates the original data set (Table 3), R and S stand respectively for Random and Smote over-sampling (Table 4), and S+E and S+T stand for Smote + ENN and Smote + Tomek (Table 5). √1 indicates that the method is ranked among the best and √2 among the second best for the corresponding data set.... ..."
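The excerpt notes that Smote + Tomek depends on Tomek links being present in the data. A Tomek link is a pair of opposite-class examples that are each other's nearest neighbor; a brute-force sketch of the detection step (illustrative only, O(n²)):

```python
def euclidean2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def tomek_links(X, y):
    """Return pairs (i, j), i < j, of opposite-class examples that are
    mutual nearest neighbors. In Smote + Tomek cleaning, members of such
    pairs are removed to sharpen the class boundary."""
    n = len(X)
    nearest = []
    for i in range(n):
        j = min((k for k in range(n) if k != i),
                key=lambda k: euclidean2(X[i], X[k]))
        nearest.append(j)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if y[i] != y[j] and nearest[i] == j and nearest[j] == i]
```

When a data set yields no Tomek links, the cleaning step is a no-op and Smote + Tomek reduces to plain Smote, which is exactly the situation the excerpt describes.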


### Table 3: Comparison between Data Moments and Model Moments Calculated from Different Parameter Estimates

2002

"... In PAGE 31: ... Based on this intuition, we exploit the analytical moments derived from the model to calculate the major summary statistics from alternative parameter estimates. Table 3 presents the exact unconditional moments calculated from various parameter estimates, together with those calculated from the data. Overall, the GMM and ECF estimates provide the skewness and kurtosis of... ..."
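The excerpt compares the model's analytical unconditional moments with those computed from the data. A minimal sketch of the data side of that comparison, using the standard sample definitions of skewness and kurtosis:

```python
def sample_moments(xs):
    """Sample mean, variance, skewness, and kurtosis.
    Skewness = E[(x - m)^3] / sd^3; kurtosis = E[(x - m)^4] / var^2
    (a normal distribution has skewness 0 and kurtosis 3)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = var ** 0.5
    skew = sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * var ** 2)
    return mean, var, skew, kurt
```

These are the statistics one would line up against the model-implied moments evaluated at the GMM and ECF parameter estimates, as in the table.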

Cited by 2