### Table 3: FTP and TVTP Logistic Models for GDP

"... In PAGE 11: ... In light of this outcome, no similar two-indicator TVTP analysis was undertaken. The sign and significance of the mean growth rates for the Hamilton FTP model in Table 3 indicate classical business cycle behaviour, with the data being classified into positive and negative growth rate regimes, with a mean quarterly growth rate of .76% in regime 1 (expansion) and - .... In PAGE 11: ... Note that the model selection criteria AIC and SIC cannot be compared with the linear AR specification as an indication of regime-switching non-linearity because of the non-standard conditions that are involved (see Hamilton and Perez-Quiros, 1996). The TVTP logistic results are also presented in Table 3. The estimates of the regime-dependent means ( m0 + m1 and m0 ) associated with these models are statistically significant and again indicate classical business cycle behaviour in GDP.... In PAGE 30: ...logarithms. The first difference also performed better at forecasting. 7 As the logistic model for the Treasury Bill yield time-varying recession slope coefficient was insignificant, a variation of this model was tried by fixing this probability to be zero. The resultant SIC for this model was lower than the reported model in Table 3, but the forecast MSFE was no better. 8 In these circumstances the statistic delivers 0/0.... ..."
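The TVTP specification referenced above makes the regime transition probabilities logistic functions of a lagged leading indicator. A minimal sketch of that link function, assuming a standard logistic parameterisation (the names `theta0`, `theta1` and `z_lag` are illustrative, not the paper's notation):

```python
import math

def tvtp_logistic(theta0, theta1, z_lag):
    """Time-varying transition probability: a logistic function of a
    lagged leading indicator z_lag (illustrative parameterisation,
    not the paper's exact specification)."""
    return 1.0 / (1.0 + math.exp(-(theta0 + theta1 * z_lag)))
```

With `theta1 > 0`, a rise in the indicator raises the probability of staying in (or moving to) the associated regime, which is the mechanism the fixed-transition-probability (FTP) Hamilton model lacks.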

### Table 4. AUC and EC results for over-sampled data and unpruned decision trees.

"... In PAGE 7: ... Moreover, in previous work [16] we showed that over-sampling methods seem to perform better than under-sampling methods, resulting in classifiers with higher AUC values. Table 4 shows the AUC and EC values for two over-sampling methods proposed in the literature: Random over-sampling and Smote [7]. Random over-sampling randomly duplicates examples from the minority class, while Smote introduces artificially generated examples by interpolating two examples drawn from the minority class that lie together.... In PAGE 7: ... Random over-sampling randomly duplicates examples from the minority class, while Smote introduces artificially generated examples by interpolating two examples drawn from the minority class that lie together. Table 4 reports results regarding unpruned trees. Besides our previous comments concerning pruning and class imbalance, whether pruning can lead to a performance improvement for decision trees grown over artificially balanced data sets still seems to be an open question.... In PAGE 7: ...em would prune based on a false assumption, i.e., that the test set distribution matches the training set distribution. The results in Table 4 show that, in general, the best AUC result obtained by an unpruned over-sampled data set is similar (less than 1% difference) or higher than those obtained by pruned and unpruned trees grown over the original data sets. Moreover, unpruned over-sampled data sets also tend to produce higher EC values than pruned and unpruned trees grown over the original data sets.... In PAGE 8: ... This occurs when no Tomek links or just a few of them are found in the data sets. Table 6 shows a ranking of the AUC and EC results obtained in all experiments for unpruned decision trees, where: O indicates the original data set (Table 3), R and S stand respectively for Random and Smote over-sampling (Table 4), while S+E and S+T stand for Smote + ENN and Smote + Tomek (Table 5).
√1 indicates that the method is ranked among the best and √2 among the second best for the corresponding data set.... ..."
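The two over-sampling methods compared above can be sketched in a few lines; random over-sampling duplicates minority examples, while Smote interpolates between pairs of them. The function names and list-of-lists data layout below are our own, not from the paper:

```python
import random

def smote_interpolate(a, b, rng=random):
    """Smote-style synthetic example: interpolate between two
    minority-class examples a and b with a random gap in [0, 1)."""
    gap = rng.random()
    return [ai + gap * (bi - ai) for ai, bi in zip(a, b)]

def random_oversample(minority, k, rng=random):
    """Random over-sampling: draw k duplicates (with replacement)
    from the minority class."""
    return [list(rng.choice(minority)) for _ in range(k)]
```

Because Smote's synthetic points lie on the segments joining minority neighbours, it broadens the minority region rather than just re-weighting existing points, which is the usual explanation for its higher AUC relative to plain duplication.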

### Table 4: The table below shows the results for the first data set (DB1) using different amounts of over-sampling.

"... In PAGE 5: ... Then we average these n closest instances, take the difference between each minority example (under consideration) and the average instance, multiply this difference by a random number between 0 and 1, and add it to the original data set. Table 4 outlines our oversampling algorithm. 5 EXPERIMENTAL RESULTS We tested our method using images taken in five wavelengths: u, g, r, i and z, i.... ..."
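The averaging step described above can be sketched as follows. The excerpt is ambiguous about what exactly is "added to the original data set"; treating "example + scaled difference" as the new synthetic instance is our reading, not a confirmed detail, and the brute-force Euclidean neighbour search is likewise an assumption:

```python
import math
import random

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def oversample_by_average(minority, n, rng=random):
    """For each minority example: average its n nearest minority
    neighbours, scale the (example - average) difference by a random
    factor in [0, 1), and form a new synthetic instance from the
    example plus the scaled difference (our reading of the text)."""
    synthetic = []
    for x in minority:
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: _dist(x, m))[:n]
        avg = [sum(col) / len(neighbours) for col in zip(*neighbours)]
        diff = [xi - ai for xi, ai in zip(x, avg)]
        r = rng.random()
        synthetic.append([xi + r * di for xi, di in zip(x, diff)])
    return synthetic
```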

### Table 5. AUC and EC results for over-sampled data: Smote + ENN and Smote + Tomek links and unpruned decision trees.

"... In PAGE 8: ... In this matter, these data cleaning methods might be understood as an alternative to pruning. Table 5 shows the results of our proposed methods on the same data sets. Comparing these two methods, it can be observed that Smote + Tomek produced higher AUC values for four data sets (Sonar, Pima, German and Haberman) while Smote + ENN is better in two data sets (Bupa and Glass).... In PAGE 8: ... This occurs when no Tomek links or just a few of them are found in the data sets. Table 6 shows a ranking of the AUC and EC results obtained in all experiments for unpruned decision trees, where: O indicates the original data set (Table 3), R and S stand respectively for Random and Smote over-sampling (Table 4), while S+E and S+T stand for Smote + ENN and Smote + Tomek (Table 5). √1 indicates that the method is ranked among the best and √2 among the second best for the corresponding data set.... ..."
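The Smote + Tomek cleaning step relies on detecting Tomek links: pairs of opposite-class examples that are each other's nearest neighbour, which the cleaning pass then removes. A minimal brute-force detector, assuming Euclidean distance (the names and data layout are illustrative):

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tomek_links(X, y):
    """Return sorted index pairs (i, j) such that X[i] and X[j] have
    different labels and are mutual nearest neighbours."""
    def nearest(i):
        return min((j for j in range(len(X)) if j != i),
                   key=lambda j: _dist(X[i], X[j]))
    links = set()
    for i in range(len(X)):
        j = nearest(i)
        if y[i] != y[j] and nearest(j) == i:
            links.add((min(i, j), max(i, j)))
    return sorted(links)
```

This also explains the remark above: when the classes are well separated, few mutual opposite-class neighbours exist, so Smote + Tomek degenerates to plain Smote.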

### Table 8: Number of rules (branches) for the original and over-sampled data sets and unpruned decision trees. (Columns: Data set, Original, Rand Over, Smote, Smote+Tomek, Smote+ENN)

2004

"... In PAGE 7: ... The best results are shown in bold, and the best results obtained by an over-sampling method, not considering the results obtained in the original data sets, are highlighted with a light gray color. Figure 6 shows the results in Table 8 in graphical form, where it can be observed that over-sampled data sets usually lead to an increase in the number of induced rules if... ..."

Cited by 19

### Table 2: Performance metrics reported for the unfiltered possible binding sites with inputs sampled using random selection for under-sampling and SMOTE for over-sampling.

2006

"... In PAGE 4: ... 5.2 Results Table 2 shows that almost all F-Scores with R-S(ENN) and R-S(Tomek) are improved when compared with each corresponding classifier on samplings from R-S. The SVM with R-S(Tomek) samplings gives the overall best F-Score and CC value, and also decreases the FP-Rate compared with simple R-S samplings.... ..."
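The F-Score reported in this table combines precision and recall from the confusion counts; the excerpt does not state the exact formula, so the standard F1 definition is assumed here:

```python
def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall (standard F1; the
    paper's exact variant is not given in the excerpt)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

Because the F-Score ignores true negatives, it is a natural headline metric for imbalanced binding-site detection, where the negative class dominates.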

Cited by 2

### Table 2: Linear Models for GDP

"... In PAGE 10: ...effectively always unity. Such models were omitted from further consideration. These problems were not encountered with the exponential TVTP specification. Estimation Results First of all we briefly examine the linear model estimation and diagnostic test results presented in Table 2. With the exception of housing starts, all leading indicator models are chosen by SIC over the univariate random walk with drift specification (RW).... ..."