### Table 2. Time complexity of SVMs (h)

2005

"... In PAGE 6: ... 5.2 Results on Complexity of SVMs To better understand the time complexity of SVMs, we logged their run-times, which are given in Table 2. Note that the testing time in this table is the total time for classifying all of the instances in the testing set.... In PAGE 6: ... Based on Table 2, we come to the following conclusions: 1) SCut dominates computations (about 80%) in the training process of both flat and hierarchical SVMs. 2) Flat SVMs cannot be used in very large-scale real-world applications due to their high computational complexity.... In PAGE 6: ... Only 0.0016 s was needed for classifying each test instance. This result is very interesting: the classification of a dataset as large as the Yahoo! Directory is not as difficult as we previously imagined, if an appropriate method is used. Furthermore, with Table 2, we can validate the formulas theoretically derived in Section 4. For this purpose, we treat the complexities of flat SVMs as references: 310.... ..."

Cited by 6
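The per-instance figure quoted above (0.0016 s) is just the total testing time divided by the number of test instances. A minimal sketch of that bookkeeping — the input numbers below are hypothetical, not taken from Table 2:

```python
def per_instance_time(total_test_seconds, n_test_instances):
    """Average classification time per test instance."""
    return total_test_seconds / n_test_instances

# Hypothetical totals for illustration only:
print(per_instance_time(1280.0, 800000))  # 0.0016
```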

### Table 3. Performance of SVMs

2004

"... In PAGE 6: ... separate normal and attack patterns. We repeat this process for all classes. Training is done using the RBF (radial basis function) kernel option; an important point of the kernel function is that it defines the feature space in which the training set examples will be classified. Table 3 summarizes the results of the experiments using SVMs.... ..."

Cited by 3
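The excerpt's point that "the kernel function defines the feature space" can be made concrete: the RBF kernel scores similarity as K(x, y) = exp(−γ‖x − y‖²), so nearby points get values near 1 and distant points decay toward 0. A minimal stdlib sketch (the γ value is an illustrative assumption, not the paper's setting):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """RBF kernel: exp(-gamma * squared Euclidean distance). gamma is illustrative."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Identical points have similarity 1; distant points decay toward 0.
print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))           # 1.0
print(rbf_kernel([0.0, 0.0], [3.0, 4.0]) < 1e-5)    # True (squared distance 25)
```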

### Table 9: Results on Modified Data Sets (probe binary; SVMs, RIPPER, PNrule, COG-OS)

"... In PAGE 8: ... The Results. Table 9 shows the classification results by various methods on the probe binary data set. As can be seen, COG-OS performs much better than pure SVM and RIPPER on predicting the rare class as well as the large class, while PNrule shows a slightly higher F-measure on the rare class.... In PAGE 8: ... For the data set r2l binary, however, COG-OS shows overwhelming advantages among all classifiers. As indicated in Table 9, the F-measure value of the rare class by COG-OS is 0.496, far higher than those produced by the other classifiers.... ..."
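The F-measure values quoted in the excerpt (e.g. 0.496 for the rare class) are the harmonic mean of precision and recall, which is why they reward classifiers that do well on both sides of a rare class. A minimal sketch — the counts below are hypothetical, not from Table 9:

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical rare-class confusion counts for illustration:
print(round(f_measure(tp=40, fp=30, fn=50), 3))  # 0.5
```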

### Table 6 Performance of SVMs

2004

"... In PAGE 10: ... .2. Experiments using SVMs as a classifier. We used the radial basis function (RBF) kernel function that defines the feature space in which the training set examples will be classified. Table 6 summarizes the results of the experiments (S. Mukkamala et al.).... ..."

### Table 1. Means Biases and Standard Deviations of OLS, FMOLS, and DOLS Estimators

1998

"... In PAGE 15: ... Four lags and two leads were used for the DOLS estimator. Table 1 reports the Monte Carlo means and standard deviations (in parentheses) of (β_OLS − β), (β_FM − β), and (β_D − β) for sample sizes T = N = (20, 40, 60). The biases of the OLS estimator, β_OLS, decrease at a rate of T.... In PAGE 28: ... On the other hand, β_FM is more biased than β_OLS when σ21 > 0. In contrast, the results in Table 1 show that the DOLS, β_D, is distinctly superior to the OLS and FMOLS estimators for all cases in terms of the mean biases. It was noticeable that the FMOLS leads to a significant bias.... In PAGE 38: ... However, it goes beyond the scope of this chapter. From Table 10, we note that the DOLS t-statistics tend to have heavier tails than predicted by the asymptotic distribution theory, though the bias of the DOLS t-statistic is much lower than those of the OLS and FMOLS t-statistics. It appears that the DOLS still is the best estimator overall in a heterogeneous panel.... ..."

Cited by 6
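The "Monte Carlo mean bias" reported in the table is the average of (β̂ − β) over many simulated samples. A hedged stdlib sketch of that procedure for plain OLS in a simple exogenous-regressor model — this is not the chapter's cointegrated-panel data-generating process, under which the nonzero biases in Table 1 arise:

```python
import random

def ols_slope(xs, ys):
    """OLS slope for a single-regressor model with intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def mean_bias(beta=1.0, T=40, reps=200, seed=0):
    """Monte Carlo mean of (beta_hat - beta) over `reps` simulated samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xs = [rng.gauss(0, 1) for _ in range(T)]
        ys = [beta * x + rng.gauss(0, 1) for x in xs]
        total += ols_slope(xs, ys) - beta
    return total / reps

print(abs(mean_bias()) < 0.1)  # True: with exogenous regressors OLS is unbiased
```

With endogenous regressors in a cointegrated panel, the same experiment yields the systematic OLS and FMOLS biases the excerpt describes, which DOLS's leads and lags are designed to remove.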

### Table 3. Means Biases and Standard Deviations of t-statistics

1998

"... In PAGE 29: ... In Figures 3, 5 and 7, the biases of the OLS and FMOLS were reduced as T increases, but the DOLS still dominates the OLS and FMOLS. Monte Carlo means and standard deviations of the t-statistic, t_{β=β0}, are given in Table 3. Here, the OLS t-statistic is the conventional t-statistic as printed by standard statistical packages, and the FMOLS and DOLS t-statistics.... ..."

Cited by 6

### Table 1. Distance moduli calculated from BVRI PL relations calibrated with Hipparcos parallaxes of Galactic Cepheids. The level of the bias is noted as NB (not biased), SB (slightly biased), and B (biased). The last column gives the distance moduli tentatively corrected for the bias.


"... In PAGE 2: ... We thus obtain: a1 = 0.67 ± 0.05, r0 = −1.40 ± 0.13, r1 = 1.04 ± 0.05 and c0 = 0.58 ± 0.02. The distance moduli for the 17 galaxies of our compilation are given in Table 1. These distances may still be subject to revision for two reasons: (i) biased galaxies may have larger distances (but the correction for the bias is not obvious); (ii) the Lutz-Kelker (1973) bias may change the value of (preliminary tests suggest that distance moduli could be slightly... In PAGE 4: ... Further, the change of maximum and minimum log P with distance is clearly visible (dotted lines). In order to take into account the effect of the bias, we tentatively estimated the correction to be applied to the mean distance moduli given in Table 1. For Figure 8.... ..."
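A distance modulus from a period–luminosity (PL) relation follows two steps: the PL relation gives the absolute magnitude M = a·log₁₀(P) + b, and the modulus is μ = m − M, with distance in parsecs 10^(μ/5 + 1). A minimal sketch — the coefficients and inputs below are illustrative assumptions, not the paper's BVRI calibrations:

```python
import math

def absolute_magnitude(period_days, a=-2.81, b=-1.43):
    """PL relation M = a*log10(P) + b (illustrative V-band-style coefficients)."""
    return a * math.log10(period_days) + b

def distance_modulus(apparent_mag, period_days):
    """mu = m - M; distance in parsecs is then 10**(mu/5 + 1)."""
    return apparent_mag - absolute_magnitude(period_days)

# Hypothetical Cepheid: mean apparent magnitude 14.0, period 10 days.
mu = distance_modulus(apparent_mag=14.0, period_days=10.0)
print(round(mu, 2))  # 18.24  (M = -2.81 - 1.43 = -4.24)
print(10 ** (mu / 5 + 1))  # distance in parsecs
```

The magnitude-limited-sample bias discussed in the excerpt shifts ⟨m⟩ for the faintest Cepheids, which is why the corrected moduli in the last column differ from the raw ones.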

### Table 5. Means Biases and Standard Deviations of OLS, FMOLS, and DOLS Estimators

1998

Cited by 6

### Table 6. Means Biases and Standard Deviations of t-statistics

1998

Cited by 6

### Table 7. Means Biases and Standard Deviations of OLS, FMOLS, and DOLS Estimators

1998

Cited by 6