### Table 1. The quality of resource classification. In our experience, the probability P of accurate classification differs across categories. We sample some popular categories of CDAL to present the accuracy P for each.

### Table 4. Classification method, fusion method, precision, and recall scores of the three most accurate FMCMs. A one represents inclusion of a classification method; a zero, exclusion.

### Table II shows that, using the reduced feature set found by mutual information, one can obtain results as good as with all 14 features when employing the KNN, MLP, and SVM. It also suggests that the MLP and SVM provide more accurate classification than the LDA, KNN, and GMM classifiers. Interestingly, one can work on the fuzzy dataset alone and still achieve comparable results. The values given in the first two rows of Table II are the accuracy rates for just the fuzzy gaps. Since the rest of the original dataset has already been classified with 99% accuracy, the classification rate for the whole dataset achieved by this quicker method can be calculated. For instance, when processing a dataset containing 3361 fuzzy gaps among all 7462 gaps, the SVM classifier gives a full classification rate for the whole dataset as follows:
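The combination described above is a count-weighted average of the two subsets' accuracies. A minimal sketch of that calculation follows; the counts (3361 fuzzy gaps of 7462 total) and the 99% rate on the remainder come from the text, while `acc_fuzzy`, the SVM's accuracy on the fuzzy gaps alone, is a placeholder since the excerpt does not reproduce its value from Table II.

```python
def overall_accuracy(acc_fuzzy, n_fuzzy=3361, n_total=7462, acc_rest=0.99):
    """Accuracy on the whole dataset given accuracies on its two disjoint parts.

    acc_fuzzy is the rate on the fuzzy gaps (hypothetical here); acc_rest is
    the 99% rate already achieved on the remaining, non-fuzzy gaps.
    """
    n_rest = n_total - n_fuzzy
    return (n_fuzzy * acc_fuzzy + n_rest * acc_rest) / n_total

# e.g., if the SVM classified 95% of the fuzzy gaps correctly (illustrative value):
print(overall_accuracy(0.95))
```

The weighting by subset size is what lets a result on only the fuzzy gaps be extrapolated to a full-dataset rate.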

### Table 2. Classifier Accuracy. Classifiers with a higher number of classifications recognize more unique component instances. Those with a higher average correlation are more accurate.

1999

"... In PAGE 8: ... Because all component instances should correlate closely to prior scenarios, no new instance classifications should result from the bigone scenario. Table 2 lists the number of classifications identified by each classifier, the number of new classifications identified in the bigone scenario, the average number of instances per classification, and the average correlation between instance behavior and chosen profile for the Octarine bigone scenario. Table 3 lists the same values for the IFCB classifier with limited depth stack walks.... In PAGE 8: ... Table 3 lists the same values for the IFCB classifier with limited depth stack walks. (The called-by classifiers in Table 2 walk the complete stack.) Instance Classifier | Profiled Classifications | New (bigone) Classifications | Ave.... ..."

Cited by 77


### Table 1: Character string classification performance as a function of system parameters

1997

"... In PAGE 3: ... The system was tested on 300 held-out strings. The classification performance is displayed in Table 1. It became evident that the context length plays a crucial role in accurate classification and is somewhat related to the length of the input string.... ..."

Cited by 3

### Table 1: Comparison of classification accuracies in percent.

"... In PAGE 10: ... Therefore, we compare to naive Bayes, a J48 decision tree and a support vector machine using a polynomial or a linear kernel function. Table 1 displays the achieved classification accuracy in percent for 10-fold cross validation. The results demonstrate that PAG offered highly accurate classification on all 14 data sets.... ..."
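The 10-fold cross-validation protocol mentioned above can be sketched in a few lines: shuffle the data, split it into ten folds, train on nine and score on the held-out tenth, and average. The `train_fn`/`predict_fn` callables below are stand-ins for whichever classifier is being compared (naive Bayes, J48, SVM, ...), since the excerpt does not specify an implementation.

```python
import random

def k_fold_accuracy(data, train_fn, predict_fn, k=10, seed=0):
    """Estimate classification accuracy (in percent) by k-fold cross-validation.

    data: list of (features, label) pairs.
    train_fn(train_examples) -> model; predict_fn(model, features) -> label.
    """
    data = list(data)
    random.Random(seed).shuffle(data)          # fixed seed for reproducibility
    folds = [data[i::k] for i in range(k)]     # k roughly equal folds
    correct = total = 0
    for i in range(k):
        test = folds[i]
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        model = train_fn(train)
        for x, y in test:
            correct += predict_fn(model, x) == y
            total += 1
    return 100.0 * correct / total             # percent, as reported in Table 1
```

Every example is scored exactly once as a held-out point, which is what makes the averaged rate an honest estimate of generalization accuracy.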


### Table 9: Classification Results

in Netpix: A Method of Feature Selection Leading to Accurate Sentiment-Based Classification Models

2005

"... In PAGE 37: ...igrams from the data set and achieves 95.50% classification accuracy using Bayes Net. The fourth experiment produces the most accurate results achieved in this thesis. The specific results from the four experiments described in this thesis (Steck) are summarized in Table 9.... ..."

### Table 3: Cross-validated classification success rates using antedependence models of order 1 (AD1) and 2 (AD2) for an infant at different ages. Rates show how accurately the categories of ASLEEP and AWAKE were classified as well as the overall classification success rate. Cross-validation was performed with the leave-one-out method of Lachenbruch & Mickey (1968).

1999

"... In PAGE 9: ... In this case, the NWPT representation results in C3 BP BDBCBEBE variables. Table 3 shows the leave-one-out cross-validated success rates for the infant at different stages of development and suggests that this approach improves on the classification rates obtained earlier with the naive method. It also suggests that better classification may be possible with the older infant; a view that concurs with the one drawn using the naive method.... ..."

Cited by 2
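The leave-one-out method of Lachenbruch & Mickey cited above refits the classifier n times, each time holding out a single observation and predicting its label. A minimal sketch follows; `train_fn` and `predict_fn` are placeholders for the antedependence-model classifier, which the excerpt does not spell out, and the per-category rates mirror the ASLEEP/AWAKE and overall columns of Table 3.

```python
def leave_one_out_rates(data, train_fn, predict_fn):
    """Leave-one-out cross-validated success rates.

    data: list of (features, label) pairs.
    Returns (per_class_rates, overall_rate), each on a 0..1 scale.
    """
    hits, counts = {}, {}
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]          # hold out observation i
        model = train_fn(rest)                   # refit on the remaining n-1
        counts[y] = counts.get(y, 0) + 1
        hits[y] = hits.get(y, 0) + (predict_fn(model, x) == y)
    per_class = {y: hits[y] / counts[y] for y in counts}
    overall = sum(hits.values()) / len(data)
    return per_class, overall
```

Refitting n times is affordable for the small per-infant datasets described here, and it uses every observation as a test point exactly once.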