### Table A.7. Description of the classes constituting the first multi-label subset of Reuters-21578

### Table 8: Description of the classes constituting the first multi-label subset of Reuters-21578

"... In PAGE 17: ... Thus, once again, the difficulty of the classification problem increases with k. A description of the dataset is given in Table 8. In all of the experiments with this data, we used three-fold cross validation.... In PAGE 17: ... We ran the versions with real-valued predictions for 10,000 rounds and the discrete versions for 40,000 rounds. A summary of the results, averaged over the three folds, is given in Table 8 and Figure 7. The results for this multi-label dataset are similar to those for the previous single-label datasets.... ..."
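The three-fold cross validation mentioned in the snippet above can be sketched as follows; the round-robin fold assignment is an illustrative assumption, not the paper's actual split of Reuters-21578:

```python
def three_fold_splits(n_examples, n_folds=3):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation.

    Fold assignment is round-robin (an assumption for illustration);
    each example appears in exactly one test fold, so results can be
    averaged over the folds as in the experiments described above.
    """
    folds = [list(range(start, n_examples, n_folds)) for start in range(n_folds)]
    for held_out in range(n_folds):
        test = folds[held_out]
        train = [i for f, fold in enumerate(folds) if f != held_out for i in fold]
        yield train, test
```

Each of the three folds serves once as the test set while the other two are used for training, and the reported numbers are averages over the three runs.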

### Table 4. Accuracy of Model-s, Model-i and Model-x on both single-label and multi-label test cases. For the multi-label case, we use the T-criterion. See text for caveats in comparing accuracy in single- to multi-label cases.

"... In PAGE 8: ... T-Criterion. We see that the precision and recall are slightly higher for Model-x in general. We can see that Model-x outperforms the other models in a multi-label classification task. Table 4 shows that for the single-label classification task (where test examples are labeled with the single most obvious class), Model-x also outperforms the other models using the T-Criterion. This is expected because Model-x has a richer training set with more exemplars per class.... ..."

BoosTexter: A Boosting-based System for Text Categorization. Machine Learning, 39(2/3):135-168, 2000.

### Table 11: Results for the second multi-labeled subset of Reuters-21578 (Table 10)

"... In PAGE 19: ... Again, we ran the real-valued version for 10,000 rounds and the discrete version for 40,000. A summary of the results is given in Table 11 and Figure 8. Here again we see comparable performance of the different boosting algorithms.... ..."

### Table 5: Accuracy of Model-s, Model-i and Model-x on both single-label and multi-label test cases. For the multi-label case, we use the T-criterion. See text for caveats in comparing accuracy in single- to multi-label cases.

"... In PAGE 19: ... However, the changes are not substantial. Table 5 shows that for the single-label classification task (where test examples are labeled with the single most obvious class), Model-x also outperforms the other models using the T-Criterion. This is expected because Model-x has a richer training set with more exemplars per class.... ..."

### Table A.9. Description of the classes constituting the second multi-label subset of Reuters-21578

### Table 2: Training/testing time of the binary and pairwise approaches for multi-label problems

"... In PAGE 4: ... layer. As they both point to 23, in the third layer we consider only one node. Under the same assumption on the ratio of support vectors, the total testing time depends on the number of nodes used. Table 2 summarizes the training/testing complexity of both approaches. The main conclusion is that the pairwise approach has complexity related to the average number of labels per sample.... ..."
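The binary-versus-pairwise trade-off summarized in Table 2 comes down to how many base classifiers each decomposition trains: one-vs-rest builds one binary classifier per class, while the pairwise approach builds one per unordered pair of classes. A minimal sketch of the counts (function names are my own, not from the paper):

```python
def one_vs_rest_count(n_classes):
    # One binary classifier per class, each trained on all examples.
    return n_classes

def pairwise_count(n_classes):
    # One binary classifier per unordered pair of classes. Each is
    # trained only on examples of the two classes involved, which is
    # why pairwise training can remain cheap despite the larger count.
    return n_classes * (n_classes - 1) // 2
```

For 10 classes this gives 10 one-vs-rest classifiers versus 45 pairwise ones; the snippet's conclusion is that, despite the quadratic classifier count, the pairwise approach's overall complexity tracks the average number of labels per sample.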