### Table 1. Semi-supervised Learning with Deterministic Annealing.

2006

"... In PAGE 5: ... The parameter T is decreased in an outer loop until the total entropy falls below a threshold. Table 1 outlines the steps for the algorithm with default parameters. In the rest of this paper, we will abbreviate our method as DA (loss) where loss is l1 for hinge loss, l2 for quadratic hinge loss and sqr for squared loss.... ..."

Cited by 8
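The outer loop described in the snippet — lowering the temperature T until the total entropy of the soft labels falls below a threshold — can be sketched as below. This is a minimal illustration, not the paper's implementation: the inner optimizer `optimize_at_T`, the initial temperature, and the cooling factor are all assumed placeholders.

```python
import numpy as np

def total_entropy(p):
    """Shannon entropy of binary soft labels p, each in (0, 1)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-np.sum(p * np.log(p) + (1 - p) * np.log(1 - p)))

def deterministic_annealing(optimize_at_T, p0, T0=10.0, cooling=0.5,
                            entropy_threshold=1e-3, max_outer=50):
    """Outer annealing loop: run the inner optimization at temperature T,
    then lower T until the soft labels' total entropy drops below the
    threshold. `optimize_at_T(p, T)` is an assumed callback that returns
    updated soft labels."""
    p, T = p0, T0
    for _ in range(max_outer):
        p = optimize_at_T(p, T)
        if total_entropy(p) < entropy_threshold:
            break
        T *= cooling
    return p
```

As T shrinks, any entropy-regularized inner objective increasingly favors hard (0/1) label assignments, so the loop terminates once the labels have effectively committed.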

### TABLE 9. Results of semi-supervised active learning

2005

Cited by 6

### Table 5: Recall before and after semi-supervised learning

### Table 1. Stability based semi-supervised learning algorithm

"... In PAGE 4: ... step the sample from the unlabeled data set which presents the highest stability with respect to the classifier. Table 1 summarizes the proposed algorithm. 3 The problem With the recent appearance of Wireless Capsule Video Endoscopy (WCVE), a new field of research has opened for the study of small intestine disorders.... ..."
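The selection step quoted above — picking from the unlabeled set the sample with the highest stability with respect to the classifier — could be sketched as follows, assuming a committee-agreement notion of stability (the paper's exact stability measure may differ):

```python
import numpy as np

def select_most_stable(committee_preds):
    """Pick the unlabeled sample whose predicted label is most stable
    across a committee of classifiers. Assumed stability measure:
    the fraction of committee members agreeing with the majority label.
    committee_preds: int array of shape (n_classifiers, n_samples)."""
    preds = np.asarray(committee_preds)
    n_clf = preds.shape[0]
    stability = np.array([
        np.bincount(col).max() / n_clf for col in preds.T
    ])
    return int(np.argmax(stability)), stability
```

A fully stable sample (all committee members agree) scores 1.0 and is selected first, matching the intuition that its pseudo-label can be added to the training set with the least risk.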

### Table 5: Semi-supervised KPCA is not necessary when supervised KPCA is very confident.

"... In PAGE 7: ... The predictions of the semi-supervised model are used for the remaining 6% of the test instances. Table 5 shows that it is not necessary to use the semi-supervised training model for all the training instances. In fact, when the supervised model is confident, its predictions are significantly more accurate than those of the semi-supervised model alone.... ..."

Cited by 2
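The gating described in the snippet — trusting the supervised model's prediction when it is confident and falling back to the semi-supervised model otherwise — might look like the sketch below. The probability threshold and both function arguments are assumptions for illustration, not the paper's interface.

```python
import numpy as np

def gated_predict(sup_probs, semi_preds, threshold=0.9):
    """Use the supervised model's prediction wherever its maximum
    class probability reaches `threshold` (assumed confidence knob);
    otherwise fall back to the semi-supervised model's prediction.
    sup_probs: (n_samples, n_classes) class probabilities.
    semi_preds: (n_samples,) fallback hard labels."""
    sup_probs = np.asarray(sup_probs)
    sup_preds = sup_probs.argmax(axis=1)
    confident = sup_probs.max(axis=1) >= threshold
    return np.where(confident, sup_preds, np.asarray(semi_preds))
```

In the snippet's setting, the fallback branch would fire for roughly the 6% of test instances where the supervised model is not confident.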

### TABLE 5. Results of semi-supervised non-active learning

2005

Cited by 6