Results 1 - 10 of 9,886
Table 2 summarizes the procedure. First, the number of redundant negatives is reduced by creating the subset C, consistent with the training set. Then, the system removes those negative examples that participate in Tomek links. In this way, the noisy and borderline examples are discarded, which leads to the new training set, T.
1997
"... In PAGE 4: ... Figure 4: The training set after the removal of redundant negative examples. Table 2: Algorithm for the one-sided selection of examples. 1.... ..."
Cited by 63
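The one-sided selection step described in this entry (dropping majority-class members of Tomek links) can be sketched as follows. This is a minimal illustration, not the paper's code; the function name and the brute-force nearest-neighbour search are my own choices.

```python
import numpy as np

def tomek_links_undersample(X, y, minority=1):
    """Drop majority-class points that participate in Tomek links.

    Two points form a Tomek link when each is the other's nearest
    neighbour and they carry different labels; removing the majority
    member discards noisy/borderline negatives.  (Hypothetical helper,
    illustrating the idea only.)
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # Pairwise Euclidean distances; inf on the diagonal so a point
    # is never its own nearest neighbour.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)
    drop = set()
    for i, j in enumerate(nn):
        if nn[j] == i and y[i] != y[j]:   # mutual NNs, opposite labels
            # remove the majority-class member of the link
            drop.add(i if y[i] != minority else j)
    keep = [k for k in range(len(y)) if k not in drop]
    return X[keep], y[keep]
```

In a one-sided selection pipeline this step would run after the condensed subset C is built, leaving the cleaned training set T.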
Table 5 Gist classification with and without location information from the phone; the numerical score is the number of links to the gist. By biasing the classifier with the fact that the subject is in a restaurant, it becomes much easier to infer the topic of conversation from noisy transcripts.
2005
Cited by 16
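The location-biasing idea in this entry amounts to reweighting a topic classifier's scores with a prior derived from the phone's location. A minimal sketch, assuming a simple Bayes-style combination (all names and numbers here are hypothetical, not the paper's system):

```python
def posterior(scores, priors):
    """Combine per-topic likelihood scores with location-derived
    prior weights and renormalize to a probability distribution.
    (Hypothetical helper: `scores` and `priors` map topic -> weight;
    topics absent from `priors` keep weight 1.0.)"""
    joint = {t: scores[t] * priors.get(t, 1.0) for t in scores}
    z = sum(joint.values())
    return {t: v / z for t, v in joint.items()}

# Knowing the subject is in a restaurant boosts food-related topics,
# so a noisy transcript that weakly favours "sports" can still be
# classified as "food".
p = posterior({"food": 0.4, "sports": 0.6}, {"food": 3.0})
```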
Table 2: Performance of networks on the noisy nonlinear test data.
1996
"... In PAGE 9: ... There were no direct input-to-output links, and each network was trained for 1000 epochs before finishing. Table 2 shows the average testing prediction errors (over ten trials) for the five individual networks when trained with Equation (1), and when trained with the added penalties given in Equation (7) with constant. Each ensemble network was composed of two to five individual networks.... ..."
Cited by 66
TABLE XI NOISY 102405141
TABLE XIII NOISY 102405341
Table 4. Retrieval on Noisy Data
"... In PAGE 8: ... In contrast, when we applied the conventional retrieval approach based on direct matching without using the FFT to derive the feature vectors, very low values of precision and recall were obtained, as shown in Table 3 (b). Table 4 presents the results for noisy data. 2 bytes are randomly chosen in each original 16-byte sequence, and they are replaced by random numbers.... ..."
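The noisy-data setup described in this snippet (2 of every 16 bytes replaced by random values) can be sketched as below. This is an illustration of the corruption scheme only; the function name and parameters are my own, not the paper's.

```python
import random

def corrupt(data: bytes, block: int = 16, n_bytes: int = 2, seed: int = 0) -> bytes:
    """Randomly replace `n_bytes` positions in every `block`-byte chunk
    with random byte values, mimicking the noisy test data described
    in the snippet.  (Hypothetical helper; `seed` fixes the RNG for
    reproducibility.)"""
    rng = random.Random(seed)
    out = bytearray(data)
    for start in range(0, len(out) - block + 1, block):
        # pick 2 distinct positions in this 16-byte block
        for pos in rng.sample(range(block), n_bytes):
            out[start + pos] = rng.randrange(256)
    return bytes(out)
```

Retrieval precision and recall would then be measured against these corrupted sequences rather than the originals.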
TABLE IX Noisy f(1).
1997
Cited by 25
TABLE XI Noisy f(3).
1997
Cited by 25
TABLE XIII Noisy f(5).
1997
Cited by 25