Results 11 - 20 of 3,752
Table 3. Statistics on Manual Interventions.
"... In PAGE 19: ... Time spent by the Automatic Processing. Manual Intervention Table 3 gives statistics on manual interventions, either for OCR correction or for re-treatment of the structure... ..."
Table 14: Allowable Manual Setpoints
"... In PAGE 15: ... Table 13: Displayed Transducers ... 96 Table 14: Allowable Manual Setpoints ... ..."
Table 14: (MEAD vs. MANUAL EXTRACTS) compared to MANUAL SUMMARIES
2003
"... In PAGE 18: ... 6 Multi-document content-based evaluation We will now present a short summary of the multi-document content-based evaluation. In Table 14 we show a comparison between the performance of both MEAD and manual extracts (in this case, 50, 100, and 200 words... ..."
Cited by 5
Table 2: Manual Run Results
"... In PAGE 3: ... All manual runs were conducted over the database indexed by lexical terms. The results are presented in Table 2. In Figure 1 we show the precision-recall curves for all the runs.... ..."
Table 4: To compare the performance with manually
1999
Cited by 44
Table 5: Manual searches with feedback
1993
"... In PAGE 8: ... 5.3 Results The official results of the manual run (Table 5) are disappointing, with average precision 0.232 (60% of topics below median), precision at 100 docs 0.... In PAGE 9: ... For the routing runs, where a considerable amount of relevance information had contributed to the term weights, the improvement is less, but still very significant (Table 4). For the manual feedback searches (Table 5) there was a small improvement when they were re-run with BM11 replacing BM15 in the final iteration. The drawback of these two models is that the theory says nothing about the estimation of the constants, or rather parameters, k1 and k2.... In PAGE 10: ... 7.3 Interactive ad-hoc searching The result of this trial was disappointing except on precision at 100 documents (Table 5), scarcely better than the official automatic ad-hoc run. On three topics it gave the best result of any of our runs, and two more were good, but the remaining 45 ranged from poor to abysmal.... ..."
Cited by 10
Tables Case Manual 1 Manual 2 Semi-automatic Automatic