
CiteSeerX
Results 11 - 20 of 3,752

Table 3. Statistics on Manual Interventions.

in Retrospective Document Conversion - Application to the Library Domain
by A. Belaïd
"... In PAGE 19: ... Time spent by the Automatic Processing. Manual Intervention. Table 3 gives statistics on manual interventions either for OCR correction or for re-treatment of the structure... ..."

Table 14: Allowable Manual Setpoints

in Analysis and Design of the NASA Langley Cryogenic Pressure Box
by NASA STI, David E. Glass, Jonathan C. Stevens, R. Frank Vause, Peter M. Winn, James F. Maguire, Glenn C. Driscoll, Charles L. Blackburn, Brian H. Mason
"... In PAGE 15: ... Table 13: Displayed Transducers ... Table 14: Allowable Manual Setpoints ... ..."

Table 14: (MEAD vs. MANUAL EXTRACTS) compared to MANUAL SUMMARIES

in Single-document and multi-document summary evaluation via relative utility
by Dragomir R. Radev, Daniel Tam 2003
"... In PAGE 18: ... Multi-document content-based evaluation. We will now present a short summary of the multi-document content-based evaluation. In Table 14 we show a comparison between the performance of both MEAD and manual extracts (in this case, 50, 100, and 200 words... ..."
Cited by 5

Table 2: Manual Run Results

in Experiments on Chinese Text Indexing -- CLARIT TREC-5 Chinese Track Report
by Xiang Tong, Chengxiang Zhai, David A. Evans
"... In PAGE 3: ... All manual runs were conducted over the database indexed by lexical terms. The results are presented in Table 2. In Figure 1 we show the precision-recall curves for all the runs.... ..."

Table 4: To compare the performance with manually

in Automatic Identification of Non-compositional Phrases
by Dekang Lin 1999
Cited by 44

Table 5: Manual searches with feedback

in Okapi at TREC-2
by S.E. Robertson, S. Walker, S. Jones, M.M. Hancock-Beaulieu, M. Gatford 1993
"... In PAGE 8: ... 5.3 Results. The official results of the manual run (Table 5) are disappointing, with average precision 0.232 (60% of topics below median), precision at 100 docs 0.... In PAGE 9: ... For the routing runs, where a considerable amount of relevance information had contributed to the term weights, the improvement is less, but still very significant (Table 4). For the manual feedback searches (Table 5) there was a small improvement when they were re-run with BM11 replacing BM15 in the final iteration. The drawback of these two models is that the theory says nothing about the estimation of the constants, or rather parameters, k1 and k2.... In PAGE 10: ... 7.3 Interactive ad-hoc searching. The result of this trial was disappointing except on precision at 100 documents (Table 5), scarcely better than the official automatic ad-hoc run. On three topics it gave the best result of any of our runs, and two more were good, but the remaining 45 ranged from poor to abysmal.... ..."
Cited by 10

Tables: Case | Manual 1 | Manual 2 | Semi-automatic | Automatic

in Automatic Brain Tumor Segmentation: Rationale and Objectives
by Guido Gerig, Marcel Prastawa

Table 3: Comparison with a manual

in Automatic Incremental State Saving
by Darrin West, Kiran Panesar 1996
Cited by 19

Table 4: An example of the manual alignment

in Opportunistic Semantic Tagging
by Luisa Bentivogli, Emanuele Pianta 2002
Cited by 3

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University