Table 2. Non-interpolated average precision values for different text-to-text retrieval methods targeting the 10GB and 100GB collections.

in Evaluating Speech-Driven IR in the NTCIR-3 Web Retrieval Task
by Atsushi Fujii, Katunobu Itou
"... In PAGE 5: ... As a result, for each of the above four relevance assessment types, we investigated non-interpolated av- erage precision values of four different methods, as shown in Table 2. By looking at Table2 , there was no significant difference among the four methods in performance. However, by comparing two indexing methods, the use of both words and bi-words generally improved the performance of that obtained with only words, ir- respective of the collection size, topic field used, and relevance assessment type.... In PAGE 5: ... In addition, we used both bi-words and words for indexing, because experimen- tal results in Section 4.1 showed that the use of bi- words for indexing improved the performance of that obtained with only words (see Table2 for details). In cases of speech-driven text retrieval methods, queries dictated by the ten speakers were used inde- pendently, and the final result was obtained by aver- aging results for all the speakers.... ..."
Cited by 1
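The excerpt above reports that indexing on both words and bi-words (adjacent word pairs) outperformed word-only indexing. Purely as an illustrative sketch of what bi-word index terms look like, not the authors' implementation, and assuming naive whitespace tokenization with underscore-joined pair terms:

# Minimal sketch of word + bi-word index-term extraction (illustrative only;
# tokenization and term naming are assumptions, not the paper's code).
def index_terms(text):
    words = text.lower().split()                              # naive whitespace tokenization
    biwords = [f"{a}_{b}" for a, b in zip(words, words[1:])]  # adjacent word pairs
    return words + biwords                                    # index on both term types

print(index_terms("speech driven text retrieval"))
# ['speech', 'driven', 'text', 'retrieval', 'speech_driven', 'driven_text', 'text_retrieval']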

Table 4 Novel or unexpected PCR products of microsatellite loci

in The Genetic Structure of Recombinant Inbred Mice: High-Resolution Consensus Maps for Complex Trait Analysis
by Robert W. Williams, Jing Gu, Shuhua Qi, Lu Lu

Table 1. In Table 2, comparative results with a gate-level sequential test pattern generator HITEC [1], a genetic test pattern generator GATEST [2] and a novel hierarchical test pattern generation approach published in [3] are given. The comparison is carried out on the example of the GCD circuit, which is the only circuit common with the experiments in [3]. As we can see from the table, the proposed DD-based technique outperforms the other test generation tools at any level. It achieves a higher fault coverage in a much shorter time and generates fewer test sequences than [3]. The number of test sequences for [1] and [2] is not known.

in A Decision Diagram based Hierarchical Test Pattern Generator
by G. Jervan, A. Markus, J. Raik, R. Ubar
"... In PAGE 4: ... Both of the circuits were tested in less than 20 seconds. Table1 presents the experimental results, which were run on a 233 MHz Pentium II computer with 64 MB RAM under Windows 95 operating system. circuit gcd mult8x8 ... ..."

Table 10: Bi-gram frequency table B3, our own table generated from a corpus ten times the size of that used by B1 and B2, consisting of a mixture of formal and informal English (email and classic novels). Stop-lists were used on common proper nouns. See [MS99] for techniques for sampling data from text corpora.

in Database Group
by Dominic Hughes, James Warren, Orkut Buyukkokten
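As a rough, hypothetical sketch of building such a bi-gram frequency table with a stop-list applied, where the corpus, stop-list entries, and function names below are placeholders rather than those actually used for B3:

from collections import Counter

# Illustrative only: counting bi-grams from tokenized text while dropping
# pairs that touch a stop-listed proper noun. Stop-list contents are invented.
stoplist = {"london", "elizabeth"}

def bigram_counts(tokens):
    pairs = zip(tokens, tokens[1:])
    return Counter(p for p in pairs
                   if p[0] not in stoplist and p[1] not in stoplist)

tokens = "it is a truth universally acknowledged".lower().split()
print(bigram_counts(tokens).most_common(3))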

Table 1. Examples of novel elements identified by PILER searches

in PILER: identification and classification of
by Robert C. Edgar, Eugene W. Myers 2005
"... In PAGE 5: ...us repeats unmasked (Fig. 4). There is some circularity in this measure of success (because PALS generates the input to PILER), and in general the number of masked bases is a ques- tionable measure of a repeat library as functional elements such as paralogs may be false-positive masked. However, we believe that in this case, the observed increase in masking is a strong indication of improved quality [see entries (1) and (2) of Table1 , for examples of novel elements found in this analysis]. 3.... ..."

Table 2. Results for different retrieval methods (AP: average precision, WER: word error rate, TER: term error rate).

in Speech-Driven Text Retrieval: Using Target IR Collections for Statistical Language Model Adaptation in Speech Recognition
by Atsushi Fujii, Katunobu Itou, Tetsuya Ishikawa 2002
"... In PAGE 7: ... The results did not significantly change depending on whether or not we used lower-ranked transcriptions as queries. Table2 shows the non-interpolated average precision values and word error rate in speech recognition, for different retrieval methods. As with existing ex- periments for speech recognition, word error rate (WER) is the ratio between the number of word errors (i.... In PAGE 7: ...o query terms (i.e., keywords used for retrieval), which we shall call term error rate (TER). In Table2 , the first line denotes results of the text-to-text retrieval, which were relatively high compared with existing results reported in the NTCIR work- shops [11, 12]. The remaining lines denote results of speech-driven text retrieval combined with the NTCIR-based language model (lines 2-5) and the newspaper-based model (lines 6-9), respectively.... In PAGE 8: ... Figures 3 and 4 show recall-precision curves of different retrieval methods, for the NTCIR- 1 and 2 collections, respectively. In these figures, the relative superiority for precision values due to different language models in Table2 was also observable, regardless of the recall. However, the effectiveness of the on-line adaptation remains an open question and needs to be explored.... ..."
Cited by 6
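The excerpt defines word error rate (WER) as the ratio of word errors to the words in the reference transcription, with term error rate (TER) computed analogously over query terms only. The excerpt's definition is truncated, so the standard word-level edit-distance formulation is assumed in this minimal, illustrative sketch (not the authors' evaluation code):

# Hedged sketch: WER as word-level edit distance divided by reference length.
# TER would apply the same ratio restricted to query terms.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("speech driven text retrieval", "speech given text retrieval"))  # 0.25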

Table 4. Initial profile generation technique of the systems

in A Taxonomy of Recommender Agents on the Internet
by Miquel Montaner, Beatriz López, Josep Lluís de la Rosa
"... In PAGE 10: ... The degree of automation in the acquisition of user profiles can range from manual input, to semi-automatic procedures (stereotyping and training sets), to the automatic recognition by the agents themselves. Table4 shows the initial profile generation techniques used by the different systems analyzed. 3.... ..."

Table 2. Stability of Best Solutions Results

As a general solution to this, our recent work centres on extending the distributed GA in such a way as to avoid potentially unstable solutions. The method employed here is novel, and exceptionally simple (compared with techniques such as cluster analysis, etc.), and as such is worth considering for a range of other manufacturing stability problems. On evaluating a given solution during the running of the GA, we employ the following technique (a minimal sketch of this step appears after this entry):

- Generate n 'manufactured' versions of the design, complete with small tolerances as outlined above.
- Cost each of these separately, and of these evaluations consider the worst to be the cost of the solution being tested.

The DGA employing this strategy is referred to as a Reevaluating DGA. The re-evaluating idea is based on techniques used in evolutionary robotics [2]. We find that small values of n are suffi-

in A Comparison of Search Techniques on a Wing-Box Optimisation Problem
by Malcolm McIlhagga, Phil Husbands, Robert Ives 1996
Cited by 15
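The re-evaluation step described in this entry can be summarised as: perturb the design n times within the stated tolerances, cost each perturbed copy, and score the solution by its worst case. A minimal sketch under assumed placeholders; the cost function and tolerance model here are invented for illustration and are not the paper's wing-box formulation:

import random

# Sketch of the "re-evaluating" fitness idea: evaluate n perturbed
# ("manufactured") copies of a design and keep the worst (pessimistic) cost.
def reevaluated_cost(design, cost, n=5, tolerance=0.01):
    worst = float("-inf")
    for _ in range(n):
        perturbed = [x * (1 + random.uniform(-tolerance, tolerance)) for x in design]
        worst = max(worst, cost(perturbed))   # worst cost becomes the fitness
    return worst

# Example with a toy cost function (sum of squares), stand-in for the real evaluator.
print(reevaluated_cost([1.0, 2.0, 3.0], cost=lambda d: sum(x * x for x in d)))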

Table 6. Number of clusters generated by agglomerative techniques

in A user-centered approach to evaluating topic models
by Diane Kelly, O Diaz, Nicholas J. Belkin, James Allan 2004
Cited by 2