Results 1 - 10 of 22,627
Table 2: The similarity between the initial query vector and the expanded query vectors (TREC D1 & D2; averages over 50 queries)
1996
"... In PAGE 7: ...ne, i.e. 0 ≤ Sim(q_i, q_j) ≤ 1. The similarity is 1 for identical query vectors. Table 2 shows the similarities between the initial query vector and the expanded query vectors, and Table 3 shows the similarities between the expanded query vectors. We can easily see that different relevance feedback methods generate quite different new query vectors even though the new query vectors result in similar levels of retrieval effectiveness.... ..."
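The snippet above compares query vectors with a similarity bounded in [0, 1] that equals 1 for identical vectors; this is the standard cosine measure on non-negative term weights. A minimal sketch (function and variable names are illustrative, not taken from the cited paper):

```python
import math

def cosine_similarity(q_i, q_j):
    """Cosine similarity between two sparse query vectors,
    represented as dicts mapping term -> non-negative weight.
    With non-negative weights the result lies in [0, 1];
    identical non-zero vectors give exactly 1."""
    dot = sum(w * q_j.get(t, 0.0) for t, w in q_i.items())
    norm_i = math.sqrt(sum(w * w for w in q_i.values()))
    norm_j = math.sqrt(sum(w * w for w in q_j.values()))
    if norm_i == 0.0 or norm_j == 0.0:
        return 0.0  # convention: similarity with an empty vector is 0
    return dot / (norm_i * norm_j)

# Hypothetical initial and expanded query vectors.
initial = {"retrieval": 1.0, "feedback": 0.5}
expanded = {"retrieval": 1.0, "feedback": 0.5, "rocchio": 0.3}
```

Expanding a query only adds terms, so the expanded vector stays close to, but distinct from, the initial one, which is exactly the quantity the cited Table 2 reports.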
Cited by 14
Table 1. Vector query results
Table 2: Example Query Feature Vector
2002
"... In PAGE 5: ...here A.a1 = B.b1 where indexes are present for the attributes a1 and b1 of tables A and B, respectively. The feature vector for this query is shown in Table 2. Note here that the index flag is set for table A since all attributes related to A (in this case, a1) are accessible through the index.... ..."
Cited by 11
Table 3. Query prototype prediction using query-document-vector closeness.
2000
"... In PAGE 4: ...3.02 vs. 3.03). Table 3 shows the performance of the vector-space model controlling for source of identified concepts. Using contextual information increased the number of correctly proposed markup instances seven-fold.... ..."
Cited by 5
Table 6: Correlation between measure of query vector quality and retrieval effectiveness (S&A rel1 relevance judgement)
Cited by 1
Table 5: Number of common documents retrieved by the expanded query vectors (TREC D1 & D2; top-ranked 1000 documents are retrieved for 50 queries) Ide Rocchio Pr_cl Pr_adj
1996
"... In PAGE 8: ... We propose a simpler method than the rank correlation method, which can estimate how correlated two ranked outputs are. We count the number of common documents retrieved by two expanded query vectors, which is shown in Table 5. The number in parentheses is the rank in decreasing order of the numbers of common documents.... ..."
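The overlap measure the snippet describes — counting documents retrieved by both expanded query vectors within the top-ranked results — can be sketched in a few lines (function name and the toy rankings are illustrative, not TREC data):

```python
def common_documents(ranking_a, ranking_b, k=1000):
    """Count documents that appear in both ranked result lists
    within the top-k positions: a cheap proxy for rank correlation
    between two expanded query vectors' retrieval outputs."""
    return len(set(ranking_a[:k]) & set(ranking_b[:k]))

# Hypothetical top-ranked document IDs for two expansion methods.
ide = ["d1", "d2", "d3", "d4"]
rocchio = ["d2", "d5", "d1", "d6"]
```

Unlike a rank correlation coefficient, this ignores the ordering within the top-k set, which is what makes it simpler to compute and interpret.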
Cited by 14
Table 1: Non-interpolated average precision, precision at 100 documents and improvement over expansion for routing runs on TREC data. terms" and closeness to expanded query vector "for expansion")
"... In PAGE 3: ... Our goals in these experiments were (1) to demonstrate that strong learning methods can perform better than Rocchio expansion, (2) to find the most effective classification technique for the routing problem, and (3) to make sure that our comparison between LSI and term-based methods is not based on the idiosyncrasies of a particular learning algorithm. Experimental Results Table 1 presents routing results for 5 different classifiers and 4 different representations. The representations are: a) Rocchio-expansion Relevance Feedback b) LSI (100 factors from a query-specific local LSI) c) 200 terms (200 highest ranking terms according to χ²-test) d) LSI + terms (100 LSI factors and 200 terms).... In PAGE 4: ... The Friedman test conducts a similar analysis, but it uses only the ranking of the methods within each query. Conclusions From Table 1 we can draw the following conclusions: Classification vs. Expansion.... ..."
Table 1 (MONKS 1) The number of input vectors queried, rules extracted, and rules remaining after simplification
Table 2 (MONKS 2) The number of input vectors queried, rules extracted, and rules remaining after simplification