Results 1 - 10 of 105,389
Table 5. Sample Clause with 20% Recall and 94% Precision on Without Co-reference Training Set
2005
"... In PAGE 5: ... Tables 3 and 4 show the results of Gleaner on the testset data for all four combinations, using the restriction that the same word cannot be both agent and target in a relation4. A sample clause learned by Aleph can be found in Table5 . This clause has focused on the common property that agents are before targets, agents are nouns with internal capital 4For our challenge-task submission, we used all 936 pos- sible test examples.... ..."
Cited by 3
Table 5. Sample Clause with 20% Recall and 94% Precision on Without Co-reference Training Set
"... In PAGE 48: ... Dependencies refer to di erent possibilities for agent/target relations; A: rst protein in the sentences, B: second, C:third. Pattern Support Dependencies PTN IVERB PTN 1405 A!B, B!A, A$B PTN IVERB DT PTN 258 A!B, A$B PTN IVERB IN PTN 173 A!B, B!A PTN INOUN PTN 138 A!B, B!A PTN INOUN IN PTN 116 A!B, B!A PTN IVERB IN DT PTN 46 A!B PTN RB IVERB PTN 45 A!B, A$B INOUN IN PTN IN PTN 35 A!B, B!A PTN IVERB NN PTN 35 A!B PTN IVERB PTN PTN 30 A!B, A!C Table5 . Performance on di erent types of interactions; all without linguistic information, without co-refs.... In PAGE 71: ... Tables 3 and 4 show the results of Gleaner on the testset data for all four combinations, using the restriction that the same word cannot be both agent and target in a relation4. A sample clause learned by Aleph can be found in Table5 . This clause has focused on the common property that agents are before targets, agents are nouns with internal capital 4For our challenge-task submission, we used all 936 pos- sible test examples.... ..."
Table 4. Results of Gleaner, Aleph theory, and baseline all-positive prediction on LLL challenge task with co-reference.
"... In PAGE 48: ... We added two names (lacZ and orf10), because they occurred in the corpus, but not in the dictionary. Table4 . Patterns extracted from our corpus; table shows only patterns with a support of 30.... In PAGE 48: ... action bind regulon nothing all TP 19 7 2 0 28 FN 17 (36) 5 (12) 2 (4) 0 (0) 24 (52) FP 10 (29) 4 (11) 1 (3) 13 (13) 28 (56) 3. Results and Discussion From our corpus of 1000 sentences, we were able to ex- tract 148 patterns (see Table4 ). The pattern with the highest support was \PTN IVERB PTN quot; { 1405 dif- ferent alignments produced this sequence.... In PAGE 48: ... These numbers included multiple optimal alignments for pairing two sentences. Table4 shows the ten patterns with the highest support in the train- ing data. We decided to send the prediction (supposedly) having the highest F1-measure.... In PAGE 54: ...he overall F-measure from 14.8% to 17.5%. The same effect can be seen if we consider the performance of the systems over the three interaction types; action, bind and regulon. The system trained using just the basic data finds 6 correct interactions 5 of which are actions and 1 a binding interaction (see Table4 for a full breakdown of the results for all three submis- sions). The system fails to find any regulon family interactions.... In PAGE 55: ... Evaluation results of our three submissions. All Interactions Action Bind Regulon No Interaction System C M S C M S C M S C M S C M S Baseline 53 1 447 35 1 95 14 0 46 4 0 6 0 0 300 LLL-05 Basic 6 48 21 5 31 7 1 13 2 0 4 0 0 0 12 LLL-05 Expanded 8 46 29 7 29 11 1 13 2 0 4 0 0 0 16 Table4 . Breakdown of the official evaluation results including results for individual interaction types (columns represent Correct, Missing, and Spurious).... ..."
Table 7: A composition rule for co-reference resolution
"... In PAGE 5: ... If so, the proper name is also tagged with that more speci c category. Table7 shows... ..."
Table 3. Results of Gleaner, Aleph theory, and baseline all-positive prediction on LLL challenge task without co-reference.
"... In PAGE 14: ... Table3 . Comparison of generalisation performances of the mbl classifier predicting class trigrams, and each of the prob- abilistic methods.... In PAGE 14: ... A slight modification of the Viterbi algorithm is used to determine the opti- mal path through the state machine given the input sequence. The fourth column of Table3 shows the performance of memm applied to the med and genia tasks. On both tasks, memm is outperformed by the mbl classifier.... In PAGE 14: ... As aresult,crfs tend to be less biased towards states with few successor states than cmmsandmemms. On both sample tasks, crf attains the highest scores of all three probabilistic methods tested; the last column in Table3 shows the scores. Compared with mbl, crf performs considerably worse on med, but this order is reversed on genia,wherecrf attains the best score.... In PAGE 41: ... F 5. 14,0 82,7 24,0 14,0 93,1 24,4 Table3 . Results on the test set with and without coreferences Gr.... In PAGE 41: ... 51,8 16,8 25,4 6. 55,6 53,0 54,3 Table 2 presents the results as obtained on the test data with coreferences while Table3 presents the results as obtained on the union of the test data with, and without coreferences. As shown by Table 2, the F-measure of Group 5 on the basic and linguistically enriched data set is not significantly different, as it is the case in Table 1.... In PAGE 48: ... Table3 . Slight modi cations of the dictionaries and data sets.... In PAGE 48: ... In addition, we altered some entries in the dictionary to deal with these problems. In general, in the re ned dictionary, canonical forms did not have blanks or symbols (see Table3 ). We added two names (lacZ and orf10), because they occurred in the corpus, but not in the dictionary.... In PAGE 55: ...8% (8/54) 17.5% Table3 . Evaluation results of our three submissions.... In PAGE 78: ... 4Semantically, this is indeed the correct result. Table3 . Results on the test data with coreferences using the clause sets 1, 2 and 5 from Table 1 Clause set No.... In PAGE 78: ... Thus, for testing and training on data with coreferences5 we ex- pected better results with semantic chains compared to syntactic chains. However, even in this case syn- tactic chains outperform semantic chains, as shown in Table3 . This can be explained partly by the fact that many coreferences were appositions which the parser could extract.... ..."
Table 5.3 Results of Gleaner, Aleph theory, and baseline all-positive prediction on the protein-target task with co-reference.
2007
Table 5.2 Results of Gleaner, Aleph theory, and baseline all-positive prediction on the protein-target task without co-reference.
2007
Table 5: The various types of definite noun phrases that do not co-refer with previous phrases in the text
1994
"... In PAGE 9: ... There are several di#0Berent categories of de#0Cnite noun phrases that do not co-refer with previous phrases in the text. Table5 shows some of these categories, and how the non-co-referring de#0Cnite noun phrases in our texts divide among them. #28There may be other categories as well, but these are all the ones we found.... ..."
Table 4. Results of Gleaner, Aleph theory, and baseline all-positive prediction on LLL challenge task with co-reference.
2005
Cited by 3