Results 1 - 10 of 6,838
Table 3: Inclusion inference.
"... In PAGE 4: ...meronym, partonym and membership, and an approximation of inclusion based on overlap computation and context analy- sis (see Table3 ). Contextual inclusion is computed by check- ing the overlap between the context feature (see Section 3.... ..."
Table 1 Some inferred functions
2006
"... In PAGE 68: ... Reflection means that Maude programs can deal with Maude programs as data. In Table1 we have listed experimental results for sample problems. The first column lists the names for the induced functions, the second the number of given I/O-examples, the third the total number of induced rules and in parentheses the number of induced recursive rules, the fourth the maximal number of recursive calls within one rule, the fifth the number of recursion parameters, and the sixth the times in seconds consumed by the synthesis.... ..."
Table Contextual
2005
Cited by 1
Table 10: Evaluating contextual dependency of paraphrases by latent variable models (columns: model, window, independent, dependent, corrected)
"... In PAGE 6: ... Table 9: Potential upper bound of this method human judgement human judgement from paraphrasing based on topic perspective same different independent 61 10 dependent 15 22 We prepared several latent variable models to investigate the performance of the proposed method and applied it to the sampled paraphras- ing sentences mentioned above. Table10 shows the evaluation results. 5 Discussion First, there is no major performance difference between pLSI and LDA in paraphrasing evalu- ation.... In PAGE 7: ... In addition, Table 8 reveals that judging the contextual dependency of paraphrasing pairs does not require fine-grained topics. From the results shown in Table10 , we can conclude that topic inference by latent variable models resembles context judgement by humans as recorded in error rate. However, we note that the error rate was not weighted for contextually independent or dependent results.... In PAGE 7: ... In our experiments, from the results shown in Table 9, C is set to 25. From the results shown in Table10 , we can conclude that the performance of our method is almost the same as that by the manually annotated topics, and the accuracy of our method is almost 80% for paraphrasing pairs that can be judged by contextual information. There are several possibilities for improving accuracy.... ..."
Table 3. Evaluation of contextual mappings
"... In PAGE 11: ...irst experiment. Based on our environment (Intel Pentium IV 2.8GHz processor, 512MB memory, Windows XP Professional, and Java SE 6), it takes about 5 seconds to complete all the five tests (including the parsing time). In the second experiment, the contextual mappings constructed by our al- gorithm are evaluated by experienced volunteers, and the results are exhibited in Table3 . Marson constructs some interesting contextual mappings.... ..."
Table 5. Contextual dependencies (transitivity)
2004
"... In PAGE 5: ...ents in the entire data (18.8%). This last result would arguably be quite different with more quar- relsome meeting participants. Table5 represents results concerning the fourth pragmatic assumption. While none of the results characterize any strong conditioning of CR CZ by CR CX and CR CY , we can nevertheless notice some interest- ing phenomena.... ..."
Cited by 15