### Table 1: Famous persons and their corresponding homographic place names.

2006

"... In PAGE 2: ... 3.1 Person Name Ambiguity Table 1 lists known persons where the last name and the first name are also cities somewhere in the world. 'Javier' being a city in Spain and 'Solana' a city in the Philippines, a place name lookup in a sentence containing 'EU-general secretary Javier Solana' would recognise these two cities.... ..."

Cited by 2
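The ambiguity described in this excerpt can be illustrated with a minimal sketch (not the paper's actual system): a naive gazetteer lookup that flags any token matching a known place name, so both parts of the person name "Javier Solana" come back as false-positive cities. The gazetteer entries are taken from the excerpt; everything else is invented for illustration.

```python
# Minimal sketch of naive place-name lookup producing false positives
# on a person name. Gazetteer entries come from the excerpt above;
# the function and sentence are illustrative assumptions.
GAZETTEER = {
    "Javier": "city in Spain",
    "Solana": "city in the Philippines",
}

def naive_place_lookup(sentence):
    """Return every token that matches a gazetteer entry, with its gloss."""
    hits = []
    for token in sentence.replace(",", " ").split():
        if token in GAZETTEER:
            hits.append((token, GAZETTEER[token]))
    return hits

hits = naive_place_lookup("EU-general secretary Javier Solana met the press")
print(hits)  # both name parts are (wrongly) recognised as cities
```

A real system would need person-name evidence (titles, co-occurring first/last names) to suppress such spurious matches, which is exactly the disambiguation problem the excerpt raises.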

### Table 3. Readers/writers examples. 6 Conclusion We have introduced a deadlock detection method based on net unfoldings using linear algebraic techniques. Moreover, we have presented an implementation of McMillan's deadlock algorithm and we pointed out the performance gap between McMillan's LISP implementation and our optimized C version. By means of several examples we have pointed out the strong and weak aspects of both approaches. The results show that the larger the percentage of cut-off events is, the more likely the new method will yield better performance than McMillan's. Our future work is to exploit some more CPLEX heuristics in order to speed up our implementation. Acknowledgements. We thank Javier Esparza for drawing our attention to this problem and Ken McMillan for sending us his LISP sources of the DME generator.

1997

"... In PAGE 10: ...resented. We modelled a 4-bit implementation based on busy waiting semaphores. We used our methods to check deadlock freeness for a setting with one writer and two or three readers (SYNC). The results are depicted in Table 3. In contrast to the DME example we see that the application of the linear algebraic approach turns out to yield better results if the percentage of cut-off events is... ..."

Cited by 44

### Table 1. Performance evaluation

"... In PAGE 5: ...ketchpad. (b) Input some graphics. In order to evaluate the system, we asked 5 different users to draw three classes of closed shapes using the three shape-recognition approaches. The correct recognition rates are listed in Table 1. Our recognition precision with the rule-based approach is very similar to that of others (Fonseca and Jorge 2000).... ..."
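The accuracy column in a "Total# / Correct#" evaluation table like this one is just the ratio of correctly recognised shapes to shapes drawn, per approach. A one-line sketch (the counts below are invented, not taken from the paper):

```python
# Hedged sketch: how an Accuracy% column is derived from Total# and
# Correct# counts. The example counts are illustrative assumptions.
def accuracy_percent(correct, total):
    """Recognition accuracy as a percentage."""
    return 100.0 * correct / total

print(accuracy_percent(47, 50))  # e.g. 47 of 50 shapes recognised -> 94.0
```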

### Table 4. The number of triangulations of C(n; d) for n ≤ 12.

"... In PAGE 18: ...10, §9.6] for a definition). Any triangulation of C(n; d) which is regular for some choice of points on the moment curve is automatically a lifting triangulation, but Rambau's examples show that the converse does not hold. We close our discussion by presenting in Table 4 the numbers of triangulations of cyclic polytopes known to us. Those marked with * have been computed by Jörg Rambau.... ..."

### Table 2 shows the benchmarks for each dataset, using the three measures just defined. The new algorithm when only using VSM-based similarity (VSMOnly) outperforms the existing algorithm (Baseline) by 5%. The new algorithm using the full context similarity measures including IE features (Full) significantly outperforms the existing algorithm (Baseline) in every test: the overall F-measure ...

"... In PAGE 7: ... Constructed Testing Corpus I (# of mentions per name):

| Set | Name | a | b |
|-----|------|---|---|
| 1 | Mikhail S. Gorbachev | 20 | 50 |
| 1 | Dick Cheney | 20 | 10 |
| 1 | Dalai Lama | 20 | 10 |
| 1 | Bill Clinton | 20 | 10 |
| 2 | Bob Dole | 20 | 50 |
| 2 | Hun Sen | 20 | 10 |
| 2 | Javier Perez de Cuellar | 20 | 10 |
| 2 | Kim Young Sam | 20 | 10 |
| 3 | Jiang Qing | 20 | 10 |
| 3 | Ingrid Bergman | 20 | 10 |
| 3 | Margaret Thatcher | 20 | 50 |
| 3 | Aung San Suu Kyi | 20 | 10 |
| 4 | Bill Gates | 20 | 10 |
| 4 | Jiang Zemin | 20 | 10 |
| 4 | Boris Yeltsin | 20 | 50 |
| 4 | Kim Il Sung | 20 | 10 |

Table 2. Testing Corpus I Benchmarking ... ..."
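One plausible reading of the "VSM-based similarity" this excerpt refers to is cosine similarity between bag-of-words vectors built from the contexts of two name mentions. A hedged sketch (tokenisation and example texts are invented, not the paper's):

```python
import math
from collections import Counter

# Illustrative sketch of a vector-space-model similarity: cosine
# similarity between term-frequency vectors of two context strings.
# This is an assumption about the measure, not the paper's exact method.
def cosine_similarity(text_a, text_b):
    """Cosine similarity of whitespace-tokenised bag-of-words vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0

score = cosine_similarity("president bill clinton spoke today",
                          "bill clinton president state visit")
```

Mentions whose contexts score above a threshold would be grouped as the same individual; the excerpt's Full variant then adds IE-derived features on top of this baseline.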

### Table 1. Success rates over the test with Lusa corpus

"... In PAGE 5: ... Weights of features were manually set. The best result was not better than previous results obtained with rules only (Table 1 and Figure 1). Besides, setting the appropriate weight for each position in the window is a difficult task.... In PAGE 9: ... In this experimental framework, tagging words of the test set is a hard task since approximately 30% of the words do not occur in the training set. We now give some details about the synthesis of the theory associated with the result shown in Table 1. In the first four iterations a large number (more than 350) of rules are induced.... In PAGE 9: ... Table 1 shows the overall success rates obtained by using iterative induction with each one of three different algorithms in the last iteration (it. 5).... In PAGE 10: ...lose to zero. This strategy yielded 5 iterations. In the case of RC2, the untagged words at iteration 5 (about 400) were stored in a case-base and used to construct explanations (about 1300). The result shown for CBR in Table 1 was the best one achieved using a simple overlapping metric and setting the weights manually. 9 Related work The system SKILit (Jorge & Brazdil 1996, Jorge 1998) used the technique of iterative induction to synthesize recursive logic programs from sparse sets of examples.... ..."

Cited by 1
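The "simple overlapping metric" mentioned for the CBR experiment can be sketched as a weighted feature-overlap score between a new case and a stored case. Feature names, values, and weights below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of a weighted overlapping metric for case-based
# reasoning: the weighted fraction of features on which two cases
# agree. Features and weights here are invented for illustration.
def overlap_similarity(case_a, case_b, weights):
    """Weighted proportion of matching feature values between two cases."""
    total = sum(weights.values())
    agree = sum(w for f, w in weights.items() if case_a.get(f) == case_b.get(f))
    return agree / total if total else 0.0

weights = {"prev_tag": 2.0, "next_tag": 1.0, "suffix": 1.0}
a = {"prev_tag": "DET", "next_tag": "VERB", "suffix": "ção"}
b = {"prev_tag": "DET", "next_tag": "NOUN", "suffix": "ção"}
print(overlap_similarity(a, b, weights))  # 3.0 / 4.0 = 0.75
```

The excerpt notes that setting these per-feature (or per-window-position) weights by hand was a difficult part of the approach, which this sketch makes concrete: the score is only as good as the chosen weights.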
