Table 3: Lexical Relation Computation rules. A, B = compound words, H = Head, Mod = Modifier.
"... In PAGE 3: ... Lexical Relation Computation Rules The purpose of the lexical relations in Table 1 is to infer lexical relations for MCWs. This is performed through two Lexical Re- lation Computation (LRC) rules ( Table3 ). These rules may be reminiscent of those in [16].... In PAGE 3: ... Given a parsed MCW, its IBR may be use to detect its quasi-synonyms. Moreover, projecting CF features from the CF table (Table 1) onto a compound word, through the control of the LRC rules ( Table3 ) allows to link this word to conceptually neighboring terms. Pseudo-Synonymy Two complex words A and B are quot;pseudo-synonyms quot; when they receive the same definition.... ..."
Table 3. Four rules in a C4.5 tree built on only three top-ranked features
2003
Table 1. The animals domain. Consider the data in Table 1, which presents instances of four classes of animals. Traditional attribute-value learners represent each animal and each candidate classification rule as a set of attribute-value pairs, so that a simple subset test suffices to verify whether a rule covers an instance. Alternatively, one can store the information available about a single animal in a Prolog knowledge base, encode candidate rules as Prolog queries, and apply the Prolog execution mechanism to verify whether or not a rule covers an instance. If the query succeeds, the rule covers the instance.
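A minimal Prolog sketch of this coverage test (the attribute and predicate names, e.g. has_covering/2, are assumptions for illustration, not the paper's exact encoding); the instance identifier is made an explicit first argument so that everything fits in one knowledge base, whereas the learning-from-interpretations setting runs the query against each instance's own set of facts:

  % The dolphin instance stored as Prolog facts.
  has_covering(dolphin, none).
  homeothermic(dolphin).
  habitat(dolphin, water).
  % Background knowledge shared by all instances (cf. the excerpt below).
  kids_love_it(X) :- homeothermic(X), habitat(X, water).
  % Candidate rule "has_covering(none) and kids_love_it" encoded as a coverage query.
  covers(X) :- has_covering(X, none), kids_love_it(X).
  % ?- covers(dolphin).   succeeds, so the rule covers the dolphin instance.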
1997
"... In PAGE 3: ... As a consequence, background knowledge can be added in a natural way as Prolog code shared by all instances. For example, with Prolog clause kids love it homeothermic ^ habitat(water) added as back- ground knowledge to Table1 , rule has covering(none)^kids love it exclusively covers instance dolphin. The above alternative and exible Prolog based representation and veri ca- tion method ts in the learning from interpretations paradigm, introduced by [9] and related to other inductive logic programming settings in [7].... In PAGE 3: ...espectively CN2 and C4.5. 2.2 Statistical paradigm: maximum entropy Let us reconsider the data in Table1 , and interpret them as as a sample of an expert biologist apos;s decisions concerning the class of animals. The observed behav- ior of the biologist can be summarized as an empirical conditional distribution ~ p(CjI) that, given the description of an instance in I in the rst ve columns, assigns probability 1 to the class speci ed in the last column, and 0 to all other classes.... In PAGE 5: ... We can obtain this number by taking the sum over all instances PI fj;k(I; Cj). For example, given the data in Table1 , and C1 = fish Q1 = :has legs ^ habitat(water); such that, f1;1(I; C) = 8 lt; :1 if C = fish, and :has legs ^ habitat(water) succeeds in I 0 otherwise we nd that Q1 succeeds in four instances. Accordingly, the set of (instance; class) tuples for which f1;1 evaluates to one is f(dolphin; fish); (trout; fish); (shark; fish); (herring; fish)g: All other combinations evaluate to zero, either because the class is not fish, or because query :has legs ^ habitat(water) does not succeed in the instance.... In PAGE 5: ... For the expected value of fj;k(I; C) in empirical distribution ~ p, and in target distribution p this yields the equations ~ p(fj;k) 1 N XI ~ p(CjjI)fj;k(I; Cj) (5) p(fj;k) 1 N XI p(CjjI)fj;k(I; Cj) (6) For example, reconsider the uniform conditional distribution pmfbr introduced above. We know that pmfbr(fishjI) = 0:25 for all ten instances in Table1 . As f1;1(I; fish) evaluates to one for the four instances dolphin, trout, shark, her- ring, and zero for all other instances, the expected value pmfbr(f1;1) of indicator function f1;1(I; C) in model pmfbr is pmfbr(f1;1) = 1... In PAGE 7: ... The graph of these two log-likelihood functions is plotted in Figure 1. On the X axis of this graph, we nd the values, on the Y axis the log-likelihood of the Table1 training set. The graph shows that with CC = fCC1;1g, the optimal parameter set f 1g is obtained with a positive value close to two for 1.... In PAGE 8: ...CC2g we only reach -1.38. Therefore, of these two sets, fCC1g is to be preferred. In Table 2 the conditional probabilities are shown for the two models pfCC1g and pfCC2g. Notice pfCC1g is indeed closer to Table1 than pfCC2g. 4 The Maccent Algorithm In this section we present the learning system Maccent which addresses the task of MAximum ENTropy modeling with Clausal Constraints.... In PAGE 11: ... If Beam is not empty, go to step 3 Given a beam size BS, the default strategy for pruning Beam is to rank the clausal constraints cc0 in Beam according to their Gain and keep at most BS of the best scoring constraints. However, in cases where the input empirical distribution assigns only probabilities 1 and 0, as for instance in Table1 , we can apply a more sophisticated heuristic. 
Thereto, we de ne the notion of potential approximate gain as an upper bound to the maximumGain that can be obtained via further specialisation of cc0.... In PAGE 12: ... First, we round o the running example with the application of Maccent to the multi-class animals domain. With the hold-out stopping criterion suppressed, Maccent found 13 constraints before the log-likelihood of Table1 was reduced to 0. The components Cj and Qk of the indicator functions fj;k on which these clausal constraints are based are shown, in order of discovery, in Table 3.... ..."
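For concreteness, a worked evaluation of equation (6) using only the values quoted in this excerpt (N = 10 instances, p_mfbr(fish | I) = 0.25 for every instance, and f_{1,1}(I, fish) = 1 for exactly four instances):

$$ p_{\mathrm{mfbr}}(f_{1,1}) \;=\; \frac{1}{N}\sum_{I} p_{\mathrm{mfbr}}(\mathrm{fish}\mid I)\, f_{1,1}(I,\mathrm{fish}) \;=\; \frac{1}{10}\,(4 \times 0.25) \;=\; 0.1 $$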
Cited by 29
Table 1. Type rules for A.
2002
"... In PAGE 6: ...The type rules are shown in Table1 . Rules NIL and MSG are obvious.... ..."
Cited by 6
Table 2. Reduction rules for A.
2002
"... In PAGE 7: ... Reduction Semantics Reduction semantics of A is the same as that of -calculus with mismatch. It is de ned in terms of the usual structural congruence over preterms and reduction rules shown in De nition 3 and Table2 . We use =) to denote the re exive transitive closure of !.... ..."
Cited by 6
TABLE 9. Enumeration of all the Possible Evaluation Results for a Five-Rule Policy Example (columns: rule A, rule B, rule C, rule D, rule E)
Table 2: Comparison of execution times
"... In PAGE 16: ... The results indicate that the naive algorithm solutions are good but not necessarily optimum. Table2 compares running speeds of these two algorithms. 8 Conclusions This paper has addressed the problem of allocating rules in a cooperative deductive database system, where knowledge base (a collection of rules) and database are shared across autonomous... ..."