Results 1 - 2 of 2
Induction in first order logic from noisy training examples and fixed example set size
In PhD Thesis, 1999
Abstract

Cited by 6 (0 self)
This dissertation investigates the field of inductive logic programming (ILP); in the course of doing so, an ILP system, Lime, is designed and developed. Lime addresses the problems of noisy training examples; learning from only positive, only negative, or both positive and negative examples; efficiently biasing and searching the hypothesis space; and handling recursion efficiently and effectively. The Q-heuristic is introduced to address the problem of learning with both noisy training examples and fixed numbers of positive and negative training examples. This heuristic is based on Bayes' rule; both a justification of its derivation and a description of the context in which it is appropriately applied are given. Because of the general nature of this heuristic, its application is not restricted to ILP. Instead of employing a greedy covering approach to constructing clauses, Lime uses the Q-heuristic to evaluate entire logic programs as hypotheses. To tame the inevitable explosion of the search space, the notion of a simple clause is introduced. These sets of literals may be viewed as subparts of clauses that are effectively independent in terms of the variables they use. Instead of growing a clause one literal at a time, Lime efficiently combines simple clauses to construct a set of gainful candidate clauses. Subsets of these candidate clauses are then evaluated with the Q-heuristic to find the final hypothesis. Details of Lime's algorithms and data structures are discussed, as is its handling of recursive logic programs. Experimental results illustrate how Lime achieves its design goals of better noise handling, learning from a fixed set of examples (e.g., from only positive data), and learning recursive logic programs. These results compare the performance of Lime with other leading ILP systems, such as Foil and Progol, across a variety of domains. Empirical results with a boosted version of Lime are also reported.
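The Q-heuristic itself is defined in the thesis; as a generic illustration of the style of heuristic the abstract describes (Bayes'-rule scoring of a whole hypothesis against noisy, fixed-size example sets), a sketch might look like the following. The `noise` rate and the uniform log-prior are illustrative assumptions, not the thesis's actual formulation.

```python
import math

def log_posterior(hypothesis, examples, noise=0.1, log_prior=0.0):
    """Score a hypothesis by log P(H | E) up to an additive constant:
    log P(H) + sum_i log P(e_i | H).

    NOTE: a generic Bayes'-rule sketch, not the actual Q-heuristic.
    `hypothesis` is a predicate on instances; `examples` is a list of
    (instance, label) pairs whose labels are flipped with probability
    `noise` (an assumed noise model).
    """
    score = log_prior
    for instance, label in examples:
        predicted = hypothesis(instance)
        # With probability (1 - noise) the observed label agrees with
        # the hypothesis's prediction; with probability `noise` it was
        # flipped by noise.
        p = (1 - noise) if predicted == label else noise
        score += math.log(p)
    return score
```

Under such a score, competing hypotheses (here, whole candidate programs rather than single clauses) are compared on the same fixed example set and the highest-scoring one is returned.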
unknown title
Abstract
They first noticed that Gold's limiting recursive functions, which were originally introduced to formulate the learning processes of machines, serve as approximation algorithms. Here, a limiting recursive function in Gold's sense is a function $f(x)$ such that $f(x)=y \Leftrightarrow \exists t_{0}\forall t>t_{0}.\,g(t,x)=y \Leftrightarrow \lim_{t}g(t,x)=y$, where $g(t,x)$ is called a guessing function and $t$ is a limit variable. They then proved that some limiting recursive functions approximate a realizer of the semiclassical principle $\neg\neg\exists y\forall x.\,g(x,y)=0 \rightarrow \exists y\forall x.\,g(x,y)=0$. They also showed impressive uses of this semiclassical principle in mathematics and in software synthesis. In this way, Nakata and Hayashi opened up the possibility that limiting operations provide a realizability interpretation of semiclassical logical systems. They formulated the set of limiting recursive functions as a Basic Recursive Function Theory (BRFT for short; Wagner [19] and Strong [16]), and then carried out their realizability interpretation using the BRFT.
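The definition above can be made concrete with a small sketch. As an illustrative assumption, the computably enumerable set here is the set of perfect squares, and `guess` plays the role of a hypothetical guessing function $g(t,x)$ whose value stabilizes as the stage $t$ grows.

```python
def enumerate_squares(t):
    """First t elements of an enumeration of S = {0, 1, 4, 9, ...},
    standing in for any computably enumerable set."""
    return {i * i for i in range(t)}

def guess(t, x):
    """Guessing function g(t, x): 1 if x has appeared by stage t, else 0.
    For each x, g(t, x) changes its mind at most once, so the limit
    lim_t g(t, x) exists and equals the characteristic function of S."""
    return 1 if x in enumerate_squares(t) else 0

def limit(x, stages=100):
    """Simulate f(x) = lim_t g(t, x) by running finitely many stages.
    In general no computable stage bound suffices -- which is exactly
    why such an f is only limiting recursive, not recursive."""
    return guess(stages, x)
```

The point of the limit formulation is that $f(x)=y$ holds as soon as the guesses are eventually constant at $y$ (for all $t > t_0$), even though no algorithm can in general announce when the stabilization stage $t_0$ has been passed.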