Results 1–10 of 13
Lime: A System for Learning Relations
 In The 9th International Workshop on Algorithmic Learning Theory, 1998
Abstract

Cited by 8 (1 self)
This paper describes the design of the inductive logic programming system Lime. Instead of employing a greedy covering approach to constructing clauses, Lime employs a Bayesian heuristic to evaluate logic programs as hypotheses. The notion of a simple clause is introduced. These sets of literals may be viewed as subparts of clauses that are effectively independent in terms of the variables used. Instead of growing a clause one literal at a time, Lime efficiently combines simple clauses to construct a set of gainful candidate clauses. Subsets of these candidate clauses are evaluated via the Bayesian heuristic to find the final hypothesis. Details of the algorithms and data structures of Lime are discussed. Lime's handling of recursive logic programs is also described. Experimental results are provided to illustrate how Lime achieves its design goals of better noise handling, learning from a fixed set of examples (and from only positive data), and learning recursive logic programs. ...
Induction in first order logic from noisy training examples and fixed example set size
 PhD thesis, 1999
Abstract

Cited by 7 (0 self)
This dissertation investigates the field of inductive logic programming (ILP) and, in doing so, designs and develops an ILP system, Lime. Lime addresses the problems of noisy training examples; learning from only positive, only negative, or both positive and negative examples; efficiently biasing and searching the hypothesis space; and handling recursion efficiently and effectively. The Q-heuristic is introduced to address the problem of learning with both noisy training examples and fixed numbers of positive and negative training examples. This heuristic is based on Bayes' rule. Both a justification of its derivation and a description of the context in which it is appropriately applied are given. Because of the general nature of this heuristic, its application is not restricted to ILP. Instead of employing a greedy covering approach to constructing clauses, Lime employs the Q-heuristic to evaluate entire logic programs as hypotheses. To tame the inevitable explosion in the search space, the notion of a simple clause is introduced. These sets of literals may be viewed as subparts of clauses that are effectively independent in terms of the variables used. Instead of growing a clause one literal at a time, Lime efficiently combines simple clauses to construct a set of gainful candidate clauses. Subsets of these candidate clauses are evaluated using the Q-heuristic to find the final hypothesis. Details of the algorithms and data structures of Lime are discussed. Lime's handling of recursive logic programs is also described. Experimental results are provided to illustrate how Lime achieves its design goals of better noise handling, learning from a fixed set of examples (e.g., from only positive data), and learning recursive logic programs. These results compare the performance of Lime with other leading ILP systems, such as Foil and Progol, in a variety of domains. Empirical results with a boosted version of Lime are also reported.
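The "simple clause" idea — splitting a clause body into literal sets that are independent in the variables they use — can be sketched as a partition of the body into variable-connected components. The representation and names below are illustrative assumptions, not Lime's actual data structures:

```python
def simple_clauses(head_vars, body):
    """Partition a clause body into variable-connected components.

    `body` is a list of (predicate, vars) literals. Two literals fall in
    the same component when they share a variable that is not already
    bound by the clause head. Toy illustration only.
    """
    parent = list(range(len(body)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Link literals that share a non-head variable.
    var_to_lit = {}
    for i, (_, vs) in enumerate(body):
        for v in vs:
            if v in head_vars:
                continue
            if v in var_to_lit:
                union(i, var_to_lit[v])
            else:
                var_to_lit[v] = i

    groups = {}
    for i, lit in enumerate(body):
        groups.setdefault(find(i), []).append(lit)
    return list(groups.values())
```

For a head over X, the body literals parent(X, Z) and female(Z) share the local variable Z and form one component, while likes(X, W) stands alone — so candidate clauses can be assembled from these parts instead of one literal at a time.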
ILP with Noise and Fixed Example Size: A Bayesian Approach
 In Fifteenth International Joint Conference on Artificial Intelligence, 1997
Abstract

Cited by 7 (2 self)
Current inductive logic programming systems are limited in their handling of noise, as they employ a greedy covering approach to constructing the hypothesis one clause at a time. This approach also causes difficulty in learning recursive predicates. Additionally, many current systems have an implicit expectation that the cardinality of the positive and negative examples reflects the "proportion" of the concept relative to the instance space. A framework for learning from noisy data and a fixed example size is presented. A Bayesian heuristic for finding the most probable hypothesis in this general framework is derived. This approach evaluates a hypothesis as a whole rather than one clause at a time. The heuristic, which has nice theoretical properties, is incorporated in an ILP system, Lime. Experimental results show that Lime handles noise better than FOIL and PROGOL. It is able to learn recursive definitions from noisy data on which other systems do not perform well. Lime is also capable of learn...
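The evaluate-the-whole-hypothesis idea can be sketched with a toy Bayes-rule score: a likelihood that tolerates label noise plus a size prior, maximized over candidate hypotheses. The functional form below is an illustrative assumption, not the heuristic derived in the paper:

```python
import math

def log_posterior(predict, size, pos, neg, noise=0.1, penalty=0.5):
    """Toy Bayes-rule score for an entire hypothesis rather than one
    clause at a time. Each training label is assumed flipped with
    probability `noise`; the prior penalizes hypothesis size. The noise
    model and penalty are illustrative choices.
    """
    log_lik = 0.0
    for ex in pos:
        log_lik += math.log(1 - noise if predict(ex) else noise)
    for ex in neg:
        log_lik += math.log(noise if predict(ex) else 1 - noise)
    return -penalty * size + log_lik
```

Scoring complete hypotheses this way lets an over-general hypothesis (one that also covers the negatives) lose to an accurate one even when both explain all positives, which a clause-at-a-time coverage count cannot express.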
Converting Semantic Meta-Knowledge into Inductive Bias
 In Proceedings of the 15th International Conference on Inductive Logic Programming, 2005
Abstract

Cited by 3 (3 self)
The Cyc KB has a rich pre-existing ontology for representing common sense knowledge. To clarify and enforce its terms' semantics and to improve inferential efficiency, the Cyc ontology contains substantial meta-level knowledge that provides definitional information about its terms, such as a type hierarchy. This paper introduces a method for converting that meta-knowledge into biases for ILP systems. The process has three stages. First, a “focal position” for the target predicate is selected, based on the induction goal. Second, the system determines type compatibility or conflicts among predicate argument positions, and creates a compact, efficient representation that allows for syntactic processing. Finally, mode declarations are generated, taking advantage of information generated during the first and second phases.
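The final stage — turning argument-type metadata into mode declarations — can be sketched with a toy generator of Aleph-style declarations. The type table, predicate names, and the +/# policy below are illustrative assumptions, not the paper's Cyc-derived pipeline:

```python
def mode_declarations(target, body_preds, arg_types):
    """Emit Aleph-style mode declarations from argument-type metadata.

    Toy policy: an argument of a body predicate is marked as an input
    (+type) when its type also appears among the target's arguments, and
    as a constant (#type) otherwise. All names here are illustrative.
    """
    target_types = set(arg_types[target])
    head_args = ", ".join("+" + t for t in arg_types[target])
    decls = [":- modeh(1, %s(%s))." % (target, head_args)]
    for pred in body_preds:
        marks = ", ".join(
            ("+" if t in target_types else "#") + t for t in arg_types[pred]
        )
        decls.append(":- modeb(1, %s(%s))." % (pred, marks))
    return decls
```

For example, with a hypothetical type table mapping daughter/2 and parent/2 to (person, person) and age/2 to (person, year), the generator marks the person arguments as inputs and the year argument as a constant.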
Connecting Sumatra to Aleph
Abstract

Cited by 1 (1 self)
SumatraTT is a universal data preprocessing tool that allows processing of data stored in various types of data sources (e.g., plain text, SQL). We enriched the set of accessible data sources with Prolog databases. Field conversions to Prolog-specific syntax, such as structured terms and atomized strings, are handled at the SumatraTT interpreter level.
Connecting Sumatra to Aleph and Other ILP Systems
Abstract

Cited by 1 (1 self)
SumatraTT is a universal data preprocessing tool that allows processing of data stored in various types of data sources (e.g., plain text, SQL). We enriched the set of accessible data sources with Prolog databases. Field conversions to Prolog-specific syntax, such as structured terms and atomized strings, are handled at the SumatraTT interpreter level. Moreover, ...
Automatic Induction of Domain-related Information: Learning Descriptors' Type Domains
Abstract
Learning in complex contexts often requires pure induction to be supported by various kinds of meta-information. Providing such information is a critical, difficult and error-prone activity. This paper proposes an algorithm to automatically identify types from observations, and studies its performance and robustness.
Extracting Constraints for Process Modeling
Will Bridewell, Computational Learning ...
Abstract
In this paper, we introduce an approach for extracting constraints on process model construction. We begin by clarifying the type of knowledge produced by our method and how one may apply it. Next, we review the task of inductive process modeling, which provides the required data. We then introduce a logical formalism and a computational method for acquiring scientific knowledge from candidate process models. Results suggest that the learned constraints make sense ecologically and may provide insight into the nature of the modeled domain. We conclude the paper by discussing related and future work.
Induction as a Search Procedure
Abstract
This chapter introduces Inductive Logic Programming from the perspective of search algorithms in Computer Science. It first briefly considers the Version Spaces approach to induction, and then focuses on Inductive Logic Programming: from its formal definition and main techniques and strategies, to priors used to restrict the search space and optimized sequential, parallel, and stochastic algorithms. The authors hope that this presentation of the theory and applications of Inductive Logic Programming will help the reader understand the theoretical underpinnings of inductive reasoning, and also provide a helpful overview of the state of the art in the domain.
Learning Declarative Bias
Will Bridewell and Ljupčo Todorovski
Abstract
In this paper, we introduce an inductive logic programming approach to learning declarative bias. The target learning task is inductive process modeling, which we briefly review. Next we discuss our approach to bias induction while emphasizing predicates that characterize the knowledge and models associated with the HIPM system. We then evaluate how the learned bias affects the space of model structures that HIPM considers and how well it generalizes to other search problems in the same domain. Results indicate that the bias reduces the size of the search space without removing the most accurate structures. In addition, our approach reconstructs known constraints in population dynamics. We conclude the paper by discussing a generalization of the technique to learning bias for inductive logic programming and by noting directions for future work. Key words: inductive process modeling, meta-learning, transfer learning