Results 1 – 10 of 16
First order jk-clausal theories are PAC-learnable
Artificial Intelligence, 1994
Abstract

Cited by 64 (27 self)
We present positive PAC-learning results for the nonmonotonic inductive logic programming setting. In particular, we show that first order range-restricted clausal theories that consist of clauses with up to k literals of size at most j each are polynomial-sample polynomial-time PAC-learnable with one-sided error from positive examples only. In our framework, concepts are clausal theories and examples are finite interpretations. We discuss the problems encountered when learning theories which only have infinite nontrivial models and propose a way to avoid these problems using a representation change called flattening. Finally, we compare our results to PAC-learnability results for the normal inductive logic programming setting.
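The flattening mentioned in this abstract is a representation change that removes function symbols by introducing a fresh predicate per functor, so that the resulting theory has finite models. A minimal illustrative sketch (the term encoding and names here are hypothetical, not the paper's implementation):

```python
# Illustrative sketch of flattening: each function symbol f/n is replaced by
# a new predicate f_p/(n+1) whose extra last argument names the term's value.
# Terms are ("functor", [args...]) tuples; strings are variables/constants.

def flatten_term(term, counter, extra_literals):
    """Replace a functional term with a fresh variable plus a defining literal."""
    if isinstance(term, str):            # variable or constant: already flat
        return term
    functor, args = term
    flat_args = [flatten_term(a, counter, extra_literals) for a in args]
    var = f"V{counter[0]}"
    counter[0] += 1
    # f(a1, ..., an) becomes a new literal f_p(a1, ..., an, V)
    extra_literals.append((functor + "_p", flat_args + [var]))
    return var

def flatten_atom(atom):
    """Flatten one atom, returning the flat atom and the new literals."""
    pred, args = atom
    counter, extras = [0], []
    flat = (pred, [flatten_term(a, counter, extras) for a in args])
    return flat, extras

# p(f(g(a))) flattens to p(V1) together with g_p(a, V0) and f_p(V0, V1)
flat, extras = flatten_atom(("p", [("f", [("g", ["a"])])]))
```

The function-free flat form is what makes the finite-interpretation example setting of the paper workable.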
Relational Learning via Propositional Algorithms: An Information Extraction Case Study
2001
Abstract

Cited by 42 (12 self)
This paper develops a new paradigm for relational learning which allows for the representation and learning of relational information using propositional means. This paradigm suggests different tradeoffs than those in the traditional approach to this problem, the ILP approach, and as a result it enjoys several significant advantages over it. In particular, the new paradigm is more flexible and allows the use of any propositional algorithm, including probabilistic algorithms, within it. We evaluate the new approach on an important and relation-intensive task, Information Extraction, and show that it outperforms existing methods while being orders of magnitude more efficient.
Robust Logics
Abstract

Cited by 29 (6 self)
Suppose that we wish to learn from examples and counterexamples a criterion for recognizing whether an assembly of wooden blocks constitutes an arch. Suppose also that we have preprogrammed recognizers for various relationships, e.g. on-top-of(x, y), above(x, y), etc., and believe that some possibly complex expression in terms of these base relationships should suffice to approximate the desired notion of an arch. How can we formulate such a relational learning problem so as to exploit the benefits that are demonstrably available in propositional learning, such as attribute-efficient learning by linear separators, and error-resilient learning? We believe that learning in a general setting that allows for multiple objects and relations in this way is a fundamental key to resolving the following dilemma that arises in the design of intelligent systems: Mathematical logic is an attractive language of description because it has clear semantics and sound proof procedures. However, as a basis for large programmed systems it leads to brittleness because, in practice, consistent usage of the various predicate names throughout a system cannot be guaranteed, except in application areas such as mathematics where the viability of the axiomatic method has been demonstrated independently. In this paper we develop the following approach to circumventing this dilemma. We suggest that brittleness can be overcome by using a new kind of logic in which each statement is learnable. By allowing the system to learn rules empirically from the environment, relative to any particular programs it may have for recognizing some base predicates, we enable the system to acquire a set of statements approximately consistent with each other and with the world, without the need for a globally knowledgeable and consistent programmer. We illustrate ...
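The arch example hinges on grounding base relations over the objects in a scene so that propositional learners such as linear separators apply. A toy sketch of that reduction (the scenes, relation names, and perceptron learner are illustrative assumptions, not Valiant's formulation):

```python
# Illustrative sketch: ground each binary base relation over all object pairs
# to get a Boolean feature vector, then learn the concept with a perceptron.

def features(scene, relations, objects):
    """One Boolean feature per grounded relation instance r(x, y)."""
    return [1 if (r, x, y) in scene else 0
            for r in relations for x in objects for y in objects]

def predict(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def perceptron(examples, n_feats, epochs=20):
    """Classic perceptron updates over labeled feature vectors (y in {+1,-1})."""
    w, b = [0.0] * n_feats, 0.0
    for _ in range(epochs):
        for x, y in examples:
            if predict(x, w, b) != y:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

objects = ["a", "b", "c"]
relations = ["on_top_of", "above"]
# Toy scenes: a lintel "a" resting on two posts vs. a non-arch arrangement.
pos = {("on_top_of", "a", "b"), ("on_top_of", "a", "c")}
neg = {("on_top_of", "b", "a")}
examples = [(features(pos, relations, objects), 1),
            (features(neg, relations, objects), -1)]
w, b = perceptron(examples, len(examples[0][0]))
```

The point is only the shape of the reduction: relational scenes become fixed-length Boolean vectors, on which any attribute-efficient propositional algorithm can run.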
Relational Learning for NLP using Linear Threshold Elements
1999
Abstract

Cited by 28 (12 self)
We describe a coherent view of learning and reasoning with relational representations in the context of natural language processing. In particular, we discuss the Neuroidal Architecture, Inductive Logic Programming and the SNoW system, explaining the relationships among these, and thereby offer an explanation of the theoretical basis for the SNoW system. We suggest that extensions of this system along the lines suggested by the theory may provide new levels of scalability and functionality. The paper explores some aspects of relational knowledge representation and their learnability. While the discussion is to a large extent general, it is made in the context of low-level natural language processing (NLP) tasks. Recent efforts in NLP emphasize empirical approaches that attempt to learn how to perform various natural language tasks by being trained using an annotated corpus. These approaches have been used for a wide variety of fairly low level tasks such as part-of-speech ...
Learning logic programs with structured background knowledge (Extended Abstract)
Abstract

Cited by 18 (5 self)
The polynomial PAC-learnability of nonrecursive Horn clauses is studied, based on a characterization of the least general generalization of a set of positive examples in terms of products and homomorphisms. This approach is used to show that nonrecursive Horn clauses are polynomially PAC-learnable if there is a single binary background predicate and the ground facts in the background knowledge form a forest. If the ground facts in the background knowledge form a disjoint union of cycles then the situation is different, as the shortest consistent hypothesis may have exponential length. In this case polynomial PAC-learnability holds if a different concept representation is used. We also consider the learnability of multiple clauses in some restricted cases. The theoretical study of efficient learnability developed into a separate field of research in the last decade, motivated by the increasing importance of learning in practical applications. Several learning probl...
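The least general generalization (lgg) at the heart of this approach can be shown in miniature on single atoms: mismatching argument pairs are abstracted to variables, with repeated pairs sharing a variable. A toy sketch of Plotkin-style lgg restricted to atoms (the paper's construction over clauses with background knowledge is substantially richer):

```python
# Illustrative sketch of the least general generalization (lgg) of two atoms
# with the same predicate. Each distinct pair of mismatching terms is mapped
# to one variable -- the atom-level shadow of the product/homomorphism view.

def lgg_atoms(a1, a2):
    (p1, args1), (p2, args2) = a1, a2
    if p1 != p2 or len(args1) != len(args2):
        return None                      # no common generalization
    var_for, out = {}, []
    for t1, t2 in zip(args1, args2):
        if t1 == t2:                     # identical terms stay as they are
            out.append(t1)
        else:                            # reuse one variable per term pair
            if (t1, t2) not in var_for:
                var_for[(t1, t2)] = f"X{len(var_for)}"
            out.append(var_for[(t1, t2)])
    return (p1, out)

# lgg(edge(a,b), edge(b,c)) = edge(X0, X1), while repeated pairs share a
# variable: lgg(p(a,a), p(b,b)) = p(X0, X0), not p(X0, X1).
g1 = lgg_atoms(("edge", ["a", "b"]), ("edge", ["b", "c"]))
g2 = lgg_atoms(("p", ["a", "a"]), ("p", ["b", "b"]))
```

Sharing variables across repeated pairs is what keeps the generalization *least*: p(X0, X0) still covers both examples, whereas p(X0, X1) is strictly more general.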
A Multistrategy Approach to Relational Knowledge Discovery in Databases
Machine Learning Journal, 1996
Abstract

Cited by 13 (9 self)
When learning from very large databases, the reduction of complexity is extremely important. Two extremes of making knowledge discovery in databases (KDD) feasible have been put forward. One extreme is to choose a very simple hypothesis language, thereby being capable of very fast learning on real-world databases. The opposite extreme is to select a small data set, thereby being able to learn very expressive (first-order logic) hypotheses. A multistrategy approach allows one to include most of these advantages and exclude most of the disadvantages. Simpler learning algorithms detect hierarchies which are used to structure the hypothesis space for a more complex learning algorithm. The better structured the hypothesis space is, the better learning can prune away uninteresting or losing hypotheses and the faster it becomes. We have combined inductive logic programming (ILP) directly with a relational database management system. The ILP algorithm is controlled in a model-driven way by t...
(Agnostic) PAC learning concepts in higher-order logic
In: Proc. 17th European Conference on Machine Learning (ECML 2006), Springer LNAI 4212, pp. 711–718, 2006
Abstract

Cited by 2 (1 self)
This paper studies the PAC and agnostic PAC learnability of some standard function classes in the learning in higher-order logic setting introduced by Lloyd et al. In particular, it is shown that the similarity between learning in higher-order logic and traditional attribute-value learning allows many results from computational learning theory to be 'ported' to the logical setting with ease. As a direct consequence, a number of nontrivial results in the higher-order setting can be established with straightforward proofs. Our satisfyingly simple analysis provides another case for a more in-depth study and wider uptake of the proposed higher-order logic approach to symbolic machine learning.
Grammar Approximation by Representative Sublanguage: A New Model for Language Learning
Abstract

Cited by 2 (1 self)
We propose a new language learning model that learns a syntactic-semantic grammar from a small number of natural language strings annotated with their semantics, along with basic assumptions about natural language syntax. We show that the search space for grammar induction is a complete grammar lattice, which guarantees the uniqueness of the learned grammar.
On the hardness of learning acyclic conjunctive queries
In: Proc. 11th Internat. Conf. on Algorithmic Learning Theory, Springer LNAI 1968, 2000
Abstract

Cited by 1 (1 self)
A conjunctive query problem in relational database theory is a problem to determine whether or not a tuple belongs to the answer of a conjunctive query over a database. Here, a tuple and a conjunctive query are regarded as a ground atom and a nonrecursive function-free definite clause, respectively. While the conjunctive query problem is NP-complete in general, it becomes efficiently solvable if the conjunctive query is acyclic. Concerned with this problem, we investigate the learnability of acyclic conjunctive queries from an instance with a j-database, which is a finite set of ground unit clauses containing at most j-ary predicate symbols. We deal with two kinds of instances: a simple instance as a set of ground atoms, and an extended instance as a set of pairs of a ground atom and a description. Then, we show that, for each j ≥ 3, there exists a j-database such that acyclic conjunctive queries are not polynomially predictable from an extended instance under cryptographic assumptions. Also we show that, for each n > 0 and a polynomial p, there exists a p(n)-database of size O(2^p(n)) such that predicting Boolean formulae of size p(n) over n variables reduces to predicting acyclic conjunctive queries from a simple instance. This result implies that, if we can ignore the size of a database, then acyclic conjunctive queries are not polynomially predictable from a simple instance under cryptographic assumptions. Finally, we show that, if either j = 1, or j = 2 and the number of elements of a database is at most l (≥ 0), then acyclic conjunctive queries are PAC-learnable from a simple instance with j-databases.
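The conjunctive query problem the abstract starts from amounts to searching for a homomorphism from the query body into the database. A brute-force sketch makes the general-case hardness concrete (exponential in the number of variables; acyclicity is what admits polynomial algorithms such as Yannakakis'). The encoding of facts and queries below is an illustrative assumption:

```python
# Illustrative sketch: evaluate a conjunctive query by brute-force search for
# a homomorphism from the query body into the database. Facts are
# (predicate, tuple-of-constants); query terms starting with an uppercase
# letter are variables, everything else is a constant.
from itertools import product

def answers(db, head_vars, body):
    """All head-variable bindings under which every body atom holds in db."""
    consts = sorted({c for _, args in db for c in args})
    vars_ = sorted({t for _, args in body for t in args if t[0].isupper()})
    out = set()
    for vals in product(consts, repeat=len(vars_)):   # exponential in |vars_|
        sub = dict(zip(vars_, vals))
        ground = lambda t: sub.get(t, t)
        if all((p, tuple(ground(t) for t in args)) in db for p, args in body):
            out.add(tuple(sub[v] for v in head_vars))
    return out

db = {("edge", ("a", "b")), ("edge", ("b", "c"))}
# q(X, Z) :- edge(X, Y), edge(Y, Z)   -- an acyclic (path-shaped) query
q = answers(db, ["X", "Z"], [("edge", ["X", "Y"]), ("edge", ["Y", "Z"])])
```

Membership of a tuple in the answer is then just a lookup in the computed answer set, which is exactly the decision problem the abstract builds its learnability results around.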
Complexity parameters of first order classes
In: Proceedings of the 13th International Conference on Inductive Logic Programming, 2003
Abstract

Cited by 1 (0 self)
We study several complexity parameters for first order formulas and their suitability for first order learning models. We show that the standard notion of size is not captured by sets of parameters that are used in the literature, and thus they cannot give a complete characterization in terms of learnability with polynomial resources. We then identify an alternative notion of size and a simple set of parameters that are useful in this sense. Matching lower bounds derived using the Vapnik-Chervonenkis dimension complete the picture, showing that these parameters are indeed crucial.