Results 1–10 of 21
First Order jk-clausal Theories are PAC-learnable
 Artificial Intelligence
, 1994
Abstract

Cited by 64 (27 self)
We present positive PAC-learning results for the nonmonotonic inductive logic programming setting. In particular, we show that first order range-restricted clausal theories that consist of clauses with up to k literals of size at most j each are polynomial-sample polynomial-time PAC-learnable with one-sided error from positive examples only. In our framework, concepts are clausal theories and examples are finite interpretations. We discuss the problems encountered when learning theories which only have infinite nontrivial models and propose a way to avoid these problems using a representation change called flattening. Finally, we compare our results to PAC-learnability results for the normal inductive logic programming setting.
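The flattening step this abstract mentions can be sketched concretely: each functional term f(t1,...,tn) in a clause is replaced by a fresh variable V, and a new literal p_f(t1,...,tn,V) is added, yielding a function-free clause. The encoding below (nested tuples for terms, p_-prefixed predicate names) is an illustrative assumption, not the paper's notation.

```python
# A minimal sketch of flattening, assuming terms are nested tuples
# ("f", arg1, ...) and variables/constants are plain strings. The
# p_f predicate names are invented for illustration.
from itertools import count

def flatten_term(term, extra_literals, fresh):
    """Return a variable standing for `term`, emitting p_f literals."""
    if isinstance(term, str):          # variable or constant: already flat
        return term
    functor, args = term[0], term[1:]  # compound term f(t1,...,tn)
    flat_args = [flatten_term(a, extra_literals, fresh) for a in args]
    var = f"V{next(fresh)}"
    extra_literals.append((f"p_{functor}", *flat_args, var))
    return var

def flatten_clause(head, body):
    """Flatten head and body literals, appending the new p_f literals."""
    fresh = count()
    extra = []
    new_head = (head[0], *[flatten_term(a, extra, fresh) for a in head[1:]])
    new_body = [(lit[0], *[flatten_term(a, extra, fresh) for a in lit[1:]])
                for lit in body]
    return new_head, new_body + extra

# even(s(s(X))) :- even(X)  becomes a function-free clause:
head, body = flatten_clause(("even", ("s", ("s", "X"))), [("even", "X")])
# head is ("even", "V1"); body gains p_s("X","V0") and p_s("V0","V1")
```

The flattened clause has only variables and constants as arguments, which is what makes the finite-interpretation framework applicable.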
Relational Learning via Propositional Algorithms: An Information Extraction Case Study
, 2001
Abstract

Cited by 44 (12 self)
This paper develops a new paradigm for relational learning which allows for the representation and learning of relational information using propositional means. This paradigm suggests different tradeoffs than those in the traditional approach to this problem, the ILP approach, and as a result it enjoys several significant advantages over it. In particular, the new paradigm is more flexible and allows the use of any propositional algorithm, including probabilistic algorithms, within it. We evaluate the new approach on an important and relation-intensive task, Information Extraction, and show that it outperforms existing methods while being orders of magnitude more efficient.
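The core move of such propositional approaches can be sketched in a few lines: relational ground facts about each example are mapped through feature templates to a fixed boolean vector, which any propositional learner can then consume. The relations and templates below are toy values invented for illustration, not the paper's actual feature set.

```python
# A minimal sketch of propositionalizing relational facts, assuming
# facts are ground tuples like ("next", "w1", "w2"). Relation and
# template names are hypothetical.
def propositionalize(example_facts, feature_templates):
    """Map a set of ground facts to a 0/1 vector, one bit per template."""
    return [1 if template(example_facts) else 0
            for template in feature_templates]

# Two illustrative relational features over word/neighbor facts:
templates = [
    lambda facts: ("capitalized", "w1") in facts,
    lambda facts: any(f[0] == "next" and ("word", f[2], "Inc.") in facts
                      for f in facts),
]

facts = {("capitalized", "w1"), ("next", "w1", "w2"), ("word", "w2", "Inc.")}
vec = propositionalize(facts, templates)   # -> [1, 1]
```

Once every example is a fixed-length 0/1 vector, probabilistic or linear propositional learners apply unchanged, which is the flexibility the abstract claims.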
Relational Learning for NLP using Linear Threshold Elements
, 1999
Abstract

Cited by 29 (12 self)
We describe a coherent view of learning and reasoning with relational representations in the context of natural language processing. In particular, we discuss the Neuroidal Architecture, Inductive Logic Programming and the SNoW system, explaining the relationships among these, and thereby offer an explanation of the theoretical basis for the SNoW system. We suggest that extensions of this system along the lines suggested by the theory may provide new levels of scalability and functionality. 1 Introduction The paper explores some aspects of relational knowledge representation and their learnability. While the discussion is to a large extent general, it is made in the context of low-level natural language processing (NLP) tasks. Recent efforts in NLP emphasize empirical approaches that attempt to learn how to perform various natural language tasks by being trained using an annotated corpus. These approaches have been used for a wide variety of fairly low-level tasks such as part-of-speech...
Robust Logics
Abstract

Cited by 29 (6 self)
Suppose that we wish to learn from examples and counterexamples a criterion for recognizing whether an assembly of wooden blocks constitutes an arch. Suppose also that we have preprogrammed recognizers for various relationships, e.g. on-top-of(x, y), above(x, y), etc., and believe that some possibly complex expression in terms of these base relationships should suffice to approximate the desired notion of an arch. How can we formulate such a relational learning problem so as to exploit the benefits that are demonstrably available in propositional learning, such as attribute-efficient learning by linear separators, and error-resilient learning? We believe that learning in a general setting that allows for multiple objects and relations in this way is a fundamental key to resolving the following dilemma that arises in the design of intelligent systems: Mathematical logic is an attractive language of description because it has clear semantics and sound proof procedures. However, as a basis for large programmed systems it leads to brittleness because, in practice, consistent usage of the various predicate names throughout a system cannot be guaranteed, except in application areas such as mathematics where the viability of the axiomatic method has been demonstrated independently. In this paper we develop the following approach to circumventing this dilemma. We suggest that brittleness can be overcome by using a new kind of logic in which each statement is learnable. By allowing the system to learn rules empirically from the environment, relative to any particular programs it may have for recognizing some base predicates, we enable the system to acquire a set of statements approximately consistent with each other and with the world, without the need for a globally knowledgeable and consistent programmer. We illustrate...
Phase Transitions in Relational Learning
, 2000
Abstract

Cited by 22 (2 self)
One of the major limitations of relational learning is due to the complexity of verifying hypotheses on examples. In this paper we investigate this task in light of recently published results, which show that many hard problems exhibit a narrow “phase transition” with respect to some order parameter, coupled with a large increase in computational complexity. First we show that matching a class of artificially generated Horn clauses on ground instances presents a typical phase transition in solvability with respect to both the number of literals in the clause and the number of constants occurring in the instance to match. Then, we demonstrate that phase transitions also appear in real-world learning problems, and that learners tend to generate inductive hypotheses lying exactly on the phase transition. On the other hand, extensive experimentation revealed that not every matching problem inside the phase transition region is intractable. Unfortunately, identifying those that are feasible cannot be done solely on the basis of the order parameters. To face this problem, we propose a method, based on a Monte Carlo algorithm, to estimate online the likelihood that the current matching problem will exceed a given amount of computational resources. The impact of the above findings on relational learning is discussed.
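The two ingredients in this abstract can be sketched together: matching a function-free clause against a ground instance is a backtracking search over substitutions, and its cost can be probed online by repeated randomized trials under a node budget. The clause, facts, and budget values below are invented toy data, not the paper's experimental setup.

```python
# A rough sketch, under invented toy data, of clause matching as
# backtracking search plus a Monte Carlo probe of its cost.
import random

def clause_vars(literals):
    """Uppercase-initial arguments are treated as variables (assumption)."""
    return sorted({a for lit in literals for a in lit[1:] if a[0].isupper()})

def match(literals, facts, constants, cap):
    """Search for a substitution grounding every literal to a fact.
    Returns (found, nodes); found is None when the node cap is hit."""
    nodes = 0

    def solve(subst, remaining):
        nonlocal nodes
        nodes += 1
        if nodes > cap:
            raise RuntimeError("budget exceeded")
        if not remaining:
            return all(tuple(subst.get(a, a) for a in lit) in facts
                       for lit in literals)
        var, rest = remaining[0], remaining[1:]
        order = constants[:]
        random.shuffle(order)          # randomized order for the MC trials
        return any(solve({**subst, var: c}, rest) for c in order)

    try:
        return solve({}, clause_vars(literals)), nodes
    except RuntimeError:
        return None, nodes

def estimate_hardness(literals, facts, constants, cap=100, trials=20):
    """Monte Carlo estimate of the chance one match exceeds `cap` nodes."""
    failures = sum(match(literals, facts, constants, cap)[0] is None
                   for _ in range(trials))
    return failures / trials

clause = [("p", "X", "Y"), ("p", "Y", "Z")]
facts = {("p", "a", "b"), ("p", "b", "c")}
found, nodes = match(clause, facts, ["a", "b", "c"], cap=1000)  # found: True
```

Varying the number of literals or constants in such toy instances is exactly the kind of order-parameter sweep along which the solvability phase transition appears.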
Learning logic programs with structured background knowledge (Extended Abstract)
Abstract

Cited by 18 (5 self)
The polynomial PAC-learnability of nonrecursive Horn clauses is studied, based on a characterization of the least general generalization of a set of positive examples in terms of products and homomorphisms. This approach is used to show that nonrecursive Horn clauses are polynomially PAC-learnable if there is a single binary background predicate and the ground facts in the background knowledge form a forest. If the ground facts in the background knowledge form a disjoint union of cycles, then the situation is different, as the shortest consistent hypothesis may have exponential length. In this case polynomial PAC-learnability holds if a different concept representation is used. We also consider the learnability of multiple clauses in some restricted cases. 1 Introduction The theoretical study of efficient learnability developed into a separate field of research in the last decade, motivated by the increasing importance of learning in practical applications. Several learning probl...
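The least general generalization (lgg) underlying this abstract can be sketched for the simplest case, two function-free atoms: equal subterms are kept, and each distinct pair of differing terms is replaced by one shared fresh variable (Plotkin's construction). The tuple encoding and variable naming below are illustrative assumptions.

```python
# A small sketch of the lgg of two function-free atoms, assuming atoms
# are tuples ("pred", arg1, ...). Variable names V0, V1, ... are made up.
def lgg_atoms(a1, a2, table=None):
    """lgg of two atoms with the same predicate; `table` maps pairs of
    differing terms to the shared variable that generalizes them."""
    if a1[0] != a2[0] or len(a1) != len(a2):
        return None                    # different predicates: no lgg atom
    table = table if table is not None else {}
    args = []
    for t1, t2 in zip(a1[1:], a2[1:]):
        if t1 == t2:
            args.append(t1)            # equal terms survive unchanged
        else:
            # one variable per distinct pair, reused on repetition
            args.append(table.setdefault((t1, t2), f"V{len(table)}"))
    return (a1[0], *args)

# lgg(parent(ann, bob), parent(eve, bob)) -> parent(V0, bob)
g = lgg_atoms(("parent", "ann", "bob"), ("parent", "eve", "bob"))
```

Passing one shared `table` across all atom pairs of two clauses is what makes the clause-level lgg consistent, and it is where the product/homomorphism view of the abstract comes in.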
A Multistrategy Approach to Relational Knowledge Discovery in Databases
 Machine Learning Journal
, 1996
Abstract

Cited by 13 (9 self)
When learning from very large databases, the reduction of complexity is extremely important. Two extremes of making knowledge discovery in databases (KDD) feasible have been put forward. One extreme is to choose a very simple hypothesis language, thereby being capable of very fast learning on real-world databases. The opposite extreme is to select a small data set, thereby being able to learn very expressive (first-order logic) hypotheses. A multistrategy approach allows one to include most of these advantages and exclude most of the disadvantages. Simpler learning algorithms detect hierarchies which are used to structure the hypothesis space for a more complex learning algorithm. The better structured the hypothesis space is, the better learning can prune away uninteresting or losing hypotheses and the faster it becomes. We have combined inductive logic programming (ILP) directly with a relational database management system. The ILP algorithm is controlled in a model-driven way by t...
Grammar Approximation by Representative Sublanguage: A New Model for Language Learning
Abstract

Cited by 2 (1 self)
We propose a new language learning model that learns a syntactic-semantic grammar from a small number of natural language strings annotated with their semantics, along with basic assumptions about natural language syntax. We show that the search space for grammar induction is a complete grammar lattice, which guarantees the uniqueness of the learned grammar.
(Agnostic) PAC learning concepts in higher-order logic
 In: Proc. 17th European Conference on Machine Learning (ECML 2006). Springer, LNAI 4212 (2006) 711–718
, 2006
Abstract

Cited by 2 (1 self)
This paper studies the PAC and agnostic PAC learnability of some standard function classes in the learning in higher-order logic setting introduced by Lloyd et al. In particular, it is shown that the similarity between learning in higher-order logic and traditional attribute-value learning allows many results from computational learning theory to be ‘ported’ to the logical setting with ease. As a direct consequence, a number of nontrivial results in the higher-order setting can be established with straightforward proofs. Our satisfyingly simple analysis provides another case for a more in-depth study and wider uptake of the proposed higher-order logic approach to symbolic machine learning.
On learnability and predicate logic (Extended Abstract)
Abstract

Cited by 1 (0 self)
W. Maass and Gy. Turán. 1 Introduction Several applications of learning in artificial intelligence use a predicate logic formalism. The theoretical study of efficient learnability in this area, in the framework of computational learning theory, started relatively recently, considering, for example, the PAC (Probably Approximately Correct) learnability of logic programs and description logic (see Cohen and Hirsh [6], the survey of Kietz and Dzeroski [11] and the further references in these papers). In this paper we discuss a model-theoretic approach to learnability in predicate logic. Results in this direction were obtained by Osherson, Stob and Weinstein [15]. It is assumed that there is a first-order structure given. Instances are tuples of elements of the universe of the model, and concepts are relations that are definable in the model by formulas from some prespecified class. The goal of the learner is to identify an unknown target concept in some specific model of lear...