Results 1–10 of 28
Theories for Mutagenicity: A Study in First-Order and Feature-Based Induction
Artificial Intelligence, 1996
Abstract

Cited by 152 (30 self)
A classic problem from chemistry is used to test a conjecture that in domains for which data are most naturally represented by graphs, theories constructed with Inductive Logic Programming (ILP) will significantly outperform those using simpler feature-based methods. One area that has long been associated with graph-based or structural representation and reasoning is organic chemistry. In this field, we consider the problem of predicting the mutagenic activity of small molecules: a property that is related to carcinogenicity, and an important consideration in developing less hazardous drugs. By providing an ILP system with progressively more structural information concerning the molecules, we compare the predictive power of the logical theories constructed against benchmarks set by regression, neural, and tree-based methods. 1 Introduction Constructing theories to explain observations occupies much of the creative hours of scientists and engineers. Programs from the field of Inductiv...
Feature construction with Inductive Logic Programming: a study of quantitative predictions of chemical activity aided by structural attributes
Data Mining and Knowledge Discovery, 1996
Abstract

Cited by 64 (9 self)
Recently, computer programs developed within the field of Inductive Logic Programming have received some attention for their ability to construct restricted first-order logic solutions using problem-specific background knowledge. Prominent applications of such programs have been concerned with determining "structure-activity" relationships in the areas of molecular biology and chemistry. Typically the task here is to predict the "activity" of a compound, like toxicity, from its chemical structure.
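A hypothetical sketch of the idea described above: structural predicates constructed by an ILP system become boolean attributes that an ordinary feature-based learner can consume. The compound representation (sets of named fragments) and the substructure tests are illustrative stand-ins, not the paper's actual data or predicates.

```python
def make_feature(substructure_test):
    """Wrap an ILP-constructed structural predicate as a boolean attribute.
    `substructure_test` stands in for a learned first-order definition."""
    return lambda compound: int(substructure_test(compound))

def feature_table(compounds, features):
    """Tabulate compounds against the constructed features; the resulting
    attribute-value rows are what a regression or tree learner consumes."""
    return [[f(c) for f in features] for c in compounds]

# Illustrative compounds as sets of named fragments (not real chemistry data)
compounds = [{"benzene", "nitro"}, {"benzene"}, {"furan"}]
features = [
    make_feature(lambda c: "benzene" in c),
    make_feature(lambda c: "nitro" in c),
]
table = feature_table(compounds, features)
```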
Compression, Significance and Accuracy
1992
Abstract

Cited by 43 (5 self)
Inductive Logic Programming (ILP) involves learning relational concepts from examples and background knowledge. To date all ILP learning systems make use of tests inherited from propositional and decision tree learning for evaluating the significance of hypotheses. None of these significance tests takes account of the relevance or utility of the background knowledge. In this paper we describe a method, called HP-compression, of evaluating the significance of a hypothesis based on the degree to which it allows compression of the observed data with respect to the background knowledge. This can be measured by comparing the lengths of the input and output tapes of a reference Turing machine which will generate the examples from the hypothesis and a set of derivational proofs. The model extends an earlier approach of Muggleton by allowing for noise. The truth values of noisy instances are switched by making use of correction codes. The utility of compression as a significance measure is evaluated empirically in three independent domains. In particular, the results show that the existence of positive compression distinguishes a larger number of significant clauses than other significance tests. The method is also shown to reliably distinguish artificially introduced noise as incompressible data.
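The arithmetic of "positive compression" can be sketched in MDL style. This is a toy model under assumed encoding costs (uniform codes for raw examples; the binomial coefficient as the cost of the correction code that identifies which examples had their truth values switched), not the paper's reference-Turing-machine formulation.

```python
import math

def description_length_bits(n_items, alphabet_size):
    """Bits to encode n_items symbols under a uniform code (an assumption)."""
    return n_items * math.log2(alphabet_size)

def positive_compression(n_examples, alphabet_size, hypothesis_bits, n_noisy):
    """A hypothesis shows positive compression when the hypothesis plus the
    correction codes for noisy examples is shorter than the raw encoding of
    the examples themselves. Returns the compression gain in bits."""
    raw_bits = description_length_bits(n_examples, alphabet_size)
    # Correction code: identify which n_noisy of the n_examples were switched
    # (the binomial-coefficient cost alone, as a simple approximation)
    correction_bits = math.log2(math.comb(n_examples, n_noisy)) if n_noisy else 0.0
    compressed_bits = hypothesis_bits + correction_bits
    return raw_bits - compressed_bits  # > 0 means the hypothesis compresses the data

# Example: 100 binary-labelled examples, a 20-bit hypothesis, 3 noisy examples
gain = positive_compression(100, 2, 20.0, 3)
```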
Inverting Implication
Artificial Intelligence Journal, 1992
Abstract

Cited by 26 (2 self)
All generalisations within logic involve inverting implication. Yet, ever since Plotkin's work in the early 1970s, methods of generalising first-order clauses have involved inverting the clausal subsumption relationship. However, even Plotkin realised that this approach was incomplete. Since inversion of subsumption is central to many Inductive Logic Programming approaches, this form of incompleteness has been propagated to techniques such as Inverse Resolution and Relative Least General Generalisation. A more complete approach to inverting implication has been attempted with some success recently by Lapointe and Matwin. In the present paper the author derives general solutions to this problem from first principles. It is shown that clausal subsumption is only incomplete for self-recursive clauses. Avoiding this incompleteness involves algorithms which find "nth roots" of clauses. Completeness and correctness results are proved for a nondeterministic algorithm which constructs nth ro...
A study of two sampling methods for analysing large datasets with ILP
1999
Abstract

Cited by 24 (5 self)
This paper is concerned with problems that arise when submitting large quantities of data to analysis by an Inductive Logic Programming (ILP) system. Complexity arguments usually make it prohibitive to analyse such datasets in their entirety. We examine two schemes that allow an ILP system to construct theories by sampling from this large pool of data. The first, "subsampling", is a single-sample design in which the utility of a potential rule is evaluated on a randomly selected subsample of the data. The second, "logical windowing", is a multiple-sample design that tests a partially correct theory and sequentially includes the errors it makes. Both schemes are derived from techniques developed to enable propositional learning methods (like decision trees) to cope with large datasets. The ILP system CProgol, equipped with each of these methods, is used to construct theories for two datasets: one artificial (a chess endgame) and the other naturally occurring (a language tagging problem). I...
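The two sampling schemes can be sketched abstractly. This is a minimal illustration, not CProgol's implementation: rules and theories are plain Python predicates, and the utility measure is simple accuracy, both stand-in assumptions.

```python
import random

def rule_utility(rule, examples):
    """Fraction of (x, y) examples the rule labels correctly (a stand-in for
    whatever 'goodness' measure the ILP system actually uses)."""
    return sum(rule(x) == y for x, y in examples) / len(examples)

def subsampled_utility(rule, examples, sample_size, rng=random):
    """'Subsampling': score a candidate rule on one random subsample instead
    of the whole dataset (the single-sample design)."""
    sample = rng.sample(examples, min(sample_size, len(examples)))
    return rule_utility(rule, sample)

def logical_windowing(learn, examples, initial_size, rounds, rng=random):
    """'Logical windowing' (the multiple-sample design): learn on a window,
    then repeatedly grow the window with the errors the current theory makes."""
    window = rng.sample(examples, min(initial_size, len(examples)))
    theory = learn(window)
    for _ in range(rounds):
        errors = [(x, y) for x, y in examples if theory(x) != y]
        if not errors:
            break
        window += errors
        theory = learn(window)
    return theory
```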
A study of two probabilistic methods for searching large spaces with ILP
1999
Abstract

Cited by 22 (3 self)
Given sample data and background knowledge encoded in the form of logic programs, a predictive Inductive Logic Programming (ILP) system attempts to find a set of rules (or clauses) for predicting classification labels in the data. Most present-day systems for this purpose rely on some variant of a generate-and-test procedure that repeatedly examines a set of potential candidates (termed here as the "search space") and selects one or more clauses according to some criterion of "goodness". The worst-case time-complexity of such systems depends critically on: (1) the size of the search space; and (2) the cost of estimating the goodness of a clause. This paper is concerned with addressing the first issue and is motivated by two principal factors. First, the representation adopted by an ILP system often engenders a search space whose size dominates complexity calculations. Straightforward arguments show that examining fewer clauses should lead to faster execution times. Second,...
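To make the first factor concrete, a small calculation under a simplifying assumption: if clause bodies are drawn from a fixed pool of candidate literals, the number of bodies of bounded length grows combinatorially. This upper-bound model ignores variable bindings, which only enlarge the real space.

```python
import math

def clause_space_size(n_candidate_literals, max_body_length):
    """Number of distinct clause bodies with at most `max_body_length` literals
    drawn from a pool of `n_candidate_literals` (a simplified upper-bound
    model; real ILP search spaces also vary over variable bindings)."""
    return sum(math.comb(n_candidate_literals, i)
               for i in range(max_body_length + 1))

# Even modest settings give millions of candidate clauses to examine
size = clause_space_size(100, 4)
```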
The role of background knowledge: using a problem from chemistry to examine the performance of an ILP program
1996
Abstract

Cited by 20 (2 self)
Inductive Logic Programming (ILP) systems construct explanations for data in terms of domain-specific background information. How does the quality of this information affect the performance of an ILP system? Results from experiments concerned with learning simple programs for list processing suggest that performance is sensitive to the type and amount of background knowledge provided. In particular, background knowledge that contains large amounts of information known to be irrelevant to the problem being considered can, and typically does, prevent an ILP system from finding a correct explanation.
Numerical reasoning with an ILP system capable of lazy evaluation and customised search
Journal of Logic Programming, 1999
Abstract

Cited by 16 (6 self)
Using problem-specific background knowledge, computer programs developed within the framework of Inductive Logic Programming (ILP) have been used to construct restricted first-order logic solutions to scientific problems. However, their approach to the analysis of data with substantial numerical content has been largely limited to constructing clauses that: (a) provide qualitative descriptions ("high", "low" etc.) of the values of response variables; and (b) contain simple inequalities restricting the ranges of predictor variables. This has precluded the application of such techniques to scientific and engineering problems requiring a more sophisticated approach. A number of specialised methods have been suggested to remedy this. In contrast, we have chosen to take advantage of the fact that the existing theoretical framework for ILP places very few restrictions on the nature of the background knowledge. We describe two issues of implementation that make it possible to us...
Extracting context-sensitive models in Inductive Logic Programming
Machine Learning, 2001
Abstract

Cited by 11 (1 self)
Given domain-specific background knowledge and data in the form of examples, an Inductive Logic Programming (ILP) system extracts models in the data-analytic sense. We view the model-selection step facing an ILP system as a decision problem, the solution of which requires knowledge of the context in which the model is to be deployed. In this paper, "context" will be defined by the current specification of the prior class distribution and the client's preferences concerning errors of classification. Within this restricted setting, we consider the use of an ILP system in situations where: (a) contexts can change regularly. This can arise, for example, from changes to class distributions or misclassification costs; and (b) the data are from observational studies. That is, they may not have been collected with any particular context in mind. Some repercussions of these are: (a) any one model may not be the optimal choice for all contexts; and (b) not all the background information provided may be relevant for all contexts. Using results from the analysis of Receiver Operating Characteristic curves, we investigate a technique that can equip an ILP system to reject those models that cannot possibly be optimal in any context. We present empirical results from using the technique to analyse two datasets concerned with the toxicity of chemicals (in particular, their mutagenic and carcinogenic properties). Clients can, and typically do, approach such datasets with quite different requirements. For example, a synthetic chemist would require models with a low rate of commission errors, which could be used to direct efficiently the synthesis of new compounds. A toxicologist, on the other hand, would prefer models with a low rate of omission errors. This would enable a more complete identificati...
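The rejection technique can be sketched with the standard ROC-analysis result that a classifier strictly below the upper convex hull of ROC space cannot be optimal under any class distribution or misclassification-cost setting. The function and data below are illustrative, not the paper's system.

```python
def roc_hull(models):
    """Return the names of models on the upper convex hull of ROC space.
    `models` maps name -> (false positive rate, true positive rate).
    Models strictly below the hull are rejected: no prior class distribution
    or misclassification-cost combination makes them optimal."""
    pts = sorted(models.items(), key=lambda kv: (kv[1][0], -kv[1][1]))
    # Anchor the hull at the trivial classifiers: reject-all (0,0), accept-all (1,1)
    anchored = [("_reject_all", (0.0, 0.0))] + pts + [("_accept_all", (1.0, 1.0))]
    hull = []
    for name, (x, y) in anchored:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2][1], hull[-1][1]
            # Pop the middle point if it lies strictly below the chord
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) > 0:
                hull.pop()
            else:
                break
        hull.append((name, (x, y)))
    return {n for n, _ in hull if not n.startswith("_")}
```

Model B below sits under the chord joining A and C, so it is rejected regardless of context, while A and C each remain optimal for some range of cost/prior settings.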
Learning an Approximation to Inductive Logic Programming Clause Evaluation
In Proceedings of the 14th international, 2004
Abstract

Cited by 8 (1 self)
One challenge faced by many Inductive Logic Programming (ILP) systems is poor scalability to problems with large search spaces and many examples. Randomized search methods such as stochastic clause selection (SCS) and rapid random restarts (RRR) have proven somewhat successful at addressing this weakness. However, on datasets where hypothesis evaluation is computationally expensive, even these algorithms may take unreasonably long to discover a good solution. We attempt to improve the performance of these algorithms on datasets by learning an approximation to ILP hypothesis evaluation. We generate a small set of hypotheses, uniformly sampled from the space of candidate hypotheses, and evaluate this set on actual data. These hypotheses and their corresponding evaluation scores serve as training data for learning an approximate hypothesis evaluator. We outline three techniques that make use of the trained evaluation-function approximator in order to reduce the computation required during an ILP hypothesis search. We test our approximate clause evaluation algorithm using the popular ILP system Aleph.
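A minimal sketch of the pipeline this abstract describes, with stand-in choices: hypotheses reduced to numeric feature vectors, a 1-nearest-neighbour lookup as the cheap approximate evaluator (the paper's actual learner and features are not specified here), and a screening step that spends the expensive true evaluation only on the approximator's top-k candidates.

```python
def train_approximator(sampled):
    """`sampled`: list of (feature_vector, true_score) pairs obtained by
    evaluating a uniform sample of hypotheses on the actual data. Returns a
    cheap scorer: a 1-nearest-neighbour lookup (an assumed stand-in learner)."""
    def approx(features):
        def sqdist(f):
            return sum((a - b) ** 2 for a, b in zip(f, features))
        # Score of the most similar sampled hypothesis
        return min(sampled, key=lambda pair: sqdist(pair[0]))[1]
    return approx

def screen(candidates, approx, true_eval, k):
    """Rank candidate hypotheses by the cheap approximator, then run the
    expensive true evaluation only on the top-k, returning the best of those."""
    ranked = sorted(candidates, key=approx, reverse=True)
    return max(ranked[:k], key=true_eval)
```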