Results 1 - 6 of 6
Delaying the Choice of Bias: A Disjunctive Version Space Approach
 Proceedings of the 13th International Conference on Machine Learning
, 1996
"... This paper is concerned with alleviating the choice of learning biases via a twostep process: \Gamma The set of all hypotheses that are consistent with the data and cover at least one training example, is given an implicit characterization of polynomial complexity. The only bias governing this in ..."
Abstract

Cited by 26 (7 self)
This paper is concerned with alleviating the choice of learning biases via a two-step process:
- The set of all hypotheses that are consistent with the data and cover at least one training example is given an implicit characterization of polynomial complexity. The only bias governing this induction phase is that of the language of hypotheses.
- Classification of further examples is done via interpreting this implicit theory; the interpretation mechanism allows one to relax the consistency requirement and tune the specificity of the theory at no extra induction cost.
Experimental validations demonstrate very good results on both nominal and numerical datasets. 1 INTRODUCTION In a seminal paper, Mitchell (1980) introduced the term bias to refer to any basis for choosing one generalization over another, other than strict consistency with the training instances. Learning biases proceed from at least two motivations: improve the predictive power of the induced theory (Mit...
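The two-step scheme the abstract describes (one set of consistent, covering hypotheses anchored on each training example, then classification with tunable specificity) can be sketched as follows. This is an illustrative toy over nominal attribute vectors, not the paper's actual formalism; the names (`star`, `disjunctive_vs`, the vote `threshold`) are assumptions.

```python
# Toy sketch of a disjunctive version space over nominal attribute tuples.
# Hypotheses are tuples mixing concrete values and wildcards.

WILDCARD = "?"

def covers(hyp, example):
    """A hypothesis covers an example if every non-wildcard slot matches."""
    return all(h == WILDCARD or h == e for h, e in zip(hyp, example))

def generalize(seed, other):
    """Least generalization of two examples: keep agreements, wildcard the rest."""
    return tuple(s if s == o else WILDCARD for s, o in zip(seed, other))

def star(seed, positives, negatives):
    """Hypotheses anchored on one positive seed: generalize against each
    positive, keep only the generalizations rejecting all negatives."""
    hyps = set()
    for p in positives:
        g = generalize(seed, p)
        if not any(covers(g, n) for n in negatives):
            hyps.add(g)
    return hyps

def disjunctive_vs(positives, negatives):
    """One star per positive example: an implicit, polynomial-size theory."""
    return {seed: star(seed, positives, negatives) for seed in positives}

def classify(theory, instance, threshold=1):
    """Tune specificity: require at least `threshold` covering hypotheses."""
    votes = sum(covers(h, instance) for hyps in theory.values() for h in hyps)
    return votes >= threshold
```

Raising `threshold` makes the interpreted theory more specific without re-running induction, which is the "no extra induction cost" point of the abstract.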
CaMeL: Learning method preconditions for HTN planning
 Proceedings of the Sixth International Conference on AI Planning and Scheduling
, 2002
"... A great challenge in using any planning system to solve realworld problems is the difficulty of acquiring the domain knowledge that the system will need. We present a way to address part of this problem, in the context of Hierarchical Task Network (HTN) planning, by having the planning system incre ..."
Abstract

Cited by 21 (1 self)
A great challenge in using any planning system to solve real-world problems is the difficulty of acquiring the domain knowledge that the system will need. We present a way to address part of this problem, in the context of Hierarchical Task Network (HTN) planning, by having the planning system incrementally learn conditions for HTN methods under expert supervision. We present a general formal framework for learning HTN methods, and a supervised learning algorithm, named CaMeL, based on this formalism. We present theoretical results about CaMeL’s soundness, completeness, and convergence properties. We also report experimental results about its speed of convergence under different conditions. The experimental results suggest that CaMeL has the potential to be useful in real-world applications.
Constraint Inductive Logic Programming
, 1996
"... . This paper is concerned with learning from positive and negative examples expressed in firstorder logic with numerical constants. The presented approach is based on the cooperation of Inductive Logic Programming (ILP) and Constraint Logic Programming (CLP), and proceeds as follows: ffl A discrim ..."
Abstract

Cited by 20 (6 self)
This paper is concerned with learning from positive and negative examples expressed in first-order logic with numerical constants. The presented approach is based on the cooperation of Inductive Logic Programming (ILP) and Constraint Logic Programming (CLP), and proceeds as follows:
- A discriminant induction problem is shown to be equivalent to a Constraint Satisfaction Problem (CSP): all constrained clauses covering positive examples and rejecting negative examples can be trivially derived from the solutions of this CSP.
- Solving this CSP then allows one to build the G set of solutions in terms of Version Spaces; this resolution can be delegated to a constraint solver.
- This CSP provides a tractable computational characterization of G, which is sufficient to classify further examples and offers simple counting-based heuristics to resist noisy data.
In this hybrid ILP-CLP approach, CLP performs most of the search involved in inductive learning; the advantage is to benefit fro...
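A toy illustration of the reduction the abstract outlines, under strong simplifying assumptions: a single numeric attribute and a clause constrained by one threshold. The clause shape, the function names, and the interval arithmetic (which a real CLP solver would replace) are all hypothetical.

```python
# Hypothetical sketch: discriminant induction over one numeric attribute,
# recast as a constraint satisfaction problem.
# CSP variable: threshold t in the clause  pos(X) :- value(X, V), V <= t.
# Constraints: cover every positive example, reject every negative one.

def threshold_csp(positives, negatives):
    """Return the interval of admissible thresholds, i.e. the solution
    set of the CSP, or None when no consistent clause exists."""
    lo = max(positives)          # cover all positives:  t >= max(pos)
    hi = min(negatives)          # reject all negatives: t <  min(neg)
    if lo >= hi:
        return None              # the CSP is unsatisfiable
    return (lo, hi)

def g_classify(interval, value):
    """Classify with the maximally general solution, the analogue of
    picking a clause from the G set of the version space."""
    lo, hi = interval
    return value < hi
```

The interval here plays the role of the implicit characterization of G: it is never enumerated, yet it suffices to classify further examples.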
Knowledge Discovery in Texts: A Definition, and Applications
 Proc. of the 11th International Symposium on Foundations of Intelligent Systems (ISMIS-99)
, 1999
"... . The first part of this paper will give a general view of Knowledge Discovery in Data (KDD) in order to insist on how much it differs from the fields it stems from, and in some cases, how much it opposes them. The second part will a definition of Knowledge Discovery in Texts (KDT), as opposed to w ..."
Abstract

Cited by 17 (4 self)
The first part of this paper will give a general view of Knowledge Discovery in Data (KDD) in order to insist on how much it differs from the fields it stems from and, in some cases, how much it opposes them. The second part will give a definition of Knowledge Discovery in Texts (KDT), as opposed to what is presently known under the names of information retrieval, information extraction, or knowledge extraction. I will provide an example of a real-life set of rules obtained by what I want to define as KDT techniques. 1.0 Introduction KDD is better known in industry under the name of Data Mining (DM). Actually, industrialists should be more interested in KDD, which comprises the whole process of data selection, data cleaning, transfer to a DM technique, applying the DM technique, validating the results of the DM technique, and finally interpreting them for the user. In general, this process is a cycle that improves under the criticism of the expert. Conversely, DM designates a set of in...
Learning preconditions for planning from plan traces and HTN structure
 Computational Intelligence
, 2005
"... Agreat challenge in developing planning systems for practical applications is the difficulty of acquiring the domain information needed to guide such systems. This paper describes a way to learn some of that knowledge. More specifically, the following points are discussed. (1) We introduce a theoret ..."
Abstract

Cited by 11 (3 self)
A great challenge in developing planning systems for practical applications is the difficulty of acquiring the domain information needed to guide such systems. This paper describes a way to learn some of that knowledge. More specifically, the following points are discussed. (1) We introduce a theoretical basis for formally defining algorithms that learn preconditions for Hierarchical Task Network (HTN) methods. (2) We describe Candidate Elimination Method Learner (CaMeL), a supervised, eager, and incremental learning process for preconditions of HTN methods. We state and prove theorems about CaMeL’s soundness, completeness, and convergence properties. (3) We present empirical results about CaMeL’s convergence under various conditions. Among other things, CaMeL converges the fastest on the preconditions of the HTN methods that are needed the most often. Thus CaMeL’s output can be useful even before it has fully converged.
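As its name says, CaMeL builds on Mitchell's candidate elimination. A minimal sketch of that classical procedure over attribute vectors (standing in, as an assumption, for the paper's logical method preconditions; the generate-and-test G boundary here is a simplification) is:

```python
# Sketch of classical candidate elimination (Mitchell), the procedure
# CaMeL adapts to learn HTN method preconditions. Hypotheses are tuples
# of attribute values and wildcards; this is illustrative, not CaMeL.

WILDCARD = "?"

def covers(hyp, ex):
    return all(h == WILDCARD or h == v for h, v in zip(hyp, ex))

def candidate_elimination(examples, n_attrs):
    """examples: list of (attribute_tuple, label). Maintains the specific
    boundary S as a single least generalization, then recovers a general
    boundary G by brute force (fine for a handful of nominal attributes)."""
    s = None                                   # most specific hypothesis
    negatives = []
    for ex, label in examples:
        if label:
            s = ex if s is None else tuple(
                a if a == b else WILDCARD for a, b in zip(s, ex))
        else:
            negatives.append(ex)
    # G: one-attribute specializations of the all-wildcard hypothesis
    # that stay consistent with S and reject every negative example.
    g = []
    for i in range(n_attrs):
        if s[i] == WILDCARD:
            continue
        h = tuple(s[i] if j == i else WILDCARD for j in range(n_attrs))
        if not any(covers(h, n) for n in negatives):
            g.append(h)
    return s, g
```

Convergence in the paper's sense corresponds to S and G meeting; before that point, the two boundaries already bracket the target precondition, which is why partially converged output can still be useful.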
PLANNING BY EXAMINING DECOMPOSITIONAL PLAN TRACES
, 2006
"... Knowledgebased, handtailorable planning systems are the most promising planning systems to solve realwold planning problems. A great challenge in using any such knowledgebased planning system in real world is the difficulty of acquiring the domain knowledge needed to guide such a system. The obj ..."
Abstract
Knowledge-based, hand-tailorable planning systems are the most promising planning systems for solving real-world planning problems. A great challenge in using any such knowledge-based planning system in the real world is the difficulty of acquiring the domain knowledge needed to guide such a system. The objective of this dissertation is to investigate ways to acquire part or all of this information automatically in the context of Hierarchical Task Network (HTN) planning. Knowledge acquisition is done by examining the decisions made by an expert problem solver while solving planning problems in a given HTN domain. This dissertation first describes a general framework to define what an HTN domain learner is, what its inputs and outputs are, how to evaluate such a learner, and what soundness, completeness, and convergence mean in this context. Afterwards, two different HTN domain learning algorithms, CaMeL and HDL, are discussed. These two algorithms are then extended to handle noise in training samples, and to help HTN planners start planning before full convergence is achieved by CaMeL or HDL. These extensions result in a family of different HTN domain learning algorithms, each of which can be useful un...