Results 1–4 of 4
Theory Refinement Combining Analytical and Empirical Methods
 Artificial Intelligence
, 1994
Abstract

Cited by 126 (7 self)
This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, fewer training examples are generally required to attain a given level of performance (classification accuracy) than with a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis.
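The propositional Horn-clause setting this abstract describes can be illustrated with simple forward chaining over if-then rules. A minimal Python sketch, with invented symptom and rule names (this is an illustration of the representation, not the paper's revision algorithm):

```python
def forward_chain(rules, facts):
    """Derive every proposition entailed by Horn rules from a set of facts.

    rules: list of (body, head) pairs, where body is a frozenset of
    propositions that must all hold for head to be derived.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

# Toy plant-diagnosis theory (all names are hypothetical).
rules = [
    (frozenset({"leaf_spots", "humid"}), "fungal_infection"),
    (frozenset({"fungal_infection"}), "diseased"),
]
print(forward_chain(rules, {"leaf_spots", "humid"}))
```

A revision system of the kind described would modify the rule bodies when examples are misclassified, rather than learning rules from scratch.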
An Efficient First-Order Horn-Clause Abduction System Based on the ATMS
 In Proceedings of the Ninth National Conference on Artificial Intelligence
, 1991
Abstract

Cited by 12 (5 self)
This paper presents an algorithm for first-order Horn-clause abduction that uses an ATMS to avoid redundant computation. This algorithm is either more efficient or more general than any previous abduction algorithm. Since computing all minimal abductive explanations is intractable, we also present a heuristic version of the algorithm that uses beam search to compute a subset of the simplest explanations. We present empirical results on a broad range of abduction problems from text understanding, plan recognition, and device diagnosis, which demonstrate that our algorithm is at least an order of magnitude faster than an alternative abduction algorithm that does not use an ATMS.
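The beam-search heuristic the abstract mentions can be sketched as backward chaining that assumes any unexplained leaf proposition, keeping only the states with the fewest assumptions. A minimal Python sketch with invented rule names (this is not the authors' ATMS-based algorithm, just the general beam-search idea):

```python
def abduce(rules, goal, beam_width=3):
    """Backward-chaining abduction with a beam over explanation size.

    rules: dict mapping a head proposition to a list of rule bodies
    (each body a list of propositions). A goal with no rule is assumed.
    Returns up to beam_width sets of assumed propositions.
    """
    # Each state: (goals still to explain, assumptions made so far)
    beam = [([goal], frozenset())]
    explanations = []
    while beam and len(explanations) < beam_width:
        next_beam = []
        for goals, assumed in beam:
            if not goals:
                explanations.append(assumed)  # fully explained
                continue
            g, rest = goals[0], goals[1:]
            bodies = rules.get(g)
            if bodies:
                for body in bodies:  # expand g via each rule
                    next_beam.append((list(body) + rest, assumed))
            else:  # no rule concludes g: assume it
                next_beam.append((rest, assumed | {g}))
        # keep the beam_width states with the fewest assumptions
        beam = sorted(next_beam, key=lambda s: len(s[1]))[:beam_width]
    return explanations

# Toy example: two competing explanations for wet grass.
rules = {"wet_grass": [["rain"], ["sprinkler"]]}
print(abduce(rules, "wet_grass"))
```

Ranking partial explanations by assumption count is one simple notion of "simplest"; the paper's ATMS additionally caches shared sub-explanations so they are never recomputed.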
An Approach to Understanding Novel Failures
, 1994
Abstract
This thesis describes a diagnostic technique for explaining unanticipated modes of failure in continuous-variable systems. Previous approaches in model-based diagnosis have traditionally suffered from either a dependence on explicit fault models or a tendency to produce unintuitive results. This research aims at achieving the explanatory power of explicit fault models without sacrificing the robustness of consistency-based diagnosis. The unique compositional nature of the process-centered models of Qualitative Process Theory makes the application of model-based diagnostic techniques both non-trivial and rewarding. Rather than relying on explicit fault models, this approach utilizes a general domain theory to model the broken device. Given a sufficiently broad domain theory, symptoms are explained in terms of a transformed physical structure. Generative fault models replace explicit, pre-enumerated fault models, thereby increasing robustness for identifying novel faults. This approach combines the efficiency of the consistency-based approach with the explanatory power of abductive backchaining. Candi...
Theory refinement combining analytical and empirical methods
, 1991
Abstract
Ourston, D. and R.J. Mooney, Theory refinement combining analytical and empirical methods, Artificial Intelligence 66 (1994) 273–309. This article describes a comprehensive system for automatic theory (knowledge base) refinement. The system applies to classification tasks employing a propositional Horn-clause domain theory. Given an imperfect domain theory and a set of training examples, the approach uses partial and incorrect proofs to identify potentially faulty rules. For each faulty rule, subsets of examples are used to inductively generate a correction. Because the system starts with an approximate domain theory, fewer training examples are generally required to attain a given level of classification accuracy compared to a purely empirical learning system. The system has been tested in two previously explored application domains: recognizing important classes of DNA sequences and diagnosing diseased soybean plants.