Results 1-6 of 6
A multistrategy approach to theory refinement
 In Proceedings of the International Workshop on Multistrategy Learning
, 1991
"... This chapter describes a multistrategy system that employs independent modules for deductive, abductive, and inductive reasoning to revise an arbitrarily incorrect propositional Hornclause domain theory to t a set of preclassi ed training instances. By combining such diverse methods, Either is able ..."
Abstract

Cited by 37 (5 self)
This chapter describes a multistrategy system that employs independent modules for deductive, abductive, and inductive reasoning to revise an arbitrarily incorrect propositional Horn-clause domain theory to fit a set of preclassified training instances. By combining such diverse methods, Either is able to handle a wider range of imperfect theories than other theory revision systems while guaranteeing that the revised theory will be consistent with the training data. Either has successfully revised two actual expert theories, one in molecular biology and one in plant pathology. The results confirm the hypothesis that using a multistrategy system to learn from both theory and data gives better results than using either theory or data alone.
Decision Graphs: An Extension of Decision Trees
, 1993
"... : In this paper, we examine Decision Graphs, a generalization of decision trees. We present an inference scheme to construct decision graphs using the Minimum Message Length Principle. Empirical tests demonstrate that this scheme compares favourably with other decision tree inference schemes. This w ..."
Abstract

Cited by 35 (1 self)
In this paper, we examine Decision Graphs, a generalization of decision trees. We present an inference scheme to construct decision graphs using the Minimum Message Length Principle. Empirical tests demonstrate that this scheme compares favourably with other decision tree inference schemes. This work provides a metric for comparing the relative merit of the decision tree and decision graph formalisms for a particular domain. 1 Introduction In this paper, we examine the problem of inferring a decision procedure from a set of examples. We examine the decision graph [5, 1, 16, 15, 14], a generalization of the decision tree [3, 18], and propose a method to construct decision graphs based upon Wallace's Minimum Message Length Principle (MMLP) [24, 10, 25]. The MMLP is related to Rissanen's Minimum Description Length Principle (MDLP) [21, 22, 20]. For the reader unfamiliar with minimum encoding methods (MML and MDL), a good introduction to the area is given by Georgeff [10]. We formalize ...
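The MML/MDL idea underlying this abstract can be sketched as a two-part code: the score of a candidate decision graph is the bits needed to describe the graph itself plus the bits needed to describe the training labels given the graph. This is a minimal illustration only; the function name, the per-leaf class-count representation, and the use of plain entropy coding for the data part are my assumptions, not the paper's actual encoding.

```python
import math

def two_part_length(model_bits, leaf_counts):
    """Two-part message length: bits to encode the model plus bits to
    encode the class labels at each leaf, approximated here by the
    Shannon code length of each leaf's class histogram.
    leaf_counts: list of per-leaf class-count lists, e.g. [[8, 0], [1, 3]].
    """
    data_bits = 0.0
    for counts in leaf_counts:
        n = sum(counts)
        for c in counts:
            if c:
                data_bits += c * math.log2(n / c)  # c labels at -log2(c/n) bits each
    return model_bits + data_bits

# A simpler graph wins whenever the model bits it saves exceed the
# extra data bits its coarser leaves cost.
compact = two_part_length(10.0, [[8, 0], [1, 3]])
elaborate = two_part_length(25.0, [[8, 0], [1, 0], [0, 3]])
```

Comparing `compact` against `elaborate` is exactly the tradeoff the inference scheme controls: the second model fits the data perfectly (zero data bits) but pays more to describe itself.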
Using the Minimum Description Length Principle to Infer Reduced Ordered Decision Graphs
 Machine Learning
, 1996
"... . We propose an algorithm for the inference of decision graphs from a set of labeled instances. In particular, we propose to infer decision graphs where the variables can only be tested in accordance with a given order and no redundant nodes exist. This type of graphs, reduced ordered decision graph ..."
Abstract

Cited by 11 (1 self)
We propose an algorithm for the inference of decision graphs from a set of labeled instances. In particular, we propose to infer decision graphs where the variables can only be tested in accordance with a given order and no redundant nodes exist. This type of graph, the reduced ordered decision graph, can be used as a canonical representation of Boolean functions and can be manipulated using algorithms developed for that purpose. This work proposes a local optimization algorithm that generates compact decision graphs by performing local changes in an existing graph until a minimum is reached. The algorithm uses Rissanen's minimum description length principle to control the tradeoff between accuracy in the training set and complexity of the description. Techniques for the selection of the initial decision graph and for the selection of an appropriate ordering of the variables are also presented. Experimental results obtained using this algorithm in two sets of examples are presented and ...
Inferring Reduced Ordered Decision Graphs of Minimal Description Length
 PROCEEDINGS OF THE TWELFTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING
, 1994
"... This work describes an approach for the inference of reduced ordered decision graphs from training sets. Reduced ordered decision graphs (RODGs) are graphs where the variables can only be tested in accordance with a prespecified order and no redundant nodes exist. RODGs have several interesting pro ..."
Abstract

Cited by 10 (1 self)
This work describes an approach for the inference of reduced ordered decision graphs from training sets. Reduced ordered decision graphs (RODGs) are graphs where the variables can only be tested in accordance with a prespecified order and no redundant nodes exist. RODGs have several interesting properties that have made them the representation of choice for the manipulation of Boolean functions in the logic synthesis community. We derive a RODG representation of the function implemented by a decision tree. This decision tree can be obtained from a training set using any one of the different algorithms proposed to date. This RODG is then used as the starting point for an algorithm that derives another RODG of minimal description length. The reduction in complexity is obtained by performing incremental changes in the RODG. By using ordered decision diagrams, the task of identifying common subgraphs is made much simpler than the identification of common subtrees in a decision tree. Ordered decision graphs require that a variable ordering be specified in advance. The algorithm that derives such an ordering is based on a commonly used reordering algorithm that finds a locally optimal ordering by swapping the order of two adjacent variables. These algorithms are tested in a set of examples that are known to be hard to solve using decision trees. The results show that when an effective reduction of the description length is obtained, significant gains in generalization accuracy can be achieved. In all cases the generalization accuracy of the final RODG was better than the generalization accuracy of the decision tree that was used as the starting point.
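The adjacent-swap reordering the abstract mentions is a greedy local search: repeatedly try swapping neighbouring variables in the order and keep any swap that lowers a cost function, stopping at a local optimum. The sketch below shows that control structure with a generic `cost` callback; the function name and the toy cost in the usage example are mine, and the paper's actual cost (RODG size or description length) would be substituted for it.

```python
def improve_order(order, cost):
    """Greedy local search over variable orderings: sweep over adjacent
    pairs, keep a swap when it lowers cost(order), revert it otherwise,
    and repeat until a full sweep makes no improvement."""
    order = list(order)
    best = cost(order)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            order[i], order[i + 1] = order[i + 1], order[i]  # try the swap
            c = cost(order)
            if c < best:
                best = c
                improved = True
            else:  # swap did not help: undo it
                order[i], order[i + 1] = order[i + 1], order[i]
    return order, best

# Toy cost: distance of each variable from its "natural" position.
toy_cost = lambda o: sum(abs(v - i) for i, v in enumerate(o))
final_order, final_cost = improve_order([2, 0, 1], toy_cost)  # -> ([0, 1, 2], 0)
```

Only a locally optimal ordering is guaranteed, which matches the abstract's description; a different initial order can land in a different local optimum.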
UNDERSTANDING WHAT MACHINE LEARNING PRODUCES Part II: Knowledge visualization techniques
"... Abstractâ€”Researchers in machine learning use decision trees, production rules, and decision graphs for visualizing classification data. Part I of this paper surveyed these representations, paying particular attention to their comprehensibility for nonspecialist users. Part II turns attention to kno ..."
Abstract

Cited by 1 (0 self)
Researchers in machine learning use decision trees, production rules, and decision graphs for visualizing classification data. Part I of this paper surveyed these representations, paying particular attention to their comprehensibility for nonspecialist users. Part II turns attention to knowledge visualization: the graphic form in which a structure is portrayed, and its strong influence on comprehensibility. We analyze the questions that, in our experience, end users of machine learning tend to ask of the structures inferred from their empirical data. By mapping these questions onto visualization tasks, we have created new graphical representations that show the flow of examples through a decision structure. These knowledge visualization techniques are particularly appropriate in helping to answer the questions that users typically ask, and we describe their use in discovering new properties of a data set. In the case of decision trees, an automated software tool has been developed to construct the visualizations. Decision trees, production rules and decision graphs are widely used for representing the results of machine learning. As the first part of this paper showed, many schemes have been developed to assist comprehension by reducing the amount of gratuitous information in such ...
Decision Jungles: Compact and Rich Models for Classification
"... Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially ..."
Abstract
Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf. We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently. During training, node splitting and node merging are driven by the minimization of exactly the same objective function, here the weighted sum of entropies at the leaves. Results on varied datasets show that, compared to decision forests and several other baselines, decision jungles require dramatically less memory while considerably improving generalization.
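The objective the abstract names, the weighted sum of entropies at the leaves, is straightforward to write down, and seeing it makes clear why the same quantity can score both splits and merges. This is a minimal sketch under my own naming; the leaf representation as class-count histograms is an assumption for illustration.

```python
import math

def weighted_leaf_entropy(leaves):
    """Weighted sum of leaf entropies: for each leaf, the number of
    training examples reaching it times the Shannon entropy of its
    class histogram. Both a candidate split (replacing one leaf with
    two) and a candidate merge (replacing two leaves with one) can be
    scored by the change in this single quantity."""
    total = 0.0
    for counts in leaves:
        n = sum(counts)
        entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
        total += n * entropy
    return total

# Merging two leaves with identical class distributions leaves the
# objective unchanged, so the merge costs nothing and saves a node --
# the mechanism by which jungles stay compact.
before = weighted_leaf_entropy([[6, 0], [6, 0]])
after = weighted_leaf_entropy([[12, 0]])
```

Here `before == after`, so the merge is free; a merge of leaves with conflicting distributions would raise the objective and be accepted only if the structural saving is worth it under the training procedure.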