Results 1 - 10 of 126
Feature Subset Selection Using A Genetic Algorithm
, 1997
"... : Practical pattern classification and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classified. This is due to the fact that the performance of the classifier (usually induced by some learning algorithm) ..."
Abstract

Cited by 184 (6 self)
 Add to MetaCart
Practical pattern classification and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classified. This is due to the fact that the performance of the classifier (usually induced by some learning algorithm) and the cost of classification are sensitive to the choice of the features used to construct the classifier. Exhaustive evaluation of possible feature subsets is usually infeasible in practice because of the large amount of computational effort required. Genetic algorithms, which belong to a class of randomized heuristic search techniques, offer an attractive approach to find near-optimal solutions to such optimization problems. This paper presents an approach to feature subset selection using a genetic algorithm. Some advantages of this approach include the ability to accommodate multiple criteria such as accuracy and cost of classification into the feature selection process and to find fe...
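The GA-based selection the abstract describes can be sketched in miniature: candidate feature subsets are bitmasks, and the fitness trades off a stand-in "accuracy" against feature cost. The relevant-feature set, cost vector, and fitness weights below are illustrative assumptions, not the paper's actual setup.

```python
# Toy sketch of GA feature subset selection. The "accuracy" term is a
# stand-in scoring function (overlap with an assumed relevant set), not a
# real induced classifier; the 0.3 cost weight is an arbitrary choice.
import random

random.seed(0)

N_FEATURES = 8
RELEVANT = {0, 2, 5}               # assumed ground-truth informative features
COST = [1.0] * N_FEATURES          # assumed per-feature measurement cost

def fitness(mask):
    # Reward covering the relevant features, penalize total subset cost.
    selected = {i for i, bit in enumerate(mask) if bit}
    accuracy = len(selected & RELEVANT) / len(RELEVANT)
    cost = sum(COST[i] for i in selected) / N_FEATURES
    return accuracy - 0.3 * cost

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]             # truncation selection keeps the best
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(best, round(fitness(best), 3))
```

Because the GA only needs fitness evaluations, multiple criteria (accuracy, cost) combine by simply adding terms to `fitness`, which is the flexibility the abstract highlights.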
Extracting Tree-Structured Representations of Trained Networks
 Advances in Neural Information Processing Systems
, 1996
"... A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree ..."
Abstract

Cited by 87 (10 self)
 Add to MetaCart
A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that Trepan is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.
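The core idea, treating the trained network as a labeling oracle and inducing a tree from its answers, can be shown in a heavily simplified form. The "network" below is a stand-in function, the sampling is uniform (Trepan instead models the training distribution), and the learner is a plain decision stump rather than Trepan's M-of-N splits.

```python
# Sketch of query-based tree extraction in the spirit of Trepan: query a
# "network" oracle on generated inputs, then fit the simplest possible
# tree (a one-split stump) that maximizes fidelity to the oracle.
import random

random.seed(1)

def network(x):                    # stand-in for a trained net's decision
    return 1 if 0.7 * x[0] - 0.4 * x[1] > 0.1 else 0

# Draw query points and label them by asking the oracle.
queries = [(random.random(), random.random()) for _ in range(500)]
labels = [network(x) for x in queries]

def stump_error(feature, threshold):
    preds = [1 if x[feature] > threshold else 0 for x in queries]
    return sum(p != y for p, y in zip(preds, labels))

# Candidate thresholds are the observed feature values.
best = min(((f, x[f]) for f in (0, 1) for x in queries),
           key=lambda ft: stump_error(*ft))

fidelity = 1 - stump_error(*best) / len(queries)
print(f"split on feature {best[0]} at {best[1]:.2f}, fidelity {fidelity:.2f}")
```

Fidelity here is exactly the quantity the abstract emphasizes: agreement between the extracted tree and the network it approximates, measured on queried inputs rather than on ground-truth labels.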
Using Sampling and Queries to Extract Rules from Trained Neural Networks
 In Proceedings of the Eleventh International Conference on Machine Learning
, 1994
"... Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of realvalued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing r ..."
Abstract

Cited by 73 (3 self)
 Add to MetaCart
Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of real-valued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing rule-extraction approaches that operate by searching for such rules. We present a novel method that casts rule extraction not as a search problem, but instead as a learning problem. In addition to learning from training examples, our method exploits the property that networks can be efficiently queried. We describe algorithms for extracting both conjunctive and M-of-N rules, and present experiments that show that our method is more efficient than conventional search-based approaches.

1 INTRODUCTION

A problem that arises when neural networks are used for supervised learning tasks is that, after training, it is usually difficult to understand the concept representations formed by the networks....
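The "learning by querying" idea can be illustrated for the conjunctive case: starting from one positive example, flip each feature and ask the oracle whether the answer changes; features whose flip changes the answer belong in the rule. The oracle below is a stand-in boolean function, and this single-flip test is only valid when the target really is a pure conjunction.

```python
# Sketch of extracting a conjunctive rule via membership queries rather
# than rule-space search. "oracle" stands in for a trained network's
# boolean output; the starting positive example is assumed given.

def oracle(x):                     # stand-in concept: x1 AND NOT x3
    return x[1] == 1 and x[3] == 0

positive = (0, 1, 1, 0, 1)         # a known positive example
assert oracle(positive)

rule = {}                          # feature index -> required value
for i, value in enumerate(positive):
    flipped = positive[:i] + (1 - value,) + positive[i + 1:]
    if not oracle(flipped):        # flipping i changes the answer,
        rule[i] = value            # so feature i is in the conjunction

print(rule)                        # → {1: 1, 3: 0}
```

Each query costs one forward pass through the network, which is why query-based extraction can beat searching the exponential space of candidate rules.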
Extracting Comprehensible Models from Trained Neural Networks
, 1996
"... To Mom, Dad, and Susan, for their support and encouragement. ..."
Abstract

Cited by 70 (4 self)
 Add to MetaCart
To Mom, Dad, and Susan, for their support and encouragement.
A new methodology of extraction, optimization and application of crisp and fuzzy logical rules
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 2001
"... A new methodology of extraction, optimization, and application of sets of logical rules is described. Neural networks are used for initial rule extraction, local, or global minimization procedures for optimization, and Gaussian uncertainties of measurements are assumed during application of logical ..."
Abstract

Cited by 49 (23 self)
 Add to MetaCart
A new methodology of extraction, optimization, and application of sets of logical rules is described. Neural networks are used for initial rule extraction, local or global minimization procedures for optimization, and Gaussian uncertainties of measurements are assumed during application of the logical rules. Algorithms for extraction of logical rules from data with real-valued features require determination of linguistic variables or membership functions. Context-dependent membership functions for crisp and fuzzy linguistic variables are introduced and methods of their determination described. Several neural and machine learning methods of logical rule extraction generating initial rules are described, based on constrained multilayer perceptrons, networks with localized transfer functions, or separability criteria for determination of linguistic variables. A tradeoff between accuracy and simplicity is explored at the rule extraction stage, and between rejection and error level at the optimization stage. Gaussian uncertainties of measurements are assumed during application of crisp logical rules, leading to “soft trapezoidal” membership functions and allowing the linguistic variables to be optimized using gradient procedures. Numerous applications of this methodology to benchmark and real-life problems are reported, and very simple crisp logical rules are provided for many datasets.
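The "soft trapezoidal" membership the abstract mentions has a concrete form: evaluating a crisp interval rule a < x < b under Gaussian measurement noise turns the hard indicator into the probability that a Gaussian-blurred measurement lands in the interval. The bounds and uncertainty below are illustrative values, not taken from the paper.

```python
# Sketch of the soft trapezoidal membership produced by a crisp rule
# a < x < b under Gaussian uncertainty s: the degree of fulfilment is
# P(a < X < b) for X ~ N(x, s). Interval and s are assumed values.
import math

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def soft_membership(x, a, b, s):
    # Probability that the blurred measurement falls inside [a, b].
    return Phi((b - x) / s) - Phi((a - x) / s)

a, b, s = 2.0, 5.0, 0.5
for x in (1.0, 2.0, 3.5, 5.0, 6.0):
    print(x, round(soft_membership(x, a, b, s), 3))
```

As s shrinks the function approaches the crisp indicator of [a, b]; because it is smooth in a and b, the interval boundaries can be tuned by gradient procedures, which is the optimization step the abstract describes.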
Symbolic knowledge extraction from trained neural networks: A sound approach
, 2001
"... Although neural networks have shown very good performance in many application domains, one of their main drawbacks lies in the incapacity to provide an explanation for the underlying reasoning mechanisms. The "explanation capability" of neural networks can be achieved by the extraction of symbolic k ..."
Abstract

Cited by 47 (7 self)
 Add to MetaCart
Although neural networks have shown very good performance in many application domains, one of their main drawbacks lies in the incapacity to provide an explanation for the underlying reasoning mechanisms. The "explanation capability" of neural networks can be achieved by the extraction of symbolic knowledge. In this paper, we present a new method of extraction that captures nonmonotonic rules encoded in the network, and prove that such a method is sound. We start by discussing some of the main problems of knowledge extraction methods. We then discuss how these problems may be ameliorated. To this end, a partial ordering on the set of input vectors of a network is defined, as well as a number of pruning and simplification rules. The pruning rules are then used to reduce the search space of the extraction algorithm during a pedagogical extraction, whereas the simplification rules are used to reduce the size of the extracted set of rules. We show that, in the case of regular networks, the extraction algorithm is sound and complete. We proceed to extend the extraction algorithm to the class of nonregular networks, the general case. We show that nonregular networks always contain regularities in their subnetworks. As a result, the underlying extraction method for regular networks can be applied, but now in a decompositional fashion. In order to combine the sets of rules extracted from each subnetwork into the final set of rules, we use a method whereby we are able to keep the soundness of the extraction algorithm. Finally, we present the results of an empirical analysis of the extraction system, using traditional examples and real-world application problems. The results have shown that a very high fidelity between the extracted set of rules and the network can be achieved....
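The pruning idea behind the partial ordering can be illustrated with a toy monotone concept: if a network's output respects a componentwise order on boolean inputs, any vector that dominates a known positive must itself be positive and need not be queried. The function below is a stand-in, and the paper's notion of regularity is richer than the plain monotonicity assumed here.

```python
# Sketch of order-based pruning during pedagogical extraction: enumerate
# boolean input vectors in increasing order and skip (infer) every vector
# that dominates a known positive, exploiting monotonicity of "net".
from itertools import product

def net(x):                        # stand-in monotone concept: >= 2 bits set
    return sum(x) >= 2

def dominates(p, x):               # p <= x componentwise
    return all(a <= b for a, b in zip(p, x))

positives, queried, inferred = [], 0, 0
for x in sorted(product((0, 1), repeat=4), key=sum):
    if any(dominates(p, x) for p in positives):
        inferred += 1              # pruned: positivity follows by order
        continue
    queried += 1
    if net(x):
        positives.append(x)

print(f"queried {queried} of 16, inferred {inferred} without querying")
```

On this 4-bit example the order prunes 5 of the 16 queries; the savings grow with dimension, which is what makes pedagogical extraction over the full input space tractable for regular networks.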
Constructive Neural Network Learning Algorithms for Pattern Classification
, 2000
"... Constructive learning algorithms offer an attractive approach for the incremental construction of nearminimal neuralnetwork architectures for pattern classification. They help overcome the need for ad hoc and often inappropriate choices of network topology in algorithms that search for suitable we ..."
Abstract

Cited by 45 (14 self)
 Add to MetaCart
Constructive learning algorithms offer an attractive approach for the incremental construction of near-minimal neural-network architectures for pattern classification. They help overcome the need for ad hoc and often inappropriate choices of network topology in algorithms that search for suitable weights in a priori fixed network architectures. Several such algorithms are proposed in the literature and shown to converge to zero classification errors (under certain assumptions) on tasks that involve learning a binary-to-binary mapping (i.e., classification problems involving binary-valued input attributes and two output categories). We present two constructive learning algorithms, MPyramid-real and MTiling-real, that extend the pyramid and tiling algorithms, respectively, for learning real-to-M-ary mappings (i.e., classification problems involving real-valued input attributes and multiple output classes). We prove the convergence of these algorithms and empirically demonstrate their applicability to practical pattern classification problems. Additionally, we show how the incorporation of a local pruning step can eliminate several redundant neurons from MTiling-real networks.
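The constructive principle, grow the architecture until the training error is zero instead of fixing it in advance, can be shown on a deliberately tiny one-dimensional example: add one threshold unit at every point where the class label changes, and let the output unit count threshold crossings. This is a toy scheme of my own for illustration, far simpler than MPyramid-real or MTiling-real, which handle multi-dimensional, multi-class data.

```python
# Toy constructive learner: units (thresholds) are added only where the
# sorted training data forces them, so the final network size adapts to
# the data rather than being chosen a priori.

data = [(0.5, 0), (1.1, 0), (2.3, 1), (3.0, 1), (4.2, 0), (5.0, 0), (6.1, 1)]
data.sort()

thresholds = []                    # one hidden threshold unit per entry
for (x0, y0), (x1, y1) in zip(data, data[1:]):
    if y0 != y1:                   # label change: add a unit between them
        thresholds.append((x0 + x1) / 2)

first_label = data[0][1]

def predict(x):
    crossings = sum(x > t for t in thresholds)
    return (first_label + crossings) % 2

errors = sum(predict(x) != y for x, y in data)
print(f"{len(thresholds)} units added, {errors} training errors")
```

Three units suffice for this dataset and training error reaches zero; the convergence proofs in the paper establish the analogous zero-error guarantee for the real algorithms under their stated assumptions.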
Hybrid Neural Systems
, 2000
"... This chapter provides an introduction to the field of hybrid neural systems. Hybrid neural systems are computational systems which are based mainly on artificial neural networks but also allow a symbolic interpretation, or interaction with symbolic components. In this overview, we will describe rece ..."
Abstract

Cited by 44 (10 self)
 Add to MetaCart
This chapter provides an introduction to the field of hybrid neural systems. Hybrid neural systems are computational systems which are based mainly on artificial neural networks but also allow a symbolic interpretation, or interaction with symbolic components. In this overview, we will describe recent results of hybrid neural systems. We will give a brief overview of the main methods used, outline the work that is presented here, and provide additional references. We will also highlight some important general issues and trends.
An Overview Of Strategies For Neurosymbolic Integration
, 1995
"... This paper will give an overview of the various approaches to neurosymbolic integration. Roughly, these can be divided into two strategies: unified strategies aim at attaining neural and symbolic capabilities using neural networks alone, while hybrid strategies combine neural networks with symbolic ..."
Abstract

Cited by 33 (1 self)
 Add to MetaCart
This paper will give an overview of the various approaches to neurosymbolic integration. Roughly, these can be divided into two strategies: unified strategies aim at attaining neural and symbolic capabilities using neural networks alone, while hybrid strategies combine neural networks with symbolic models such as expert systems, case-based reasoning systems, and decision trees. These two approaches form the main subtrees of the classification hierarchy depicted in Figure 1.

[Figure 1: Classification of integrated neurosymbolic systems. Labels in the hierarchy include: unified approach (neuronal symbol processing; connectionist variants: localist, distributed, combined L/D) and hybrid approach (translational hybrids and functional hybrids: chain-processing, sub-processing, co-processing, meta-processing).]
Learning Team Strategies: Soccer Case Studies
 Machine Learning
, 1998
"... . We use simulated soccer to study multiagent learning. Each team's players (agents) share action set and policy, but may behave differently due to positiondependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team ..."
Abstract

Cited by 26 (4 self)
 Add to MetaCart
We use simulated soccer to study multi-agent learning. Each team's players (agents) share action set and policy, but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes, and compare several learning algorithms: TD-Q learning with linear neural networks (TD-Q), Probabilistic Incremental Program Evolution (PIPE), and a PIPE version that learns by coevolution (CO-PIPE). TD-Q is based on learning evaluation functions (EFs) mapping input/action pairs to expected reward. PIPE and CO-PIPE search policy space directly. They use adaptive probability distributions to synthesize programs that calculate action probabilities from current inputs. Our results show that linear TD-Q encounters several difficulties in learning appropriate shared EFs. PIPE and CO-PIPE, however, do not depend on EFs and find good policies faster and more reliably. This suggests that in s...
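The evaluation-function learner the abstract refers to, one-step Q-learning with a linear value function, can be sketched on a toy problem. The environment below is a 5-state corridor with a reward at the right end, not a soccer simulation, and the features are one-hot, so "linear" degenerates to tabular; all constants are illustrative.

```python
# Minimal TD-Q sketch: a linear (here effectively tabular) evaluation
# function maps (state, action) pairs to expected reward and is updated
# with one-step temporal-difference targets.
import random

random.seed(2)

N_STATES = 5                       # corridor states 0..4, reward at 4
ACTIONS = (1, -1)                  # right listed first, so ties favor it
w = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.2, 0.9, 0.1

def q(s, a):
    return w[(s, a)]

for _ in range(300):               # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)               # explore
        else:
            a = max(ACTIONS, key=lambda b: q(s, b))  # greedy
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        done = s2 == N_STATES - 1
        target = r + (0.0 if done else gamma * max(q(s2, b) for b in ACTIONS))
        w[(s, a)] += alpha * (target - q(s, a))      # TD update
        s = s2

policy = [max(ACTIONS, key=lambda b: q(s, b)) for s in range(N_STATES - 1)]
print(policy)
```

On this toy task the greedy policy learns to move right everywhere; the abstract's point is that with shared EFs and collective rewards in the soccer domain, exactly this kind of linear TD-Q learner runs into difficulties that direct policy search (PIPE, CO-PIPE) avoids.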