Feature Subset Selection Using a Genetic Algorithm, 1997
Cited by 246 (7 self)
Practical pattern classification and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classified. This is due to the fact that the performance of the classifier (usually induced by some learning algorithm) and the cost of classification are sensitive to the choice of the features used to construct the classifier. Exhaustive evaluation of possible feature subsets is usually infeasible in practice because of the large amount of computational effort required. Genetic algorithms, which belong to a class of randomized heuristic search techniques, offer an attractive approach to find near-optimal solutions to such optimization problems. This paper presents an approach to feature subset selection using a genetic algorithm. Some advantages of this approach include the ability to accommodate multiple criteria such as accuracy and cost of classification into the feature selection process and to find fe...
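The GA-based selection described in this abstract can be illustrated with a minimal, self-contained sketch (not the authors' implementation). Individuals are feature bitmasks; the fitness function here is a hypothetical stand-in that rewards a known set of informative features while charging a small cost per selected feature, mirroring the paper's multi-criteria idea of combining accuracy with classification cost:

```python
import random

def ga_feature_select(num_features, fitness, pop_size=30, generations=40,
                      crossover_rate=0.8, mutation_rate=0.02, seed=0):
    """Search for a feature-subset bitmask maximizing fitness(mask)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(num_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        new_pop = scored[:2]  # elitism: carry the two best masks forward
        while len(new_pop) < pop_size:
            # Tournament selection of two parents.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            if rng.random() < crossover_rate:  # one-point crossover
                cut = rng.randrange(1, num_features)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Bit-flip mutation.
            child = [b ^ (rng.random() < mutation_rate) for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy multi-criteria fitness: reward 3 "informative" features,
# penalize subset size (a stand-in for classification cost).
INFORMATIVE = {1, 4, 7}
def fitness(mask):
    return sum(mask[i] for i in INFORMATIVE) - 0.1 * sum(mask)

best = ga_feature_select(10, fitness)
print([i for i, b in enumerate(best) if b])
```

With this toy fitness the GA typically recovers the informative set while pruning costly extras; real uses would plug in a classifier's cross-validated accuracy as the fitness.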
Extracting tree-structured representations of trained networks
Advances in Neural Information Processing Systems, 1996
Extracting Comprehensible Models from Trained Neural Networks, 1996
Cited by 80 (3 self)
To Mom, Dad, and Susan, for their support and encouragement.
Using Sampling and Queries to Extract Rules from Trained Neural Networks
In Proceedings of the Eleventh International Conference on Machine Learning, 1994
Cited by 79 (3 self)
Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of real-valued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing rule-extraction approaches that operate by searching for such rules. We present a novel method that casts rule extraction not as a search problem, but instead as a learning problem. In addition to learning from training examples, our method exploits the property that networks can be efficiently queried. We describe algorithms for extracting both conjunctive and M-of-N rules, and present experiments that show that our method is more efficient than conventional search-based approaches.
1 INTRODUCTION
A problem that arises when neural networks are used for supervised learning tasks is that, after training, it is usually difficult to understand the concept representations formed by the networks....
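The query-based view of rule extraction can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: a black-box lambda stands in for the trained network, and a single positive example is generalized into a conjunctive rule by membership queries over sampled instances, dropping each literal only when sampling finds no covered instance that the "network" rejects:

```python
import random

def extract_conjunctive_rule(oracle, example, num_samples=64, seed=0):
    """Generalize a positive example into a conjunctive rule by querying.

    rule[i] is 0/1 for a required literal, or None for "don't care".
    """
    rng = random.Random(seed)
    rule = list(example)
    for i in range(len(rule)):
        candidate = rule[:]
        candidate[i] = None
        # Membership queries: the literal may be dropped only if every
        # sampled instance covered by the generalized rule is still positive.
        ok = True
        for _ in range(num_samples):
            x = [rng.randint(0, 1) if v is None else v for v in candidate]
            if not oracle(x):
                ok = False
                break
        if ok:
            rule = candidate
    return rule

# Black-box stand-in for a trained network: the concept (x0 AND NOT x2)
# over 5 binary inputs.
oracle = lambda x: x[0] == 1 and x[2] == 0
rule = extract_conjunctive_rule(oracle, [1, 1, 0, 0, 1])
print(rule)  # only x0=1 and x2=0 survive; the rest become don't-cares
```

Sampling makes each drop-check probabilistic rather than exhaustive, which is what lets a query-based extractor scale past the exponential search over all rules.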
A new methodology of extraction, optimization and application of crisp and fuzzy logical rules
IEEE Transactions on Neural Networks, 2001
Cited by 54 (24 self)
A new methodology of extraction, optimization, and application of sets of logical rules is described. Neural networks are used for initial rule extraction, local or global minimization procedures for optimization, and Gaussian uncertainties of measurements are assumed during application of logical rules. Algorithms for extraction of logical rules from data with real-valued features require determination of linguistic variables or membership functions. Context-dependent membership functions for crisp and fuzzy linguistic variables are introduced and methods of their determination described. Several neural and machine learning methods of logical rule extraction that generate initial rules are described, based on constrained multilayer perceptrons, networks with localized transfer functions, or separability criteria for determination of linguistic variables. A tradeoff between accuracy and simplicity is explored at the rule extraction stage, and between rejection and error level at the optimization stage. Gaussian uncertainties of measurements are assumed during application of crisp logical rules, leading to "soft trapezoidal" membership functions and allowing the linguistic variables to be optimized using gradient procedures. Numerous applications of this methodology to benchmark and real-life problems are reported, and very simple crisp logical rules for many datasets are provided.
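The "soft trapezoidal" membership functions mentioned in this abstract follow from a simple observation: if a measured value x carries Gaussian noise N(x, sigma^2), the probability that the true value satisfies a crisp rule a <= x <= b is a difference of two normal CDFs, which has a trapezoid-like shape with smooth edges. A minimal sketch (the interval bounds and sigma below are illustrative, not values from the paper):

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def soft_trapezoid(x, a, b, sigma):
    """Membership of the crisp rule a <= x <= b when the measurement x
    has Gaussian uncertainty N(x, sigma^2): the probability that the
    true value falls inside [a, b]."""
    return phi((b - x) / sigma) - phi((a - x) / sigma)

# Deep inside the interval membership approaches 1, at an edge it is 0.5,
# and far outside it decays to 0; sigma controls the slope of the soft edges.
print(round(soft_trapezoid(5.0, 0.0, 10.0, 0.5), 3))   # 1.0
print(round(soft_trapezoid(0.0, 0.0, 10.0, 0.5), 3))   # 0.5
print(round(soft_trapezoid(-3.0, 0.0, 10.0, 0.5), 3))  # 0.0
```

Because this membership is differentiable in a, b, and sigma, the linguistic variables can be tuned by gradient procedures, which is the optimization step the abstract describes.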
Hybrid Neural Systems, 2000
Cited by 53 (11 self)
This chapter provides an introduction to the field of hybrid neural systems. Hybrid neural systems are computational systems which are based mainly on artificial neural networks but also allow a symbolic interpretation, or interaction with symbolic components. In this overview, we will describe recent results of hybrid neural systems. We will give a brief overview of the main methods used, outline the work that is presented here, and provide additional references. We will also highlight some important general issues and trends.
Symbolic knowledge extraction from trained neural networks: A sound approach, 2001
Cited by 53 (8 self)
Although neural networks have shown very good performance in many application domains, one of their main drawbacks lies in their inability to provide an explanation for the underlying reasoning mechanisms. The "explanation capability" of neural networks can be achieved by the extraction of symbolic knowledge. In this paper, we present a new method of extraction that captures nonmonotonic rules encoded in the network, and prove that such a method is sound. We start by discussing some of the main problems of knowledge extraction methods. We then discuss how these problems may be ameliorated. To this end, a partial ordering on the set of input vectors of a network is defined, as well as a number of pruning and simplification rules. The pruning rules are then used to reduce the search space of the extraction algorithm during a pedagogical extraction, whereas the simplification rules are used to reduce the size of the extracted set of rules. We show that, in the case of regular networks, the extraction algorithm is sound and complete. We proceed to extend the extraction algorithm to the class of non-regular networks, the general case. We show that non-regular networks always contain regularities in their subnetworks. As a result, the underlying extraction method for regular networks can be applied, but now in a decompositional fashion. In order to combine the sets of rules extracted from each subnetwork into the final set of rules, we use a method that preserves the soundness of the extraction algorithm. Finally, we present the results of an empirical analysis of the extraction system, using traditional examples and real-world application problems. The results have shown that a very high fidelity between the extracted set of rules and the network can be achieved....
Constructive Neural Network Learning Algorithms for Pattern Classification, 2000
Cited by 52 (14 self)
Constructive learning algorithms offer an attractive approach for the incremental construction of near-minimal neural-network architectures for pattern classification. They help overcome the need for ad hoc and often inappropriate choices of network topology in algorithms that search for suitable weights in a priori fixed network architectures. Several such algorithms are proposed in the literature and shown to converge to zero classification errors (under certain assumptions) on tasks that involve learning a binary to binary mapping (i.e., classification problems involving binary-valued input attributes and two output categories). We present two constructive learning algorithms, MPyramid-real and MTiling-real, that extend the pyramid and tiling algorithms, respectively, for learning real to M-ary mappings (i.e., classification problems involving real-valued input attributes and multiple output classes). We prove the convergence of these algorithms and empirically demonstrate their applicability to practical pattern classification problems. Additionally, we show how the incorporation of a local pruning step can eliminate several redundant neurons from MTiling-real networks.
An Overview of Strategies for Neurosymbolic Integration, 1995
Cited by 35 (1 self)
This paper will give an overview of the various approaches to neurosymbolic integration. Roughly, these can be divided into two strategies: unified strategies aim at attaining neural and symbolic capabilities using neural networks alone, while hybrid strategies combine neural networks with symbolic models such as expert systems, case-based reasoning systems, and decision trees. These two approaches form the main subtrees of the classification hierarchy depicted in Figure 1. [Figure 1: Classification of integrated neurosymbolic systems.]
Towards a high performance neural branch predictor
In Proceedings of the International Joint Conference on Neural Networks, 1999
Cited by 33 (0 self)
Abstract: The main aim of this short paper is to propose a new branch prediction approach, which we call "neural branch prediction". We developed a first neural predictor model based on a simple neural learning algorithm, known as the Learning Vector Quantization algorithm. Based on a trace-driven simulation method, we investigated the influences of the learning step and training processes. We also compared the neural predictor with a powerful classical predictor and found that they achieve comparable performance. Therefore, we conclude that in the near future it might be necessary to model and simulate other, more powerful neural adaptive predictors, based on more complex neural network architectures or even time series concepts, in order to obtain better prediction accuracies than previously known schemes.
Key Words: MII Architectures, Branch prediction, Trace-driven simulation, Neural algorithms (LVQ, MLP)
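A minimal LVQ1 sketch of the idea (an illustrative toy, not the authors' trace-driven simulator): branch history is kept as a +/-1 shift register, there is one codebook prototype per outcome class, prediction picks the nearest prototype, and the standard LVQ1 rule attracts the winner toward the history on a correct prediction and repels it otherwise:

```python
import random

class LVQBranchPredictor:
    """LVQ1 over a shift register of recent branch outcomes (+1 taken, -1 not)."""

    def __init__(self, history_len=8, lr=0.05, seed=0):
        rng = random.Random(seed)
        self.history = [-1.0] * history_len
        self.lr = lr
        # One prototype per class: index 0 predicts not-taken, 1 predicts taken.
        self.proto = [[rng.uniform(-0.1, 0.1) for _ in range(history_len)]
                      for _ in range(2)]

    def _winner(self):
        dists = [sum((h - w) ** 2 for h, w in zip(self.history, p))
                 for p in self.proto]
        return 0 if dists[0] <= dists[1] else 1

    def predict(self):
        return self._winner() == 1  # True = predict taken

    def update(self, taken):
        w = self._winner()
        # LVQ1: attract the winning prototype if its class matched the
        # actual outcome, repel it otherwise; then shift the history.
        sign = self.lr if (w == 1) == taken else -self.lr
        self.proto[w] = [p + sign * (h - p)
                         for p, h in zip(self.proto[w], self.history)]
        self.history = self.history[1:] + [1.0 if taken else -1.0]

# Toy trace: a loop branch taken 7 times then not taken, repeated.
pred, correct, total = LVQBranchPredictor(), 0, 0
for _ in range(200):
    for taken in [True] * 7 + [False]:
        correct += pred.predict() == taken
        total += 1
        pred.update(taken)
print(correct / total)
```

The learning rate here plays the role of the "learning step" whose influence the paper investigates; a real predictor would be driven by branch addresses and traces rather than a synthetic pattern.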