The Extraction of Refined Rules from Knowledge-Based Neural Networks
 Machine Learning
, 1993
"... Neural networks, despite their empiricallyproven abilities, have been little used for the refinement of existing knowledge because this task requires a threestep process. First, knowledge in some form must be inserted into a neural network. Second, the network must be refined. Third, knowledge mus ..."
Abstract

Cited by 198 (4 self)
 Add to MetaCart
Neural networks, despite their empirically-proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge in some form must be inserted into a neural network. Second, the network must be refined. Third, knowledge must be extracted from the network. We have previously described a method for the first step of this process. Standard neural learning techniques can accomplish the second step. In this paper, we propose and empirically evaluate a method for the final, and possibly most difficult, step. This method efficiently extracts symbolic rules from trained neural networks. The four major results of empirical tests of this method are that the extracted rules: (1) closely reproduce (and can even exceed) the accuracy of the network from which they are extracted; (2) are superior to the rules produced by methods that directly refine symbolic rules; (3) are superior to those produced by previous techniques fo...
Knowledge-Based Artificial Neural Networks
, 1994
"... Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for accurately classifying examples not seen during training. The challenge of hybrid learning systems is to use the information provided by one source of information to offset informat ..."
Abstract

Cited by 145 (13 self)
 Add to MetaCart
Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for accurately classifying examples not seen during training. The challenge of hybrid learning systems is to use the information provided by one source of information to offset information missing from the other source. By so doing, a hybrid learning system should learn more effectively than systems that use only one of the information sources. KBANN (Knowledge-Based Artificial Neural Networks) is a hybrid learning system built on top of connectionist learning techniques. It maps problem-specific "domain theories", represented in propositional logic, into neural networks and then refines this reformulated knowledge using backpropagation. KBANN is evaluated by extensive empirical tests on two problems from molecular biology. Among other results, these tests show that the networks created by KBANN generalize better than a wide variety of learning systems, as well as several t...
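The style of rule-to-network mapping that KBANN performs can be sketched roughly as follows: each conjunctive propositional rule becomes one sigmoid unit whose positive antecedents receive a large positive weight, whose negated antecedents receive its negation, and whose bias places the threshold between "all positive antecedents true" and "one short of that". The weight constant `W` and the function names below are illustrative assumptions, not KBANN's actual implementation.

```python
import math

W = 8.0  # assumed link-weight constant; KBANN uses a fixed large weight omega

def rule_to_unit(pos, neg):
    """Map a conjunctive rule (consequent :- pos, not neg) to weights and a bias.

    Positive antecedents get weight +W, negated ones -W; the bias puts the
    unit's threshold between |pos| - 1 and |pos| satisfied antecedents, so
    the unit is 'on' only when the whole conjunction holds.
    """
    weights = {a: W for a in pos}
    weights.update({a: -W for a in neg})
    bias = -(len(pos) - 0.5) * W
    return weights, bias

def activate(weights, bias, truth):
    """Sigmoid activation of the unit on a truth assignment (0/1 per atom)."""
    net = bias + sum(w * truth.get(a, 0.0) for a, w in weights.items())
    return 1.0 / (1.0 + math.exp(-net))

# Hypothetical rule: fires iff a and b are true and c is false
weights, bias = rule_to_unit(pos=["a", "b"], neg=["c"])
activate(weights, bias, {"a": 1, "b": 1, "c": 0})  # near 1: rule fires
activate(weights, bias, {"a": 1, "b": 0, "c": 0})  # near 0: conjunction fails
```

Because the mapped unit remains a soft threshold rather than a hard gate, backpropagation can afterwards adjust the weights away from these initial values, which is the "refinement" step the abstract describes.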
Symbolic and neural learning algorithms: an experimental comparison
 Machine Learning
, 1991
"... Abstract Despite the fact that many symbolic and neural network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with ..."
Abstract

Cited by 99 (6 self)
 Add to MetaCart
Despite the fact that many symbolic and neural network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and backpropagation neural learning algorithms have been performed using five large, real-world data sets. Overall, backpropagation performs slightly better than the other two algorithms in terms of classification accuracy on new examples, but takes much longer to train. Experimental results suggest that backpropagation can work significantly better on data sets containing numerical data. Also analyzed empirically are the effects of (1) the amount of training data, (2) imperfect training examples, and (3) the encoding of the desired outputs. Backpropagation occasionally outperforms the other two systems when given relatively small amounts of training data. It is slightly more accurate than ID3 when examples are noisy or incompletely specified. Finally, backpropagation more effectively utilizes a "distributed" output encoding.
Constructing Deterministic Finite-State Automata in Recurrent Neural Networks
 Journal of the ACM
, 1996
"... Recurrent neural networks that are trained to behave like deterministic finitestate automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use o ..."
Abstract

Cited by 70 (16 self)
 Add to MetaCart
Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contribute to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, i.e., the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm genera...
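The flavor of the construction can be sketched as follows: with one-hot state and input vectors, a second-order network computes S_j(t+1) = sigmoid(b_j + sum over i,k of W[j][i][k] * S_i(t) * I_k(t)), and each DFA transition delta(i, k) = j is programmed as a large positive weight W[j][i][k] = +H, with -H elsewhere. This is a simplification of the paper's construction; the strength `H` below is an assumed value, whereas the paper derives bounds on it.

```python
import math

H = 8.0  # assumed programming strength; the paper relates it to the DFA size

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def encode_dfa(n_states, n_symbols, delta):
    """Program second-order weights from DFA transitions delta[(i, k)] = j."""
    W = [[[-H] * n_symbols for _ in range(n_states)] for _ in range(n_states)]
    for (i, k), j in delta.items():
        W[j][i][k] = H
    return W

def step(W, state, symbol):
    """One update: S_j(t+1) = sigmoid(-H/2 + sum_i W[j][i][symbol] * S_i(t))."""
    n = len(W)
    return [sigmoid(-H / 2.0 + sum(W[j][i][symbol] * state[i] for i in range(n)))
            for j in range(n)]

# Hypothetical 2-state DFA over {0, 1}: tracks the parity of 1s seen
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W = encode_dfa(2, 2, delta)
state = [1.0, 0.0]           # start in state 0
for sym in [1, 1, 1]:        # three 1s: odd parity, so end near state 1
    state = step(W, state, sym)
```

Running the loop, the unit for the current DFA state stays saturated near 1 while all others stay near 0, which is exactly the stability property the abstract claims for arbitrarily long strings.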
A framework for combining symbolic and neural learning
, 1992
"... This article describes an approach to combining symbolic and connectionist approaches to machine learning. A threestage framework is presented and the research of several groups is reviewed with respect to this framework. The first stage involves the insertion of symbolic knowledge into neural netw ..."
Abstract

Cited by 56 (1 self)
 Add to MetaCart
This article describes an approach to combining symbolic and connectionist approaches to machine learning. A three-stage framework is presented and the research of several groups is reviewed with respect to this framework. The first stage involves the insertion of symbolic knowledge into neural networks, the second addresses the refinement of this prior knowledge in its neural representation, while the third concerns the extraction of the refined symbolic knowledge. Experimental results and open research issues are discussed.
Interpretation of Artificial Neural Networks: . . .
, 1992
"... We propose and empirically evaluate a method for the extraction of expertcomprehensible rules from trained neural networks. Our method operates in the context of a threestep process for learning that uses rulebased domain knowledge in combination with neural networks. Empirical tests using realwo ..."
Abstract

Cited by 55 (5 self)
 Add to MetaCart
We propose and empirically evaluate a method for the extraction of expert-comprehensible rules from trained neural networks. Our method operates in the context of a three-step process for learning that uses rule-based domain knowledge in combination with neural networks. Empirical tests using real-world problems from molecular biology show that the rules our method extracts from trained neural networks: closely reproduce the accuracy of the network from which they came, are superior to the rules derived by a learning system that directly refines symbolic rules, and are expert-comprehensible.
Training Second-Order Recurrent Neural Networks using Hints
 In Proceedings of the Ninth International Conference on Machine Learning
, 1992
"... We investigate a method for inserting rules into discretetime secondorder recurrent neural networks which are trained to recognize regular languages. The rules defining regular languages can be expressed in the form of transitions in the corresponding deterministic finitestate automaton. Insertin ..."
Abstract

Cited by 36 (5 self)
 Add to MetaCart
We investigate a method for inserting rules into discrete-time second-order recurrent neural networks which are trained to recognize regular languages. The rules defining regular languages can be expressed in the form of transitions in the corresponding deterministic finite-state automaton. Inserting these rules as hints into networks with second-order connections is straightforward. Our simulation results show that even weak hints seem to improve the convergence time by an order of magnitude.

1 MOTIVATION

Often, we have a priori knowledge about a learning task and we wish to make effective use of this knowledge. We will discuss a method for inserting prior knowledge into recurrent neural networks. For an initial testbed, we will train networks to recognize regular languages, thus behaving like deterministic finite-state automata ([Giles 91], [Giles 92]). We show that, as might be expected, the convergence time is significantly decreased by placing partial knowledge about a determinist...
Refining Symbolic Knowledge Using Neural Networks
 In Proceedings of the International Workshop on Multistrategy Learning
, 1991
"... This paper uses a special notation for specifying locations in a DNA sequence. The idea is to number locations with respect to a fixed, biologicallymeaningful, reference point. Negative numbers indicate sites preceding the reference point (by biological convention, this appears on the left) while p ..."
Abstract

Cited by 31 (0 self)
 Add to MetaCart
This paper uses a special notation for specifying locations in a DNA sequence. The idea is to number locations with respect to a fixed, biologically-meaningful reference point. Negative numbers indicate sites preceding the reference point (by biological convention, this appears on the left) while positive numbers indicate sites following the reference point. (Zero is not used.) Figure 9 illustrates this numbering scheme. Rules use this referencing scheme by stating a position with respect to the reference location, denoted by `@', and then giving a subsequence in the positive direction. For example, @-4 `GGT' refers to the three-nucleotide-long sequence in Figure 9 that begins at position -4 and ends at position -2. In addition to this notation for specifying locations in a DNA sequence, Table 3 specifies a standard coding scheme for referring to any possible combination of nucleotides using a single letter (IUB Nomenclature Committee, 1985). This scheme is compatible with the codes used by the EMBL, GenBank, and PIR data libraries, three major collections of data for molecular biology.
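The zero-skipping numbering convention above can be made concrete with a short sketch; the function name is hypothetical:

```python
def offsets(start, length):
    """List the positions covered by a subsequence beginning at `start`.

    Negative numbers precede the reference point, positive numbers follow
    it, and position 0 is never used, so a run that crosses the reference
    point jumps from -1 straight to +1.
    """
    out, pos = [], start
    for _ in range(length):
        out.append(pos)
        pos += 1
        if pos == 0:  # the numbering has no zero
            pos = 1
    return out

offsets(-4, 3)  # [-4, -3, -2]: the span named by @-4 `GGT'
offsets(-1, 2)  # [-1, 1]: a run crossing the reference point
```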
Re-representing and Restructuring Domain Theories: A Constructive Induction Approach
 Journal of Artificial Intelligence Research
, 1995
"... Theory revision integrates inductive learning and background knowledge by combining training examples with a coarse domain theory to produce a more accurate theory. There are two challenges that theory revision and other theoryguided systems face. First, a representation language appropriate for th ..."
Abstract

Cited by 28 (0 self)
 Add to MetaCart
Theory revision integrates inductive learning and background knowledge by combining training examples with a coarse domain theory to produce a more accurate theory. There are two challenges that theory revision and other theory-guided systems face. First, a representation language appropriate for the initial theory may be inappropriate for an improved theory. While the original representation may concisely express the initial theory, a more accurate theory forced to use that same representation may be bulky, cumbersome, and difficult to reach. Second, a theory structure suitable for a coarse domain theory may be insufficient for a fine-tuned theory. Systems that produce only small, local changes to a theory have limited value for accomplishing complex structural alterations that may be required. Consequently, advanced theory-guided learning systems require flexible representation and flexible structure. An analysis of various theory revision systems and theory-guided learning systems ...
Cascade ARTMAP: Integrating Neural Computation and Symbolic Knowledge Processing
 IEEE Transactions on Neural Networks
, 1997
"... Abstract — This paper introduces a hybrid system termed cascade adaptive resonance theory mapping (ARTMAP) that incorporates symbolic knowledge into neuralnetwork learning and recognition. Cascade ARTMAP, a generalization of fuzzy ARTMAP, represents intermediate attributes and rule cascades of rule ..."
Abstract

Cited by 23 (6 self)
 Add to MetaCart
This paper introduces a hybrid system termed cascade adaptive resonance theory mapping (ARTMAP) that incorporates symbolic knowledge into neural-network learning and recognition. Cascade ARTMAP, a generalization of fuzzy ARTMAP, explicitly represents the intermediate attributes and rule cascades of rule-based knowledge and performs multi-step inferencing. A rule insertion algorithm translates if-then symbolic rules into the cascade ARTMAP architecture. Besides initializing networks with prior knowledge, which can improve predictive accuracy and learning efficiency, the cascade ARTMAP learning algorithm can refine and enhance the inserted symbolic knowledge. By preserving symbolic rule form during learning, the rules extracted from cascade ARTMAP can be compared directly with the originally inserted rules. Simulations on an animal identification problem indicate that a priori symbolic knowledge always improves system performance, especially with a small training set. A benchmark study on a DNA promoter recognition problem shows that, with the added advantage of fast learning, the cascade ARTMAP rule insertion and refinement algorithms produce performance superior to those of other machine learning systems and of an alternative hybrid system known as knowledge-based artificial neural network (KBANN). Also, the rules extracted from cascade ARTMAP are more accurate and much cleaner than the N-of-M rules extracted from KBANN. Index Terms: ARTMAP, hybrid system, promoter recognition, rule extraction, rule insertion, rule refinement.