Results 1 – 10 of 10
The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations
, 1993
Abstract

Cited by 17 (2 self)
In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches to effectively utilizing the computational power of neural networks have been discussed, an obvious one is to couple a recurrent neural network with an external stack memory, in effect creating a neural network pushdown automaton (NNPDA). This NNPDA generalizes the concept of a recurrent network so that the network becomes a more complex computing structure. This paper discusses an NNPDA in detail: its construction, how it can be trained, and how useful symbolic information can be extracted from the trained network. To effectively couple the external stack to the neural network, an optimization method is developed which uses an error function that connects the learning of the state automaton of the neural network to the learning of the operation of the external stack: push, pop, and no-operation. To minimize the error function using gradient descent learning, an analog stack is designed such that the action and storage of information in the stack are continuous. One interpretation of a continuous stack is the probabilistic storage of and action on data. After training on sample strings of an unknown source grammar, a quantization procedure extracts from the analog stack and neural network a discrete pushdown automaton (PDA). Simulations show that in learning deterministic context-free grammars (the balanced parenthesis language, 1^n 0^n, and the deterministic palindrome language), the extracted PDA is correct in the sense that it can correctly recognize unseen strings of arbitrary length. In addition, the extracted PDAs can be shown to be identical or equivalent to the PDAs of the source grammars which were used to generate the training strings.
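The continuous stack described in this abstract can be pictured with a small sketch: each entry carries a fractional "thickness", so push and pop may act on partial amounts and the stack becomes amenable to gradient-based learning. The class below is an illustrative assumption based on the abstract's description, not the paper's exact formulation.

```python
# Sketch of a continuous ("analog") stack: each entry stores a symbol
# together with a continuous thickness in (0, 1], so push/pop actions
# can be fractional. Names and update rules here are illustrative.

class ContinuousStack:
    def __init__(self):
        self.entries = []  # list of [symbol, thickness] pairs, top at the end

    def push(self, symbol, strength):
        """Push `symbol` with fractional certainty `strength` in (0, 1]."""
        if strength > 0:
            self.entries.append([symbol, strength])

    def pop(self, strength):
        """Remove a total thickness of `strength` from the top downward."""
        remaining = strength
        while remaining > 0 and self.entries:
            sym, t = self.entries[-1]
            if t > remaining:
                self.entries[-1][1] = t - remaining  # shave part of the entry
                remaining = 0.0
            else:
                self.entries.pop()                   # consume the whole entry
                remaining -= t

    def read(self, depth=1.0):
        """Return a weighted mix of the symbols within `depth` of the top."""
        mix, remaining = {}, depth
        for sym, t in reversed(self.entries):
            w = min(t, remaining)
            mix[sym] = mix.get(sym, 0.0) + w
            remaining -= w
            if remaining <= 0:
                break
        return mix
```

Reading the stack then yields a probabilistic mixture of symbols near the top, which matches the abstract's interpretation of a continuous stack as probabilistic storage.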
Extracting symbolic knowledge from recurrent neural networks: a fuzzy logic approach
 Fuzzy Sets and Systems, Volume 160, Issue
, 2009
Abstract

Cited by 5 (3 self)
Considerable research has been devoted to the integration of fuzzy logic (FL) tools with classic artificial intelligence (AI) paradigms. One reason for this is that FL provides powerful mechanisms for handling and processing symbolic information stated using natural language. In this respect, fuzzy rule-based systems are white boxes, as they process information in a form that is easy to understand, verify and, if necessary, refine. The synergy between artificial neural networks (ANNs), which are notorious for their black-box character, and FL proved to be particularly successful. Such a synergy allows combining the powerful learning-from-examples capability of ANNs with the high-level symbolic information processing of FL systems. In this paper, we present a new approach for extracting symbolic information from recurrent neural networks (RNNs). The approach is based on the mathematical equivalence between a specific fuzzy rule-base and functions composed of sums of sigmoids. We show that this equivalence can be used to provide a comprehensible explanation of the RNN functioning. We demonstrate the applicability of our approach by using it to extract the knowledge embedded within an RNN trained to recognize a formal language.
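The equivalence between a fuzzy rule-base and a sum of sigmoids that this abstract relies on can be illustrated in a minimal single-input case: two rules with logistic membership functions, defuzzified by center of gravity, reduce exactly to a constant plus a scaled sigmoid. The rule form and parameter names below are illustrative assumptions, not the paper's exact construction.

```python
import math

def sigma(z):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-z))

def fuzzy_inference(x, a0, a1, k, c):
    # Two fuzzy rules with logistic membership functions:
    #   R1: if x is "larger than c"  then y = a0 + a1
    #   R2: if x is "smaller than c" then y = a0 - a1
    mu_larger = sigma(k * (x - c))
    mu_smaller = 1.0 - mu_larger
    # center-of-gravity defuzzification
    return (mu_larger * (a0 + a1) + mu_smaller * (a0 - a1)) / (mu_larger + mu_smaller)

def sigmoid_sum(x, a0, a1, k, c):
    # the equivalent closed form: a constant plus a scaled sigmoid
    return a0 + a1 * (2.0 * sigma(k * (x - c)) - 1.0)
```

Evaluating both functions at any `x` gives the same value, which is the one-rule-pair instance of the rule-base/sigmoid-sum equivalence; the full construction extends this to sums of many sigmoids.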
Neural Fuzzy Preference Integration using Neural Preference Moore Machines
 International Journal of Neural Systems
, 2000
Abstract

Cited by 4 (4 self)
This paper describes preference classes and preference Moore machines as a basis for integrating different hybrid neural representations. Preference classes are shown to provide a basic link between neural preferences and fuzzy representations at the preference class level. Preference Moore machines provide a link between recurrent neural networks and symbolic transducers at the preference Moore machine level. We demonstrate how the concepts of preference classes and preference Moore machines can be used to interpret neural network representations and to integrate knowledge from hybrid neural representations. One main contribution of this paper is the introduction and analysis of neural preference Moore machines and their link to a fuzzy interpretation. Furthermore, we illustrate the interpretation and combination of various neural preference Moore machines with additional real-world examples.
Thornber, Equivalence in knowledge representation: automata, recurrent neural networks, and dynamical fuzzy systems
 in: Proceedings of the IEEE
, 1999
Abstract

Cited by 4 (1 self)
Neurofuzzy systems—the combination of artificial neural networks with fuzzy logic—have become useful in many application domains. However, conventional neurofuzzy models usually need enhanced representational power for applications that require context and state (e.g., speech, time series prediction, control). Some of these applications can be readily modeled as finite state automata. Previously, it was proved that deterministic finite state automata (DFA) can be synthesized by or mapped into recurrent neural networks by directly programming the DFA structure into the weights of the neural network. Based on those results, a synthesis method is proposed for mapping fuzzy finite state automata (FFA) into recurrent neural networks. Furthermore, this mapping is suitable for direct implementation in very large scale integration (VLSI), i.e., the encoding of FFA as a generalization of the encoding of DFA in VLSI systems. The synthesis method requires FFA to undergo a transformation prior to being mapped into recurrent networks. The neurons are provided with an enriched functionality in order to accommodate a fuzzy representation of FFA states. This enriched neuron functionality also permits fuzzy parameters of FFA to be directly represented as parameters of the neural network. We also prove the stability of fuzzy finite state dynamics of the constructed neural networks for finite values of network weight and, through simulations, give empirical validation of the proofs. Hence, we prove various knowledge equivalence representations between neural and fuzzy systems and models of automata.
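The prior result this synthesis method builds on (programming a DFA directly into the weights of a second-order recurrent network) can be sketched for a two-state parity automaton: each transition δ(q_j, a_k) = q_i sets the second-order weight W_ijk to +H and all non-matching weights to -H, with a bias of -H/2, so that the state neurons stay nearly one-hot. The value H = 8 and the acceptance threshold below are illustrative choices, not the published constants.

```python
import math

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

def run_dfa_rnn(string, H=8.0):
    """Run a second-order RNN programmed with a 2-state parity DFA.

    State 0 is accepting; input '1' flips the state, '0' keeps it.
    """
    delta = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}
    states, symbols = [0, 1], ['0', '1']
    # Program the second-order weights: +H if the transition matches, else -H.
    W = {(i, j, k): (H if delta[(j, k)] == i else -H)
         for i in states for j in states for k in symbols}
    S = [1.0, 0.0]  # start in state 0 (one-hot state-neuron activations)
    for ch in string:
        # S_i(t+1) = sigma( sum_j W_ijk S_j(t) - H/2 ) for input symbol k
        S = [sigma(sum(W[(i, j, ch)] * S[j] for j in states) - H / 2.0)
             for i in states]
    return S[0] > 0.5  # accept iff the "state 0" neuron is active
```

With H large, the matching state neuron saturates near 1 and the others near 0 after every step, so the network's trajectory tracks the DFA's state sequence for strings of arbitrary length.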
A new approach to knowledge-based design of recurrent neural networks
 IEEE Trans. Neural Networks
, 2008
Abstract

Cited by 3 (3 self)
A major drawback of artificial neural networks (ANNs) is their black-box character. This is especially true for recurrent neural networks (RNNs) because of their intricate feedback connections. In particular, given a problem and some initial information concerning its solution, it is not at all clear how to design an RNN that is suitable for solving this problem. In this paper, we consider a fuzzy rule-base with a special structure, referred to as the fuzzy all-permutations rule-base (FARB). Inferring the FARB yields an input-output mapping that is mathematically equivalent to that of an RNN. We use this equivalence to develop two new knowledge-based design methods for RNNs. The first method, referred to as the direct approach, is based on stating the desired functioning of the RNN in terms of several sets of symbolic rules, each one corresponding to a subnetwork. Each set is then transformed into a suitable FARB. The second method is based on first using the direct approach to design a library of simple modules, such as counters or comparators, and realizing them using RNNs. Once designed, the correctness of each RNN can be verified. Then, the initial design problem is solved by using these basic modules as building blocks. This yields a modular and systematic approach for knowledge-based design of RNNs. We demonstrate the efficiency of these approaches by designing RNNs that recognize both regular and non-regular formal languages.
Equivalence in Knowledge Representation: Automata, Recurrent Neural Networks, and Dynamical Fuzzy Systems
 PROCEEDINGS OF THE IEEE
, 1999
Abstract

Cited by 1 (1 self)
Neurofuzzy systems (the combination of artificial neural networks with fuzzy logic) have become useful in many application domains. However, conventional neurofuzzy models usually need enhanced representation power for applications that require context and state (e.g., speech, time series prediction, control). Some of these applications can be readily modeled as finite state automata. Previously, it was proved that deterministic finite state automata (DFA) can be synthesized by or mapped into recurrent neural networks by directly programming the DFA structure into the weights of the neural network. Based on those results, a synthesis method is proposed for mapping fuzzy finite state automata (FFA) into recurrent neural networks. Furthermore, this mapping is suitable for direct implementation in very large scale integration (VLSI), i.e., the encoding of FFA as a generalization of the encoding of DFA in VLSI systems. The synthesis method requires FFA to undergo a transformation prior to being mapped into recurrent networks. The neurons are provided with an enriched functionality in order to accommodate a fuzzy representation of FFA states. This enriched neuron functionality also permits fuzzy parameters of FFA to be directly represented as parameters of the neural network. We also prove the stability of fuzzy finite state dynamics of the constructed neural networks for finite values of network weight and, through simulations, give empirical validation of the proofs. Hence, we prove various knowledge equivalence representations between neural and fuzzy systems and models of automata.
Recurrent Neural Networks Learn Deterministic Representations of Fuzzy Finite-State Automata
, 1998
Abstract
The paradigm of deterministic finite-state automata (DFAs) and their corresponding regular languages has been shown to be very useful for addressing fundamental issues in recurrent neural networks. The issues that have been addressed include knowledge representation, extraction, and refinement, as well as the development of advanced learning algorithms. Recurrent neural networks are also a very promising tool for modeling discrete dynamical systems through learning, particularly when partial prior knowledge is available. The drawback of the DFA paradigm is that it is inappropriate for modeling vague or uncertain dynamics; however, many real-world applications deal with vague or uncertain information. One way to model vague information in a dynamical system is to allow for vague state transitions, i.e., the system may be in several states at the same time with varying degrees of certainty; fuzzy finite-state automata (FFAs) are a formal equivalent of such systems. It is therefore of interest to study how uncertainty in the form of FFAs can be modeled by deterministic recurrent neural networks. We have previously proven that second-order recurrent neural networks are able to represent FFAs, i.e., recurrent networks can be constructed that assign fuzzy memberships to input strings with arbitrary accuracy. In such networks, the classification performance is independent of the string length. In this paper, we are concerned with recurrent neural networks that have been trained to behave like FFAs. In particular, we are interested in the internal representation of fuzzy states and state transitions and in the extraction of knowledge in symbolic form.
Knowledge Extraction from Trained Neural Networks
Abstract
ent neural networks. The machine algorithm developed would be able, first, to extract the relevant regularities discovered by a simple recurrent network trained on a multidimensional data set and, second, to interpret these regularities in an easy-to-understand form. The derived models of regularities must be accurate and involve the most significant features among the input variables. To be accurate, the complexity of the neural-network model, evaluated by the number of its units and synaptic links, must be optimal. Reducing the network size makes it easier to understand the regularities contained in the model. To be represented in a human-understandable form, the extracted models are interpreted as a concise set of symbolic rules. Such rules can be generated if different levels of abstraction are used to represent the extracted knowledge. To solve these problems, various research groups have suggested several approaches. Review of RuleExt
Insertion of Prior Knowledge
 A FIELD GUIDE TO DYNAMICAL RECURRENT NETWORKS
, 2001
Abstract
In this chapter we focus on methods for injecting prior knowledge (represented in the form of finite automata) into adaptive recurrent networks. Several algorithms and architectures are described, including first-order, second-order, and RBF-based recurrent neural nets.
Summary Of Research
Abstract
to extract the relevant regularities from web documents used to train a simple recurrent network and, second, to interpret these regularities in an easy-to-understand form. Extracted models of regularities have to be accurate and involve the most significant features used to describe text documents. The highest accuracy would be achieved when, first, the complexity of the neural network, evaluated by the number of its units and their synaptic links, is optimal, and second, the user has not had to configure the neural network (i.e., neither the interconnections, nor the number of units and layers, nor the memory order). To be represented in a human-understandable form, the extracted models have to be interpreted as a concise set of symbolic rules. Such rules can be generated if different levels of abstraction are used to represent the knowledge extracted from documents. The goal of the project is achieved by applying self-organizing neural networks to extract the knowledge from web docu