Results 1 – 4 of 4
A General Framework for Adaptive Processing of Data Structures
IEEE TRANSACTIONS ON NEURAL NETWORKS, 1998
Abstract

Cited by 117 (46 self)
A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized according to relatively poor structures, like arrays or sequences. The framework described in this paper is an attempt to unify adaptive models like artificial neural nets and belief nets for the problem of processing structured information. In particular, relations between data variables are expressed by directed acyclic graphs, where both numerical and categorical values coexist. The general framework proposed in this paper can be regarded as an extension of both recurrent neural networks and hidden Markov models to the case of acyclic graphs. In particular we study the supervised learning problem as the problem of learning transductions from an input structured space to an output structured space, where transductions are assumed to admit a recursive hidden state-space representation. We introduce a graphical formalism for r...
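The recursive hidden state-space idea the abstract describes can be sketched in a few lines. This is a toy illustration under our own naming, not the paper's formalism: each node of a directed acyclic graph gets a state computed from its own label and the states of the nodes it points to, memoised so that shared substructures are evaluated once.

```python
from typing import Callable, Dict, List

def node_states(labels: Dict[str, float],
                children: Dict[str, List[str]],
                f: Callable[[float, List[float]], float]) -> Dict[str, float]:
    """Compute a hidden state for every node of an acyclic graph.

    f plays the role of the learned transition function; here it is
    supplied by the caller (in the paper's setting it would be a trained
    neural network or belief net)."""
    states: Dict[str, float] = {}

    def state(v: str) -> float:
        if v not in states:                      # memoise: it is a DAG, not a tree
            kid_states = [state(c) for c in children.get(v, [])]
            states[v] = f(labels[v], kid_states)
        return states[v]

    for v in labels:
        state(v)
    return states

# A sequence is the special case where every node has exactly one child,
# which recovers an ordinary recurrent network unrolled over time.
states = node_states(
    labels={"a": 1.0, "b": 2.0, "c": 3.0},
    children={"a": ["b", "c"], "b": ["c"], "c": []},
    f=lambda x, kids: x + sum(kids),             # toy transition function
)
```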
Extracting symbolic knowledge from recurrent neural networks: A fuzzy logic approach
Fuzzy Sets and Systems, Volume 160, Issue, 2009
Abstract

Cited by 5 (3 self)
Considerable research has been devoted to the integration of fuzzy logic (FL) tools with classic artificial intelligence (AI) paradigms. One reason for this is that FL provides powerful mechanisms for handling and processing symbolic information stated using natural language. In this respect, fuzzy rule-based systems are white boxes, as they process information in a form that is easy to understand, verify and, if necessary, refine. The synergy between artificial neural networks (ANNs), which are notorious for their black-box character, and FL proved to be particularly successful. Such a synergy allows combining the powerful learning-from-examples capability of ANNs with the high-level symbolic information processing of FL systems. In this paper, we present a new approach for extracting symbolic information from recurrent neural networks (RNNs). The approach is based on the mathematical equivalence between a specific fuzzy rule-base and functions composed of sums of sigmoids. We show that this equivalence can be used to provide a comprehensible explanation of the RNN functioning. We demonstrate the applicability of our approach by using it to extract the knowledge embedded within an RNN trained to recognize a formal language.
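The exact fuzzy rule-base the abstract refers to is defined in the paper itself; the sketch below only illustrates the function family involved, under our own parameter names: a sum of sigmoids of the kind an RNN neuron computes, where each saturating term can be read informally as a rule "IF x is well above b_i THEN contribute roughly a_i to the output".

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def sum_of_sigmoids(x, terms, bias=0.0):
    """terms is a list of (a_i, b_i, s_i): amplitude, threshold, slope.

    Each sharp sigmoid acts like a soft IF-THEN rule centred at b_i."""
    return bias + sum(a * sigmoid(s * (x - b)) for a, b, s in terms)

# Two hypothetical "rules": add ~2 once x clears 0, subtract ~1 once x clears 5.
terms = [(2.0, 0.0, 10.0), (-1.0, 5.0, 10.0)]

# For x well above both thresholds, both sigmoids saturate near 1, so the
# output approaches 2.0 - 1.0 = 1.0: both rules fire fully.
y = sum_of_sigmoids(10.0, terms)
```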
Finite-State Computation in Analog Neural Networks: Steps Towards Biologically Plausible Models?
2001
Abstract

Cited by 3 (1 self)
Finite-state machines are the most pervasive models of computation, not only in theoretical computer science, but also in all of its applications to real-life problems, and constitute the best characterized computational model. On the other hand, neural networks, proposed almost sixty years ago by McCulloch and Pitts as a simplified model of nervous activity in living beings, have evolved into a great variety of so-called artificial neural networks. Artificial neural networks have become a very successful tool for modelling and problem solving because of their built-in learning capability, but most of the progress in this field has occurred with models that are far removed from the behaviour of real, i.e., biological neural networks. This paper surveys the work that has established a connection between finite-state machines and (mainly discrete-time recurrent) neural networks, and suggests possible ways to construct finite-state models in biologically plausible neural networks.
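The kind of connection the survey covers can be sketched concretely: a discrete-time network of threshold units that emulates a finite-state machine. Below is the standard one-unit-per-transition encoding, shown for a two-state parity automaton (accept iff the input contains an even number of 1s); this is a generic textbook construction, not any specific paper's.

```python
def step(u):
    """Heaviside threshold unit."""
    return 1 if u > 0 else 0

# DFA: states {even, odd}; reading a 1 flips the state, a 0 keeps it.
delta = {("even", 0): "even", ("even", 1): "odd",
         ("odd", 0): "odd", ("odd", 1): "even"}
states, symbols = ["even", "odd"], [0, 1]

def run(bits):
    s = {"even": 1, "odd": 0}                  # one-hot start state
    for x in bits:
        xv = {0: 1 - x, 1: x}                  # one-hot input symbol
        # Layer 1: one AND unit per (state, symbol) transition;
        # it fires only when both its state bit and input bit are on.
        fire = {(q, a): step(s[q] + xv[a] - 1.5)
                for q in states for a in symbols}
        # Layer 2: OR the firing transitions into the next one-hot state.
        s = {q: step(sum(f for (p, a), f in fire.items()
                         if delta[(p, a)] == q) - 0.5)
             for q in states}
    return s["even"] == 1                      # accept in state "even"
```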
A new approach to knowledge-based design of recurrent neural networks
IEEE Trans. Neural Networks, 2008
Abstract

Cited by 3 (3 self)
A major drawback of artificial neural networks (ANNs) is their black-box character. This is especially true for recurrent neural networks (RNNs) because of their intricate feedback connections. In particular, given a problem and some initial information concerning its solution, it is not at all clear how to design an RNN that is suitable for solving this problem. In this paper, we consider a fuzzy rule-base with a special structure, referred to as the fuzzy all-permutations rule-base (FARB). Inferring the FARB yields an input-output mapping that is mathematically equivalent to that of an RNN. We use this equivalence to develop two new knowledge-based design methods for RNNs. The first method, referred to as the direct approach, is based on stating the desired functioning of the RNN in terms of several sets of symbolic rules, each one corresponding to a subnetwork. Each set is then transformed into a suitable FARB. The second method is based on first using the direct approach to design a library of simple modules, such as counters or comparators, and realize them using RNNs. Once designed, the correctness of each RNN can be verified. Then, the initial design problem is solved by using these basic modules as building blocks. This yields a modular and systematic approach for knowledge-based design of RNNs. We demonstrate the efficiency of these approaches by designing RNNs that recognize both regular and non-regular formal languages.
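The modular idea can be illustrated with a toy decomposition of our own (this is not the paper's FARB machinery): a "counter" module tracks the a/b balance and a one-bit "latch" module checks the symbol order; composed, they recognize the non-regular language a^n b^n.

```python
def accepts(word: str) -> bool:
    count = 0.0        # counter module: recurrent unit with weights +1/-1
    seen_b = False     # latch module: flips once and then stays set
    for ch in word:
        if ch == "a":
            if seen_b:
                return False          # an 'a' after a 'b': wrong order
            count += 1.0
        elif ch == "b":
            seen_b = True
            count -= 1.0
            if count < 0:
                return False          # more b's than a's so far
        else:
            return False              # symbol outside the alphabet
    return count == 0.0               # accept iff the counter returns to 0
```

In the paper's setting each such module would itself be realized as a small RNN, verified in isolation, and then wired into the larger network.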