Results 1–10 of 11
Rule Extraction from Recurrent Neural Networks: a Taxonomy and Review
 Neural Computation
, 2005
Abstract

Cited by 24 (3 self)
In this paper, the progress of this development is reviewed and analysed in detail. In order to structure the survey and to evaluate the techniques, a taxonomy, specifically designed for this purpose, has been developed. Moreover, important open research issues are identified that, if addressed properly, could give the field a significant push forward.
Mathematical Aspects of Neural Networks
 European Symposium of Artificial Neural Networks 2003
, 2003
Abstract

Cited by 6 (4 self)
In this tutorial paper about mathematical aspects of neural networks, we will focus on two directions: on the one hand, we will motivate standard mathematical questions and well-studied theory of classical neural models used in machine learning. On the other hand, we collect some recent theoretical results (as of the beginning of 2003) in the respective areas. Thereby, we follow the dichotomy offered by the overall network structure and restrict ourselves to feedforward networks, recurrent networks, and self-organizing neural systems, respectively.
Incremental Training of First Order Recurrent Neural Networks to Predict a Context-Sensitive Language
, 2003
Abstract

Cited by 4 (2 self)
In recent years it has been shown that first order recurrent neural networks trained by gradient descent can learn not only regular but also simple context-free and context-sensitive languages. However, the success rate was generally low and severe instability issues were encountered. The present study examines the hypothesis that a combination of evolutionary hill climbing with incremental learning and a well-balanced training set enables first order recurrent networks to reliably learn context-free and mildly context-sensitive languages. In particular, we trained the networks to predict symbols in string sequences of the context-sensitive language {a^n b^n c^n | n >= 1}. Comparative experiments with and without incremental learning indicated that incremental learning can accelerate and facilitate training. Furthermore, incrementally trained networks generally resulted in monotonic trajectories in hidden unit activation space, while the trajectories of non-incrementally trained networks were oscillating. The non-incrementally trained networks were more likely to generalise.
Constrained Second-Order Recurrent Networks for Finite-State Automata Induction
 Proceedings of the 8th International Conference on Artificial Neural Networks ICANN'98
Abstract

Cited by 4 (3 self)
This paper presents an improved training algorithm for second-order dynamical recurrent networks applied to the problem of finite-state automata (FSA) induction. Second-order networks allow for a natural encoding of finite-state automata in which each second-order connection weight corresponds to one transition in a finite-state automaton. In practice, however, when trained using gradient descent, these networks seldom assume this type of encoding, and sophisticated algorithms must be used to extract the encoded automata. This paper suggests a simple modification to the standard error function for second-order dynamical recurrent networks which encourages these networks to assume natural FSA encodings when trained using gradient descent. This obviates the need for cluster-based extraction techniques and provides a simple method for guaranteeing the stability of the network for arbitrarily long sequences. Initial results also suggest that fewer training strings must be prese...
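The natural encoding this abstract refers to can be pictured with a toy sketch (an illustration of the general idea, not the paper's algorithm): a hand-built second-order network whose weight W[j, k, l] is set strongly positive exactly when the automaton moves from state k to state j on symbol l. The DFA, the gain value, and the one-hot input scheme below are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 2-state DFA over {0, 1}: delta[state][symbol] -> next state.
# It tracks the parity of 1s (state 0 = even number of 1s seen so far).
delta = [[0, 1], [1, 0]]
n_states, n_symbols = 2, 2

# Second-order weights: W[j, k, l] is large positive when the DFA moves
# from state k to state j on symbol l, large negative otherwise, so
# exactly one state unit saturates near 1 after each input symbol.
H = 8.0  # gain; a larger gain gives a crisper, more stable encoding
W = -H * np.ones((n_states, n_states, n_symbols))
for k in range(n_states):
    for l in range(n_symbols):
        W[delta[k][l], k, l] = H

def run(string):
    s = np.zeros(n_states)
    s[0] = 1.0  # one-hot start state
    for sym in string:
        x = np.zeros(n_symbols)
        x[sym] = 1.0
        # second-order update: s_j <- g(sum_{k,l} W[j,k,l] * s_k * x_l)
        s = sigmoid(np.einsum('jkl,k,l->j', W, s, x))
    return int(np.argmax(s))

print(run([1, 0, 1]))  # even number of 1s -> state 0
print(run([1, 1, 1]))  # odd number of 1s  -> state 1
```

Because each weight maps one-to-one onto a DFA transition, reading the automaton back out of such a network is trivial, which is the property the modified error function in the paper is meant to encourage during learning.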
Efficient encodings of finite automata in discrete-time recurrent neural networks
Abstract

Cited by 3 (2 self)
A number of researchers have used discrete-time recurrent neural nets (DTRNN) to learn finite-state machines (FSM) from samples of input and output strings; trained DTRNN usually show FSM behaviour for strings up to a certain length, but not beyond; this is usually called instability. Some authors have shown that DTRNN may actually behave as FSM for strings of any length and have devised strategies to construct such DTRNN. In these strategies, m-state deterministic FSM are encoded and the number of state units in the DTRNN is Θ(m). This paper shows that more efficient sigmoid DTRNN encodings exist for a subclass of deterministic finite automata (DFA), namely, when the size of an equivalent nondeterministic finite automaton (NFA) is smaller, because an n-state NFA may be encoded directly in a DTRNN with Θ(n) units.
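One way to picture a Θ(n) NFA encoding is a sigmoid network with one unit per NFA state, where a unit fires iff at least one currently active unit has a transition into it. The sketch below is an illustration under assumed weight and bias choices (gain +H per transition edge, bias -H/2), not the paper's construction; the 3-state NFA (strings over {a, b} whose second-to-last symbol is 'a', whose minimal equivalent DFA needs 4 states) is a hypothetical example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical 3-state NFA: trans[(state, symbol)] -> set of successors;
# state 0 is the start state, state 2 is accepting.
trans = {
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'a'): {2},    (1, 'b'): {2},
    (2, 'a'): set(),  (2, 'b'): set(),
}
n = 3
H = 10.0  # gain; large enough to keep unit activations saturated

def step(s, sym):
    # Unit j fires iff at least one active unit k has an NFA transition
    # k --sym--> j: weight +H per such edge, bias -H/2.
    out = []
    for j in range(n):
        pre = sum(H * s[k] for k in range(n) if j in trans[(k, sym)]) - H / 2
        out.append(sigmoid(pre))
    return out

def accepts(string):
    s = [1.0, 0.0, 0.0]  # one unit per NFA state; start state active
    for sym in string:
        s = step(s, sym)
    return s[2] > 0.5  # is the accepting unit active?

print(accepts('ab'))  # second-to-last symbol is 'a' -> True
print(accepts('ba'))  # -> False
```

Unlike the one-hot DFA case, several units may be active at once here, which is exactly what lets n units track a subset of NFA states instead of the (possibly exponentially many) DFA states.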
Tutorial: Perspectives on Learning with RNNs
 in: Proc. ESANN, 2002
Abstract

Cited by 1 (0 self)
We present an overview of current lines of research on learning with recurrent neural networks (RNNs). Topics covered are: understanding and unification of algorithms, theoretical foundations, new efforts to circumvent gradient vanishing, new architectures, and fusion with other learning methods and dynamical systems theory. The structuring guideline is to understand many new approaches as different efforts to regularize and thereby improve recurrent learning.
Recurrent Neural Networks Can Learn Simple, Approximate Regular Languages
Abstract
A number of researchers have shown discrete-time recurrent neural networks (DTRNN) to be capable of inferring deterministic finite automata (DFA) from sets of example and counterexample strings; however, discrete algorithmic methods are much better at this task and clearly outperform DTRNN in terms of space and time complexity. We show, however, how DTRNN may be used to learn not the exact language that explains the whole learning set but an approximate and much simpler language that explains a great majority of the examples by using simpler rules. This is accomplished by gradually varying the error function in such a way that the DTRNN is eventually allowed to classify clearly but incorrectly those strings that it has found to be difficult to learn, which are treated as exceptions. The results show that, in this way, the DTRNN usually manages to learn a simplified approximate language.
State-Dependent Computation Using Coupled Recurrent Networks
 Communicated by Josh Bongard
Abstract
Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
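A much-simplified, discrete-time caricature of the state-holding behaviour described above can be sketched with a single softmax-based soft WTA with self-excitation (the paper's circuit uses two coupled spiking-style maps; this single-map version, with illustrative gain and coupling values, only demonstrates the persistence property): after a transient input selects a winner, recurrent excitation keeps that unit active once the input is withdrawn.

```python
import math

def soft_wta(v, beta=3.0):
    # soft winner-take-all via a softmax: the strongest unit dominates
    m = max(v)
    e = [math.exp(beta * (x - m)) for x in v]
    z = sum(e)
    return [x / z for x in e]

def step(state, inp, alpha=2.0):
    # recurrent self-excitation (alpha * state) lets the map hold a winner
    return soft_wta([alpha * s + i for s, i in zip(state, inp)])

n = 3
state = [1.0 / n] * n          # uniform: no state selected yet
for _ in range(10):            # transient input drives state unit 1
    state = step(state, [0.0, 1.0, 0.0])
for _ in range(20):            # input withdrawn: the selected state persists
    state = step(state, [0.0, 0.0, 0.0])
print(max(range(n), key=lambda j: state[j]))  # unit 1 still wins
```

The persistence comes from the recurrent term alone: once a unit's activation dominates, `alpha * state` keeps its softmax input largest even with zero external drive, which is the discrete-time analogue of the hysteresis the coupled sWTA maps provide.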
Dynamic Learning Machine Using Unrestricted Grammar
Abstract
Abstract — This paper presents Dynamic Network Learning, a Recurrent Neural Network trained on an unrestricted grammar, and examines the relationship between Recurrent Neural Networks and Turing machines. Dynamic Network Learning employs a bidirectional neural network with feedback paths from its outputs to its inputs, so the response of such a network is dynamic. A Turing machine is a finite-state machine associated with an external storage. The approach prevents indefinitely lengthy training sessions. The Dynamic Network Learning architecture is a Recurrent Neural Network, and its working principle is related to the structure of a Turing machine. This work exhibits how a Turing machine recognises a recursive language, and it is also elucidated that Dynamic Network Learning is a stable network.
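The "finite-state machine plus external storage" view of a Turing machine can be made concrete with a small hand-written machine (illustrative only; not the paper's network): a finite control table plus a tape deciding the recursive language { a^n b^n : n >= 0 } by repeatedly marking a leftmost 'a' and its matching 'b'.

```python
# transitions[(state, symbol)] = (new_state, written_symbol, head_move);
# 'X' marks a consumed 'a', 'Y' a consumed 'b', '_' is the blank symbol.
transitions = {
    ('q0', 'a'): ('q1', 'X', +1),  # mark the leftmost unmarked 'a'
    ('q0', 'Y'): ('q3', 'Y', +1),  # no 'a' left: verify only Ys remain
    ('q0', '_'): ('acc', '_', 0),  # empty input accepts
    ('q1', 'a'): ('q1', 'a', +1),  # scan right past remaining a's
    ('q1', 'Y'): ('q1', 'Y', +1),  # ...and already-marked b's
    ('q1', 'b'): ('q2', 'Y', -1),  # mark the matching 'b'
    ('q2', 'a'): ('q2', 'a', -1),  # scan back left
    ('q2', 'Y'): ('q2', 'Y', -1),
    ('q2', 'X'): ('q0', 'X', +1),  # return to the unmarked region
    ('q3', 'Y'): ('q3', 'Y', +1),
    ('q3', '_'): ('acc', '_', 0),  # every 'a' matched a 'b': accept
}

def accepts(s, max_steps=10_000):
    tape = list(s) + ['_']         # the external storage
    head, state = 0, 'q0'          # the finite control
    for _ in range(max_steps):
        if state == 'acc':
            return True
        key = (state, tape[head])
        if key not in transitions:
            return False           # no applicable rule: reject
        state, tape[head], move = transitions[key]
        head = max(0, head + move)
    return False

print(accepts('aabb'))  # True
print(accepts('aab'))   # False
```

The control table is a finite-state machine in exactly the sense used above; only the read/write tape gives it power beyond regular languages, which is the structural analogy the paper draws to a recurrent network with feedback.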