## Recurrent Neural Networks Learn Deterministic Representations of Fuzzy Finite-State Automata (1998)

### BibTeX

```bibtex
@MISC{Omlin98recurrentneural,
  author = {Christian W. Omlin and C. Lee Giles},
  title  = {Recurrent Neural Networks Learn Deterministic Representations of Fuzzy Finite-State Automata},
  year   = {1998}
}
```

### Abstract

The paradigm of deterministic finite-state automata (DFAs) and their corresponding regular languages has been shown to be very useful for addressing fundamental issues in recurrent neural networks. The issues that have been addressed include knowledge representation, extraction, and refinement, as well as the development of advanced learning algorithms. Recurrent neural networks are also a very promising tool for modeling discrete dynamical systems through learning, particularly when partial prior knowledge is available. The drawback of the DFA paradigm is that it is inappropriate for modeling vague or uncertain dynamics; however, many real-world applications deal with vague or uncertain information. One way to model vague information in a dynamical system is to allow for vague state transitions, i.e. the system may be in several states at the same time with varying degrees of certainty; fuzzy finite-state automata (FFAs) are a formal equivalent of such systems. It is therefore of interest to study how uncertainty in the form of FFAs can be modeled by deterministic recurrent neural networks. We have previously proven that second-order recurrent neural networks are able to represent FFAs, i.e. recurrent networks can be constructed that assign fuzzy memberships to input strings with arbitrary accuracy. In such networks, the classification performance is independent of the string length. In this paper, we are concerned with recurrent neural networks that have been trained to behave like FFAs. In particular, we are interested in the internal representation of fuzzy states and state transitions and in the extraction of knowledge in symbolic form.
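To make the notion of vague state transitions concrete, the following is a minimal sketch of an FFA assigning a fuzzy membership to an input string. The max-min composition, state names, and transition weights here are illustrative assumptions, not taken from the paper itself.

```python
def ffa_membership(transitions, start, finals, string):
    """Fuzzy membership of `string` in a fuzzy finite-state automaton.

    transitions: dict mapping (state, symbol) -> {next_state: weight in [0, 1]}
    start:       initial state
    finals:      dict mapping accepting state -> membership weight
    """
    # Fuzzy state: several states can be active at once,
    # each with its own degree of certainty.
    active = {start: 1.0}
    for symbol in string:
        nxt = {}
        for state, deg in active.items():
            for target, w in transitions.get((state, symbol), {}).items():
                # min along a path, max over alternative paths
                # (max-min composition; an illustrative choice)
                nxt[target] = max(nxt.get(target, 0.0), min(deg, w))
        active = nxt
    # Membership of the string: best path ending in an accepting state.
    return max((min(deg, finals.get(s, 0.0)) for s, deg in active.items()),
               default=0.0)

# Toy FFA over {a, b} with hypothetical weights: reading "ab"
# reaches accepting state q2 with certainty min(0.8, 0.6) = 0.6.
T = {
    ("q0", "a"): {"q1": 0.8},
    ("q1", "b"): {"q2": 0.6, "q0": 0.3},
}
print(ffa_membership(T, "q0", {"q2": 1.0}, "ab"))  # -> 0.6
```

Because the membership of a string is computed symbol by symbol from a bounded fuzzy state, this formulation mirrors the paper's point that classification is independent of string length.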