## Stable Encoding of Finite-State Machines in Discrete-Time Recurrent Neural Nets with Sigmoid Units (1998)


### Download Links

- [altea.dlsi.ua.es]
- [www.dlsi.ua.es]
- DBLP

### Other Repositories/Bibliography

Citations: 14 (3 self)

### BibTeX

@MISC{Carrasco98stableencoding,
  author = {Rafael C. Carrasco and Mikel L. Forcada and M. Ángeles Valdés-Muñoz and Ramón P. Ñeco},
  title = {Stable Encoding of Finite-State Machines in Discrete-Time Recurrent Neural Nets with Sigmoid Units},
  year = {1998}
}


### Abstract

In recent years, there has been considerable interest in the use of discrete-time recurrent neural nets (DTRNN) to learn finite-state tasks, with interesting results regarding the induction of simple finite-state machines from input-output strings. Parallel work has studied the computational power of DTRNN in connection with finite-state computation. This paper describes a simple strategy to devise stable encodings of finite-state machines in computationally capable discrete-time recurrent neural architectures with sigmoid units, and gives a detailed presentation of how this strategy may be applied to encode a general class of finite-state machines in a variety of commonly used first- and second-order recurrent neural networks. Unlike previous work that either imposed restrictions on state values or relied on a detailed analysis based on fixed-point attractors, the present approach applies to any positive, bounded, strictly growing, continuous activation function, and uses simple bounding criteria...
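To make the idea concrete, here is a minimal sketch (not taken from the paper; all names and the gain value `H` are illustrative) of the kind of encoding the abstract describes: a two-state parity automaton embedded in a second-order DTRNN with sigmoid units, where a sufficiently large weight magnitude keeps the state units saturated, so the encoding remains stable over arbitrarily long input strings.

```python
import math

# Illustrative sketch: a two-state parity automaton encoded in a
# second-order sigmoid DTRNN. Large weights (+/-H) push the sigmoid
# outputs toward the saturation regions, so the one-hot state encoding
# stays stable for arbitrarily long strings. The paper derives the
# actual bounds; H = 8.0 here is just a convenient large value.

H = 8.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# States: q0 (even parity), q1 (odd parity).
# delta[state][symbol] -> next state.
delta = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}

# Second-order weights W[i][j][k]: unit i receives +H when the
# previous state j and input symbol k map to state i, else -H.
W = [[[H if delta[j][k] == i else -H for k in (0, 1)]
      for j in (0, 1)] for i in (0, 1)]

def step(x, symbol):
    # x holds the two state-unit activations; the -H/2 bias centres
    # the net input between the +H and -H contributions.
    return [sigmoid(sum(W[i][j][symbol] * x[j] for j in (0, 1)) - H / 2)
            for i in (0, 1)]

def run(string):
    x = [1.0, 0.0]  # start in q0 with a (nearly) saturated encoding
    for s in string:
        x = step(x, s)
    return x

# 1000 symbols with 600 ones: even parity, so unit 0 ends near 1.0
# and unit 1 near 0.0, despite the long string.
x = run([1, 0, 1, 1, 0] * 200)
```

The point of the sketch is the stability property: because every net input has magnitude at least on the order of `H/2`, the activations never drift out of the saturation regions, which is what the paper's bounding criteria guarantee in general.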