## The induction of dynamical recognizers (1991)

### Download Links

- [wexler.free.fr]
- [www.demo.cs.brandeis.edu]
- [demo.cs.brandeis.edu]
- [nlp.cs.swarthmore.edu]
- [ftp.cs.indiana.edu]
- DBLP

### Other Repositories/Bibliography

Venue: Machine Learning

Citations: 214 (16 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Pollack91theinduction,
  author    = {Jordan B. Pollack},
  title     = {The induction of dynamical recognizers},
  booktitle = {Machine Learning},
  year      = {1991},
  pages     = {227}
}
```

### Abstract

A higher-order recurrent neural network architecture learns to recognize and generate languages after being "trained" on categorized exemplars. Studying these networks from the perspective of dynamical systems yields two interesting discoveries. First, a longitudinal examination of the learning process illustrates a new form of mechanical inference: induction by phase transition. A small weight adjustment causes a "bifurcation" in the limit behavior of the network. This phase transition corresponds to the onset of the network's capacity for generalizing to arbitrary-length strings. Second, a study of the automata resulting from the acquisition of previously published training sets indicates that while the architecture is not guaranteed to find a minimal finite automaton consistent with the given exemplars — an NP-hard problem — it does appear capable of generating non-regular languages by exploiting fractal and chaotic dynamics. I end the paper with a hypothesis relating linguistic generative capacity to the behavioral regimes of non-linear dynamical systems.
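The core idea — a higher-order (multiplicative) recurrent update whose iterated map either settles into a finite set of attractors (regular behavior) or wanders over a fractal set (non-regular behavior) — can be sketched as follows. This is an illustrative approximation, not Pollack's exact implementation: the state dimension, the per-symbol weight matrices, and the single-unit acceptance readout are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DynamicalRecognizer:
    """Sketch of a second-order recurrent recognizer: the next state
    depends multiplicatively on the current state and the current
    input symbol (here realized as one weight matrix per symbol)."""

    def __init__(self, n_states=4, n_symbols=2, seed=0):
        rng = np.random.default_rng(seed)
        # W[k] is the state-transition matrix applied when symbol k is read.
        self.W = rng.normal(0.0, 1.0, (n_symbols, n_states, n_states))
        self.b = np.zeros(n_states)
        self.z0 = np.full(n_states, 0.5)  # initial state

    def run(self, string):
        # Iterate the input-dependent map z <- sigmoid(W[k] z + b).
        z = self.z0
        for k in string:  # string is a sequence of symbol indices
            z = sigmoid(self.W[k] @ z + self.b)
        return z

    def accepts(self, string):
        # Read acceptance off one state unit (an illustrative choice).
        return bool(self.run(string)[0] > 0.5)
```

In this framing, training adjusts `W` and `b` so that `accepts` separates the categorized exemplars; the paper's "induction by phase transition" corresponds to a small change in these weights qualitatively altering the limit behavior of the iterated map.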