## Natural Language Grammatical Inference: A Comparison of Recurrent Neural Networks and Machine Learning Methods (1996)

### Download Links

- [www.neci.nj.nec.com]
- [clgiles.ist.psu.edu]
- DBLP

### Other Repositories/Bibliography

Venue: Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Processing, Lecture Notes in AI

Citations: 13 (2 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Lawrence96naturallanguage,
  author    = {Steve Lawrence and Sandiway Fong and C. Lee Giles},
  title     = {Natural Language Grammatical Inference: A Comparison of Recurrent Neural Networks and Machine Learning Methods},
  booktitle = {Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Processing, Lecture Notes in AI},
  year      = {1996},
  pages     = {33--47},
  publisher = {Springer-Verlag}
}
```

### Abstract

We consider the task of training a neural network to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government and Binding theory. We investigate the following models: feed-forward neural networks, Frasconi-Gori-Soda and Back-Tsoi locally recurrent neural networks, Williams and Zipser and Elman recurrent neural networks, Euclidean and edit-distance nearest-neighbors, and decision trees. Non-neural network machine learning methods are included primarily for comparison. We find that the Elman and Williams & Zipser recurrent neural networks are able to find a representation for the grammar which we believe is more parsimonious. These models exhibit the best performance.

1 Motivation

1.1 Representational Power of Recurrent Neural Networks

Natural language has traditionally been handled using symbolic computation and recursive processes. The most ...
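As background, the Elman (simple recurrent) architecture named in the abstract feeds the previous hidden state back in as a "context" input at each time step, and the final hidden state can drive a grammatical/ungrammatical decision. The sketch below is illustrative only, assuming one-hot word inputs and hypothetical class and weight names; it is not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ElmanRNN:
    """Minimal Elman network sketch (illustrative, not the paper's code):
    the hidden state at time t is copied back as context input at t+1."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in  = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
        self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.w_out = rng.normal(0, 0.1, n_hidden)              # hidden -> output
        self.n_hidden = n_hidden

    def classify(self, sequence):
        """sequence: list of one-hot vectors, one per word.
        Returns a score in (0, 1), read as P(grammatical)."""
        h = np.zeros(self.n_hidden)  # context units start at zero
        for x in sequence:
            h = np.tanh(self.W_in @ x + self.W_ctx @ h)
        return sigmoid(self.w_out @ h)

# Toy usage: a 3-word vocabulary, sentence = word indices 0, 1, 2.
net = ElmanRNN(n_in=3, n_hidden=5)
sentence = [np.eye(3)[i] for i in (0, 1, 2)]
score = net.classify(sentence)
```

In practice the weights would be trained (e.g. by backpropagation through time) on labeled grammatical and ungrammatical sentences; here they are random, so the score is meaningful only as a demonstration of the forward pass.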