The induction of dynamical recognizers (1991)

Download Links

  • [wexler.free.fr]
  • [www.demo.cs.brandeis.edu]
  • [demo.cs.brandeis.edu]
  • [nlp.cs.swarthmore.edu]
  • [www.dlsi.ua.es]
  • [ftp.cs.indiana.edu]

by Jordan B. Pollack
Venue: Machine Learning
Citations: 225 (14 self)

BibTeX

@ARTICLE{Pollack91theinduction,
    author = {Jordan B. Pollack},
    title = {The induction of dynamical recognizers},
    journal = {Machine Learning},
    volume = {7},
    pages = {227--252},
    year = {1991}
}


Abstract

A higher order recurrent neural network architecture learns to recognize and generate languages after being "trained" on categorized exemplars. Studying these networks from the perspective of dynamical systems yields two interesting discoveries: First, a longitudinal examination of the learning process illustrates a new form of mechanical inference: Induction by phase transition. A small weight adjustment causes a "bifurcation" in the limit behavior of the network. This phase transition corresponds to the onset of the network’s capacity for generalizing to arbitrary-length strings. Second, a study of the automata resulting from the acquisition of previously published training sets indicates that while the architecture is not guaranteed to find a minimal finite automaton consistent with the given exemplars, which is an NP-Hard problem, the architecture does appear capable of generating non-regular languages by exploiting fractal and chaotic dynamics. I end the paper with a hypothesis relating linguistic generative capacity to the behavioral regimes of non-linear dynamical systems.
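
The "higher order" architecture the abstract refers to is a second-order recurrent network: the next state is a squashed bilinear function of the current state and a one-hot encoding of the current input symbol. The sketch below is a minimal Python/NumPy reconstruction of that mechanism, not Pollack's original implementation; the class name SecondOrderRecognizer, the random weight initialization, the fixed initial state, and the 0.5 acceptance threshold are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SecondOrderRecognizer:
    """Hypothetical second-order (multiplicative) recurrent recognizer:
    the state update is bilinear in the current state and the input."""

    def __init__(self, n_states, n_symbols, seed=0):
        rng = np.random.default_rng(seed)
        # W[i, j, k]: weight from (state unit j, input symbol k) to next state unit i.
        self.W = rng.normal(0.0, 0.5, size=(n_states, n_states, n_symbols))
        self.h0 = np.full(n_states, 0.5)  # fixed initial state

    def run(self, string, alphabet="ab"):
        h = self.h0
        for ch in string:
            x = np.zeros(len(alphabet))
            x[alphabet.index(ch)] = 1.0  # one-hot encode the current symbol
            # Second-order update: h_i <- sigmoid(sum_{j,k} W[i,j,k] * h_j * x_k)
            h = sigmoid(np.einsum('ijk,j,k->i', self.W, h, x))
        return h

    def accepts(self, string, threshold=0.5):
        # Read the first state unit as the accept/reject decision.
        return self.run(string)[0] > threshold

For example, SecondOrderRecognizer(4, 2).accepts("abba") iterates the map over the string and thresholds the final state. Training (not shown) would adjust W by gradient descent on categorized exemplars; the "induction by phase transition" of the abstract is a bifurcation in the limit behavior of this iterated map under a small change to W.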

Keyphrases

dynamical recognizers, phase transition, learning process, small weight adjustment, limit behavior, linguistic generative capacity, NP-hard problem, chaotic dynamics, non-linear dynamical systems, higher order recurrent neural network architecture, non-regular languages, minimal finite automaton, mechanical inference, arbitrary-length strings, behavioral regimes, longitudinal examination, network capacity, categorized exemplars
