A learning algorithm for Boltzmann machines (1985)

by D. H. Ackley, G. E. Hinton, T. J. Sejnowski
Venue: Cognitive Science
Citations: 584 - 13 self

BibTeX

@ARTICLE{Ackley85alearning,
    author = {D. H. Ackley and G. E. Hinton and T. J. Sejnowski},
    title = {A learning algorithm for Boltzmann machines},
    journal = {Cognitive Science},
    year = {1985},
    volume = {9},
    pages = {147--169}
}

Abstract

The computational power of massively parallel networks of simple processing elements resides in the communication bandwidth provided by the hardware connections between elements. These connections can allow a significant fraction of the knowledge of the system to be applied to an instance of a problem in a very short time. One kind of computation for which massively parallel networks appear to be well suited is large constraint satisfaction searches, but to use the connections efficiently two conditions must be met: First, a search technique that is suitable for parallel networks must be found. Second, there must be some way of choosing internal representations which allow the preexisting hardware connections to be used efficiently for encoding the constraints in the domain being searched. We describe a general parallel search method, based on statistical mechanics, and we show how it leads to a general learning rule for modifying the connection strengths so as to incorporate knowledge about a task domain in an efficient way. We describe some simple examples in which the learning algorithm creates internal representations that are demonstrably the most efficient way of using the preexisting connectivity structure.
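
The abstract points to two ingredients: a stochastic, annealed search over binary unit states (the parallel constraint-satisfaction search based on statistical mechanics) and a learning rule that adjusts connection strengths from co-activation statistics gathered with the visible units clamped to data versus running freely. The sketch below is a minimal toy illustration of those ideas in Python/NumPy; the unit counts, annealing schedule, sample sizes, and helper names (gibbs_sweep, anneal, coactivations, learning_step) are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumed): visible units hold the problem instance,
# hidden units form the internal representation.
N_VISIBLE, N_HIDDEN = 4, 2
N = N_VISIBLE + N_HIDDEN

W = np.zeros((N, N))      # symmetric connection strengths, zero diagonal
theta = np.zeros(N)       # unit biases

def gibbs_sweep(s, T, update_idx):
    """One sweep of stochastic binary-unit updates at temperature T."""
    for i in rng.permutation(update_idx):
        delta_E = W[i] @ s + theta[i]              # energy gain from turning unit i on
        p_on = 1.0 / (1.0 + np.exp(-delta_E / T))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

def anneal(s, update_idx, schedule=(4.0, 2.0, 1.0), sweeps=10):
    """Simulated annealing: lower T gradually, then sample near T = 1."""
    for T in schedule:
        for _ in range(sweeps):
            s = gibbs_sweep(s, T, update_idx)
    return s

def coactivations(clamp=None, n_samples=30):
    """Estimate <s_i s_j>, with visible units clamped to `clamp` if given."""
    free_idx = np.arange(N) if clamp is None else np.arange(N_VISIBLE, N)
    stats = np.zeros((N, N))
    for _ in range(n_samples):
        s = rng.integers(0, 2, N).astype(float)
        if clamp is not None:
            s[:N_VISIBLE] = clamp
        s = anneal(s, free_idx)
        stats += np.outer(s, s)
    return stats / n_samples

def learning_step(data, lr=0.05):
    """Weight change proportional to <s_i s_j>_clamped - <s_i s_j>_free."""
    global W
    clamped = np.mean([coactivations(clamp=v) for v in data], axis=0)
    free = coactivations()
    dW = lr * (clamped - free)
    np.fill_diagonal(dW, 0.0)
    W += (dW + dW.T) / 2.0          # preserve symmetry of the weight matrix

# Example: a tiny set of visible patterns the network should learn to model.
data = [np.array([1, 0, 1, 0], float), np.array([0, 1, 0, 1], float)]
for _ in range(20):
    learning_step(data)

Under these assumptions, repeated calls to learning_step push the free-running co-activation statistics toward the clamped ones, which is the sense in which the connection strengths come to encode the constraints of the task domain.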

Keyphrases

learning algorithm, Boltzmann machine, internal representation, efficient way, parallel network, computational power, short time, communication bandwidth, statistical mechanics, hardware connection, preexisting hardware connection, large constraint satisfaction search, general learning rule, search technique, significant fraction, connection strength, preexisting connectivity structure, simple processing element, simple example, general parallel search method, task domain
