A Fast Learning Algorithm for Deep Belief Nets (2006)

by Geoffrey E. Hinton, Simon Osindero, Yee-Whye Teh
Citations: 969 (49 self)

BibTeX

@MISC{Hinton06afast,
    author = {Geoffrey E. Hinton and Simon Osindero and Yee-whye Teh},
    title = {A Fast Learning Algorithm for Deep Belief Nets},
    year = {2006}
}

Abstract

We show how to use “complementary priors” to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
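
The layer-by-layer procedure described in the abstract can be illustrated with a small amount of code. The sketch below, in Python with NumPy, trains each layer as a restricted Boltzmann machine using one-step contrastive divergence (CD-1) and then passes its hidden activations upward as the training data for the next layer. The 500-500-2000 layer sizes mirror the network described in the paper, while the learning rate, epoch count, batch size, and synthetic binary data are illustrative assumptions; the contrastive wake-sleep fine-tuning stage, and the joint training of the top-level associative memory on labels, are omitted.

# Minimal sketch of greedy, layer-wise pretraining with RBMs and CD-1.
# Hyperparameters and synthetic data are illustrative assumptions, not the
# paper's exact experimental setup.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0):
        """One step of contrastive divergence (CD-1) on a mini-batch v0."""
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)          # one-step reconstruction
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W   += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def greedy_pretrain(data, layer_sizes, epochs=5):
    """Train a stack of RBMs one layer at a time (greedy pretraining)."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            for batch in np.array_split(x, max(1, len(x) // 64)):
                rbm.cd1_update(batch)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)   # activations become data for the next layer
    return rbms

if __name__ == "__main__":
    # Synthetic binary "images" stand in for the MNIST digits used in the paper.
    fake_digits = (rng.random((512, 784)) < 0.1).astype(float)
    stack = greedy_pretrain(fake_digits, layer_sizes=[500, 500, 2000])
    print([r.W.shape for r in stack])

In the paper, the weights learned this way initialize a deep directed belief net whose top two layers form the undirected associative memory, and the whole model is then fine-tuned with the contrastive wake-sleep procedure; the sketch above stops at the greedy pretraining stage.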

Keyphrases

fast learning algorithm, deep belief net, greedy algorithm, complementary prior, wake-sleep algorithm, digit classification, contrastive version, learning procedure, directed belief network, free-energy landscape, belief net, long ravine, many hidden layers, top-level associative memory, associative memory, discriminative learning algorithm, generative model, explaining-away effect, good generative model, low-dimensional manifold, handwritten digit image, undirected associative memory, directed connection, hidden layer, joint distribution
