A fast learning algorithm for deep belief nets (2006)

by Geoffrey E. Hinton, Simon Osindero
Venue: Neural Computation
Citations: 970 (49 self)

BibTeX

@ARTICLE{Hinton06afast,
    author = {Geoffrey E. Hinton and Simon Osindero},
    title = {A fast learning algorithm for deep belief nets},
    journal = {Neural Computation},
    year = {2006},
    volume = {18},
    pages = {1527--1554}
}


Abstract

We show how to use “complementary priors” to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modelled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
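The greedy, layer-wise idea in the abstract can be sketched as a stack of restricted Boltzmann machines, each trained with one-step contrastive divergence (CD-1) on the hidden activities of the layer below. This is a minimal illustrative NumPy sketch, not the paper's implementation: the layer sizes, learning rate, and epoch count are arbitrary assumptions, and the paper's contrastive wake-sleep fine-tuning stage is omitted entirely.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann machine trained with 1-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden, rng):
        self.rng = rng
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_vis = np.zeros(n_visible)
        self.b_hid = np.zeros(n_hidden)

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_hid)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_vis)

    def cd1_update(self, v0, lr=0.1):
        # Positive phase: sample hidden units driven by the data.
        h0_prob = self.hidden_probs(v0)
        h0 = (self.rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one Gibbs step back down and up again.
        v1_prob = self.visible_probs(h0)
        h1_prob = self.hidden_probs(v1_prob)
        batch = v0.shape[0]
        # CD-1 gradient approximation: data statistics minus reconstruction statistics.
        self.W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
        self.b_vis += lr * (v0 - v1_prob).mean(axis=0)
        self.b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
        return float(np.mean((v0 - v1_prob) ** 2))  # reconstruction error

def greedy_pretrain(data, layer_sizes, epochs=10, rng=None):
    """Train a stack of RBMs one layer at a time; each new RBM models the
    hidden activities produced by the layer below it."""
    if rng is None:
        rng = np.random.default_rng(0)
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden, rng)
        for _ in range(epochs):
            rbm.cd1_update(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)  # propagate up to train the next layer
    return rbms
```

In the paper's full procedure, this stack is then untied into recognition and generative weights and fine-tuned with a contrastive version of the wake-sleep algorithm, with the top two layers kept as an undirected associative memory; the sketch above covers only the fast greedy initialization.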

Keyphrases

deep belief net, fast learning algorithm, greedy algorithm, complementary prior, associative memory, discriminative learning algorithm, generative model, low-dimensional manifold, handwritten digit image, undirected associative memory, directed connection, hidden layer, joint distribution, wake-sleep algorithm, densely connected belief net, digit classification, contrastive version, learning procedure, directed belief network, free-energy landscape, top-level associative memory
