Greedy layer-wise training of deep networks (2007)

by Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, Université de Montréal, Montréal, Québec
Venue: NIPS
Citations: 186 (31 self)

Active Bibliography

21 Representational Power of Restricted Boltzmann Machines and Deep Belief Networks – 2007
42 Exploring strategies for training deep neural networks
2 Shallow vs. Deep Sum-Product Networks
77 Context-Dependent Pre-trained Deep Neural Networks for Large Vocabulary Speech Recognition – 2012
9 Representation Learning: A Review and New Perspectives – 2012
The Utility of Knowledge Transfer with Noisy Training Sets
Visual Object Recognition Using Generative Models of Images – 2010
3 On the expressive power of deep architectures – 2011
Hierarchical Feature Extraction for Learning from Complex, High-Dimensional Data – 2008
An Introduction to Deep Learning
2 Large margin classification in infinite neural networks
20 Stacks of Convolutional Restricted Boltzmann Machines for Shift-Invariant Feature Learning
Relevance Realization and the Emerging Framework in Cognitive Science – Journal of Logic and Computation, 2009
Learning Long-Range Vision for an Offroad Robot – 2008
Deep, Narrow Sigmoid Belief Networks Are Universal Approximators
On Herding in Deep Networks – 2010
16 Dimensionality Reduction: A Comparative Review – 2008