Hierarchical probabilistic neural network language model (2005)

by Frederic Morin, Yoshua Bengio
Venue: AISTATS’05
Citations:33 - 2 self

Documents Related by Co-Citation

145 A Neural Probabilistic Language Model – Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin - 2003
44 A scalable hierarchical distributed language model – Andriy Mnih, Geoffrey Hinton - 2008
113 A unified architecture for natural language processing: Deep neural networks with multitask learning – Ronan Collobert, Jason Weston - 2008
43 Three New Graphical Models for Statistical Language Modelling – Andriy Mnih, Geoffrey Hinton
30 Recurrent neural network based language model – Tomáš Mikolov, Martin Karafiát, Jan Černocký, Sanjeev Khudanpur - 2010
850 An Empirical Study of Smoothing Techniques for Language Modeling – Stanley F. Chen - 1998
33 Continuous space language models – Holger Schwenk - 2007
11 Quick Training of Probabilistic Neural Nets by Importance Sampling – Yoshua Bengio, Jean-Sébastien Senécal - 2003
11 Training neural network language models on very large corpora – Holger Schwenk, Jean-Luc Gauvain - 2005
55 Word representations: A simple and general method for semi-supervised learning – Joseph Turian, Lev Ratinov, Yoshua Bengio - 2010
698 Class-Based n-gram Models of Natural Language – Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, Jenifer C. Lai - 1992
2350 Latent Dirichlet Allocation – David M. Blei, Andrew Y. Ng, Michael I. Jordan - 2003
2703 Indexing by latent semantic analysis – Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, Richard Harshman - 1990
509 Training Products of Experts by Minimizing Contrastive Divergence – Geoffrey Hinton - 2000
755 SRILM -- An extensible language modeling toolkit – Andreas Stolcke - 2002
29 Classes for Fast Maximum Entropy Training – Joshua Goodman
11 Distributed Latent Variable Models of Lexical Co-occurrences – John Blitzer, Amir Globerson, Fernando Pereira - 2005
5 Adaptive Importance Sampling to Accelerate Training of a Neural Probabilistic Language Model – Jean-Sébastien Senécal, Yoshua Bengio - 2003
11 Structured output layer neural network language model – Hai-Son Le, I Oparin, A Allauzen, J-L Gauvain, F Yvon - 2011