Quick Training of Probabilistic Neural Nets by Importance Sampling (2003)

by Yoshua Bengio, Jean-Sébastien Senécal
Citations: 12 (6 self)

Active Bibliography

152 A Neural Probabilistic Language Model – Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin - 2003
726 A Study of Smoothing Methods for Language Models Applied to Ad Hoc Information Retrieval – Chengxiang Zhai, John Lafferty
874 An Empirical Study of Smoothing Techniques for Language Modeling – Stanley F. Chen - 1998
706 Accurate Unlexicalized Parsing – Dan Klein, Christopher D. Manning - 2003
834 A Maximum-Entropy-Inspired Parser – Eugene Charniak - 1999
857 Learning Logical Definitions from Relations – J. R. Quinlan - 1990
1095 A Maximum Entropy Approach to Natural Language Processing – Adam L. Berger, Stephen A. Della Pietra, Vincent J. Della Pietra - 1996
594 A Statistical Approach to Machine Translation – Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Fredrick Jelinek, John D. Lafferty, Robert L. Mercer, Paul S. Roossin - 1990
715 Class-Based n-gram Models of Natural Language – Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, Jenifer C. Lai - 1992