Quick Training of Probabilistic Neural Nets by Importance Sampling (2003)

by Yoshua Bengio, Jean-Sébastien Senécal
Citations: 17 (7 self)

Active Bibliography

362 A Neural Probabilistic Language Model – Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin - 2003
1123 An Empirical Study of Smoothing Techniques for Language Modeling – Stanley F. Chen - 1998
890 A Study of Smoothing Methods for Language Models Applied to Ad Hoc Information Retrieval – Chengxiang Zhai, John Lafferty
963 Accurate Unlexicalized Parsing – Dan Klein, Christopher D. Manning - 2003
586 Generating typed dependency parses from phrase structure parses – Marie-Catherine de Marneffe, Bill MacCartney, Christopher D. Manning - 2006
939 A Maximum-Entropy-Inspired Parser – Eugene Charniak - 1999
578 Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory – James L. McClelland, Bruce L. McNaughton, Randall C. O'Reilly - 1995
514 Parallel Networks that Learn to Pronounce English Text – Terrence J. Sejnowski, Charles R. Rosenberg - 1987
597 A distributed, developmental model of word recognition and naming – Mark S. Seidenberg, James L. McClelland - 1989