Exploiting Syntactic Structure for Natural Language Modeling
Johns Hopkins University
This thesis presents an approach to exploiting syntactic structure in natural language to improve language models for speech recognition. The structured language model merges techniques from automatic parsing and language modeling through an original probabilistic parameterization of a shift-reduce parser. The model is trained with a maximum likelihood reestimation procedure belonging to the class of expectation-maximization algorithms. Experiments on the Wall Street Journal, Switchboard, and Broadcast News corpora show improvements in both perplexity and word error rate (evaluated by word lattice rescoring) over the standard 3-gram language model. The significance of the thesis lies in presenting an original approach to language modeling that uses the hierarchical, syntactic structure of natural language to improve on current 3-gram modeling techniques for large-vocabulary speech recognition.
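The perplexity metric used to compare the structured language model against the 3-gram baseline can be sketched as follows. This is a generic illustration of the standard perplexity computation, not code from the thesis; the probability values below are invented purely for illustration and a model assigning higher probability to the observed words yields lower perplexity.

```python
import math

def perplexity(word_probs):
    """Perplexity of a word sequence given the per-word probabilities
    P(w_i | history) that a language model assigns to it:
    PPL = exp( -(1/N) * sum_i log P(w_i | history) )."""
    n = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / n)

# Hypothetical per-word probabilities assigned by two models to the
# same 4-word sentence (illustrative values only, not thesis results).
trigram_probs    = [0.20, 0.05, 0.10, 0.08]
structured_probs = [0.22, 0.07, 0.12, 0.09]

# The model with uniformly higher word probabilities has lower perplexity.
print(perplexity(trigram_probs))
print(perplexity(structured_probs))
```

A sanity check on the formula: a model assigning probability 0.5 to every word has perplexity exactly 2, matching the intuition of perplexity as the average branching factor the model faces.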