Entropy-based pruning of backoff language models
In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop
"... A criterion for pruning parameters from Ngram backoff language models is developed, based on the relative entropy between the original and the pruned model. It is shown that the relative entropy resulting from pruning a single Ngram can be computed exactly and efficiently for backoff models. The r ..."
Abstract

A criterion for pruning parameters from N-gram backoff language models is developed, based on the relative entropy between the original and the pruned model. It is shown that the relative entropy resulting from pruning a single N-gram can be computed exactly and efficiently for backoff models. The relative entropy measure can be expressed as a relative change in training set perplexity. This leads to a simple pruning criterion whereby all N-grams that change perplexity by less than a threshold are removed from the model. Experiments show that a production-quality Hub4 LM can be reduced to 26% of its original size without increasing recognition error. We also compare the approach to a heuristic pruning criterion by Seymore and Rosenfeld [9], and show that their approach can be interpreted as an approximation to the relative entropy criterion. Experimentally, both approaches select similar sets of N-grams (about 85% overlap), with the exact relative entropy criterion giving marginally better performance.
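
To make the criterion concrete, below is a minimal Python sketch of how the cost of pruning a single N-gram from a backoff model could be computed. The data layout (`explicit`, `lower_prob`, `alpha`, `hist_prob`) and the exact bookkeeping are assumptions reconstructed from the abstract, not code from the paper; treat it as an illustration of the idea, not the authors' implementation.

```python
import math

def pruning_cost(h, w, explicit, lower_prob, alpha, hist_prob):
    """Relative-entropy cost of dropping the explicit estimate for
    N-gram (h, w) from a backoff model -- a sketch reconstructed from
    the abstract, with hypothetical data structures:

    explicit[h]  : dict word -> p(w|h) for N-grams stored at history h
    lower_prob   : callable (h, w) -> lower-order probability p(w|h')
    alpha[h]     : current backoff weight for history h
    hist_prob[h] : marginal probability p(h) of the history
    """
    p_hw = explicit[h][w]          # p(w|h), the estimate to be pruned
    p_lo = lower_prob(h, w)        # p(w|h'), its backoff replacement

    # Probability mass currently routed through backoff at history h,
    # and the lower-order mass it maps onto:
    #   alpha(h) = num / den, with num = 1 - sum of explicit p(.|h)
    # (assumes some mass is backed off, i.e. num > 0).
    num = 1.0 - sum(explicit[h].values())
    den = num / alpha[h]

    # After pruning, (h, w) joins the backed-off set on both sides,
    # so the backoff weight is renormalized.
    new_alpha = (num + p_hw) / (den + p_lo)

    # Relative entropy D(p || p'): the pruned N-gram's log probability
    # changes, and every already-backed-off word is rescaled by
    # new_alpha / alpha.
    d = -hist_prob[h] * (
        p_hw * (math.log(p_lo) + math.log(new_alpha) - math.log(p_hw))
        + (math.log(new_alpha) - math.log(alpha[h])) * num
    )
    # Per the abstract, this can be read as a relative change in
    # training-set perplexity: exp(D) - 1.
    return math.exp(d) - 1.0

# Usage sketch: keep only N-grams whose pruning cost meets a threshold.
# kept = {(h, w) for h in explicit for w in explicit[h]
#         if pruning_cost(h, w, explicit, lower_prob, alpha, hist_prob) >= 1e-8}
```

With this framing, the pruning criterion from the abstract reduces to a single threshold on the returned perplexity change: N-grams whose removal changes perplexity by less than the threshold are dropped, and the backoff weights are recomputed afterward.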