Results 1 - 3 of 3
Study on interaction between entropy pruning and Kneser-Ney smoothing
in Proceedings of Interspeech, 2010
Abstract

Cited by 10 (1 self)
The paper presents an in-depth analysis of a lesser-known interaction between Kneser-Ney smoothing and entropy pruning that leads to severe degradation in language model performance under aggressive pruning regimes. Experiments in a data-rich setup such as google.com voice search show a significant impact in WER as well: pruning Kneser-Ney and Katz models to 0.1% of their original size impacts speech recognition accuracy significantly, approx. 10% relative.
Recursive hashing and one-pass, . . .
, 2007
Abstract
Many applications use sequences of n consecutive symbols (n-grams). We review n-gram hashing and prove that recursive hash families are pairwise independent at best. We prove that hashing by irreducible polynomials is pairwise independent whereas hashing by cyclic polynomials is quasi-pairwise independent: we make it pairwise independent by discarding n − 1 bits. One application of hashing is to estimate the number of distinct n-grams, a view-size estimation problem. While view sizes can be estimated by sampling under statistical assumptions, we desire a statistically unassuming algorithm with universally valid accuracy bounds. Most related work has focused on repeatedly hashing the data, which is prohibitive for large data sources. We prove that a one-pass, one-hash algorithm is sufficient for accurate estimates if the hashing is sufficiently independent. For example, we can improve by a factor of 2 the theoretical bounds on estimation accuracy by replacing pairwise independent hashing with 4-wise independent hashing. We show that recursive random hashing is sufficiently independent in practice. Perhaps surprisingly, our experiments showed that hashing by cyclic polynomials, which is only quasi-pairwise independent, sometimes outperformed 10-wise independent hashing while being twice as fast. For comparison, we measured the time to obtain exact n-gram counts using suffix arrays and show that, while we used hardly any storage, we were an order of magnitude faster. The experiments used a large collection of English text from Project Gutenberg as well as synthetic data.
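The recursive hashing by cyclic polynomials that the abstract describes can be sketched as a rolling hash: each symbol is mapped to a random word, and the hash of the next n-gram is derived from the previous one with a rotation and two XORs instead of rehashing all n symbols. The hash width L, the table seed, and the byte alphabet below are illustrative assumptions, not the paper's exact parameters (and the discarding of n − 1 bits for pairwise independence is omitted here).

```python
import random

L = 19  # hash width in bits (degree of the cyclic polynomial); an assumption

def make_table(seed=0):
    """One random L-bit word per byte value."""
    rng = random.Random(seed)
    return [rng.getrandbits(L) for _ in range(256)]

def rotl(x, r):
    """Rotate an L-bit word left by r positions."""
    r %= L
    return ((x << r) | (x >> (L - r))) & ((1 << L) - 1)

def hash_ngram(table, gram):
    """Direct (non-recursive) hash: h = rot^(n-1)(t[c0]) ^ ... ^ t[c(n-1)]."""
    h = 0
    for c in gram:
        h = rotl(h, 1) ^ table[c]
    return h

def rolling_hashes(table, data, n):
    """Yield the hash of every n-gram of data in one pass.
    Each step rotates the old hash, XORs out the departing symbol's
    contribution (now rotated n times), and XORs in the new symbol."""
    h = hash_ngram(table, data[:n])
    yield h
    for i in range(n, len(data)):
        h = rotl(h, 1) ^ rotl(table[data[i - n]], n) ^ table[data[i]]
        yield h
```

Each update is O(1) regardless of n, which is what makes the one-pass counting schemes in the abstract practical on large streams.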
Abstract
, 2008
Abstract
In multimedia, text, or bioinformatics databases, applications query sequences of n consecutive symbols called n-grams. Estimating the number of distinct n-grams is a view-size estimation problem. While view sizes can be estimated by sampling under statistical assumptions, we desire an unassuming algorithm with universally valid accuracy bounds. Most related work has focused on repeatedly hashing the data, which is prohibitive for large data sources. We prove that a one-pass, one-hash algorithm is sufficient for accurate estimates if the hashing is sufficiently independent. To reduce costs further, we investigate recursive random hashing algorithms and show that they are sufficiently independent in practice. We compare our running times with exact counts using suffix arrays and show that, while we use hardly any storage, we are an order of magnitude faster. The approach is further extended to a one-pass/one-hash computation of n-gram entropy and iceberg counts. The experiments use a large collection of English text from the Gutenberg Project as well as synthetic data.
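The one-pass, one-hash view-size estimation the abstract refers to can be illustrated with a standard distinct-count sketch. The k-minimum-values (KMV) estimator below is one common choice and is an assumption here, not necessarily the exact estimator analyzed in the paper: it keeps only the k smallest distinct hash values seen, and infers the number of distinct items from how small the k-th smallest is.

```python
import heapq

def estimate_distinct(hashes, k=256, bits=64):
    """Estimate the number of distinct items from a single pass over their
    uniform hash values, retaining only the k smallest distinct hashes."""
    heap = []        # max-heap via negation: the k smallest distinct hashes
    members = set()  # hashes currently held in the heap
    for h in hashes:
        if h in members:
            continue                      # duplicate of a retained hash
        if len(heap) < k:
            heapq.heappush(heap, -h)
            members.add(h)
        elif h < -heap[0]:                # smaller than the current k-th smallest
            evicted = -heapq.heappushpop(heap, -h)
            members.discard(evicted)
            members.add(h)
    if len(heap) < k:
        return len(heap)                  # fewer than k distinct hashes: exact
    kth_smallest = -heap[0]
    # k-th smallest of d uniform hashes in [0, 2^bits) is about k * 2^bits / d
    return int((k - 1) * (2 ** bits) / kth_smallest)
```

Memory stays O(k) no matter how long the stream is, and any sufficiently independent n-gram hash can feed it, which matches the abstract's point that estimate quality hinges on how independent the hashing is.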