Results 1 – 5 of 5
Language Modeling for Efficient Beam-Search
Computer Speech and Language, 1995
Abstract

Cited by 5 (4 self)
This paper considers the problems of estimating bigram language models and of efficiently representing them as a finite-state network, which can be employed by a hidden-Markov-model-based, beam-search, continuous speech recognizer.
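The bigram estimation step can be sketched in a few lines. This is a minimal add-alpha model for illustration only; the paper's own estimators use more refined discounting, and the finite-state representation is not shown. All names here are hypothetical:

```python
from collections import Counter

def bigram_lm(sentences, alpha=0.1):
    """Bigram language model with simple add-alpha smoothing.
    Illustrative sketch only: the paper uses more refined
    discounting and a compact finite-state network representation."""
    unigrams, bigrams = Counter(), Counter()
    for words in sentences:
        tokens = ["<s>"] + words + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)

    def prob(w_prev, w):
        # P(w | w_prev) = (c(w_prev, w) + alpha) / (c(w_prev) + alpha * |V|)
        return (bigrams[(w_prev, w)] + alpha) / (unigrams[w_prev] + alpha * vocab_size)

    return prob

p = bigram_lm([["the", "cat", "sat"], ["the", "dog", "sat"]])
assert p("the", "cat") > p("the", "sat")  # seen bigram outranks unseen one
```

Smoothing matters here because a beam-search decoder must score every hypothesized word pair, including pairs never observed in training.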
An Optimum Classifier Approximation for Network-Based Handwritten Character Recognition
, 1991
Abstract

Cited by 1 (0 self)
An approximation of the Bayes decision rule and its implementation on a two-layered network are described. The net is trained in two phases: first, probabilities of the discrete-valued input features are learnt by applying a Good-Turing-based estimator; second, net weights are estimated by applying an adaptive gradient-descent technique. Experiments were performed on a database of 67,000 real-life handwritten numerals. By using input units that read subpatterns of the character bitmap, a recognition rate of 93.30% is achieved, with a 1.39% substitution rate. The paper shows that computational complexity and implementation characteristics make this approach a possible competitor of artificial neural networks described in the literature. 1 Introduction Classification is the problem of mapping a set of patterns into a fixed number of classes. With a statistical approach, classification is usually based on an a posteriori probability. Moreover, many non-statistical classifiers can be seen...
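The first training phase relies on Good-Turing count adjustment, r* = (r + 1) N_{r+1} / N_r. A minimal, unsmoothed sketch of that estimator follows; function names are illustrative, and real implementations smooth the count-of-counts curve and renormalize the resulting probabilities:

```python
from collections import Counter

def good_turing_probs(counts):
    """Simple (unsmoothed) Good-Turing estimate.
    counts: dict mapping feature value -> observed count.
    Returns per-value probabilities plus the mass reserved for unseen values.
    Sketch only: probabilities are not renormalized, and practical
    implementations smooth the count-of-counts curve first."""
    n = sum(counts.values())
    freq_of_freq = Counter(counts.values())  # N_r: how many values occur r times

    def adjusted(r):
        # r* = (r + 1) * N_{r+1} / N_r; fall back to r when N_{r+1} is zero
        nr, nr1 = freq_of_freq[r], freq_of_freq[r + 1]
        return (r + 1) * nr1 / nr if nr1 else r

    probs = {value: adjusted(r) / n for value, r in counts.items()}
    p_unseen = freq_of_freq[1] / n  # total mass reserved for unseen values
    return probs, p_unseen

probs, p_unseen = good_turing_probs({"a": 3, "b": 1, "c": 1, "d": 2})
assert p_unseen > 0  # singletons donate mass to unseen feature values
```

The point of the adjustment is exactly what the abstract needs: discrete feature values never seen for a given class still receive nonzero probability.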
Toolkits and ARPA file format
Abstract
Goal: find the words w∗ in a speech signal x such that:

    w∗ = argmax_w Pr(x | w) Pr(w)    (1)

Problems:
• language modeling (LM): estimating Pr(w)
• acoustic modeling (AM): estimating Pr(x | w)
• search problem: computing (1)

The AM sums over the hidden state sequences s of a Markov process of (x, s) given w:

    Pr(x | w) = Σ_s Pr(x, s | w)

Hidden Markov Model: hidden states "link" speech frames to words.
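The sum over hidden state sequences need not be enumerated explicitly: the standard forward recursion computes it in O(S²T) time. A minimal NumPy sketch, with toy parameters invented for illustration (the recursion itself is the standard one, but none of these numbers come from the slides):

```python
import numpy as np

def forward_likelihood(pi, A, B):
    """Pr(x) = sum over all hidden state paths s of Pr(x, s),
    computed with the forward recursion instead of explicit enumeration.
    pi: initial state probs, shape (S,); A: transition matrix, shape (S, S);
    B: emissions, shape (S, T), with B[j, t] = Pr(x_t | state j)."""
    alpha = pi * B[:, 0]                 # alpha_1(j) = pi_j * b_j(x_1)
    for t in range(1, B.shape[1]):
        alpha = (alpha @ A) * B[:, t]    # alpha_t(j) = sum_i alpha_{t-1}(i) * A[i, j] * b_j(x_t)
    return float(alpha.sum())

# Toy 2-state, 2-frame example (all numbers made up)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.2], [0.1, 0.8]])
assert abs(forward_likelihood(pi, A, B) - 0.2276) < 1e-9
```

A decoder then combines this acoustic score with the LM score Pr(w) when maximizing (1); beam search prunes the recursion to keep only the most promising states at each frame.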