The Power of Amnesia: Learning Probabilistic Automata with Variable Memory Length
 Machine Learning
, 1996
"... . We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Suffix Automata (PSA). Though hardness results are known for learning distributions gene ..."
Abstract

Cited by 172 (16 self)
 Add to MetaCart
We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Suffix Automata (PSA). Though hardness results are known for learning distributions generated by general probabilistic automata, we prove that the algorithm we present can efficiently learn distributions generated by PSAs. In particular, we show that for any target PSA, the KL-divergence between the distribution generated by the target and the distribution generated by the hypothesis the learning algorithm outputs can be made small with high confidence in polynomial time and sample complexity. The learning algorithm is motivated by applications in human-machine interaction. Here we present two applications of the algorithm. In the first one we apply the algorithm in order to construct a model of the English language, and use this model to correct corrupted text. In the second ...
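The prediction side of a variable memory length model can be illustrated with a short sketch: train next-symbol counts for every context up to a maximum length, then predict from the longest context actually seen in training. This is only an illustrative suffix-based predictor, not the paper's PSA learning algorithm (which adds smoothing and principled criteria for which suffixes to retain); all names here are hypothetical.

```python
from collections import defaultdict

def train_suffix_counts(text, max_len):
    """Count next-symbol occurrences after every context of length 0..max_len."""
    counts = defaultdict(lambda: defaultdict(int))
    for i, sym in enumerate(text):
        for l in range(min(i, max_len) + 1):
            counts[text[i - l:i]][sym] += 1
    return counts

def predict(counts, history, max_len):
    """Predict the next symbol using the longest context observed in training,
    falling back toward the empty context (variable memory length)."""
    for l in range(min(len(history), max_len), -1, -1):
        ctx = history[-l:] if l else ""
        if ctx in counts:
            dist = counts[ctx]
            return max(dist, key=dist.get)
    return None
```

For example, after training on "abracadabra" with `max_len=2`, the context "ab" has only ever been followed by "r", so `predict(counts, "ab", 2)` returns `"r"`; an unseen context falls back to the unconditional (empty-context) distribution.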
Hidden Markov processes
 IEEE Trans. Inform. Theory
, 2002
"... Abstract—An overview of statistical and informationtheoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discretetime finitestate homogeneous Markov chain observed through a discretetime memoryless invariant channel. In recent years, the work of Baum and Petrie on finite ..."
Abstract

Cited by 172 (3 self)
 Add to MetaCart
Abstract—An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes. These processes generalize HMPs. New algorithms were developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related topics are reviewed in this paper. Index Terms—Baum–Petrie algorithm, entropy ergodic theorems, finite-state channels, hidden Markov models, identifiability, Kalman filter, maximum-likelihood (ML) estimation, order estimation, recursive parameter estimation, switching autoregressive processes, Ziv inequality.
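For the finite-state, finite-alphabet case, the basic likelihood computation underlying ML estimation for HMPs can be sketched with the scaled forward recursion. A minimal version, assuming a row-stochastic transition matrix `A`, emission matrix `B`, and initial distribution `pi` (names and shapes are illustrative assumptions, not notation from the survey):

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a finite-state,
    finite-alphabet HMM via the forward recursion, rescaled at each
    step to avoid numerical underflow on long sequences."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()              # scaling factor; log-likelihood accumulates log c
        loglik += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return loglik + np.log(alpha.sum())
```

The recursion runs in O(T N^2) time for T observations and N states; the accumulated logs of the scaling factors recover the exact log-likelihood.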
Fast Decoding and Optimal Decoding for Machine Translation
 In Proceedings of ACL 39
, 2001
"... A good decoding algorithm is critical ..."
Part-of-Speech Tagging and Partial Parsing
 Corpus-Based Methods in Language and Speech
, 1996
"... m we can carve o# next. `Partial parsing' is a cover term for a range of di#erent techniques for recovering some but not all of the information contained in a traditional syntactic analysis. Partial parsing techniques, like tagging techniques, aim for reliability and robustness in the face of the va ..."
Abstract

Cited by 98 (0 self)
 Add to MetaCart
m we can carve off next. 'Partial parsing' is a cover term for a range of different techniques for recovering some but not all of the information contained in a traditional syntactic analysis. Partial parsing techniques, like tagging techniques, aim for reliability and robustness in the face of the vagaries of natural text, by sacrificing completeness of analysis and accepting a low but nonzero error rate. 1 Tagging The earliest taggers [35, 51] had large sets of hand-constructed rules for assigning tags on the basis of words' character patterns and on the basis of the tags assigned to preceding or following words, but they had only small lexica, primarily for exceptions to the rules. TAGGIT [35] was used to generate an initial tagging of the Brown corpus, which was then hand-edited. (Thus it provided the data that has since been used to train other taggers [20].) The tagger described by Garside [56, 34], CLAWS, was a probabilistic version of TAGGIT, and the DeRose tagger improved on
Connectionist Probability Estimation in HMM Speech Recognition
 IEEE Transactions on Speech and Audio Processing
, 1992
"... This report is concerned with integrating connectionist networks into a hidden Markov model (HMM) speech recognition system, This is achieved through a statistical understanding of connectionist networks as probability estimators, first elucidated by Herve Bourlard. We review the basis of HMM speech ..."
Abstract

Cited by 62 (16 self)
 Add to MetaCart
This report is concerned with integrating connectionist networks into a hidden Markov model (HMM) speech recognition system. This is achieved through a statistical understanding of connectionist networks as probability estimators, first elucidated by Hervé Bourlard. We review the basis of HMM speech recognition, and point out the possible benefits of incorporating connectionist networks. We discuss some issues necessary to the construction of a connectionist HMM recognition system, and describe the performance of such a system, including evaluations on the DARPA database, in collaboration with Mike Cohen and Horacio Franco of SRI International. In conclusion, we show that a connectionist component improves a state-of-the-art HMM system. Over the past few years, connectionist models have been widely proposed as a potentially powerful approach to speech recognition (e.g. Makino et al. (1983), Huang et al. (1988) and Waibel et al. (1989)). However, whilst connec...
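The core statistical idea, due to Bourlard and colleagues, is that a network trained to estimate state posteriors P(q|x) can feed an HMM by dividing out the class priors, since by Bayes' rule P(q|x)/P(q) = P(x|q)/P(x), a likelihood scaled by a term constant across states. A minimal sketch of Viterbi decoding over such scaled scores (the function and variable names are assumptions for illustration, not the report's code):

```python
import numpy as np

def viterbi_scaled(log_trans, posteriors, priors):
    """Viterbi decoding where emission scores come from a connectionist
    network: the posterior P(q|x_t) is divided by the class prior P(q),
    giving the scaled likelihood P(x_t|q)/P(x_t)."""
    log_emit = np.log(posteriors) - np.log(priors)   # shape (T, N)
    T, N = log_emit.shape
    delta = log_emit[0].copy()
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (N, N): from-state x to-state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                    # backtrace the best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With uniform priors and transitions, this reduces to picking the most probable network output at each frame; non-uniform priors correct for the class imbalance seen during network training.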
The Candide system for machine translation
 In Proceedings of the ARPA Conference on Human Language Technology
, 1994
"... We present an overview of Candide, a system for automatic translation of French text to English text. Candide uses methods of information theory and statistics to develop a probability model of the translation process. This model, which is made to accord as closely as possible with a large body of F ..."
Abstract

Cited by 51 (5 self)
 Add to MetaCart
We present an overview of Candide, a system for automatic translation of French text to English text. Candide uses methods of information theory and statistics to develop a probability model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results.
Survey of the state of the art in human language technology
 Studies In Natural Language Processing, XII–XIII
, 1997
"... Sponsors: ..."
A unified framework for tree search decoding: rediscovering the sequential decoder
 IEEE Trans. Inform. Theory
, 2006
"... Abstract—We consider receiver design for coded transmission over linear Gaussian channels. We restrict ourselves to the class of lattice codes and formulate the joint detection and decoding problem as a closest lattice point search (CLPS). Here, a tree search framework for solving the CLPS is adopte ..."
Abstract

Cited by 50 (2 self)
 Add to MetaCart
Abstract—We consider receiver design for coded transmission over linear Gaussian channels. We restrict ourselves to the class of lattice codes and formulate the joint detection and decoding problem as a closest lattice point search (CLPS). Here, a tree search framework for solving the CLPS is adopted. In our framework, the CLPS algorithm is decomposed into the preprocessing and tree search stages. The role of the preprocessing stage is to expose the tree structure in a form matched to the search stage. We argue that the forward and feedback (matrix) filters of the minimum mean-square error decision feedback equalizer (MMSE-DFE) are instrumental for solving the joint detection and decoding problem in a single search stage. It is further shown that MMSE-DFE filtering allows for solving underdetermined linear systems and using lattice reduction methods to diminish complexity, at the expense of a marginal performance loss. For the search stage, we present a generic method, based on the branch and bound (BB) algorithm, and show that it encompasses all existing sphere decoders as special cases. The proposed generic algorithm further allows for an interesting classification of tree search decoders, sheds more light on the structural properties of all known sphere decoders, and inspires the design of more efficient decoders. In particular, an efficient decoding algorithm that resembles the well-known Fano sequential decoder is identified. The excellent performance–complexity tradeoff achieved by the proposed MMSE-DFE Fano decoder is established via simulation results and analytical arguments in several multiple-input multiple-output (MIMO) and intersymbol interference (ISI) scenarios. Index Terms—Closest lattice point search (CLPS), Fano decoder, lattice codes, sequential decoding, sphere decoding, tree search.
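The tree-search view can be illustrated with the bare sphere-decoder skeleton: once preprocessing has reduced the problem to an upper-triangular system, a depth-first branch-and-bound search over symbol hypotheses prunes any branch whose accumulated metric already exceeds the best complete solution found so far. A minimal sketch under simplifying assumptions (no MMSE-DFE preprocessing, no lattice reduction; names are hypothetical):

```python
import numpy as np

def sphere_decode(R, y, symbols):
    """Depth-first branch-and-bound search for argmin_s ||y - R s||^2 with
    R upper triangular and entries of s drawn from a finite symbol set --
    the basic skeleton shared by sphere decoders."""
    n = R.shape[0]
    best = [np.inf, None]                   # [best metric, best symbol vector]

    def descend(level, s, partial):
        if partial >= best[0]:
            return                          # bound: prune this branch
        if level < 0:
            best[0], best[1] = partial, s.copy()
            return
        for sym in symbols:                 # branch over symbol hypotheses
            s[level] = sym
            r = y[level] - R[level, level:] @ s[level:]
            descend(level - 1, s, partial + r * r)

    descend(n - 1, np.zeros(n), 0.0)
    return best[1]
```

Because R is upper triangular, the metric accumulates one coordinate at a time from the last level upward, which is exactly what makes the tree decomposition (and hence the pruning) possible.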
An efficient A* stack decoder algorithm for continuous speech recognition with a stochastic language model
 In Proc. IEEE ICASSP’93
, 1993
"... The stack decoder is an attractive algorithm for controlling the acoustic and language model matching in a continuous speech recognizer. A previous paper described a nearoptimal admissible Viterbi A * search algorithm for use with noncrossword acoustic models and nogrammar language models [16]. T ..."
Abstract

Cited by 49 (0 self)
 Add to MetaCart
The stack decoder is an attractive algorithm for controlling the acoustic and language model matching in a continuous speech recognizer. A previous paper described a near-optimal admissible Viterbi A* search algorithm for use with non-cross-word acoustic models and no-grammar language models [16]. This paper extends this algorithm to include unigram language models and describes a modified version of the algorithm which includes the full (forward) decoder, cross-word acoustic models and longer-span language models. The resultant algorithm is not admissible, but has been demonstrated to have a low probability of search error and to be very efficient.
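The control structure of a stack decoder can be sketched as best-first search over partial hypotheses: pop the entry with the best estimated total score, extend it by one word, and push the extensions back on the stack. The sketch below assumes an admissible (non-overestimating) heuristic, so it corresponds to the admissible variant discussed above; `expand`, `heuristic`, and `is_goal` are hypothetical callbacks, not the paper's interfaces:

```python
import heapq

def stack_decode(start, expand, heuristic, is_goal):
    """Best-first (A*) stack decoder over partial hypotheses. `expand(hyp)`
    yields (cost, next_hyp) pairs; `heuristic` estimates the remaining cost
    and must not overestimate it for the search to remain admissible."""
    stack = [(heuristic(start), 0.0, start, [start])]
    seen = set()
    while stack:
        _, cost, hyp, path = heapq.heappop(stack)   # best total-score estimate first
        if is_goal(hyp):
            return path, cost
        if hyp in seen:
            continue
        seen.add(hyp)
        for step_cost, nxt in expand(hyp):          # extend by one word
            g = cost + step_cost
            heapq.heappush(stack, (g + heuristic(nxt), g, nxt, [*path, nxt]))
    return None, float("inf")
```

The stack here is a priority queue ordered by partial cost plus heuristic; with a heuristic that can overestimate (as in the paper's non-admissible extension), the same loop still runs but the first goal popped is no longer guaranteed optimal.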
A Tree-Trellis Based Fast Search for Finding the N Best Sentence Hypotheses in Continuous Speech Recognition
"... In this paper a new, treetrellis based fast search for finding the N best sentence hypotheses in continuous speech recognition is proposed. The search consists of two parts: a forward, timesynchronous, trellis search and a backward, time asynchronous, tree search. In the first module the well know ..."
Abstract

Cited by 38 (2 self)
 Add to MetaCart
In this paper a new, tree-trellis based fast search for finding the N best sentence hypotheses in continuous speech recognition is proposed. The search consists of two parts: a forward, time-synchronous, trellis search and a backward, time-asynchronous, tree search. In the first module the well-known Viterbi algorithm is used for finding the best hypothesis and for preparing a map of all partial path scores time-synchronously. In the second module a tree search is used to grow partial paths backward and time-asynchronously. Each partial path in the backward tree search is rank-ordered in a stack by the corresponding full path score, which is computed by adding the partial path score to the best possible score of the remaining path obtained from the trellis path map. In each path-growing cycle, the current best partial path, which is at the top of the stack, is extended by one arc (word). The new tree-trellis search differs from the traditional time-synchronous Viterbi search in its ability to find not just the best but the N best paths of different word content. The new search also differs from the A* algorithm, or the stack algorithm, in its capability of providing an exact full-path score estimate of any given partial (i.e., incomplete) path before its completion. When compared with the best-candidate Viterbi search, the search complexities for finding the N best strings are rather low, i.e., only a fraction more computation is needed.
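The two-pass idea can be sketched on a weighted word graph: the forward pass plays the role of the time-synchronous trellis search, recording the best score from the source to every node (the trellis path map); the backward pass then grows partial paths from the sink, ranked by partial-path cost plus that exact forward score. A minimal sketch assuming arcs are supplied in topological order (all names hypothetical):

```python
import heapq

def nbest_paths(arcs, source, sink, n):
    """Tree-trellis style N-best search on a weighted DAG. The forward pass
    fills the 'trellis map' of best costs from `source`; the backward A*
    search ranks each partial path by its own cost plus the exact best cost
    of the remaining forward part, so paths complete in N-best order."""
    # forward pass: best cost from source to each node (arcs in topological order)
    best = {source: 0.0}
    preds = {}
    for u, v, w in arcs:
        preds.setdefault(v, []).append((u, w))
        if u in best and best.get(v, float("inf")) > best[u] + w:
            best[v] = best[u] + w
    # backward pass: grow partial paths from the sink, exact score estimate
    heap = [(best[sink], 0.0, [sink])]
    results = []
    while heap and len(results) < n:
        est, cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == source:
            results.append((est, path[::-1]))   # completed path, in rank order
            continue
        for u, w in preds.get(node, []):
            if u in best:
                heapq.heappush(heap, (best[u] + cost + w, cost + w, [*path, u]))
    return results
```

Because the estimate for every partial path is exact rather than merely admissible, completed paths pop off the heap strictly in order of total score, which is the property the abstract contrasts with the ordinary A*/stack algorithm.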