Results 1–10 of 33
Markovian Models for Sequential Data
, 1996
Abstract

Cited by 117 (2 self)
Hidden Markov Models (HMMs) are statistical models of sequential data that have been used successfully in many machine learning applications, especially for speech recognition. Furthermore, in the last few years, many new and promising probabilistic models related to HMMs have been proposed. We first summarize the basics of HMMs, and then review several recent related learning algorithms and extensions of HMMs, including in particular hybrids of HMMs with artificial neural networks, Input-Output HMMs (which are conditional HMMs using neural networks to compute probabilities), weighted transducers, variable-length Markov models, and Markov switching state-space models. Finally, we discuss some of the challenges of future research in this very active area.
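The forward algorithm at the core of the HMM machinery summarized above can be sketched in a few lines; the two-state model and binary observation alphabet below are invented for illustration, not taken from the paper.

```python
# Forward algorithm: computes P(obs) for a discrete-output HMM.
# pi: initial state probabilities; A: transitions; B: emissions.
# The toy parameters below are illustrative only.

def forward(obs, pi, A, B):
    """Return the likelihood of the observation sequence under the HMM."""
    n = len(pi)
    # Initialization: alpha_1(i) = pi[i] * P(obs[0] | state i)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Induction: alpha_t(j) = sum_i alpha_{t-1}(i) * A[i][j] * P(obs[t] | j)
    for t in range(1, len(obs)):
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                 for j in range(n)]
    # Termination: P(obs) = sum_i alpha_T(i)
    return sum(alpha)

pi = [0.6, 0.4]                  # initial distribution over 2 states
A = [[0.7, 0.3], [0.4, 0.6]]     # state transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]     # emission probabilities (2 symbols)
print(forward([0, 1, 0], pi, A, B))
```

The same trellis recursion underlies the Viterbi decoding and Baum-Welch training procedures reviewed in the paper.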
Survey of the state of the art in human language technology
 Studies in Natural Language Processing, XII–XIII
, 1997
Accelerated DP-Based Search for Statistical Translation
 In European Conf. on Speech Communication and Technology
, 1997
Abstract

Cited by 64 (11 self)
In this paper, we describe a fast search algorithm for statistical translation based on dynamic programming (DP) and present experimental results. The approach is based on the assumption that the word alignment is monotone with respect to the word order in both languages. To reduce the search effort for this approach, we introduce two methods: an acceleration technique to efficiently compute the dynamic programming recursion equation and a beam search strategy as used in speech recognition. The experimental tests carried out on the Verbmobil corpus showed that the search space, measured by the number of translation hypotheses, is reduced by a factor of about 230 without affecting the translation performance.
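The beam search strategy borrowed from speech recognition can be sketched generically: at each DP step, only the highest-scoring hypotheses survive and are extended. The expansion and scoring functions below are placeholders, not the paper's translation model or its acceleration technique.

```python
import heapq

def beam_search(start, expand, score, steps, beam_width):
    """Generic beam search: extend only the beam_width best hypotheses per step."""
    beam = [start]
    for _ in range(steps):
        # Expand every surviving hypothesis, then prune back to the beam.
        candidates = [h for hyp in beam for h in expand(hyp)]
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return max(beam, key=score)

# Toy demo: build a 4-symbol binary string, scoring by the number of 1s.
best = beam_search("", lambda h: [h + "0", h + "1"],
                   lambda h: h.count("1"), steps=4, beam_width=2)
print(best)
```

The pruning trades exactness for speed: hypotheses outside the beam are discarded even though one of them might have led to the global optimum.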
Word reordering and a dynamic programming beam search algorithm for statistical machine translation
 Computational Linguistics
, 2003
Abstract

Cited by 40 (5 self)
In this article, we describe an efficient beam search algorithm for statistical machine translation based on dynamic programming (DP). The search algorithm uses the translation model presented in Brown et al. (1993). Starting from a DP-based solution to the traveling-salesman problem, we present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm. Word-reordering restrictions especially useful for the translation direction German to English are presented. The restrictions are generalized, and a set of four parameters to control the word reordering is introduced, which can then easily be adapted to new translation directions. The beam search procedure has been successfully tested on the Verbmobil task (German to English, 8,000-word vocabulary) and on the Canadian Hansards task (French to English, 100,000-word vocabulary). For the medium-sized Verbmobil task, a sentence can be translated in a few seconds, only a small number of search errors occur, and there is no performance degradation as measured by the word error criterion used in this article.
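A window-style reordering restriction of the kind described can be sketched as a simple predicate on permutations of source positions; the single-window form and its width here are illustrative stand-ins, not the paper's four-parameter scheme.

```python
def within_window(permutation, window=2):
    """Allow a reordering only if no source position moves more than
    `window` places from its monotone position (illustrative constraint)."""
    return all(abs(pos - i) <= window for i, pos in enumerate(permutation))

print(within_window([1, 0, 2, 3]))   # local swap: allowed
print(within_window([3, 0, 1, 2]))   # long-range jump: rejected
```

Restricting reorderings this way shrinks the search space from all permutations to a set the DP recursion can enumerate efficiently.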
A monotonic and continuous two-dimensional warping based on dynamic programming
 Proc. 14th ICPR
, 1998
Abstract

Cited by 34 (5 self)
A novel two-dimensional warping algorithm is presented which searches for the optimal pixel mapping subject to continuity and monotonicity constraints. These constraints enable us to preserve topological structure in images. The search algorithm is based on dynamic programming (DP). As implementation techniques, acceleration by beam search and suppression of excessive warps by penalty and/or range limitation are investigated. Experimental results show that this method provides successful warpings between images.
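The two-dimensional warping generalizes classical one-dimensional dynamic time warping (DTW) under the same monotonicity and continuity idea; a minimal 1-D DTW sketch (an analog for illustration, not the paper's 2-D algorithm) looks like this:

```python
def dtw(a, b):
    """DTW distance between two sequences under monotonic, continuous steps."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Monotonicity/continuity: only insert, delete, or diagonal steps.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # a repeated sample costs nothing
```

In the 2-D case the state space explodes, which is why the abstract's beam search acceleration and warp-range limitation become necessary.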
Elastic Image Matching is NP-Complete
 Pattern Recognition Letters
, 2003
Abstract

Cited by 33 (3 self)
One fundamental problem in image recognition is to establish the resemblance of two images. This can be done by searching for the best pixel-to-pixel mapping that takes monotonicity and continuity constraints into account. We show that this problem is NP-complete by reduction from 3-SAT, thus giving evidence that the known exponential-time algorithms are justified, but that approximation algorithms or simplifications are necessary.
A DP-Based Search Algorithm for Statistical Machine Translation
, 1998
Abstract

Cited by 33 (17 self)
We introduce a novel search algorithm for statistical machine translation based on dynamic programming (DP). During the search process two statistical knowledge sources are combined: a translation model and a bigram language model. This search algorithm expands hypotheses along the positions of the target string while guaranteeing progressive coverage of the words in the source string. We present experimental results on the Verbmobil task.
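The bigram language model half of the score combination can be sketched directly; the toy probabilities and the crude unseen-bigram floor below are invented for illustration, not the paper's smoothing.

```python
import math

def bigram_logprob(words, lm, unk=-10.0):
    """Sum log P(w_t | w_{t-1}) over a hypothesis, starting from <s>.
    Unseen bigrams get a floor score `unk` (a stand-in for real smoothing)."""
    lp, prev = 0.0, "<s>"
    for w in words:
        lp += lm.get((prev, w), unk)
        prev = w
    return lp

# Toy bigram table (illustrative probabilities).
lm = {("<s>", "the"): math.log(0.5), ("the", "house"): math.log(0.2)}
print(bigram_logprob(["the", "house"], lm))
```

In the search itself this score would be added to the translation-model score of each partial hypothesis as the target string is extended position by position.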
Start-synchronous search for large vocabulary continuous speech recognition
 IEEE Trans. Speech and Audio Processing
Abstract

Cited by 20 (10 self)
In this paper, we present a novel, efficient search strategy for large vocabulary continuous speech recognition. The search algorithm, based on a stack decoder framework, utilizes phone-level posterior probability estimates (produced by a connectionist/hidden Markov model acoustic model) as a basis for phone deactivation pruning, a highly efficient method of reducing the required computation. The single-pass algorithm is naturally factored into the time-asynchronous processing of the word sequence and the time-synchronous processing of the hidden Markov model state sequence. This enables the search to be decoupled from the language model while still maintaining the computational benefits of time-synchronous processing. The incorporation of the language model in the search is discussed, and computationally cheap approximations to the full language model are introduced. Experiments were performed on the North American Business News task using a 60,000-word vocabulary and a trigram language model. Results indicate that the computational cost of the search may be reduced by more than a factor of 40 with a relative search error of less than 2% using the techniques discussed in the paper. Index Terms: hidden Markov model, large vocabulary continuous speech recognition, phone deactivation pruning, search, stack decoding.
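The pruning idea can be sketched as a simple threshold on phone posteriors: phones whose estimate falls below the threshold are deactivated and never extended. The threshold value and the posterior estimates below are invented, not the paper's operating point.

```python
def active_phones(posteriors, threshold=0.01):
    """Keep only phones whose posterior estimate reaches the threshold;
    everything else is deactivated and skipped by the decoder."""
    return {ph for ph, p in posteriors.items() if p >= threshold}

# Toy posterior estimates for one frame (illustrative values).
posteriors = {"ah": 0.40, "eh": 0.30, "t": 0.005, "k": 0.001}
print(sorted(active_phones(posteriors)))
```

Because most phones have near-zero posterior in any given frame, a cheap test like this eliminates the bulk of the acoustic computation.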
Reconstruction Of Incomplete Spectrograms For Robust Speech Recognition
, 2000
Abstract

Cited by 15 (0 self)
The performance of automatic speech recognition (ASR) systems degrades greatly when speech is corrupted by noise. Missing-feature methods attempt to reduce this degradation by deleting components of a time-frequency representation of speech (such as a spectrogram) that exhibit low signal-to-noise ratio (SNR). Recognition is then performed using only the remaining components of the incomplete spectrogram. These methods have been shown to result in recognition accuracies that are very robust to the effects of additive noise. However, conventional missing-feature methods, which modify the classifier used to perform the recognition, suffer from the drawback that they are constrained to use the log-spectral vectors of the spectrogram as features for recognition. It is well known that recognition systems that use log-spectral features perform poorly compared to systems that use cepstral features.
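The deletion step can be sketched as a local SNR mask over the spectrogram; the 0 dB threshold and the toy power values below are assumptions for illustration, not the paper's configuration.

```python
import math

def mask_low_snr(spec, noise, snr_db=0.0):
    """Mark time-frequency cells whose local SNR is below snr_db as missing (None)."""
    out = []
    for s_row, n_row in zip(spec, noise):
        row = []
        for s, n in zip(s_row, n_row):
            snr = 10.0 * math.log10(s / n)   # power ratio in dB
            row.append(s if snr >= snr_db else None)
        out.append(row)
    return out

# One frame, two frequency bands: the second band is dominated by noise.
print(mask_low_snr([[10.0, 1.0]], [[1.0, 4.0]]))
```

Reconstruction methods like the one in this paper then estimate values for the None cells so that standard cepstral features can be computed downstream.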
A Word Graph Based N-Best Search in Continuous Speech Recognition
, 1996
Abstract

Cited by 14 (4 self)
In this paper, we introduce an efficient algorithm for the exhaustive search of the N best sentence hypotheses in a word graph. The search procedure is based on a two-pass algorithm. In the first pass, a word graph is constructed with standard time-synchronous beam search. The actual extraction of the N best word sequences from the word graph takes place during the second pass.
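The second pass can be sketched as a best-first extraction of paths from the word graph: because edge scores are log-probabilities (non-positive), path costs are monotone and paths reach the final node in descending score order. The toy graph and probabilities below are invented for illustration, not the paper's exact algorithm.

```python
import heapq
import math

def n_best(graph, start, end, n):
    """Return up to n (score, words) paths from start to end, best first.
    graph maps a node to a list of (word, log_prob, next_node) edges;
    log_prob <= 0 guarantees best-first completion order."""
    heap = [(0.0, [], start)]          # stored key is the negated score
    results = []
    while heap and len(results) < n:
        neg, words, node = heapq.heappop(heap)
        if node == end:
            results.append((-neg, words))
            continue
        for word, lp, nxt in graph.get(node, []):
            heapq.heappush(heap, (neg - lp, words + [word], nxt))
    return results

graph = {0: [("a", math.log(0.6), 1), ("b", math.log(0.4), 1)],
         1: [("x", math.log(0.9), 2), ("y", math.log(0.1), 2)]}
print([words for _, words in n_best(graph, 0, 2, 2)])
```

The word graph from the first pass keeps this extraction cheap: only hypotheses that survived the time-synchronous beam appear as edges.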