Results 1–10 of 29
From HMM's to Segment Models: A Unified View of Stochastic Modeling for Speech Recognition
, 1996
What HMMs can do
, 2002
Abstract

Cited by 33 (4 self)
Since their inception over thirty years ago, hidden Markov models (HMMs) have become the predominant methodology for automatic speech recognition (ASR) systems; today, most state-of-the-art speech systems are HMM-based. There have been a number of ways to explain HMMs and to list their capabilities, each having both advantages and disadvantages. In an effort to better understand what HMMs can do, this tutorial analyzes HMMs by exploring a novel way in which an HMM can be defined, namely in terms of random variables and conditional independence assumptions. We prefer this definition as it allows us to reason more thoroughly about the capabilities of HMMs. In particular, it is possible to deduce that there are, in theory at least, no limitations to the class of probability distributions representable by HMMs. This paper concludes that, in the search for a model to supersede the HMM for ASR, rather than trying to correct for HMM limitations in the general case, new models should be sought based on their potential for better parsimony, computational requirements, and noise insensitivity.
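The random-variable view described in this abstract factorizes the HMM joint as p(x, q) = p(q_1) ∏ p(q_t | q_{t-1}) p(x_t | q_t). As a minimal illustrative sketch (all parameter values are toy numbers, not from the paper), the forward algorithm computes the marginal likelihood implied by exactly these conditional independence assumptions:

```python
import numpy as np

pi = np.array([0.6, 0.4])            # initial state distribution p(q_1)
A = np.array([[0.7, 0.3],            # transition matrix p(q_t | q_{t-1})
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],            # emission matrix p(x_t | q_t), 2 symbols
              [0.3, 0.7]])

def forward_likelihood(obs):
    """Marginal likelihood p(x) by summing the joint over all state paths."""
    alpha = pi * B[:, obs[0]]        # alpha_1(i) = p(q_1 = i) p(x_1 | q_1 = i)
    for x in obs[1:]:
        alpha = (alpha @ A) * B[:, x]  # recursion over the chain
    return alpha.sum()

print(forward_likelihood([0, 1, 0]))
```

The recursion costs O(T·K²) instead of the O(K^T) brute-force sum over all state paths, while giving the identical result.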
Trajectory Modeling based on HMMs with the Explicit Relationship between Static and Dynamic Features
, 2002
Abstract

Cited by 17 (2 self)
This paper shows that the HMM whose state output vector includes static and dynamic feature parameters can be reformulated as a trajectory model by imposing the explicit relationship between the static and dynamic features. The derived model, named trajectory HMM, can alleviate two limitations of HMMs: i) constant statistics within an HMM state and ii) the independence assumption of state output probabilities. We also derive a Viterbi-type training algorithm for the trajectory HMM. A preliminary speech recognition experiment based on N-best rescoring demonstrates that the training algorithm can improve recognition performance significantly even though the trajectory HMM has the same parameterization as the standard HMM.
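The "explicit relationship" in this abstract is that dynamic (delta) features are deterministic functions of the static trajectory, so the stacked observation vector can be written o = W c for a banded matrix W. A minimal sketch of the resulting trajectory solve, with toy per-frame means and unit precisions (the values and the simple delta definition are illustrative assumptions, not the paper's exact setup): the maximum-likelihood static trajectory satisfies (Wᵀ Σ⁻¹ W) c = Wᵀ Σ⁻¹ μ.

```python
import numpy as np

T = 5
# W maps a static trajectory c (length T) to stacked [static; delta] rows,
# with delta_t = (c_{t+1} - c_{t-1}) / 2, clipped at the segment edges.
W = np.zeros((2 * T, T))
for t in range(T):
    W[2 * t, t] = 1.0                       # static row: picks out c_t
    W[2 * t + 1, max(t - 1, 0)] -= 0.5      # delta row
    W[2 * t + 1, min(t + 1, T - 1)] += 0.5

static_mu = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # toy state-mean sequence
delta_mu = np.array([0.0, 0.5, 0.5, 0.0, 0.0])    # toy delta means
mu = np.empty(2 * T)
mu[0::2] = static_mu                        # interleave static and delta means
mu[1::2] = delta_mu
prec = np.ones(2 * T)                       # diagonal precisions (Sigma^-1)

# ML trajectory: solve (W^T Sigma^-1 W) c = W^T Sigma^-1 mu
R = W.T @ (prec[:, None] * W)
r = W.T @ (prec * mu)
c = np.linalg.solve(R, r)
print(c)
```

Because the delta rows couple adjacent frames, the solved trajectory c varies smoothly inside what would otherwise be piecewise-constant state means.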
The double chain Markov model
 Comm Stat Theor Meths
, 1999
Abstract

Cited by 14 (2 self)
Among the class of discrete-time Markovian processes, two models are widely used: the Markov chain and the hidden Markov model. A major difference between these two models lies in the relation between successive outputs of the observed variable. In a visible Markov chain, these are directly correlated, while in hidden models they are not. However, in some situations it is possible to observe both a hidden Markov chain and a direct relation between successive observed outputs. Unfortunately, the use of either a visible or a hidden model implies the suppression of one of these hypotheses. This paper presents a Markovian model called the Double Chain Markov Model, which takes into account the main features of both visible and hidden models. Its main purpose is the modeling of nonhomogeneous time series. It is very flexible and can be estimated with traditional methods. The model is applied to a sequence of wind speeds and it appears to ...
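In the Double Chain Markov Model described above, each output depends on both the current hidden state and the previous output, i.e. p(x_t | s_t, x_{t-1}): per hidden state there is a full transition matrix over outputs. A toy simulation under that factorization (all matrix values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],                  # hidden-state transitions p(s_t | s_{t-1})
              [0.2, 0.8]])
# One output transition matrix per hidden state: p(x_t | x_{t-1}, s_t)
C = np.array([[[0.8, 0.2], [0.3, 0.7]],    # output dynamics while in state 0
              [[0.5, 0.5], [0.1, 0.9]]])   # output dynamics while in state 1

def simulate(T, s0=0, x0=0):
    """Draw a length-T observed sequence from the double chain."""
    s, x, path = s0, x0, [x0]
    for _ in range(T - 1):
        s = rng.choice(2, p=A[s])          # hidden chain step
        x = rng.choice(2, p=C[s, x])       # output depends on s AND previous x
        path.append(x)
    return path

print(simulate(10))
```

Setting every C[s] row to a constant distribution recovers an ordinary HMM, while collapsing to a single hidden state recovers a visible Markov chain, which is exactly the unification the abstract describes.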
A Viterbi algorithm for a trajectory model derived from HMM with explicit relationship between static and dynamic features
 in Proc. of ICASSP 2004
Abstract

Cited by 11 (2 self)
This paper introduces a Viterbi algorithm to obtain a suboptimal state sequence for the trajectory HMM, which is derived from the HMM with an explicit relationship between static and dynamic features. The trajectory HMM can alleviate some limitations of the HMM, namely i) constant statistics within an HMM state and ii) conditional independence of observations given the state sequence, without increasing the number of model parameters. The proposed algorithm was applied to state-boundary optimization for Viterbi training and N-best rescoring. In a speaker-dependent continuous speech recognition experiment, the trajectory HMM with the proposed algorithm achieved about 14% error reduction over the standard HMM with the conventional Viterbi algorithm.
Model Parameter Estimation for Mixture Density Polynomial Segment Models
 Int. Conf. in Acoustics, Speech and Signal Processing
, 1997
Abstract

Cited by 10 (0 self)
In this paper, we propose parameter estimation techniques for mixture density polynomial segment models (henceforth MDPSM) whose trajectories are specified with an arbitrary regression order. MDPSM parameters can be trained in one of three different ways: (1) segment clustering, (2) expectation-maximization (EM) training of mean trajectories, or (3) EM training of mean and variance trajectories. These parameter estimation methods were evaluated in TIMIT vowel classification experiments. The experimental results showed that modeling both the mean and variance trajectories is consistently superior to modeling only the mean trajectory. We also found that modeling both trajectories results in significant improvements over the conventional HMM. To date, one of the most successful approaches for large vocabulary continuous speech recognition has been based on the hidden Markov model (HMM). Although HMMs will continue to play an important role in most recognition sys...
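The core idea of a polynomial segment model is that a segment's mean is a low-order polynomial in normalized time, fitted per segment. A minimal single-segment sketch with an assumed regression order of 2 and synthetic data (the clustering and EM steps from the paper are omitted; this is only the basic least-squares trajectory fit):

```python
import numpy as np

# Toy segment: frames of a 1-D feature following a quadratic plus small noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 20)                  # normalized time in the segment
frames = 1.0 + 2.0 * t - 3.0 * t**2 + 0.01 * rng.standard_normal(20)

# Regression order R = 2: mean trajectory mu(t) = b0 + b1*t + b2*t^2,
# estimated by ordinary least squares over the segment's frames.
Z = np.vander(t, 3, increasing=True)           # design matrix [1, t, t^2]
b, *_ = np.linalg.lstsq(Z, frames, rcond=None)
resid = frames - Z @ b                         # residuals give the variance model
print(b, resid.var())
```

In a full MDPSM the residual variance would itself be modeled (optionally as a polynomial in t, per the abstract's option 3), and segments would be assigned to mixture components before refitting.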
High-order extensions of the double chain Markov model
 Stoch. Models
, 2002
Abstract

Cited by 9 (4 self)
The Double Chain Markov Model is a fully Markovian model for the representation of time series in a random environment. In this article, we show that it can handle high-order transitions between both a set of observations and a set of hidden states. In order to reduce the number of parameters, each transition matrix can be replaced by a Mixture Transition Model. We provide a complete derivation of the algorithms needed to compute the model. Three applications, the analysis of a DNA sequence, the song of the wood pewee, and the behavior of young monkeys, show that this model is of great interest for the representation of data which can be decomposed ...
Reformulating the HMM as a trajectory model
 In Proceedings of Beyond HMM Workshop on
, 2004
Abstract

Cited by 9 (2 self)
We have shown that the HMM whose state output vector includes static and dynamic feature parameters can be reformulated as a trajectory model by imposing the explicit relationship between the static and dynamic features. The derived model, referred to as the “trajectory HMM,” can alleviate two limitations of HMMs: i) constant statistics within an HMM state and ii) the independence assumption of state output probabilities. In this paper, we first summarize the definition and the training algorithm. Then, to show that the trajectory HMM is a proper generative model, we derive a new algorithm for sampling from the trajectory model and show the result of an illustrative experiment. A speech recognition experiment demonstrates that consistency between training and decoding criteria is essential: the model should not only be trained as a trajectory model but also be used as a trajectory model in decoding, even though the trajectory model has the same parameterization as the standard HMM. Key words: HMM, speech recognition, speech synthesis, trajectory model, dynamic feature.
Life course data in demography and social sciences: Statistical and data mining approaches
 In
, 2005
Abstract

Cited by 7 (5 self)
This paper has an essentially methodological purpose. In the first section, we briefly explain why demographers have been relatively reluctant to adopt the life course paradigm and its methods, even though the quantitative focus and the concepts of demographic analysis a priori favored such an adoption. A real intellectual crisis was needed before demographers accepted the necessity of facing the challenge of shifting “from structure to process, from macro to micro, from analysis to synthesis, from certainty to uncertainty” (Willekens, 1999, p. 26). This retrospective look also shows impressive progress in promoting real interdisciplinarity in population studies, knotting the ties between demography and the social sciences. However, we also note that the success of multivariate causal analyses has been so rapid that some pitfalls are not always avoided. In Section 2, we focus on statistical methods for studying transitions. First, readers are reminded of regression-like models, and then we ...
A Maximumentropy Solution to the Framedependency Problem in Speech Recognition
, 2001
Abstract

Cited by 5 (0 self)
The HMM assumption of conditional independence of observations causes a variety of problems for speech recognition applications. Previous attempts to construct acoustic models that remove this assumption have suffered from a significant increase in the number of parameters to train. Another weakness of current acoustic models is that they do not account for the origin of derived features (estimated derivatives). We show how to both remove the independence assumption and properly account for derived features, with little or no increase in the number of parameters to train, by applying the principle of maximum entropy. We also show that ignoring the origins of derived features when training HMM acoustic models can lead to severe distortions of the effective language model. Evaluation of our maxent model on a simple problem cuts an already-low error rate in half compared to an equivalent HMM with the same number of parameters.
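The principle of maximum entropy invoked above picks the flattest distribution consistent with empirical feature-expectation constraints; the solution always has exponential-family form p(x) ∝ exp(λ · f(x)). A minimal one-feature sketch (toy outcomes and a toy target expectation, not the paper's acoustic model), solved by gradient ascent on the dual problem:

```python
import numpy as np

# Four outcomes with a single feature f(x) = x; we seek the maximum-entropy
# distribution p(x) proportional to exp(lam * x) whose mean matches a target.
xs = np.arange(4.0)
target_mean = 2.0          # empirical feature expectation (assumed toy value)

lam = 0.0
for _ in range(2000):
    p = np.exp(lam * xs)
    p /= p.sum()                            # current model distribution
    lam += 0.5 * (target_mean - p @ xs)     # dual gradient: constraint mismatch

print(p, p @ xs)
```

At convergence the model's expected feature exactly matches the empirical one, and among all distributions satisfying that constraint, p has maximal entropy; with many overlapping features this is the same training principle, just with one λ per feature.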