## What HMMs can do (2002)


Citations: 30 (4 self)

### BibTeX

```bibtex
@TECHREPORT{Bilmes02whathmms,
  author      = {Jeff Bilmes},
  title       = {What HMMs can do},
  institution = {},
  year        = {2002}
}
```


### Abstract

Since their inception over thirty years ago, hidden Markov models (HMMs) have become the predominant methodology for automatic speech recognition (ASR) systems; today, most state-of-the-art speech systems are HMM-based. There are a number of ways to explain HMMs and to list their capabilities, each with its own advantages and disadvantages. In an effort to better understand what HMMs can do, this tutorial analyzes HMMs by exploring a novel way in which an HMM can be defined, namely in terms of random variables and conditional independence assumptions. We prefer this definition because it allows us to reason more thoroughly about the capabilities of HMMs. In particular, it is possible to deduce that there are, in theory, no limitations to the class of probability distributions representable by HMMs. This paper concludes that, in the search for a model to supersede the HMM for ASR, rather than trying to correct for HMM limitations in the general case, new models should be sought based on their potential for better parsimony, lower computational requirements, and noise insensitivity.
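The abstract's view of an HMM as random variables plus conditional independence assumptions corresponds to the standard factorization of the joint distribution over hidden states q and observations x. A minimal sketch (not from the paper itself; the parameters `pi`, `A`, and `B` are hypothetical toy values) showing that the forward recursion computes the same observation likelihood as summing the factorized joint over all hidden state paths:

```python
import numpy as np
from itertools import product

# Conditional-independence factorization of an HMM:
#   p(x_{1:T}, q_{1:T}) = p(q_1) p(x_1|q_1) * prod_t p(q_t|q_{t-1}) p(x_t|q_t)
# Toy parameters (assumed for illustration): 2 states, 2 discrete symbols.
pi = np.array([0.6, 0.4])            # p(q_1)
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])           # A[i, j] = p(q_t = j | q_{t-1} = i)
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])           # B[i, k] = p(x_t = k | q_t = i)

def forward_likelihood(obs):
    """p(x_{1:T}) via the forward recursion, marginalizing the hidden chain."""
    alpha = pi * B[:, obs[0]]        # alpha_1(i) = p(q_1 = i) p(x_1 | q_1 = i)
    for x in obs[1:]:
        alpha = (alpha @ A) * B[:, x]
    return alpha.sum()

def brute_force_likelihood(obs):
    """Same quantity by summing the factorized joint over all state paths."""
    T, total = len(obs), 0.0
    for path in product(range(len(pi)), repeat=T):
        p = pi[path[0]] * B[path[0], obs[0]]
        for t in range(1, T):
            p *= A[path[t - 1], path[t]] * B[path[t], obs[t]]
        total += p
    return total

obs = [0, 1, 0]
print(forward_likelihood(obs), brute_force_likelihood(obs))
```

Both functions compute the identical marginal because each term in the forward recursion is a direct consequence of the independence assumptions above; the recursion merely reorders the sums.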