From HMM's to Segment Models: A Unified View of Stochastic Modeling for Speech Recognition
, 1996
Statistical Techniques for Language Recognition: An Introduction and Guide for Cryptanalysts
 Cryptologia
, 1993
Abstract

Cited by 13 (2 self)
We explain how to apply statistical techniques to solve several language-recognition problems that arise in cryptanalysis and other domains. Language recognition is important in cryptanalysis because, among other applications, an exhaustive key search of any cryptosystem from ciphertext alone requires a test that recognizes valid plaintext. Written for cryptanalysts, this guide should also be helpful to others as an introduction to statistical inference on Markov chains. Modeling language as a finite stationary Markov process, we adapt a statistical model of pattern recognition to language recognition. Within this framework we consider four well-defined language-recognition problems: 1) recognizing a known language, 2) distinguishing a known language from uniform noise, 3) distinguishing unknown 0th-order noise from unknown 1st-order language, and 4) detecting nonuniform unknown language. For the second problem we give a most powerful test based on the Neyman-Pearson Lemma. For the oth...
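The most powerful test for problem 2 (known language vs. uniform noise) can be sketched as a log-likelihood-ratio threshold test. The two-symbol alphabet, the transition matrix, and the threshold below are our own toy assumptions for illustration, not values from the paper:

```python
import numpy as np

# Toy Neyman-Pearson test: is a sequence drawn from a known 1st-order
# Markov language model (H1) or from uniform i.i.d. noise (H0)?
ALPHABET = "ab"
P = np.array([[0.9, 0.1],    # P[i, j] = Pr(next = j | current = i) under H1
              [0.2, 0.8]])   # (hypothetical transition probabilities)

def log_likelihood_ratio(seq):
    """log Pr(seq | H1) - log Pr(seq | H0) for a symbol sequence."""
    idx = [ALPHABET.index(c) for c in seq]
    ll_h1 = sum(np.log(P[i, j]) for i, j in zip(idx, idx[1:]))
    ll_h0 = (len(idx) - 1) * np.log(1.0 / len(ALPHABET))
    return ll_h1 - ll_h0

def classify(seq, threshold=0.0):
    # By the Neyman-Pearson lemma, thresholding the likelihood ratio gives
    # the most powerful test at the corresponding false-alarm rate.
    return "language" if log_likelihood_ratio(seq) > threshold else "noise"

print(classify("aaaaabbbbb"))    # long runs match the sticky transitions of H1
print(classify("abababababab"))  # frequent alternation is unlikely under H1
```

Raising the threshold trades detection power for a lower false-alarm rate; the lemma only guarantees optimality of the ratio statistic itself.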
A Parallel Implementation of a Hidden Markov Model with Duration Modeling for Speech Recognition
, 1995
Abstract

Cited by 12 (0 self)
Hidden Markov models (HMMs) are currently the most successful paradigm for speech recognition. Although explicit-duration continuous HMMs model speech more accurately than HMMs with implicit duration modeling, the cost of accurate duration modeling is often considered prohibitive. This paper describes a parallel implementation of an HMM with explicit duration modeling for spoken language recognition on the MasPar MP-1. The MP-1 is a fine-grained SIMD architecture with 16,384 processing elements (PEs) arranged in a 128x128 mesh. By exploiting the massive parallelism of explicit-duration HMMs, development and testing are practical even for large amounts of data. The result of this work is a parallel speech recognizer that can train a phone recognizer in real time. We present several extensions that include context-dependent modeling, word recognition, and implicit-duration HMMs. 1 Introduction While hidden Markov models (HMMs) have been a popular and effective method of recognizing spoken...
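An explicit-duration HMM replaces the geometric state durations of a standard HMM with an explicit duration distribution, which is what makes it more expensive. A minimal single-processor sketch of the corresponding forward recursion (all parameters below are toy values of ours, not the paper's; the independent per-state, per-duration terms in the inner loops are the kind of work a SIMD mesh can evaluate in parallel):

```python
import numpy as np

A   = np.array([[0.0, 1.0],       # state transitions; no self-loops, since
                [1.0, 0.0]])      # duration is modeled explicitly
B   = np.array([[0.8, 0.2],       # B[j, o]: emission prob of symbol o in state j
                [0.3, 0.7]])
dur = np.array([[0.0, 0.5, 0.5],  # dur[j, d]: prob that state j lasts d frames
                [0.0, 0.4, 0.6]])
pi  = np.array([0.5, 0.5])
D_MAX = 2

def forward_duration(obs):
    """Likelihood of obs: sum over segmentations, states, and durations."""
    T, N = len(obs), len(pi)
    # alpha[t, j]: prob of the first t frames, with a segment ending at t in state j
    alpha = np.zeros((T + 1, N))
    for t in range(1, T + 1):
        for j in range(N):
            for d in range(1, min(D_MAX, t) + 1):
                emit = np.prod(B[j, obs[t - d:t]])
                if t - d == 0:   # first segment opens the sequence
                    alpha[t, j] += pi[j] * dur[j, d] * emit
                else:            # reached from a segment ending at t - d
                    alpha[t, j] += (alpha[t - d] @ A[:, j]) * dur[j, d] * emit
    return alpha[T].sum()

print(forward_duration([0, 0, 1, 1]))
```

The extra sum over d multiplies the cost by the maximum duration, which is the overhead the paper's parallelization absorbs.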
HMM-Based Semantic Learning for a Mobile Robot
, 2004
Abstract

Cited by 8 (2 self)
We are developing an intelligent robot and attempting to teach it language. While there are many aspects of this research, for the purposes of this dissertation the most important are the following ideas. Language is primarily based on semantics, not syntax, which is the current focus of speech recognition research. To truly learn meaning, a language engine cannot simply be a computer program running on a desktop computer analyzing speech. It must be part of a more general, embodied intelligent system, one capable of using associative learning to form concepts from the perception of experiences in the world, and further capable of manipulating those concepts symbolically. This dissertation explores the use of hidden Markov models (HMMs) in this capacity. HMMs are capable of automatically learning and extracting the underlying structure of continuous-valued inputs and representing that structure in the states of the model. These states can then be treated as symbolic representations of the inputs. We show how a model consisting of a cascade of HMMs can be embedded in a small mobile robot and used to learn correlations among sensory inputs to create symbolic concepts, which can eventually be manipulated linguistically and used for decision making.
Likelihood Based Statistical Inference in Hidden Markov Models
, 1999
Abstract

Cited by 1 (0 self)
The hidden Markov model (HMM) is a stochastic signal model widely used in modeling and classification problems, yet more advanced statistical inference in this model has been omitted in almost all applications. In this paper we show how to calculate, in practice, likelihood-based confidence intervals for the model parameters and for the classification probability of a new case, and how these intervals can provide useful insights into the HMM. First, confidence intervals for the model parameter values indicate the sufficiency of the sample data for the modeling problem. In addition, confidence intervals for the probabilities of a new case indicate the uncertainty of a classification based purely on probability. We show in detail how to compute two different confidence intervals, namely the Wald and the profile likelihood intervals. We also demonstrate and compare the results of the two approaches in a real example of classifying nasal flow shapes. We found this kind of statistical inference in HMMs to be very useful and informative, and we recommend that it be used regularly in applications of HMMs.
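Of the two interval types the abstract names, the Wald interval is the simpler: it needs only the curvature of the log-likelihood at the maximum-likelihood estimate. A sketch under our own assumptions, using a binomial-style log-likelihood as a toy stand-in for an HMM transition probability estimated from state-occupancy counts:

```python
import numpy as np

def wald_interval(loglik, theta_hat, z=1.96, h=1e-4):
    """theta_hat +/- z / sqrt(observed information), for a scalar parameter."""
    # observed information = -d^2/dtheta^2 log L, by central differences
    d2 = (loglik(theta_hat + h) - 2 * loglik(theta_hat)
          + loglik(theta_hat - h)) / h**2
    se = 1.0 / np.sqrt(-d2)
    return theta_hat - z * se, theta_hat + z * se

# toy data: 30 of 100 transitions went to state j, so the MLE is 0.3
n, k = 100, 30
loglik = lambda p: k * np.log(p) + (n - k) * np.log(1 - p)

lo, hi = wald_interval(loglik, k / n)
print(f"95% Wald interval: ({lo:.3f}, {hi:.3f})")
```

A profile likelihood interval, by contrast, re-maximizes the likelihood over the remaining parameters at each candidate value, so it respects parameter bounds better near 0 and 1 at the cost of many more likelihood evaluations.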
2D Shape Recognition by Hidden Markov Models
Abstract

Cited by 1 (0 self)
In Computer Vision, two-dimensional shape classification is a complex and well-studied topic, often basic to three-dimensional object recognition. Object contours are a widely chosen feature for representing objects, useful in many respects for classification problems. In this paper, we address the use of Hidden Markov Models (HMMs) for shape analysis, based on a chain code representation of object contours. HMMs represent a widespread approach to the modeling of sequences and are used in many applications, but they are poorly considered in the literature on shape analysis and, in any case, without reference to noise or occlusion sensitivity. In this paper the HMM approach to shape modeling is tested, demonstrating the good invariance of this method with respect to noise, occlusions, and object scaling.
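The chain code representation mentioned here turns a contour into exactly the kind of discrete symbol sequence an HMM models. A minimal sketch of the standard 8-direction (Freeman) chain code; the function and variable names are ours:

```python
# direction index for each (dx, dy) step between consecutive contour points
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    """Map a closed contour (list of 8-connected pixel coords) to chain codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

# a unit square traversed counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chain_code(square))   # [0, 2, 4, 6]
```

Taking differences of consecutive codes modulo 8 makes the sequence rotation-invariant (for rotations that are multiples of 45 degrees), a common preprocessing step before training a sequence model on the codes.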
Enhancement of connected words in an extremely noisy environment
 IEEE Transactions on Speech and Audio Processing 5 (2)
, 1997
Abstract

Cited by 1 (0 self)
Abstract — A speech enhancement algorithm that is based on a connected-word hidden Markov model (HMM) is developed. Speech is assumed to be highly degraded by statistically independent additive noise. The minimum mean square error estimator is derived for a connected-word HMM. Further, we derive an estimator based on a connected-word HMM with explicit state duration. Listening experiments performed with digit strings have shown an increase in intelligibility. The best results were achieved when subjects who listened to the enhanced speech were given the results of an automatic recognition system. Index Terms — Noise reduction, robustness in the presence of noise, speech recognition.
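The general shape of an HMM-based MMSE enhancer is a posterior-weighted mixture of state-conditional estimates. The following is only a toy scalar sketch of that structure, with hypothetical per-state Wiener gains and posteriors of our own choosing, not the paper's connected-word estimator:

```python
import numpy as np

def mmse_enhance(y, gamma, signal_var, noise_var):
    """y: noisy frames (T,); gamma: state posteriors (T, N); variances per state."""
    gains = signal_var / (signal_var + noise_var)   # per-state Wiener gain
    return (gamma @ gains) * y                      # mix gains by posterior weight

y     = np.array([2.0, 1.0])                  # two noisy frames
gamma = np.array([[0.9, 0.1],                 # frame 1: mostly a "speech" state
                  [0.1, 0.9]])                # frame 2: mostly a "silence" state
s_hat = mmse_enhance(y, gamma,
                     signal_var=np.array([4.0, 0.25]), noise_var=1.0)
print(s_hat)                                  # heavier attenuation in silence
```

The posteriors would come from a forward-backward pass over the noisy observations; explicit state durations, as in the paper's second estimator, change how those posteriors are computed but not this mixing step.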
A Robotic Framework for Semantic Concept Learning
, 2004
Abstract
is developing an intelligent robot, and attempting to teach it language. While there are many aspects of this research, for the purposes of this report the most important are the following ideas. Language is primarily based on semantics, not syntax. To truly learn meaning, the language engine must be part of an embodied intelligent system, one capable of using associative learning to form concepts from the perception of experiences in the world, and further capable of manipulating those concepts symbolically. In the work described here, we explore the use of hidden Markov models (HMMs) in this capacity. HMMs are capable of automatically learning and extracting the underlying structure of continuous-valued inputs and representing that structure in the states of the model. These states can then be treated as symbolic representations of the inputs. We describe a composite model consisting of a cascade of HMMs that can be embedded in a small mobile robot and used to learn correlations among sensory inputs to create symbolic concepts. These symbols can then be manipulated linguistically and used for decision making.
VECTOR QUANTIZATION WITH MEMORY AND MULTI-LABELING FOR ISOLATED VIDEO-ONLY AUTOMATIC SPEECH RECOGNITION
Abstract
We describe a vector quantizer (VQ) with memory for automatic speech recognition (ASR) and compare its recognition performance to that obtained with traditional memoryless VQ for ASR. Standard VQ for ASR quantizes the speech data independently of any past information. We introduce memory in a probabilistic framework for quantization state modeling. This is accomplished in the form of an ergodic hidden Markov model (HMM) in which the state occupied by the HMM represents the quantization label. We evaluate this approach in the context of video-only isolated digit ASR and implement both single-stream (single-labeling) and multi-stream (multi-labeling) systems. For single-stream recognition, our approach increases the recognition rate from 62.67% to 66.95%. When using multi-labeling, our proposed vector quantizer with memory consistently outperforms the memoryless vector quantizer.
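The contrast between memoryless VQ and an ergodic-HMM quantizer can be sketched in one dimension. A memoryless quantizer picks the nearest codeword frame by frame; the HMM quantizer picks the Viterbi state sequence, so each label also depends on its neighbours through the transition matrix. All parameters here (two codewords, sticky transitions, unit-variance Gaussian emissions, a uniform initial distribution) are illustrative assumptions of ours:

```python
import numpy as np

codebook = np.array([[0.0], [1.0]])      # two 1-D codewords / HMM states
A = np.log(np.array([[0.9, 0.1],         # sticky log-transitions favour
                     [0.1, 0.9]]))       # label continuity over time

def vq_memoryless(frames):
    return [int(np.argmin((codebook[:, 0] - f) ** 2)) for f in frames]

def vq_hmm(frames):
    """Viterbi labels under Gaussian emissions centred on the codewords."""
    emit = lambda f: -0.5 * (codebook[:, 0] - f) ** 2   # log N(f; c, 1) + const
    delta = emit(frames[0])                # uniform initial state distribution
    back = []
    for f in frames[1:]:
        scores = delta[:, None] + A        # scores[i, j]: best path i -> j
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + emit(f)
    path = [int(delta.argmax())]           # backtrack the best final state
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

frames = [0.1, 0.8, 0.2, 0.1]   # one outlier inside a run near codeword 0
print(vq_memoryless(frames), vq_hmm(frames))
```

With these sticky transitions the HMM quantizer smooths the isolated outlier that the memoryless quantizer flips, which is the kind of contextual labeling the paper exploits.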