Results 1 - 4 of 4
A modified Baum–Welch algorithm for hidden Markov models with multiple observation spaces
IEEE Trans. Speech and Audio, 2001
Cited by 16 (6 self)
Abstract—In this paper, we derive an algorithm similar to the well-known Baum–Welch algorithm for estimating the parameters of a hidden Markov model (HMM). The new algorithm allows the observation PDF of each state to be defined and estimated using a different feature set. We show that estimating parameters in this manner is equivalent to maximizing the likelihood function for the standard parameterization of the HMM defined on the input data space. The processor becomes optimal if the state-dependent feature sets are sufficient statistics to distinguish each state individually from a common state.
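The idea the abstract describes can be illustrated with a small sketch: a standard HMM forward pass in which each state evaluates the observation through its own feature map and density. This is not the paper's algorithm (which also covers parameter estimation); the per-state feature maps and densities below are hypothetical stand-ins chosen only to show the structure.

```python
import numpy as np

def forward_log_likelihood(X, log_pi, log_A, state_log_pdfs):
    """Log-likelihood of a sequence under an HMM whose states may each
    use a different feature set.

    X              : (T, D) array of raw observations
    log_pi         : (S,) initial-state log-probabilities
    log_A          : (S, S) transition log-probabilities
    state_log_pdfs : list of S callables; callable j maps a raw
                     observation to state j's observation log-density
                     (assumed already expressed on the common raw-data
                     space, as the paper requires for a fair comparison).
    """
    T = len(X)
    # Initialization: alpha_0(j) = log pi_j + log b_j(x_0)
    alpha = log_pi + np.array([f(X[0]) for f in state_log_pdfs])
    for t in range(1, T):
        emit = np.array([f(X[t]) for f in state_log_pdfs])
        # Recursion with log-sum-exp over predecessor states i:
        # alpha_t(j) = log b_j(x_t) + logsum_i [alpha_{t-1}(i) + log A_ij]
        alpha = emit + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    # Termination: sum over final states
    return np.logaddexp.reduce(alpha)

# Toy example: state 0 models only the frame mean, state 1 only the
# frame energy -- a different feature per state (hypothetical choices).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
pdfs = [
    lambda x: -0.5 * np.mean(x) ** 2 - 0.5 * np.log(2 * np.pi),
    lambda x: -0.5 * np.sum(x ** 2) - 2.0 * np.log(2 * np.pi),
]
print(forward_log_likelihood(X, log_pi, log_A, pdfs))
```

The only departure from a textbook forward pass is that the emission term is supplied per state as a callable, so each state can apply its own feature extraction before evaluating a density.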
The PDF Projection Theorem and the Class-Specific Method
IEEE Trans. on Signal Processing, 2003
Cited by 14 (4 self)
Abstract—In this paper, we present the theoretical foundation for optimal classification using class-specific features and provide examples of its use. A new probability density function (PDF) projection theorem makes it possible to project probability density functions from a low-dimensional feature space back to the raw data space. An M-ary classifier is constructed by estimating the PDFs of class-specific features, then transforming each PDF back to the raw data space, where they can be fairly compared. Although statistical sufficiency is not a requirement, the classifier thus constructed becomes equivalent to the optimal Bayes classifier if the features meet sufficiency requirements individually for each class. This classifier is completely modular and avoids the dimensionality curse associated with large, complex problems. By recursive application of the projection theorem, it is possible to analyze complex signal processing chains. We apply the method to feature sets including linear functions of independent random variables, cepstrum, and MEL cepstrum. In addition, we demonstrate how it is possible to automate the feature and model selection process by direct comparison of log-likelihood values on the common raw data domain.
Index Terms—Bayesian classification, class-dependent features, classification, class-specific features, hidden Markov models, maximum likelihood estimation, pattern classification, PDF estimation, probability density function.
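The projection the abstract refers to can be sketched as follows. Here $z = T(x)$ denotes the feature map and $H_0$ a common reference hypothesis whose PDF is known in both the raw-data and feature spaces; these symbol names are the conventional ones for this theorem and are assumptions, not quoted from the abstract.

```latex
% PDF projection: lift a feature-space density p_Z back to the raw-data space.
\hat{p}_X(x) \;=\; \frac{p_X(x \mid H_0)}{p_Z(T(x) \mid H_0)}\, p_Z\bigl(T(x)\bigr), \qquad z = T(x)
```

The ratio in front of $p_Z$ converts a density on the feature space into one on the raw data space, so densities estimated from different class-specific feature sets can be compared on a common domain; when $T(x)$ is a sufficient statistic for a class, the projected density matches the true class-conditional PDF, which is the optimality condition the abstract states.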
The Class-Specific Classifier: Avoiding the Curse of Dimensionality
IEEE Aerosp. Electron. Syst. Mag., 2002
Cited by 12 (3 self)
The purpose of this article is to introduce the reader to the basic principles of classification with class-specific features. It is written both for readers interested only in the basic concepts and for those interested in getting started applying the method. For in-depth coverage, the reader is referred to a more detailed article [1].