Results 1 - 10 of 65

ACCENT ADAPTATION USING SUBSPACE GAUSSIAN MIXTURE MODELS

by Petr Motlicek, Philip N. Garner, Namhoon Kim, Jeongmi Cho
"... This paper investigates employment of Subspace Gaussian Mixture Models (SGMMs) for acoustic model adaptation to-wards different accents for English speech recognition. The SGMMs comprise globally-shared and state-specific param-eters which can efficiently be employed for various kinds of acoustic pa ..."

Subspace Gaussian Mixture Models

by Daniel Povey, Mohit Agarwal, Pinar Akyazi, Kai Feng, Arnab Ghoshal, Nagendra Kumar Goel, Ariya Rastrow, Richard C. Rose, Petr Schwarz, Samuel Thomas
"... We describe an acoustic modeling approach in which all phonetic states share a common Gaussian Mixture Model structure, and the means and mixture weights vary in a subspace of the total parameter space. We call this a Subspace Gaussian Mixture Model (SGMM). Globally shared parameters define the subs ..."
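The snippet is cut off just as it begins to describe the globally shared parameters. For orientation, a sketch of the SGMM parametrization from this line of work (notation follows the SGMM literature; substate mixing is omitted for brevity): state j's i-th Gaussian is derived from a low-dimensional state vector v_j and globally shared parameters M_i, w_i, \Sigma_i,

    \mu_{ji} = M_i v_j, \qquad
    w_{ji} = \frac{\exp(w_i^\top v_j)}{\sum_{i'=1}^{I} \exp(w_{i'}^\top v_j)}, \qquad
    \Sigma_{ji} = \Sigma_i .

Only v_j is state-specific, which is what makes the adaptation and cross-lingual uses in the neighbouring results attractive.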

Cross-Lingual Subspace Gaussian Mixture Models for Low-Resource Speech Recognition

by Liang Lu, Arnab Ghoshal, Steve Renals - IEEE Journal on Selected Areas in Communications, 28(7):1116–1126, 2010, 2013
"... This paper studies cross-lingual acoustic modelling in the context of subspace Gaussian mixture models (SGMMs). SGMMs factorize the acoustic model parameters into a set that is globally shared between all the states of a hidden Markov model (HMM) and another that is specific to the HMM states. We de ..."
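Read together with the SGMM sketch above, the factorization the snippet refers to splits roughly as

    shared across all HMM states (and reusable across languages): \{ M_i, w_i, \Sigma_i \}_{i=1}^{I}
    state-specific (estimated per language): v_j

so only the low-dimensional state vectors need target-language data. This is the usual cross-lingual SGMM recipe; the truncated snippet itself does not spell out the training split.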

Modeling With A Subspace Constraint On Inverse Covariance Matrices

by Scott Axelrod, Ramesh Gopinath, Peder Olsen - in Proc. ICSLP, 2002
"... We consider a family of Gaussian mixture models for use in HMM based speech recognition system. These "SPAM" models have state independent choices of subspaces to which the precision (inverse covariance) matrices and means are restricted to belong. They provide a flexible tool for robust, ..."
Cited by 38 (9 self)
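The quoted text stops before stating the constraint itself. In the SPAM ("subspace precision and mean") family, each Gaussian g keeps a full, non-diagonal precision matrix, but it is restricted to a shared subspace of symmetric matrices, roughly

    P_g = \sum_{k=1}^{K} \lambda_{g,k} S_k ,

with globally shared basis matrices S_k and per-Gaussian coefficients \lambda_{g,k}; the means are tied to a second shared subspace in an analogous way (via the linear term P_g \mu_g in the exponential-family parametrization). This sketch follows the SPAM literature in general; the exact parametrization used in the paper lies behind the truncation.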

SPEAKER VERIFICATION WITH THE MIXTURE OF GAUSSIAN FACTOR ANALYSIS BASED REPRESENTATION

by Ming Li
"... This paper presents a generalized i-vector representation frame-work using the mixture of Gaussian (MoG) factor analysis for speaker verification. Conventionally, a single standard factor anal-ysis is adopted to generate a low rank total variability subspace where the mean supervector is assumed to ..."
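The "single standard factor analysis" the snippet contrasts against is the usual total-variability model, in which an utterance's GMM mean supervector M is written as

    M = m + T w ,

where m is the UBM mean supervector, T a low-rank total-variability matrix, and w the i-vector. As the snippet suggests, the paper generalizes this single factor analyzer to a mixture of Gaussian factor analyzers; the details of that mixture lie behind the truncation.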

Subspace Communication

by Josep Font-Segura, Prof Gregori, 2014
"... We are surrounded by electronic devices that take advantage of wireless technologies, from our computer mice, which require little amounts of information, to our cellphones, which demand increasingly higher data rates. Until today, the coexistence of such a variety of services has been guaranteed by ..."
Abstract - Add to MetaCart
We are surrounded by electronic devices that take advantage of wireless technologies, from our computer mice, which require little amounts of information, to our cellphones, which demand increasingly higher data rates. Until today, the coexistence of such a variety of services has been guaranteed by a fixed assignment of spectrum resources by regulatory agencies. This has resulted into a blind alley, as current wireless spectrum has become an expensive and a scarce resource. However, recent measurements in dense areas paint a very different picture: there is an actual underutilization of the spectrum by legacy sys-tems. Cognitive radio exhibits a tremendous promise for increasing the spectral efficiency for future wireless systems. Ideally, new secondary users would have a perfect panorama of the spectrum usage, and would opportunistically communicate over the available re-sources without degrading the primary systems. Yet in practice, monitoring the spectrum resources, detecting available resources for opportunistic communication, and transmit-ting over the resources are hard tasks. This thesis addresses the tasks of monitoring, de-

Product of Gaussians for speech recognition

by M. J. F. Gales, S. S. Airey - Computer Speech & Language, 2003
"... 1 Introduction Mixture of Gaussians (MoG) are commonly used as the state representation in hidden Markov model (HMM) based speech recognition. These Gaussian mixture models are easy to train using expectation maximisation (EM) techniques [4] and are able to approximate any distribution given a suffi ..."
Cited by 13 (2 self)
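As background for the "Mixture of Gaussians state density trained with EM" setting the snippet describes, here is a minimal, self-contained sketch using scikit-learn; the library choice, the toy data, and all parameter values are illustrative assumptions, not anything taken from the paper (which goes on to propose products of Gaussians instead).

import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for acoustic feature frames (e.g. 39-dim MFCC + deltas)
# that a forced alignment has assigned to one HMM state.
rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 39))

# EM-trained mixture of diagonal-covariance Gaussians, the usual
# HMM state output density.
gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      max_iter=50, random_state=0)
gmm.fit(frames)

# Per-frame log-likelihoods log p(x_t | state): the quantity an HMM
# decoder or an EM re-estimation pass would consume.
log_likes = gmm.score_samples(frames)
print(log_likes.shape)  # (500,)

In a real recogniser the mixtures are re-estimated jointly with the HMM alignment rather than fitted once to a fixed frame set; the sketch only shows the per-state density model itself.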

MODELING WITH A SUBSPACE CONSTRAINT ON INVERSE COVARIANCE MATRICES

by unknown authors
"... We consider a family of Gaussian mixture models for use in HMM based speech recognition system. These “SPAM ” models have state independent choices of subspaces to which the precision (inverse covariance) matrices and means are restricted to belong. They provide a flexible tool for robust, compact, ..."

Subspace Distribution Clustering For Continuous Observation Density Hidden Markov Models

by Enrico Bocchieri, Brian Mak - in Proceedings of Eurospeech, 1997
"... This paper presents an efficient approximation of the Gaussian mixture state probability density functions of continuous observation density hidden Markov models (CHMM 's). In CHMM 's, the Gaussian mixtures carry a high computational cost, which amounts to a significant fraction (e.g. 30% ..."
Cited by 14 (5 self)
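The truncation hides the approximation itself; the idea in subspace distribution clustering (sketched here from the general SDCHMM approach, not from the cut-off text) is to split the feature vector into K low-dimensional streams and tie the per-stream Gaussians to a small shared codebook:

    \mathcal{N}(x;\mu,\Sigma) \;\approx\; \prod_{k=1}^{K} \mathcal{N}\big(x^{(k)};\mu^{(k)},\Sigma^{(k)}\big),

where the product is exact for diagonal covariances over a partition of the dimensions. Quantizing each factor to one of a small number of prototypes per stream lets state likelihoods be assembled from precomputed table lookups, which is where the computational saving comes from.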

Extension of the Sliced Gaussian Mixture Filter with Application to Cooperative Passive Target Tracking

by Julian Hörst, Felix Sawo, Vesa Klumpp, Uwe D. Hanebeck, Dietrich Fränken
"... This paper copes with the problem of nonlinear Bayesian state estimation. A nonlinear filter, the Sliced Gaussian Mixture Filter (SGMF), employs linear substructures in the nonlinear measurement and prediction model in order to simplify the estimation process. Here, a special density representation ..."
Cited by 1 (0 self)
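The snippet breaks off at "a special density representation". In the sliced Gaussian mixture filter literature, the state is split into a nonlinearly observed part x_a and a conditionally linear part x_b, and the density is kept in a sliced form, roughly

    f(x_a, x_b) \;\approx\; \sum_{i} w_i \, \delta\big(x_a - \hat{x}_a^{(i)}\big)\, f_i(x_b),

i.e. Dirac slices along x_a, each carrying a Gaussian mixture f_i over x_b, so the linear substructure can be updated in closed form (Kalman-style) while only x_a is handled by the slice positions. This is a reading based on earlier SGMF papers; the exact representation used in this extension lies behind the truncation.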