Results 1–10 of 330
Maximum Likelihood Linear Transformations for HMM-Based Speech Recognition
Computer Speech and Language, 1998
"... This paper examines the application of linear transformations for speaker and environmental adaptation in an HMM-based speech recognition system. In particular, transformations that are trained in a maximum likelihood sense on adaptation data are investigated. Other than in the form of a simple bias ..."
Cited by 570 (68 self)
... constrained, which requires the variance transform to have the same form as the mean transform (sometimes referred to as feature-space transforms). Re-estimation formulae for all appropriate cases of transform are given. This includes a new and efficient "full" variance transform and the extension ...
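The constrained ("feature-space") transform described in this snippet has a useful equivalence: applying the same matrix to a Gaussian's mean and covariance matches transforming the observation itself, up to a log-Jacobian term. A minimal numerical sketch of that identity in NumPy, with all parameter values invented for the demo (this is not code from the paper):

```python
import numpy as np

def gauss_logpdf(x, mu, Sigma):
    """Log-density of a multivariate Gaussian N(mu, Sigma) at x."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(Sigma, diff))

rng = np.random.default_rng(0)
d = 3
mu = rng.normal(size=d)
L = rng.normal(size=(d, d))
Sigma = L @ L.T + d * np.eye(d)              # an arbitrary SPD covariance
A = rng.normal(size=(d, d)) + 2 * np.eye(d)  # invertible transform matrix
b = rng.normal(size=d)
x = rng.normal(size=d)

# Model-space view: transform the Gaussian's parameters
# (constrained case: the variance uses the same matrix as the mean).
lp_model = gauss_logpdf(x, A @ mu + b, A @ Sigma @ A.T)

# Feature-space view: transform the observation instead and add
# the log-Jacobian log|det A^{-1}|.
Ainv = np.linalg.inv(A)
lp_feat = gauss_logpdf(Ainv @ (x - b), mu, Sigma) + np.linalg.slogdet(Ainv)[1]

print(np.allclose(lp_model, lp_feat))  # True: the two views agree
```

The practical appeal of the feature-space view is that one transform of the observations adapts every Gaussian in the system at once.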
Multiple-Cluster Adaptive Training Schemes
In Proc. ICASSP, 2001
"... This paper examines the training of multiple-cluster systems using adaptive training schemes. Various forms of transformation and canonical model are described in a consistent framework allowing re-estimation formulae for all cases to be simply derived. Initial experiments using these various schemes ..."
Cited by 15 (1 self)
Mean and Variance Adaptation within the MLLR Framework
Computer Speech & Language, 1996
"... One of the key issues for adaptation algorithms is to modify a large number of parameters with only a small amount of adaptation data. Speaker adaptation techniques try to obtain near speaker dependent (SD) performance with only small amounts of speaker specific data, and are often based on initi ..."
Cited by 145 (15 self)
... Gaussian HMM systems. In this paper MLLR is extended to also update the Gaussian variances and re-estimation formulae are derived for these variance transforms. MLLR with variance compensation is evaluated on several large vocabulary recognition tasks. The use of mean and variance MLLR adaptation ...
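The mean half of the MLLR update described above reduces, in the special case of unit covariances, to a least-squares fit of one shared affine transform W = [b A] of the extended means across all Gaussians; the variance re-estimation formulae the paper derives are omitted here. A toy sketch under those simplifying assumptions, with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
d, G, T = 2, 5, 200
mus = rng.normal(size=(G, d))                       # unadapted component means
A_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))  # the "speaker" transform
b_true = rng.normal(size=d)

# Adaptation data: each frame comes from one component, with its mean
# moved by the shared affine transform (unit covariances for simplicity).
comp = rng.integers(0, G, size=T)
X = mus[comp] @ A_true.T + b_true + 0.05 * rng.normal(size=(T, d))

# ML estimate of W = [b A] from extended means xi = [1, mu]:
# with unit covariances this is ordinary least squares.
Xi = np.hstack([np.ones((T, 1)), mus[comp]])        # T x (d+1)
W = np.linalg.solve(Xi.T @ Xi, Xi.T @ X).T          # d x (d+1)
b_hat, A_hat = W[:, 0], W[:, 1:]
```

With non-unit covariances the estimate becomes a covariance-weighted regression solved row by row, but the structure of the update is the same.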
Regularisation in the Selection of Radial Basis Function Centres
Neural Computation, 1995
"... Subset selection and regularisation are two well known techniques which can improve the generalisation performance of nonparametric linear regression estimators, such as radial basis function networks. This paper examines regularised forward selection (RFS), a combination of forward subset selection ..."
Cited by 43 (7 self)
... selection and zero-order regularisation. An efficient implementation of RFS into which either delete-1 or generalised cross-validation can be incorporated and a re-estimation formula for the regularisation parameter are also discussed. Simulation studies are presented which demonstrate improved ...
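Regularised forward selection as described here greedily adds the candidate RBF centre that most reduces the ridge-penalised (zero-order-regularised) residual. The following is an illustrative implementation, not the paper's efficient one; the Gaussian width, the penalty λ, and the choice of training inputs as candidate centres are all arbitrary:

```python
import numpy as np

def rbf_design(X, centres, width=1.0):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 width^2))."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def regularised_forward_selection(Phi, y, lam, n_select):
    """Greedily add the column (candidate centre) that most reduces the
    ridge-penalised cost ||y - Phi_S w||^2 + lam * ||w||^2."""
    selected = []
    for _ in range(n_select):
        best_j, best_cost = None, np.inf
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            P = Phi[:, cols]
            w = np.linalg.solve(P.T @ P + lam * np.eye(len(cols)), P.T @ y)
            r = y - P @ w
            cost = r @ r + lam * (w @ w)
            if cost < best_cost:
                best_j, best_cost = j, cost
        selected.append(best_j)
    return selected

# Toy 1-D regression; candidate centres are the training inputs themselves.
rng = np.random.default_rng(2)
X = np.linspace(-3, 3, 40)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
chosen = regularised_forward_selection(rbf_design(X, X), y, lam=1e-2, n_select=5)
```

The paper's efficient variant avoids refitting from scratch at every step via orthogonalised updates; the greedy criterion, however, is the one shown.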
An EM Algorithm for Regularized RBF Networks
"... We investigate the use of maximum marginal likelihood of the data to determine some of the critical parameters of a radial basis function neural network applied to a regression problem. The expectation-maximisation algorithm leads to useful re-estimation formulae for both the noise variance and the ..."
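EM re-estimation formulae of the kind this abstract refers to take a standard form for a linear-in-parameters model with a Gaussian weight prior: the E-step computes the Gaussian weight posterior, and the M-step has closed-form updates for the prior precision and the noise variance. A sketch under those assumptions (not necessarily the paper's exact parameterisation; all data values are synthetic):

```python
import numpy as np

def rbf_design(x, centres, width=1.0):
    """1-D Gaussian RBF design matrix."""
    return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2 * width ** 2))

def em_regularised_rbf(Phi, y, n_iter=200):
    """EM re-estimation for y = Phi w + eps with w ~ N(0, I/alpha),
    eps ~ N(0, sigma2 I): E-step gives the Gaussian weight posterior,
    M-step updates the prior precision alpha and noise variance sigma2."""
    N, M = Phi.shape
    alpha, sigma2 = 1.0, 1.0
    for _ in range(n_iter):
        S = np.linalg.inv(alpha * np.eye(M) + Phi.T @ Phi / sigma2)  # posterior cov
        m = S @ Phi.T @ y / sigma2                                   # posterior mean
        alpha = M / (m @ m + np.trace(S))
        r = y - Phi @ m
        sigma2 = (r @ r + np.trace(Phi @ S @ Phi.T)) / N
    return m, alpha, sigma2

# Synthetic data from a known model: true noise std 0.1 (variance 0.01).
rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 200)
Phi = rbf_design(x, np.linspace(-2, 2, 5))
w_true = np.array([1.0, -1.0, 0.5, 2.0, -0.5])
y = Phi @ w_true + 0.1 * rng.normal(size=200)

m, alpha, sigma2 = em_regularised_rbf(Phi, y)
```

On this well-specified toy problem the re-estimated noise variance settles near the true value of 0.01.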
Radial Basis Functions: a Bayesian treatment
In: Advances in Neural Information Processing Systems 10, 1997
"... Bayesian methods have been successfully applied to regression and classification problems in multilayer perceptrons. We present a novel application of Bayesian techniques to Radial Basis Function networks by developing a Gaussian approximation to the posterior distribution which, for fixed basis functions ..."
Hidden Markov Model Framework Using Independent Component Analysis Mixture Model
"... This paper describes a novel method for the analysis of sequential data that exhibits strong non-Gaussianities. In particular, we extend the classical continuous hidden Markov model (HMM) by modeling the observation densities as a mixture of non-Gaussian distributions. In order to obtain a parametric ..."
... parametric representation of the densities, we apply the independent component analysis (ICA) mixture model to the observations such that each non-Gaussian mixture component is associated with a standard ICA. Under this new framework, we develop the re-estimation formulas for the three fundamental HMM ...
Wavelet-Based Nonparametric HMM’s: Theory and Applications
"... In this paper, we propose a new algorithm for nonparametric estimation of hidden Markov models (HMM’s). The algorithm is based on a “wavelet-shrinkage” density estimator for the state-conditional probability density functions of the HMM’s. It operates in an iterative fashion, similar to the EM re-estimation ..."
Adaptive Training Using Structured Transforms
In Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2004
"... Adaptive training is an important approach to train speech recognition systems on found, non-homogeneous, data. Standard adaptive training employs a single transform to represent unwanted acoustic variability for an utterance. A canonical model representing only the inherent speech variability may t ..."
Cited by 5 (4 self)
... framework. Two forms of transform are considered, cluster mean interpolation and constrained MLLR. Re-estimation formulae for estimating the canonical model using both maximum likelihood and minimum phone error training are presented. Experiments to compare ST to standard adaptive training schemes were ...
Relax Frame Independence Assumption for Standard HMMs by State-Dependent Auto-Regressive Feature Models
In ICASSP 2001 Proceedings
"... In this paper, we propose a new type of frame-based hidden Markov models (HMMs), in which a sequence of observations are generated using state-dependent autoregressive feature models. Based on this correlation model, it can be proved that expressing the probability of a sequence of observations as a ..."
... as a product of probabilities of decorrelated individual observations doesn't require the assumption of frame independence. Under the maximum likelihood (ML) criteria, we also derived re-estimation formulae for the parameters (mean vectors, covariance matrix, and diagonal regression matrices ...
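Under a state-dependent AR(1) observation model of the kind described, the sequence likelihood factorises by the chain rule into conditionals p(x_t | x_{t-1}, state), so no frame-independence assumption is needed, and ML re-estimation of a state's regression matrix and mean for a fixed alignment is ordinary linear regression. A single-state sketch with invented parameters (not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(4)
d, T = 2, 2000
B_true = np.array([[0.5, 0.1],
                   [-0.2, 0.6]])      # the state's regression (AR) matrix
mu_true = np.array([1.0, -0.5])

# Generate a sequence from one state's AR(1) observation model:
#   x_t = B x_{t-1} + mu + eps_t,  eps_t ~ N(0, 0.1^2 I)
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = B_true @ X[t - 1] + mu_true + 0.1 * rng.normal(size=d)

# The sequence likelihood factorises by the chain rule, so ML
# re-estimation for a fixed alignment is a regression of x_t on [1, x_{t-1}].
Z = np.hstack([np.ones((T - 1, 1)), X[:-1]])
W = np.linalg.solve(Z.T @ Z, Z.T @ X[1:]).T   # d x (d+1): [mu_hat | B_hat]
mu_hat, B_hat = W[:, 0], W[:, 1:]
```

In a full HMM the frames would first be weighted by state occupancies from the forward-backward pass, giving one such regression per state.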