Results 1 – 10 of 4,184
Maximum Likelihood Linear Transformations for HMM-Based Speech Recognition
 in Computer Speech and Language
, 1998
"... This paper examines the application of linear transformations for speaker and environmental adaptation in an HMM-based speech recognition system. In particular, transformations that are trained in a maximum likelihood sense on adaptation data are investigated. Other than in the form of a simple bias ..."
Cited by 570 (68 self)
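The affine mean mapping at the core of this line of work can be sketched compactly. This is a hedged illustration only: the function and variable names below are ours, and the paper's actual contribution, the maximum likelihood estimation of the transform from adaptation data, is not shown.

```python
# Illustrative sketch of MLLR-style mean adaptation (names ours, not the
# paper's): each Gaussian mean mu is mapped to A @ mu + b by a shared
# affine transform estimated on adaptation data.

def adapt_mean(mu, A, b):
    """Apply the affine transform: mu_hat[i] = sum_j A[i][j]*mu[j] + b[i]."""
    n = len(mu)
    return [sum(A[i][j] * mu[j] for j in range(n)) + b[i] for i in range(n)]

# A = identity reduces this to the "simple bias" case the abstract mentions.
identity = [[1.0, 0.0], [0.0, 1.0]]
bias = [0.5, -0.25]
mu = [1.0, 2.0]
print(adapt_mean(mu, identity, bias))  # bias-only adaptation: [1.5, 1.75]
```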
Adaptation Experiments on the SPINE Database with the Extended Maximum Likelihood Linear Transformation (EMLLT) Model
"... This paper applies the recently proposed Extended Maximum Likelihood Linear Transformation (EMLLT) model for inverse covariances in a Speaker Adaptive Training (SAT) context. The paper adapts standard algorithms for maximum likelihood estimation of linear transforms for mean, variance and feature sp ..."
Cited by 1 (0 self)
Large Vocabulary Conversational Speech Recognition With The Extended Maximum Likelihood Linear Transformation (EMLLT) Model
 in Proc. Eurospeech
, 2002
"... This paper applies the recently proposed Extended Maximum Likelihood Linear Transformation (EMLLT) model in a Speaker Adaptive Training (SAT) context on the Switchboard database. Adaptation is carried out with maximum likelihood estimation of linear transforms for the means, precisions (inverse cova ..."
Cited by 10 (3 self)
Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models
, 1995
An analysis of transformations
 Journal of the Royal Statistical Society, Series B (Methodological)
, 1964
"... In the analysis of data it is often assumed that observations y1, y2, ..., yn are independently normally distributed with constant variance and with expectations specified by a model linear in a set of parameters θ. In this paper we make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's. Inferences about the transformation and about the parameters of the linear model are made by computing the likelihood function and the relevant posterior distribution. The contributions of normality ..."
Cited by 1067 (3 self)
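The family of transformations this paper analyses (now commonly called the Box-Cox transform) is easy to state. The sketch below shows only the transform itself, under the assumption y > 0; the paper's subject, choosing the exponent by maximizing the likelihood, is not shown.

```python
import math

# Sketch of the power transformation studied in the paper (Box-Cox):
# y -> (y**lam - 1)/lam for lam != 0, and log(y) at lam = 0, which is
# its continuous limit. Assumes all observations are positive.

def power_transform(y, lam):
    if lam == 0.0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

print(power_transform([1.0, 4.0, 9.0], 0.5))  # 2*(sqrt(y) - 1): [0.0, 2.0, 4.0]
```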
Statistical Analysis of Cointegrated Vectors
 Journal of Economic Dynamics and Control
, 1988
"... We consider a nonstationary vector autoregressive process which is integrated of order 1, and generated by i.i.d. Gaussian errors. We then derive the maximum likelihood estimator of the space of cointegration vectors and the likelihood ratio test of the hypothesis that it has a given number of dimen ..."
Cited by 2749 (12 self)
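A toy simulation makes the setting concrete: two I(1) series sharing a common stochastic trend, so a linear combination of them is stationary. This sketch (all names ours) only illustrates the phenomenon; it does not implement the paper's maximum likelihood estimator or likelihood ratio test.

```python
import random

# Simulate a cointegrated pair: x is a random walk (integrated of order 1)
# and y tracks x up to stationary noise, so the spread y - x is stationary.

random.seed(0)
n = 2000
x = [0.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0.0, 1.0))    # random walk: I(1)
y = [xi + random.gauss(0.0, 0.5) for xi in x]   # cointegrated with x

spread = [yi - xi for yi, xi in zip(y, x)]      # candidate cointegration relation

def variance(s):
    m = sum(s) / len(s)
    return sum((v - m) ** 2 for v in s) / len(s)

# The levels wander (large variance) while the spread stays bounded.
print(variance(x) > 10 * variance(spread))
```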
Hierarchical mixtures of experts and the EM algorithm
, 1993
"... We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM’s). Learning is treated as a maximum likelihood ..."
Cited by 885 (21 self)
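The gate-weighted combination of expert predictions can be sketched in a flat, one-level form. This is our simplification: the paper's architecture is a tree of such gates fitted by EM, which is not shown, and all names below are ours.

```python
import math

# Sketch of the gated mixture idea: a softmax gating network produces
# mixing coefficients g_i(x), and the output is the gate-weighted sum
# of (here, linear) expert predictions.

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def mixture_predict(x, gate_weights, experts):
    """x: scalar input; gate_weights: one gating score weight per expert;
    experts: list of (slope, intercept) linear models."""
    g = softmax([w * x for w in gate_weights])
    preds = [a * x + b for a, b in experts]
    return sum(gi * pi for gi, pi in zip(g, preds))

# With equal gate scores the prediction is the plain average of experts.
print(mixture_predict(0.0, [1.0, -1.0], [(2.0, 1.0), (0.0, 3.0)]))  # (1+3)/2 = 2.0
```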
Interior Point Methods in Semidefinite Programming with Applications to Combinatorial Optimization
 SIAM Journal on Optimization
, 1993
"... We study the semidefinite programming problem (SDP), i.e. the problem of optimization of a linear function of a symmetric matrix subject to linear equality constraints and the additional condition that the matrix be positive semidefinite. First we review the classical cone duality as specialized to SDP. Next we present an interior point algorithm which converges to the optimal solution in polynomial time. The approach is a direct extension of Ye's projective method for linear programming. We also argue that most known interior point methods for linear programs can be transformed in a ..."
Cited by 547 (12 self)
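For the smallest nontrivial case, the SDP feasibility conditions named in this abstract can be checked directly. The sketch below (all names ours) verifies positive semidefiniteness of a 2x2 candidate and evaluates the linear functional trace(AX); it is not an interior point solver.

```python
# Sketch of the two constraint types in an SDP: X must be positive
# semidefinite, and linear functionals trace(A_k X) are constrained.
# For 2x2 symmetric X = [[a, b], [b, c]], X is PSD iff a >= 0, c >= 0
# and the determinant a*c - b*b >= 0.

def is_psd_2x2(X):
    (a, b), (b2, c) = X
    assert b == b2, "X must be symmetric"
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def trace_inner(A, X):
    """trace(A @ X) for 2x2 matrices, the linear functional used in SDP."""
    return sum(A[i][j] * X[j][i] for i in range(2) for j in range(2))

X = [[2.0, 1.0], [1.0, 1.0]]   # candidate: det = 2 - 1 = 1 >= 0, so PSD
A = [[1.0, 0.0], [0.0, 1.0]]   # constraint matrix (identity gives the trace)
print(is_psd_2x2(X), trace_inner(A, X))  # True 3.0
```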
Mixtures of Probabilistic Principal Component Analysers
, 1998
"... Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a com ..."
Cited by 532 (6 self)
"... maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context ..."
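The Gaussian latent variable model behind probabilistic PCA implies a specific observed covariance, C = W Wᵀ + σ²I; the sketch below (names ours) forms it for a toy loading matrix. The EM fitting of mixtures of such models, the paper's subject, is not shown.

```python
# Sketch of the probabilistic PCA model covariance: observations are
# modeled as x = W z + mu + noise with isotropic noise variance sigma2,
# which implies covariance C = W @ W.T + sigma2 * I.

def ppca_covariance(W, sigma2):
    """W: d x q loading matrix (list of rows); returns C = W W^T + sigma2*I."""
    d, q = len(W), len(W[0])
    C = [[sum(W[i][k] * W[j][k] for k in range(q)) for j in range(d)]
         for i in range(d)]
    for i in range(d):
        C[i][i] += sigma2
    return C

W = [[1.0], [2.0]]              # d=2 observed dims, q=1 latent dim
print(ppca_covariance(W, 0.5))  # [[1.5, 2.0], [2.0, 4.5]]
```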
Space-time block codes from orthogonal designs
 IEEE Trans. Inform. Theory
, 1999
"... Abstract — We introduce space–time block coding, a new paradigm for communication over Rayleigh fading channels using multiple transmit antennas. Data is encoded using a space–time block code and the encoded data is split into n streams which are simultaneously transmitted using n transmit antennas. ..."
Cited by 1524 (42 self)
"... of the space–time block code and gives a maximum-likelihood decoding algorithm which is based only on linear processing at the receiver. Space–time block codes are designed to achieve the maximum diversity order for a given number of transmit and receive antennas subject to the constraint of having a simple ..."
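The simplest orthogonal design, Alamouti's two-antenna scheme, shows concretely what "linear processing at the receiver" means here. This noise-free sketch (names ours) encodes two symbols across two slots and recovers them by linear combining, assuming the receiver knows the channel gains.

```python
# Sketch of the two-transmit-antenna orthogonal design (Alamouti's
# scheme), the simplest instance of the codes in the paper:
#   slot 1: antenna 1 sends s1,        antenna 2 sends s2
#   slot 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1)
# With channel gains h1, h2 constant over both slots (noise omitted),
# linear combining recovers each symbol scaled by |h1|^2 + |h2|^2.

def alamouti(s1, s2, h1, h2):
    r1 = h1 * s1 + h2 * s2                            # received, slot 1
    r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()   # received, slot 2
    # Linear combining using channel knowledge:
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    scale = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / scale, s2_hat / scale

s1_hat, s2_hat = alamouti(1 + 1j, -1 + 0j, 0.8 - 0.3j, 0.2 + 0.9j)
print(s1_hat, s2_hat)  # recovers the transmitted symbols in the noise-free case
```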