Results 1–6 of 6
Variational Bayes Logistic Regression as Regularized Fusion for NIST SRE 2010
 in Proc. Odyssey: The Speaker and Language Recognition Workshop, 2012
"... Fusion of the base classifiers is seen as a way to achieve high performance in stateoftheart speaker verification systems. Typically, we are looking for base classifiers that would be complementary. We might also be interested in reinforcing good base classifiers by including others that are sim ..."
Abstract

Cited by 10 (8 self)
Fusion of base classifiers is seen as a way to achieve high performance in state-of-the-art speaker verification systems. Typically, we look for base classifiers that are complementary. We might also be interested in reinforcing good base classifiers by including others that are similar to them. In any case, the final ensemble size is typically small and has to be formed based on rules of thumb. We are interested in finding a subset of classifiers with good generalization performance. We approach the problem from a sparse learning point of view. We assume that the true, but unknown, fusion weights are sparse. As a practical solution, we regularize the weighted logistic regression loss function with elastic-net and LASSO constraints. However, all regularization methods have an additional parameter that controls the amount of regularization employed, and this needs to be tuned separately. In this work, we use a variational Bayes approach to obtain sparse solutions automatically, without additional cross-validation. The variational Bayes method improves on the baseline in 3 out of 4 subconditions. Index Terms: logistic regression, regularization, compressed sensing, linear fusion, speaker verification
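The sparse-fusion idea in this abstract can be sketched with off-the-shelf tools: an L1 (LASSO-style) penalty on a logistic regression over stacked base-classifier scores drives the weights of uninformative classifiers to exactly zero. The data and setup below are illustrative assumptions, not the paper's actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic setup: 500 trials scored by 5 base classifiers, but only
# the first 2 carry signal; the remaining 3 are pure noise.  Fusion
# weights are learned with an L1 penalty so that uninformative
# classifiers can be pruned from the ensemble automatically.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)             # 0 = impostor, 1 = target
signal = labels[:, None] + 0.5 * rng.standard_normal((500, 2))
noise = rng.standard_normal((500, 3))
scores = np.hstack([signal, noise])               # shape (trials, classifiers)

fusion = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
fusion.fit(scores, labels)

weights = fusion.coef_.ravel()
selected = np.flatnonzero(np.abs(weights) > 1e-8)
print("nonzero fusion weights at indices:", selected)
```

The regularization strength `C` here plays the role of the tuning parameter the abstract mentions; the paper's point is that a variational Bayes treatment removes the need to cross-validate it.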
Sparse Classifier Fusion for Speaker Verification
 in IEEE Transactions on Audio, Speech and Language Processing, 2013
"... Abstract—Stateoftheart speaker verification systems take advantage of a number of complementary base classifiers by fusing them to arrive at reliable verification decisions. In speaker verification, fusion is typically implemented as a weighted linear combination of the base classifier scores, wh ..."
Abstract

Cited by 9 (9 self)
State-of-the-art speaker verification systems take advantage of a number of complementary base classifiers by fusing them to arrive at reliable verification decisions. In speaker verification, fusion is typically implemented as a weighted linear combination of the base classifier scores, where the combination weights are estimated using a logistic regression model. An alternative approach to fusion is classifier ensemble selection, which can be seen as sparse regularization applied to logistic regression. Even though score fusion has been extensively studied in speaker verification, classifier ensemble selection is much less studied. In this study, we extensively evaluate sparse classifier fusion on a collection of twelve I4U spectral subsystems on the NIST 2008 and 2010 speaker recognition evaluation (SRE) corpora. Index Terms: classifier ensemble selection, experimentation, linear fusion, speaker verification
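The "weighted linear combination of the base classifier scores" described here reduces to a dot product plus a bias, thresholded to make a decision. A minimal sketch, with all names and numbers made up for illustration:

```python
import numpy as np

# Linear score fusion: each subsystem produces one score per trial;
# the fused score is a weighted sum plus a bias, and the trial is
# accepted when the fused score is positive.
def fuse(subsystem_scores, weights, bias):
    """subsystem_scores: (trials, subsystems) array of raw scores."""
    return subsystem_scores @ weights + bias

scores = np.array([[2.1, 1.8, -0.3],    # trial 1: two subsystems say "target"
                   [-1.5, -2.0, 0.4]])  # trial 2: mostly "impostor"
weights = np.array([0.6, 0.5, 0.1])     # e.g. estimated by logistic regression
bias = -0.2

fused = fuse(scores, weights, bias)
decisions = fused > 0                   # accept when fused score is positive
print(decisions)                        # [ True False]
```

In practice the weights and bias are trained on a held-out development set; ensemble selection then corresponds to forcing some of these weights to zero.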
Regularized Logistic Regression Fusion for Speaker Verification
"... Fusion of the base classifiers is seen as the way to achieve stateofthe art performance in the speaker verfication systems. Standard approach is to pose the fusion problem as the linear binary classification task. Most successful loss function in speaker verification fusion has been the weighted lo ..."
Abstract

Cited by 6 (6 self)
Fusion of base classifiers is seen as the way to achieve state-of-the-art performance in speaker verification systems. The standard approach is to pose the fusion problem as a linear binary classification task. The most successful loss function in speaker verification fusion has been weighted logistic regression, popularized by the FoCal toolkit. However, it is known that optimizing logistic regression can overfit severely without appropriate regularization. In addition, classifier subset selection can be achieved by using an external 0/1 loss function on the best subset. In this work, we propose LASSO-based regularization of the FoCal cost function, integrating improved performance and classifier subset selection into one optimization task. The proposed method achieves a 51% relative improvement in actual DCF over the FoCal baseline. Index Terms: logistic regression, regularization, compressed sensing, linear fusion, speaker verification
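The prior-weighted logistic regression objective popularized by FoCal can be sketched as follows. This uses the natural log, whereas FoCal's Cllr formulation uses log base 2 with a scaling; the function name, effective prior, and scores below are illustrative assumptions.

```python
import numpy as np

# Prior-weighted logistic regression loss for fusion: target and
# impostor trials are reweighted so each class contributes according
# to an effective target prior p, independent of the class counts in
# the training set.
def weighted_logistic_loss(fused_scores, labels, p=0.5):
    """labels: 1 for target trials, 0 for impostor trials."""
    logit_p = np.log(p / (1 - p))
    llr = fused_scores + logit_p                  # shift by prior log-odds
    tar = llr[labels == 1]
    non = llr[labels == 0]
    loss_tar = np.mean(np.log1p(np.exp(-tar)))    # -log sigmoid(llr)
    loss_non = np.mean(np.log1p(np.exp(non)))     # -log sigmoid(-llr)
    return p * loss_tar + (1 - p) * loss_non

scores = np.array([2.0, 1.0, -1.5, -2.5])
labels = np.array([1, 1, 0, 0])
print(round(weighted_logistic_loss(scores, labels), 4))  # 0.1801
```

The paper's proposal amounts to adding a LASSO term, lambda * ||w||_1, to this cost before minimizing over the fusion weights w.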
Classifier Subset Selection and Fusion for Speaker Verification
 in Proc. ICASSP, 2011
"... Stateoftheart speaker verification systems consists of a number of complementary subsystems whose outputs are fused, to arrive at more accurate and reliable verification decision. In speaker verification, fusion is typically implemented as a linear combination of the subsystem scores. Parameters ..."
Abstract

Cited by 6 (5 self)
State-of-the-art speaker verification systems consist of a number of complementary subsystems whose outputs are fused to arrive at a more accurate and reliable verification decision. In speaker verification, fusion is typically implemented as a linear combination of the subsystem scores. Parameters of the linear model are commonly estimated using the logistic regression method, as implemented in the popular FoCal toolkit. In this paper, we study the simultaneous use of classifier selection and fusion. We study four alternative fusion strategies and three score warping techniques, and provide interesting experimental bounds on optimal classifier subset selection. Detailed experiments are carried out on the NIST 2008 and 2010 SRE corpora. Index Terms: classifier selection, linear fusion
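For a small ensemble, an experimental bound on subset selection of the kind mentioned above can be obtained by exhaustively fusing every nonempty subset and keeping the best. A toy sketch with synthetic data, equal-weight fusion, and hypothetical names:

```python
from itertools import combinations
import numpy as np

# Oracle subset selection: try every nonempty subset of classifiers,
# fuse with equal weights, and keep the subset with the lowest error.
def best_subset(scores, labels, metric):
    n = scores.shape[1]
    best = (None, np.inf)
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            fused = scores[:, subset].mean(axis=1)  # equal-weight fusion
            err = metric(fused, labels)
            if err < best[1]:
                best = (subset, err)
    return best

def error_rate(fused, labels):
    return np.mean((fused > 0).astype(int) != labels)

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=200)
good = (2 * labels - 1) + 0.5 * rng.standard_normal(200)  # informative
bad = rng.standard_normal(200)                            # pure noise
scores = np.column_stack([good, bad])

subset, err = best_subset(scores, labels, error_rate)
print(subset, err)
```

The search is exponential in the ensemble size, which is why such exhaustive results serve as bounds rather than as a practical selection method.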
Experiments with large scale regularized fusion on NIST SRE 2010
"... Fusion of the base classifiers is seen as the way to achieve high performance in stateoftheart speaker verification systems. Typically, we are looking for base classifiers that would be complementary. We might also be interested in reinforcing good base classifiers by including others that are s ..."
Abstract
Fusion of base classifiers is seen as the way to achieve high performance in state-of-the-art speaker verification systems. Typically, we look for base classifiers that are complementary. We might also be interested in reinforcing good base classifiers by including others that are similar to them. In any case, the final ensemble size is typically small and has to be formed based on rules of thumb. In this paper, we create a very large ensemble, consisting of the I4U and LIA submissions together with an additional state-of-the-art i-vector system; in total, we have 17 base classifiers in our ensemble. We are interested in finding the subset of classifiers with good generalization performance. We approach the problem from a sparse learning point of view. We assume that the true, but unknown, fusion weights are actually sparse. As a practical solution, we regularize the weighted logistic regression loss function with the elastic-net constraint. Though sparse solutions can be easily obtained using the so-called least absolute shrinkage and selection operator (LASSO), it does not take into account high correlation between classifiers. Elastic-net, on the other hand, is a compromise between the LASSO and ridge regression constraints: while ridge regression cannot produce sparse solutions, elastic-net can. By using a sparseness-enforcing constraint, we are able to improve over the unregularized solution in all but the tel-tel condition. Index Terms: logistic regression, regularization, compressed sensing, linear fusion, speaker verification
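A minimal sketch of the elastic-net compromise described above, using scikit-learn's `LogisticRegression` with `penalty='elasticnet'` on synthetic scores: the two highly correlated informative classifiers both retain weight (where pure LASSO tends to keep only one), while the noise classifier is suppressed. Data and parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# l1_ratio interpolates between ridge (l1_ratio=0, no sparsity) and
# LASSO (l1_ratio=1); intermediate values keep groups of correlated
# classifiers together while still pruning uninformative ones.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=400)
base = labels + 0.6 * rng.standard_normal(400)
scores = np.column_stack([
    base,                                   # informative classifier
    base + 0.05 * rng.standard_normal(400), # near-duplicate of it
    rng.standard_normal(400),               # pure noise
])

enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=0.5, max_iter=5000)
enet.fit(scores, labels)
w = enet.coef_.ravel()
print("fusion weights:", np.round(w, 3))
```

Here the L2 component spreads weight across the correlated pair instead of arbitrarily dropping one of them, which is exactly the behavior the abstract motivates.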
Sparse Classifier Fusion for Speaker Verification
 manuscript, IEEE Transactions on Audio, Speech and Language Processing
"... Abstract—Stateoftheart speaker verification systems take advantage of a number of complementary base classifiers by fusing them to arrive at reliable verification decisions. In speaker verification, fusion is typically implemented as a weighted linear combination of the base classifier scores, wh ..."
Abstract
State-of-the-art speaker verification systems take advantage of a number of complementary base classifiers by fusing them to arrive at reliable verification decisions. In speaker verification, fusion is typically implemented as a weighted linear combination of the base classifier scores, where the combination weights are estimated using a logistic regression model. An alternative approach to fusion is classifier ensemble selection, which can be seen as sparse regularization applied to logistic regression. Even though score fusion has been extensively studied in speaker verification, classifier ensemble selection is much less studied. In this study, we extensively evaluate sparse classifier fusion on a collection of twelve I4U spectral subsystems on the NIST 2008 and 2010 speaker recognition evaluation (SRE) corpora. Index Terms: classifier ensemble selection, linear fusion, speaker verification, experimentation