Results 1 - 5 of 5
ASVspoof 2015: the first automatic speaker verification spoofing and countermeasures challenge
- in INTERSPEECH, 2015
Abstract - Cited by 2 (2 self)
An increasing number of independent studies have confirmed the vulnerability of automatic speaker verification (ASV) technology to spoofing. However, in comparison to that involving other biometric modalities, spoofing and countermeasure research for ASV is still in its infancy. A current barrier to progress is the lack of standards, which impedes the comparison of results generated by different researchers. The ASVspoof initiative aims to overcome this bottleneck through the provision of standard corpora, protocols and metrics to support a common evaluation. This paper introduces the first edition, summarises the results and discusses directions for future challenges and research.
A comparison of features for synthetic speech detection
- in INTERSPEECH, 2015
Abstract - Cited by 1 (1 self)
The performance of biometric systems based on automatic speaker recognition technology is severely degraded by spoofing attacks with synthetic speech generated using different voice conversion (VC) and speech synthesis (SS) techniques. Various countermeasures have been proposed to detect this type of attack, and in this context, choosing an appropriate feature extraction technique for capturing relevant information from speech is an important issue. This paper presents a concise experimental review of different features for the synthetic speech detection task. The wide variety of features considered in this study includes previously investigated features as well as some other potentially useful features for characterizing real and synthetic speech. The experiments are conducted on the recently released ASVspoof 2015 corpus containing speech data from a large number of VC and SS techniques. Comparative results using two different classifiers indicate that features representing spectral information in the high-frequency region, dynamic information of speech, and detailed information related to subband characteristics are considerably more useful in detecting synthetic speech. Index Terms: anti-spoofing, ASVspoof 2015, feature extraction, countermeasures
Available online at www.sciencedirect.com, 2015
Abstract
... technology is vulnerability of the recognizers to intentional circumvention (Wu et al., 2015). In the first case, authentication, this refers to a dedicated effort to manipulate one's speech so that an ASV system would misclassify the attacker's sample as originating from the target (client). There are ...
Automatic versus Human Speaker Verification: The Case of Voice Mimicry
Abstract
In this work, we compare the performance of three modern speaker verification systems and non-expert human listeners in the presence of voice mimicry. Our goal is to gain insight into how vulnerable speaker verification systems are to mimicry attacks and to compare this to the performance of human listeners. We study both a traditional Gaussian mixture model-universal background model (GMM-UBM) and an i-vector based classifier with cosine scoring and probabilistic linear discriminant analysis (PLDA) scoring. For the studied material in the Finnish language, the mimicry attack slightly decreased the equal error rate (EER) for the GMM-UBM from 10.83 to 10.31, while for the i-vector systems the EER increased from 6.80 to 13.76 and from 4.36 to 7.38. The performance of the human listening panel shows that imitated speech increases the difficulty of the speaker verification task. It is even more difficult to recognize a person who is intentionally concealing his or her identity. For Impersonator A, the average listener made 8 errors from 34 trials, while the automatic systems made 6 errors on the same set. For Impersonator B, the average listener made 7 errors from 28 trials, while the automatic systems made 7 to 9 errors. A statistical analysis of the listener performance was also conducted. We found a statistically significant association, with p = 0.00019 and R2 = 0.59, between listener accuracy and self-reported factors only when familiar voices were present in the test.
Spoofing and countermeasures for speaker verification: a survey
Abstract
While biometric authentication has advanced significantly in recent years, evidence shows the technology can be susceptible to malicious spoofing attacks. The research community has responded with dedicated countermeasures which aim to detect and deflect such attacks. Even if the literature shows that they can be effective, the problem is far from being solved; biometric systems remain vulnerable to spoofing. Despite a growing momentum to develop spoofing countermeasures for automatic speaker verification, now that the technology has matured sufficiently to support mass deployment in an array of diverse applications, greater effort will be needed in the future to ensure adequate protection against spoofing. This article provides a survey of past work and identifies priority research directions for the future. We summarise previous studies involving impersonation, replay, speech synthesis and voice conversion spoofing attacks and more recent efforts to develop dedicated countermeasures. The survey shows that future research should address the lack of standard datasets and the over-fitting of existing countermeasures to specific, known spoofing attacks.