Synthesizing speech test cases with Text-to-Speech? An empirical study on the false alarms in automated speech recognition testing

Julia Kaiwen Lau, Kelvin Kai Wen Kong, Julian Hao Yong, Per Hoong Tan, Zhou Yang, Zi Qian Yong, Joshua Chern Wey Low, Chun Yong Chong, Mei Kuan Lim, David Lo

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

Recent studies have proposed using Text-to-Speech (TTS) systems to automatically synthesise speech test cases at scale, uncovering a large number of failures in automated speech recognition (ASR) systems. However, the failures uncovered by synthetic test cases may not reflect the actual performance of an ASR system when it transcribes human audio; we refer to such failures as false alarms. Given a failed test case synthesised by a TTS system, consisting of TTS-generated audio and the corresponding ground-truth text, we feed human audio stating the same text to the ASR system. If the human audio is correctly transcribed, an instance of a false alarm is detected. In this study, we investigate false alarm occurrences in five popular ASR systems using synthetic audio generated from four TTS systems and human audio obtained from two commonly used datasets. Our results show that Deepspeech yields the fewest false alarms, while Wav2vec2 yields the most. On average, false alarm rates range from 21% to 34% across the five ASR systems. Among the four TTS systems, Google TTS produces the fewest false alarms (17%) and Espeak TTS the most (32%). Additionally, we build a false alarm estimator that flags potential false alarms and achieves promising results: a precision of 98.3%, a recall of 96.4%, an accuracy of 98.5%, and an F1 score of 97.3%. Our study provides insight into the appropriate selection of TTS systems for generating high-quality speech to test ASR systems. Moreover, the false alarm estimator offers a way to minimise the impact of false alarms and helps developers choose suitable test inputs when evaluating ASR systems. The source code used in this paper is publicly available on GitHub at https://github.com/julianyonghao/FAinASRtest.
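The false-alarm check described in the abstract can be summarised in a few lines of code. The sketch below is a minimal illustration only, not the authors' implementation (see their GitHub repository for that): the `asr_transcribe` callable and the use of the `jiwer` library with an exact-match criterion (word error rate of zero) are assumptions introduced here for clarity.

```python
# Minimal sketch of the false-alarm check from the abstract.
# Assumptions (not from the paper's code): `asr_transcribe` wraps the ASR
# system under test, and jiwer's word error rate (WER) with an exact-match
# threshold decides whether a transcription is "correct".
import jiwer


def is_false_alarm(tts_audio, human_audio, ground_truth, asr_transcribe):
    """Return True if a failed TTS test case is a false alarm, i.e. the
    ASR system transcribes *human* audio of the same text correctly."""
    # Step 1: the TTS-generated test case must actually fail.
    tts_hypothesis = asr_transcribe(tts_audio)
    if jiwer.wer(ground_truth, tts_hypothesis) == 0.0:
        return False  # not a failed test case, so no alarm at all

    # Step 2: feed human audio stating the same text to the same ASR system.
    human_hypothesis = asr_transcribe(human_audio)

    # If the human audio is transcribed correctly, the synthetic failure
    # does not reflect real performance: flag it as a false alarm.
    return jiwer.wer(ground_truth, human_hypothesis) == 0.0
```

As a sanity check on the reported estimator results, the F1 score follows from the stated precision and recall: 2 × 0.983 × 0.964 / (0.983 + 0.964) ≈ 0.973, matching the reported 97.3%.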

Original language: English
Title of host publication: Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis
Editors: René Just, Gordon Fraser
Place of Publication: New York, NY, USA
Publisher: Association for Computing Machinery (ACM)
Pages: 1169-1181
Number of pages: 13
ISBN (Electronic): 9798400702211
DOIs
Publication status: Published - 12 Jul 2023
Event: International Symposium on Software Testing and Analysis 2023 - Seattle, United States of America
Duration: 17 Jul 2023 - 21 Jul 2023
Conference number: 32nd
https://dl.acm.org/doi/proceedings/10.1145/3597926 (Proceedings)
https://conf.researchr.org/home/issta-2023 (Website)

Conference

Conference: International Symposium on Software Testing and Analysis 2023
Abbreviated title: ISSTA 2023
Country/Territory: United States of America
City: Seattle
Period: 17/07/23 - 21/07/23

Keywords

  • Automated Speech Recognition
  • False Alarms
  • Software Testing
