Investigating pre-trained audio encoders in the low-resource condition

Hao Yang, Jinming Zhao, Reza Haffari, Ehsan Shareghi

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

1 Citation (Scopus)

Abstract

Pre-trained speech encoders have been central to pushing state-of-the-art results across various speech understanding and generation tasks. Nonetheless, the capabilities of these encoders in low-resource settings have yet to be thoroughly explored. To address this, we conduct a comprehensive set of experiments using a representative set of three state-of-the-art encoders (Wav2vec2, WavLM, Whisper) in the low-resource setting across seven speech understanding and generation tasks. We provide various quantitative and qualitative analyses on task performance, convergence speed, and representational properties of the encoders. We observe a connection between the pre-training protocols of these encoders and the way in which they capture information in their internal layers. In particular, we observe that the Whisper encoder exhibits the greatest low-resource capabilities on content-driven tasks in terms of both performance and convergence speed.

Original language: English
Title of host publication: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2023
Editors: Naomi Harte, Julie Berndsen, Gareth Jones
Place of Publication: Dublin, Ireland
Publisher: International Speech Communication Association (ISCA)
Pages: 1498-1502
Number of pages: 5
Publication status: Published - 2023
Event: Annual Conference of the International Speech Communication Association 2023 - Dublin, Ireland
Duration: 20 Aug 2023 - 24 Aug 2023
Conference number: 24th
https://interspeech2023.org/ (Website)
https://www.isca-speech.org/archive/interspeech_2023/index.html (Proceedings)

Conference

Conference: Annual Conference of the International Speech Communication Association 2023
Abbreviated title: Interspeech 2023
Country/Territory: Ireland
City: Dublin
Period: 20/08/23 - 24/08/23

Keywords

  • low-resource setting
  • speech encoders
  • speech understanding