Speech technology for unwritten languages

Odette Scharenborg, Laurent Besacier, Alan Black, Mark Hasegawa-Johnson, Florian Metze, Graham Neubig, Sebastian Stüker, Pierre Godard, Markus Müller, Lucas Ondel, Shruti Palaskar, Philip Arthur, Francesco Ciannella, Mingxing Du, Elin Larsen, Danny Merkx, Rachid Riad, Liming Wang, Emmanuel Dupoux

Research output: Contribution to journal › Article › Research › peer-review

2 Citations (Scopus)

Abstract

Speech technology plays an important role in our everyday life. Among other applications, speech is used for human-computer interaction, for instance in information retrieval and on-line shopping. For an unwritten language, however, speech technology is difficult to build, because it cannot be assembled from the standard combination of pre-trained speech-to-text and text-to-speech subsystems. The research presented in this article takes the first steps towards speech technology for unwritten languages. Specifically, the aim of this work was 1) to learn speech-to-meaning representations without using text as an intermediate representation, and 2) to test whether the learned representations are sufficient to regenerate speech or translated text, or to retrieve images that depict the meaning of an utterance in an unwritten language. The results suggest that building systems that go directly from speech to meaning and from meaning to speech, bypassing the need for text, is possible.
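
The abstract is the only technical description available in this record, so the sketch below is purely illustrative and not the authors' implementation. It shows one common way to realize the speech-to-image retrieval test mentioned above: a PyTorch dual encoder that maps speech features and image features into a shared "meaning" space, trained with a within-batch ranking loss so that an utterance can retrieve images depicting its content without any text. All module names, layer sizes, and the margin value are assumptions made for this sketch.

```python
# Illustrative sketch only (not the paper's architecture): a dual encoder that
# embeds speech and images in a shared space for text-free image retrieval.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeechEncoder(nn.Module):
    """Encodes a sequence of acoustic frames (e.g. MFCCs) into one embedding."""

    def __init__(self, n_mfcc=39, hidden=256, embed_dim=512):
        super().__init__()
        self.rnn = nn.GRU(n_mfcc, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, embed_dim)

    def forward(self, frames):                 # frames: (batch, time, n_mfcc)
        _, h = self.rnn(frames)                # h: (2 * layers, batch, hidden)
        h = torch.cat([h[-2], h[-1]], dim=-1)  # last layer, both directions
        return F.normalize(self.proj(h), dim=-1)


class ImageEncoder(nn.Module):
    """Projects a precomputed image feature vector (e.g. from a CNN) into the
    same shared space as the speech embedding."""

    def __init__(self, img_dim=2048, embed_dim=512):
        super().__init__()
        self.proj = nn.Linear(img_dim, embed_dim)

    def forward(self, img_feats):              # img_feats: (batch, img_dim)
        return F.normalize(self.proj(img_feats), dim=-1)


def contrastive_loss(speech_emb, image_emb, margin=0.2):
    """Triplet-style ranking loss: matching speech/image pairs should score
    higher than mismatched pairs within the batch."""
    scores = speech_emb @ image_emb.t()        # (batch, batch) cosine similarities
    pos = scores.diag().unsqueeze(1)           # similarity of the true pairs
    cost_img = (margin + scores - pos).clamp(min=0)      # wrong image per utterance
    cost_spk = (margin + scores - pos.t()).clamp(min=0)  # wrong utterance per image
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    cost_img = cost_img.masked_fill(mask, 0)
    cost_spk = cost_spk.masked_fill(mask, 0)
    return cost_img.mean() + cost_spk.mean()


if __name__ == "__main__":
    speech_enc, image_enc = SpeechEncoder(), ImageEncoder()
    frames = torch.randn(8, 300, 39)           # dummy batch: 8 utterances, 300 frames
    img_feats = torch.randn(8, 2048)           # dummy batch: 8 paired image vectors
    loss = contrastive_loss(speech_enc(frames), image_enc(img_feats))
    loss.backward()                            # trainable end to end, no text involved
    print(f"loss = {loss.item():.3f}")
```

At test time, retrieval would rank candidate images by cosine similarity to the speech embedding; the same shared-space idea underlies the abstract's claim that meaning can be reached directly from speech, bypassing text.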

Original language: English
Pages (from-to): 964-975
Number of pages: 12
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 28
DOIs
Publication status: Published - 13 Feb 2020
Externally published: Yes

Keywords

  • automatic speech recognition
  • image retrieval
  • speech processing
  • speech synthesis
  • unsupervised learning
