Neural speech translation using lattice transformations and graph networks

Daniel Beck, Trevor Cohn, Gholamreza Haffari

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research

2 Citations (Scopus)


Speech translation systems usually follow a pipeline approach, using word lattices as an intermediate representation. However, previous work assumes access to the original transcriptions used to train the ASR system, which can limit applicability in real scenarios. In this work, we propose an approach for speech translation through lattice transformations and neural models based on graph networks. Experimental results show that our approach reaches competitive performance without relying on transcriptions, while also being orders of magnitude faster than previous work.
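To illustrate the kind of lattice transformation the abstract refers to, the sketch below converts an ASR word lattice (a directed acyclic graph whose edges carry word hypotheses) into a word-node graph suitable for a graph-network encoder. This is a hedged, minimal illustration of the general technique, not the authors' code: the `lattice_to_graph` function, the toy lattice, and the edge-to-node ("line graph" style) conversion are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's implementation): each lattice edge,
# which carries a word hypothesis, becomes a node in the output graph, and
# two word nodes are connected when one edge can directly follow the other
# in the lattice. Graph-network encoders can then operate on word nodes.

from collections import defaultdict

def lattice_to_graph(edges):
    """Convert lattice edges (src_state, dst_state, word) into a
    word-node graph: returns the word nodes and the directed
    adjacency pairs between them."""
    nodes = [word for _, _, word in edges]
    # Index outgoing lattice edges by their source state.
    out_by_state = defaultdict(list)
    for i, (src, _, _) in enumerate(edges):
        out_by_state[src].append(i)
    # Word node i connects to word node j when edge i ends
    # where edge j begins.
    adj = []
    for i, (_, dst, _) in enumerate(edges):
        for j in out_by_state[dst]:
            adj.append((i, j))
    return nodes, adj

# Toy lattice with two competing ASR hypotheses for the second word.
lattice = [(0, 1, "the"), (1, 2, "ship"), (1, 2, "sheep"), (2, 3, "sails")]
nodes, adj = lattice_to_graph(lattice)
print(nodes)  # ['the', 'ship', 'sheep', 'sails']
print(adj)    # [(0, 1), (0, 2), (1, 3), (2, 3)]
```

The resulting node list and adjacency pairs are exactly the inputs a graph neural network layer consumes: node features (here, words to be embedded) plus directed edges for message passing.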

Original language: English
Title of host publication: EMNLP-IJCNLP 2019 - Graph-Based Methods for Natural Language Processing - Proceedings of the Thirteenth Workshop
Editors: Dmitry Ustalov, Swapna Somasundaran, Peter Jansen, Goran Glavaš, Martin Riedl, Mihai Surdeanu, Michalis Vazirgiannis
Place of publication: Stroudsburg PA USA
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 6
ISBN (electronic): 9781950737864
Publication status: Published - 2019
Event: Workshop on Graph-Based Methods for Natural Language Processing 2019 - Hong Kong, Hong Kong
Duration: 4 Nov 2019 - 4 Nov 2019
Conference number: 13th


Conference: Workshop on Graph-Based Methods for Natural Language Processing 2019
Abbreviated title: TextGraphs 2019
Country/Territory: Hong Kong
City: Hong Kong
