Neural data-to-text generation: An encoder-decoder structure with Multi-Candidate-based Context Module

Jing Ming Guo, Koksheik Wong, Bo Ruei Cheng, Chen Hung Chung

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research

1 Citation (Scopus)

Abstract

Data-to-text generation is typically addressed with an encoder-decoder architecture, in which a context module supplies the information the decoder should attend to at each decoding step. However, a single generated sentence often involves multiple entities and elements, so we conjecture that this architecture leaves room for improvement on the data-to-text generation task. This paper proposes the Multi-Candidate-based Context Module, which uses multiple candidates to attend to several entities and their records simultaneously. Experiments on the recently released Rotowire dataset confirm the effectiveness of the multi-candidate concept and demonstrate an improvement over the state of the art.
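
The abstract describes the module only at a high level, so the following is a hypothetical PyTorch sketch of the general idea rather than the paper's actual implementation: several candidate attention distributions are computed over the encoded table records, and the resulting candidate contexts are fused into a single decoder context vector. All names here (MultiCandidateContext, num_candidates, the fusion layer) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiCandidateContext(nn.Module):
    """Illustrative multi-candidate context module (not the paper's exact design).

    Computes K candidate attention distributions over encoded records and
    fuses the K resulting context vectors into one decoder context.
    """

    def __init__(self, hidden_dim: int, num_candidates: int = 4):
        super().__init__()
        self.num_candidates = num_candidates
        # One attention query per candidate, so each candidate can focus
        # on a different entity/record in the input table.
        self.query_proj = nn.Linear(hidden_dim, hidden_dim * num_candidates)
        self.fuse = nn.Linear(hidden_dim * num_candidates, hidden_dim)

    def forward(self, decoder_state, record_encodings):
        # decoder_state: (batch, hidden); record_encodings: (batch, n_records, hidden)
        B, N, H = record_encodings.shape
        K = self.num_candidates
        # (batch, K, hidden): one query vector per candidate
        queries = self.query_proj(decoder_state).view(B, K, H)
        # Dot-product attention scores over records: (batch, K, n_records)
        scores = torch.bmm(queries, record_encodings.transpose(1, 2))
        weights = torch.softmax(scores, dim=-1)
        # Candidate context vectors: (batch, K, hidden)
        contexts = torch.bmm(weights, record_encodings)
        # Fuse all candidate contexts into a single vector: (batch, hidden)
        return self.fuse(contexts.reshape(B, K * H))

# Usage with made-up dimensions: 8 sentences, 30 records, hidden size 512
module = MultiCandidateContext(hidden_dim=512, num_candidates=4)
ctx = module(torch.randn(8, 512), torch.randn(8, 30, 512))  # -> (8, 512)
```

With num_candidates = 1 this reduces to standard single-query attention; the multi-candidate setting lets each candidate specialize on a different entity's records, matching the abstract's motivation that one sentence may describe several entities at once.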

Original language: English
Title of host publication: 2022 International Symposium on Intelligent Signal Processing and Communication Systems, ISPACS 2022
Editors: Hitoshi Kiya
Place of publication: Piscataway, NJ, USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 4
ISBN (Electronic): 9798350332421
ISBN (Print): 9798350332438
DOIs
Publication status: Published - 2022
Event: IEEE International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS) 2022 - Penang, Malaysia
Duration: 22 Nov 2022 – 25 Nov 2022
https://ieeexplore.ieee.org/xpl/conhome/10082768/proceeding (Proceedings)
https://web.archive.org/web/20220925073530/https://www.ispacs2022.org/committee.html (Website)

Conference

Conference: IEEE International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS) 2022
Abbreviated title: ISPACS 2022
Country/Territory: Malaysia
City: Penang
Period: 22/11/22 – 25/11/22

Keywords

  • Data-to-Text Generation
  • Multi-Candidate-based Mechanism
  • Natural Language Generation
  • Supervised Learning
