Putting the horse before the cart: a generator-evaluator framework for question generation from text

Vishwajeet Satish Kumar, Ganesh Ramakrishnan, Yuan-Fang Li

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

1 Citation (Scopus)


Automatic question generation (QG) is a useful yet challenging task in NLP. Recent neural network-based approaches represent the state-of-the-art in this task. In this work, we attempt to strengthen them significantly by adopting a holistic and novel generator-evaluator framework that directly optimizes objectives that reward semantics and structure. The generator is a sequence-to-sequence model that incorporates the structure and semantics of the question being generated. The generator predicts an answer in the passage that the question can pivot on. Employing the copy and coverage mechanisms, it also acknowledges other contextually important (and possibly rare) keywords in the passage that the question needs to conform to, while not redundantly repeating words. The evaluator model evaluates and assigns a reward to each predicted question based on its conformity to the structure of ground-truth questions. We propose two novel QG-specific reward functions for text conformity and answer conformity of the generated question. The evaluator also employs structure-sensitive rewards based on evaluation measures such as BLEU, GLEU, and ROUGE-L, which are suitable for QG. In contrast, most of the previous works only optimize the cross-entropy loss, which can induce inconsistencies between training (objective) and testing (evaluation) measures. Our evaluation shows that our approach significantly outperforms state-of-the-art systems on the widely-used SQuAD benchmark as per both automatic and human evaluation.
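The structure-sensitive rewards the abstract mentions (BLEU, GLEU, ROUGE-L) score a generated question against the ground-truth question. As a minimal illustrative sketch (not the paper's implementation; the function name `rouge_l_reward` is a hypothetical label), a ROUGE-L-style reward can be computed as the longest-common-subsequence F-measure between the two token sequences:

```python
def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length of two token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_reward(predicted, reference):
    # ROUGE-L-style F1: harmonic mean of LCS precision and recall
    # over whitespace-tokenized questions. Returns a value in [0, 1].
    pred, ref = predicted.split(), reference.split()
    lcs = lcs_len(pred, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(pred), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

In a generator-evaluator setup of this kind, such a reward replaces (or supplements) the cross-entropy loss at training time, so the objective being optimized matches the n-gram/subsequence overlap measures used at evaluation time.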

Original language: English
Title of host publication: CoNLL 2019, The 23rd Conference on Computational Natural Language Learning
Subtitle of host publication: Proceedings of the Conference
Editors: Sebastian Ruder, Miikka Silfverberg
Place of Publication: Stroudsburg PA USA
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 10
ISBN (Electronic): 9781950737727
Publication status: Published - 2019
Event: Conference on Natural Language Learning 2019 - Hong Kong, China
Duration: 3 Nov 2019 - 4 Nov 2019
Conference number: 23rd
https://www.aclweb.org/anthology/volumes/K19-1/ (Proceedings)

Publication series

Name: CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference


Conference: Conference on Natural Language Learning 2019
Abbreviated title: CoNLL 2019
City: Hong Kong
