Simple or complex? Complexity-controllable question generation with soft templates and deep mixture of experts model

Sheng Bi, Xiya Cheng, Yuan-Fang Li, Lizhen Qu, Shirong Shen, Guilin Qi, Lu Pan, Yinlin Jiang

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

5 Citations (Scopus)


The ability to generate natural-language questions with controlled complexity levels is highly desirable, as it further expands the applicability of question generation. In this paper, we propose an end-to-end neural complexity-controllable question generation model, which incorporates a mixture of experts (MoE) as the selector of soft templates to improve the accuracy of complexity control and the quality of generated questions. The soft templates capture question similarity while avoiding the expensive construction of actual templates. Our method introduces a novel, cross-domain complexity estimator to assess the complexity of a question, taking into account the passage, the question, the answer and their interactions. The experimental results on two benchmark QA datasets demonstrate that our QG model is superior to state-of-the-art methods in both automatic and manual evaluation. Moreover, our complexity estimator is significantly more accurate than the baselines in both in-domain and out-of-domain settings.

Original language: English
Title of host publication: Findings of ACL: EMNLP 2021
Editors: Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Place of Publication: Stroudsburg PA USA
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 10
ISBN (Electronic): 9781955917100
Publication status: Published - 2021
Event: Empirical Methods in Natural Language Processing 2021 - Online, Punta Cana, Dominican Republic
Duration: 7 Nov 2021 - 11 Nov 2021


Conference: Empirical Methods in Natural Language Processing 2021
Abbreviated title: EMNLP 2021
Country/Territory: Dominican Republic
City: Punta Cana
