Automatic generation of grounded visual questions

Shijie Zhang, Lizhen Qu, Shaodi You, Zhenglu Yang, Jiawan Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

49 Citations (Scopus)

Abstract

In this paper, we propose the first model able to generate visually grounded questions of diverse types for a single image. Visual question generation is an emerging topic that aims to ask questions in natural language based on visual input. To the best of our knowledge, there are no automatic methods for generating meaningful questions of various types from the same visual input. To address this problem, we propose a model that automatically generates visually grounded questions of varying types. Our model takes as input both an image and the captions produced by a dense captioning model, samples the most probable question types, and then generates the questions in sequence. Experimental results on two real-world datasets show that our model outperforms the strongest baseline by a wide margin in terms of both correctness and diversity.
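
The following is a minimal, illustrative sketch of the three-step pipeline the abstract describes (fuse image features with a dense caption, sample a question type, decode a question conditioned on that type). It is not the authors' implementation: all module sizes, layer choices, names, and the question-type inventory are assumptions made for illustration.

# A minimal sketch (not the authors' released code) of the pipeline in the
# abstract: fuse image features with a dense caption, sample a question
# type, then decode a question conditioned on that type. All dimensions,
# names, and the question-type inventory are illustrative assumptions.
import torch
import torch.nn as nn

QUESTION_TYPES = ["what", "where", "when", "who", "why", "how"]  # assumed set

class GroundedQuestionGenerator(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, img_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.img_proj = nn.Linear(img_dim, hidden_dim)      # projects CNN image features
        self.caption_enc = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.type_head = nn.Linear(hidden_dim, len(QUESTION_TYPES))
        self.type_embed = nn.Embedding(len(QUESTION_TYPES), embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feats, caption_ids, question_ids):
        # 1. Fuse image and dense-caption evidence into a grounding vector.
        _, cap_h = self.caption_enc(self.embed(caption_ids))
        ground = torch.tanh(self.img_proj(img_feats) + cap_h.squeeze(0))
        # 2. Pick the most probable question type from a learned distribution
        #    (argmax here; sampling from the distribution also works).
        type_logits = self.type_head(ground)
        q_type = type_logits.argmax(dim=-1)
        # 3. Decode the question, conditioning every step on the chosen type.
        dec_in = self.embed(question_ids) + self.type_embed(q_type).unsqueeze(1)
        dec_out, _ = self.decoder(dec_in, ground.unsqueeze(0))
        return self.out(dec_out), type_logits

# Shape check with random inputs (batch of 2, 2048-d image features).
model = GroundedQuestionGenerator(vocab_size=10000)
word_logits, type_logits = model(torch.randn(2, 2048),
                                 torch.randint(0, 10000, (2, 12)),
                                 torch.randint(0, 10000, (2, 8)))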

Original language: English
Title of host publication: Proceedings of the 26th International Joint Conference on Artificial Intelligence
Editors: Carles Sierra
Place of publication: Marina del Rey, CA, USA
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Pages: 4235-4243
Number of pages: 9
ISBN (Electronic): 9780999241103
ISBN (Print): 9780999241110
Publication status: Published - 2017
Externally published: Yes
Event: International Joint Conference on Artificial Intelligence 2017 - Melbourne, Australia
Duration: 19 Aug 2017 - 25 Aug 2017
Conference number: 26th
https://ijcai-17.org/
https://www.ijcai.org/Proceedings/2017/ (Proceedings)

Conference

Conference: International Joint Conference on Artificial Intelligence 2017
Abbreviated title: IJCAI 2017
Country/Territory: Australia
City: Melbourne
Period: 19/08/17 - 25/08/17