Learning to Collocate Visual-Linguistic Neural Modules for Image Captioning

Xu Yang, Hanwang Zhang, Chongyang Gao, Jianfei Cai

Research output: Contribution to journal › Article › Research › peer-review

4 Citations (Scopus)

Abstract

Humans tend to decompose a sentence into parts such as "sth do sth at someplace" and then fill each part with specific content. Inspired by this, we follow the principle of modular design to propose a novel image captioner: learning to Collocate Visual-Linguistic Neural Modules (CVLNM). Unlike the widely used neural module networks in VQA, where the language (i.e., the question) is fully observable, the task of collocating visual-linguistic modules is more challenging: the language is only partially observable, so the modules must be dynamically collocated during the process of image captioning. To this end, we make the following technical contributions to the design and training of CVLNM: (1) a distinguishable module design: four modules in the encoder, including one linguistic module for function words and three visual modules for different content words (i.e., nouns, adjectives, and verbs), plus another linguistic module in the decoder for commonsense reasoning; (2) a self-attention-based module controller that makes the visual reasoning more robust; and (3) a part-of-speech-based syntax loss imposed on the module controller to further regularize the training of CVLNM. Extensive experiments on the MS-COCO dataset show that CVLNM is more effective, e.g., achieving a new state-of-the-art 129.5 CIDEr-D, and more robust, e.g., being less likely to overfit to dataset bias and suffering less when fewer training samples are available. Code is available at https://github.com/GCYZSL/CVLMN.
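To make the soft module collocation and the syntax loss concrete, below is a minimal PyTorch-style sketch. It is not taken from the released code: SoftModuleController, pos_syntax_loss, the module index convention, and the single linear scorer (used here in place of the paper's self-attention controller, for brevity) are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftModuleController(nn.Module):
    # Hypothetical sketch: softly collocate the outputs of the four
    # modules (noun/adjective/verb visual modules + linguistic module)
    # using weights predicted from the partial-caption decoder state.
    def __init__(self, d_model=512, n_modules=4):
        super().__init__()
        self.score = nn.Linear(d_model, n_modules)  # decoder state -> module logits

    def forward(self, dec_state, module_feats):
        # dec_state: (B, d_model); module_feats: (B, n_modules, d_model)
        weights = F.softmax(self.score(dec_state), dim=-1)          # soft collocation weights
        fused = (weights.unsqueeze(-1) * module_feats).sum(dim=1)   # weighted mix of modules
        return fused, weights


def pos_syntax_loss(weights, pos_labels):
    # Assumed form of the part-of-speech syntax loss: push the
    # collocation weights toward the module matching the POS of the
    # word being generated (0=noun, 1=adjective, 2=verb, 3=function word).
    return F.nll_loss(torch.log(weights + 1e-9), pos_labels)


# Toy usage: a batch of 2 decoding steps, one targeting a noun, one a verb.
ctrl = SoftModuleController()
fused, w = ctrl(torch.randn(2, 512), torch.randn(2, 4, 512))
loss = pos_syntax_loss(w, torch.tensor([0, 2]))
```

In the full model the weights would be recomputed at every decoding step, which is what lets the collocation adapt dynamically to the partially generated sentence.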

Original language: English
Pages (from-to): 82–100
Number of pages: 19
Journal: International Journal of Computer Vision
Volume: 131
DOIs
Publication status: Published - Jan 2023

Keywords

  • Distinguishable neural modules
  • Image captioning
  • Soft module collocations
