Multimodal feature-wise co-attention method for visual question answering

Sheng Zhang, Min Chen, Jincai Chen, Fuhao Zou, Yuan-Fang Li, Ping Lu

Research output: Contribution to journal › Article › Research › peer-review

1 Citation (Scopus)

Abstract

Visual question answering (VQA) has attracted considerable research interest in recent years; for instance, it could potentially be applied to remote consultation for COVID-19. Attention mechanisms provide an effective way of selectively utilizing visual and question information in VQA. However, the attention methods of existing VQA models generally focus on the spatial dimension; that is, attention is modeled as spatial probabilities that re-weight image region or word token features. Feature-wise attention should not be ignored, as image and question representations are organized in both spatial and feature-wise modes. Taking the question “What is the color of the woman's hair?” as an example, identifying the hair-color attribute feature is as important as focusing on the hair region. In this paper, we propose a novel neural network module, the “multimodal feature-wise attention module” (MulFA), to model feature-wise attention. Extensive experiments show that MulFA is capable of filtering representations for feature refinement and leads to improved performance. By introducing MulFA modules, we construct an effective union feature-wise and spatial co-attention network (UFSCAN) model for VQA. Our evaluation on two large-scale VQA datasets, VQA 1.0 and VQA 2.0, shows that UFSCAN achieves performance competitive with state-of-the-art models.
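The idea of feature-wise (as opposed to spatial) attention can be illustrated with a toy sketch: fuse the image and question features per feature dimension, derive a gate in (0, 1) for each channel, and re-weight the image representation channel-wise. This is only a minimal illustration of the general mechanism; the fusion operator, projection weights, and all function names here are assumptions, not the authors' MulFA implementation.

```python
import math

def sigmoid(x):
    """Standard logistic function, squashing a real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def feature_wise_attention(image_feat, question_feat, weights):
    """Re-weight each feature dimension of image_feat with a gate computed
    from both modalities (toy element-wise fusion plus a per-channel weight).

    Unlike spatial attention, which distributes probability mass over image
    regions or word tokens, this gates individual feature channels.
    """
    assert len(image_feat) == len(question_feat) == len(weights)
    # Fuse modalities per feature dimension (element-wise product here,
    # an assumption; the paper's fusion operator may differ).
    fused = [i * q for i, q in zip(image_feat, question_feat)]
    # One gate value per feature channel, squashed into (0, 1).
    gates = [sigmoid(w * f) for w, f in zip(weights, fused)]
    # Attended representation: channel-wise re-weighting of the image features.
    return [g * i for g, i in zip(gates, image_feat)]

# Hypothetical 4-dimensional feature vectors for demonstration.
image_feat = [0.5, -1.2, 2.0, 0.1]
question_feat = [1.0, 0.0, 1.5, -0.5]
weights = [1.0, 1.0, 1.0, 1.0]
attended = feature_wise_attention(image_feat, question_feat, weights)
# Channel 1 has a fused value of 0, so its gate is sigmoid(0) = 0.5 and
# its feature is halved; channels with larger fused values pass more through.
```

In a trained model the per-channel weights would be learned, and the gated features would typically be combined with a spatial attention branch, as in the union co-attention design described above.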

Original language: English
Number of pages: 10
Journal: Information Fusion
Volume: 73
DOIs
Publication status: Published - Sep 2021

Keywords

  • Deep learning
  • Feature-wise attention learning
  • Multimodal feature fusion
  • Visual question answering (VQA)
