Auto-encoding and distilling scene graphs for image captioning

Xu Yang, Hanwang Zhang, Jianfei Cai

Research output: Contribution to journal › Article › peer-review

29 Citations (Scopus)

Abstract

We propose the scene graph auto-encoder (SGAE), which incorporates language inductive bias into the encoder-decoder image captioning framework to produce more human-like captions. Intuitively, humans use inductive bias to compose collocations and contextual inferences in discourse. For example, when we see the relation "a person on a bike", it is natural to replace "on" with "ride" and infer "a person riding a bike on a road", even when the "road" is not evident. Exploiting such bias as a language prior is therefore expected to help conventional encoder-decoder models reason as humans do and generate more descriptive captions. Specifically, we use the scene graph, a directed graph ($\mathcal{G}$) in which each object node is connected to adjective nodes and relationship nodes, to represent the complex structural layout of both the image ($\mathcal{I}$) and the sentence ($\mathcal{S}$). In the language domain, we use SGAE to learn a dictionary set ($\mathcal{D}$) that helps reconstruct sentences in the $\mathcal{S}\rightarrow \mathcal{G}_{\mathcal{S}} \rightarrow \mathcal{D} \rightarrow \mathcal{S}$ auto-encoding pipeline, where $\mathcal{D}$ encodes the desired language prior and the decoder learns to caption from that prior; in the vision-language domain, we share $\mathcal{D}$ in the $\mathcal{I}\rightarrow \mathcal{G}_{\mathcal{I}} \rightarrow \mathcal{D} \rightarrow \mathcal{S}$ pipeline and distill the knowledge of the auto-encoder's language decoder into that of the encoder-decoder image captioner to transfer the language inductive bias. In this way, the shared $\mathcal{D}$ provides the encoder-decoder with hidden embeddings of descriptive collocations, and the distillation strategy teaches the encoder-decoder to transform these embeddings into human-like captions, as the auto-encoder does. Thanks to the scene graph representation, the shared dictionary set, and the knowledge distillation strategy, the inductive bias is transferred across domains in principle. We validate the effectiveness of SGAE on the challenging MS-COCO image captioning benchmark, where our SGAE-based single model achieves a new state-of-the-art 129.6 CIDEr-D on the Karpathy split and a competitive 126.6 CIDEr-D (c40) on the official server, comparable even to ensemble models. Furthermore, we validate the transferability of SGAE in two more challenging settings: transferring inductive bias from other language corpora and unpaired image captioning. In both settings, the results again confirm the superiority of SGAE. The code is released at https://github.com/yangxuntu/SGAE.
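
To make the two pipelines concrete, here is a minimal, hypothetical PyTorch sketch of the two components the abstract names: the shared dictionary set $\mathcal{D}$, modeled as a learned memory that re-encodes scene-graph node embeddings via soft attention, and a knowledge distillation loss through which the image captioner mimics the auto-encoder's language decoder. All names, shapes, and hyperparameters (SharedDictionary, distill_loss, num_entries, temperature) are illustrative assumptions, not the authors' implementation; see the released code for the real one.

    # Minimal, hypothetical sketch of SGAE's shared dictionary and
    # distillation objective (PyTorch). Names and shapes are illustrative
    # only; see https://github.com/yangxuntu/SGAE for the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SharedDictionary(nn.Module):
        """Learned memory D shared by the S->G_S->D->S and I->G_I->D->S pipelines."""
        def __init__(self, num_entries=10000, dim=1000):
            super().__init__()
            self.memory = nn.Parameter(torch.randn(num_entries, dim))

        def forward(self, x):
            # x: (batch, dim) scene-graph node embeddings from G_S or G_I.
            # Soft-attend over the memory entries and return re-encoded
            # embeddings that carry the language prior stored in D.
            attn = F.softmax(x @ self.memory.t(), dim=-1)  # (batch, num_entries)
            return attn @ self.memory                      # (batch, dim)

    def distill_loss(student_logits, teacher_logits, temperature=1.0):
        # KL distillation: the image captioner (student) matches the word
        # distributions of the auto-encoder's language decoder (teacher).
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
        return F.kl_div(log_p_student, p_teacher,
                        reduction='batchmean') * temperature ** 2

Under these assumptions, training would proceed in two stages: the sentence auto-encoder first learns $\mathcal{D}$ by reconstructing captions through it; the image captioner then reuses the same $\mathcal{D}$ and adds distill_loss to its usual captioning objective.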

Original language: English
Pages (from-to): 2313-2327
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 44
Issue number: 5
DOIs
Publication status: Published - 1 May 2022

Keywords

  • Decoding
  • Dictionaries
  • Image Captioning
  • Knowledge Distillation
  • Memory Network
  • Pipelines
  • Roads
  • Scene Graph
  • Semantics
  • Training
  • Transfer Learning
  • Visualization