Look, imagine and match: improving textual-visual cross-modal retrieval with generative models

Jiuxiang Gu, Jianfei Cai, Shafiq Joty, Li Niu, Gang Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

255 Citations (Scopus)

Abstract

Textual-visual cross-modal retrieval has been a hot research topic in both the computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for cross-modal retrieval performance. Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only the global abstract features but also the local grounded features. Extensive experiments show that our framework can accurately match images and sentences with complex content, and achieves state-of-the-art cross-modal retrieval results on the MSCOCO dataset.
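To make the idea in the abstract concrete, the sketch below pairs a standard two-branch image-text embedding (trained with a hinge-based triplet ranking loss) with an auxiliary generative branch that decodes the sentence back from the image embedding, one way of keeping local grounded detail in the learned features. This is a minimal PyTorch sketch under stated assumptions, not the authors' released implementation: the GRU encoders, layer sizes, margin of 0.2, and 0.1 loss weight are all illustrative choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalEmbedding(nn.Module):
    # Joint image-text embedding with an auxiliary generative (captioning)
    # branch. All sizes are illustrative, not the paper's configuration.
    def __init__(self, img_dim=2048, vocab_size=10000, embed_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)      # global image feature -> joint space
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.txt_enc = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.decoder = nn.GRU(embed_dim, embed_dim, batch_first=True)  # generative branch
        self.out = nn.Linear(embed_dim, vocab_size)

    def encode(self, img_feat, captions):
        # L2-normalized embeddings so the dot product below is cosine similarity.
        v = F.normalize(self.img_proj(img_feat), dim=-1)
        _, h = self.txt_enc(self.word_embed(captions))
        t = F.normalize(h[-1], dim=-1)
        return v, t

    def generative_loss(self, img_feat, captions):
        # Decode the caption from the image embedding (teacher forcing);
        # the reconstruction loss pushes image features to keep local detail.
        h0 = torch.tanh(self.img_proj(img_feat)).unsqueeze(0)  # (1, B, E) initial hidden
        dec_out, _ = self.decoder(self.word_embed(captions[:, :-1]), h0)
        logits = self.out(dec_out)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               captions[:, 1:].reshape(-1))

def ranking_loss(v, t, margin=0.2):
    # Hinge triplet loss over in-batch negatives, in both retrieval directions.
    sims = v @ t.t()                                  # (B, B) cosine similarities
    pos = sims.diag().view(-1, 1)
    cost_t = (margin + sims - pos).clamp(min=0)       # image -> text direction
    cost_v = (margin + sims - pos.t()).clamp(min=0)   # text -> image direction
    mask = torch.eye(sims.size(0), dtype=torch.bool, device=sims.device)
    return cost_t.masked_fill(mask, 0).sum() + cost_v.masked_fill(mask, 0).sum()

# Toy usage: pretrained CNN features stand in for the image encoder.
model = CrossModalEmbedding()
img = torch.randn(8, 2048)                  # e.g. ResNet pooled features (assumption)
caps = torch.randint(0, 10000, (8, 12))     # padded token ids
v, t = model.encode(img, caps)
loss = ranking_loss(v, t) + 0.1 * model.generative_loss(img, caps)
loss.backward()

In this reading, the generative term acts as a regularizer on the shared space: retrieval still uses only the cosine similarities from encode, and the decoder can be dropped at test time.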

Original language: English
Title of host publication: Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
Editors: David Forsyth, Ivan Laptev, Aude Oliva, Deva Ramanan
Place of Publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 7181-7189
Number of pages: 9
ISBN (Electronic): 9781538664209
ISBN (Print): 9781538664216
DOIs
Publication status: Published - 2018
Externally published: Yes
Event: IEEE Conference on Computer Vision and Pattern Recognition 2018 - Salt Lake City, United States of America
Duration: 19 Jun 2018 - 21 Jun 2018
http://cvpr2018.thecvf.com/
https://ieeexplore.ieee.org/xpl/conhome/8576498/proceeding (Proceedings)

Conference

Conference: IEEE Conference on Computer Vision and Pattern Recognition 2018
Abbreviated title: CVPR 2018
Country/Territory: United States of America
City: Salt Lake City
Period: 19/06/18 - 21/06/18
Internet address: http://cvpr2018.thecvf.com/
