Abstract
Textual-visual cross-modal retrieval has been an active research topic in both the computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for cross-modal retrieval performance. Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only the global abstract features but also the local grounded features. Extensive experiments show that our framework can accurately match images and sentences with complex content, and achieves state-of-the-art cross-modal retrieval results on the MSCOCO dataset.
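To make the idea concrete, below is a minimal PyTorch-style sketch of a joint-embedding retrieval model with an auxiliary generative (caption-decoding) branch. It is an illustration under assumed module names and dimensions (`CrossModalEmbedding`, `triplet_ranking_loss`, the 512-d embedding, etc.), not the authors' actual CVPR 2018 implementation, which the paper describes in full.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalEmbedding(nn.Module):
    """Hypothetical sketch: encode images and sentences into a shared space,
    with an auxiliary generative decoder (all names/dims are assumptions)."""

    def __init__(self, img_dim=2048, txt_dim=300, embed_dim=512, vocab_size=10000):
        super().__init__()
        self.img_fc = nn.Linear(img_dim, embed_dim)                     # global image branch
        self.txt_rnn = nn.GRU(txt_dim, embed_dim, batch_first=True)    # sentence encoder
        self.txt_in = nn.Linear(txt_dim, embed_dim)                    # decoder input projection
        self.decoder = nn.GRU(embed_dim, embed_dim, batch_first=True)  # generative branch
        self.vocab_proj = nn.Linear(embed_dim, vocab_size)             # token logits

    def forward(self, img_feats, txt_embeds):
        img_vec = F.normalize(self.img_fc(img_feats), dim=-1)
        _, h = self.txt_rnn(txt_embeds)              # h: (1, B, embed_dim)
        txt_vec = F.normalize(h.squeeze(0), dim=-1)
        return img_vec, txt_vec

    def caption_logits(self, img_vec, txt_embeds):
        # Generative branch: condition the decoder on the image embedding and
        # predict sentence tokens; the reconstruction loss grounds local features.
        h0 = img_vec.unsqueeze(0).contiguous()       # initial hidden state from image
        out, _ = self.decoder(self.txt_in(txt_embeds), h0)
        return self.vocab_proj(out)                  # (B, T, vocab_size)

def triplet_ranking_loss(img_vec, txt_vec, margin=0.2):
    """Bidirectional hinge ranking loss over in-batch negatives."""
    scores = img_vec @ txt_vec.t()                      # cosine similarities
    pos = scores.diag().unsqueeze(1)                    # matched-pair scores
    cost_s = (margin + scores - pos).clamp(min=0)       # image as query
    cost_im = (margin + scores - pos.t()).clamp(min=0)  # sentence as query
    mask = torch.eye(scores.size(0), dtype=torch.bool)  # exclude the positives
    return cost_s.masked_fill(mask, 0).sum() + cost_im.masked_fill(mask, 0).sum()

# Toy usage with random tensors standing in for real features.
model = CrossModalEmbedding()
img = torch.randn(8, 2048)                  # e.g. pooled CNN features
txt = torch.randn(8, 12, 300)               # 12-token sentences, 300-d word embeddings
tokens = torch.randint(0, 10000, (8, 12))   # target token ids for the decoder
iv, tv = model(img, txt)
rank_loss = triplet_ranking_loss(iv, tv)
gen_loss = F.cross_entropy(model.caption_logits(iv, txt).reshape(-1, 10000),
                           tokens.reshape(-1))
total_loss = rank_loss + gen_loss           # joint retrieval + generation objective
```

The design point the sketch illustrates: the ranking loss alone trains global abstract embeddings, while the decoder's reconstruction objective forces the shared representation to also retain locally grounded detail.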
Original language | English
---|---
Title of host publication | Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
Editors | David Forsyth, Ivan Laptev, Aude Oliva, Deva Ramanan
Place of Publication | Piscataway NJ USA
Publisher | IEEE (Institute of Electrical and Electronics Engineers)
Pages | 7181-7189
Number of pages | 9
ISBN (Electronic) | 9781538664209
ISBN (Print) | 9781538664216
DOIs |
Publication status | Published - 2018
Externally published | Yes
Event | IEEE Conference on Computer Vision and Pattern Recognition 2018 (CVPR 2018), Salt Lake City, United States of America, 19 Jun 2018 → 21 Jun 2018. http://cvpr2018.thecvf.com/ · https://ieeexplore.ieee.org/xpl/conhome/8576498/proceeding (Proceedings)
Conference
Conference | IEEE Conference on Computer Vision and Pattern Recognition 2018
---|---
Abbreviated title | CVPR 2018
Country/Territory | United States of America
City | Salt Lake City
Period | 19/06/18 → 21/06/18
Internet address | http://cvpr2018.thecvf.com/