Multi-level view associative convolution network for view-based 3D model retrieval

Zan Gao, Yan Zhang, Hua Zhang, Weili Guan, Dong Feng, Shengyong Chen

Research output: Contribution to journal › Article › Research › peer-review

20 Citations (Scopus)

Abstract

With the continuous improvement of image processing capabilities, three-dimensional (3D) models, which can carry rich information, are becoming the fourth major type of multimedia data (alongside sound, images, and video). Moreover, since 3D models have a wide range of applications, quickly and effectively retrieving the correct target model from massive data has become a key issue. To date, many 3D model retrieval approaches have been proposed, and among them, view-based methods can achieve satisfactory performance. In the 3D model retrieval task, mining the latent relationships among all view images of a 3D model, adaptively fusing different views, and extracting discriminative features are the main challenges, but in most existing solutions these tasks are handled separately rather than explored in an end-to-end network architecture. To address these issues, in this work we propose a novel and effective multi-level view associative convolution network (MLVACN) for view-based 3D model retrieval, in which the relationship exploration across multiple view images, the fusion of different views, and discriminative feature learning are realized in a unified end-to-end framework. Specifically, we design a group association layer and a block association layer to study the latent relationships among different views at the view level and the block level, respectively. Moreover, a weight fusion layer is designed to adaptively fuse the different views of a 3D model, and these three layers are embedded into the MLVACN. Finally, a pairwise discrimination loss function is proposed to learn discriminative 3D model features. Extensive experimental results on three 3D model retrieval datasets, ModelNet40, ModelNet10, and ShapeNetCore55, demonstrate that MLVACN outperforms state-of-the-art methods in terms of mAP. On the ModelNet40 dataset, the mAP of MLVACN improves by 13.25%, 7.75%, 3.95%, and 0.61% over the MVCNN, GVCNN, PVNet, and MLVCNN methods, respectively.
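To make two of the abstract's ingredients concrete, the following is a minimal, hypothetical PyTorch sketch of (a) an adaptive weight fusion layer that combines per-view features into a single shape descriptor and (b) a contrastive-style stand-in for the pairwise discrimination loss. All class and function names here (AdaptiveWeightFusion, pairwise_discrimination_loss) are illustrative assumptions, not the authors' released code, and the paper's group and block association layers are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveWeightFusion(nn.Module):
    """Fuses per-view features with learned, view-dependent weights."""
    def __init__(self, feat_dim):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)   # scores the importance of each view

    def forward(self, view_feats):             # view_feats: (B, V, D)
        scores = self.scorer(view_feats)       # (B, V, 1)
        weights = torch.softmax(scores, dim=1) # normalize across the V views
        return (weights * view_feats).sum(dim=1)  # (B, D) fused shape descriptor

def pairwise_discrimination_loss(f1, f2, same_class, margin=1.0):
    """Contrastive-style stand-in for the paper's pairwise loss: pull
    same-class descriptor pairs together, push different-class pairs
    at least `margin` apart."""
    d = F.pairwise_distance(f1, f2)                       # (N,)
    pos = same_class * d.pow(2)                           # same-class pairs: penalize distance
    neg = (1.0 - same_class) * F.relu(margin - d).pow(2)  # different-class: enforce margin
    return (pos + neg).mean()

# Toy usage: 4 models, 12 rendered views each, 512-D view features
# (in practice the view features would come from a shared CNN backbone).
view_feats = torch.randn(4, 12, 512)
fusion = AdaptiveWeightFusion(512)
desc = fusion(view_feats)                         # (4, 512), one descriptor per model
labels = torch.tensor([1.0, 0.0])                 # pair 0: same class; pair 1: different
loss = pairwise_discrimination_loss(desc[:2], desc[2:], labels)
```

In a full retrieval pipeline, the fused descriptors would then be compared with a standard metric such as Euclidean distance to rank database models against a query.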

Original language: English
Pages (from-to): 2264-2278
Number of pages: 15
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 32
Issue number: 4
Publication status: Published - Apr 2022

Keywords

  • adaptive weight fusion
  • block association layer
  • group association layer
  • multi-level
  • pairwise discrimination loss
  • view-based 3D model retrieval
