Multiple structure-view learning for graph classification

Jia Wu, Shirui Pan, Xingquan Zhu, Chengqi Zhang, Philip S. Yu

Research output: Contribution to journal › Article › Research › peer-review

49 Citations (Scopus)


Many applications involve objects containing both structure and rich content information, each describing different feature aspects of the object. Graph learning and classification is a common tool for handling such objects. To date, existing graph classification has been limited to the single-graph setting, with each object represented as one graph from a single structure view. This inherently limits its use for classifying complicated objects with complex structures and uncertain labels. In this paper, we advance graph classification to handle multigraph learning for complicated objects from multiple structure views, where each object is represented as a bag containing several graphs, and the label is available only for each graph bag but not for the individual graphs inside it. To learn such graph classification models, we propose a multistructure-view bag constrained learning (MSVBL) algorithm, which explores substructure features across multiple structure views for learning. By enabling joint regularization across multiple structure views and enforcing labeling constraints at the bag and graph levels, MSVBL is able to discover the most effective substructure features across all structure views. Experiments and comparisons on real-world data sets validate and demonstrate the superior performance of MSVBL in representing complicated objects as multigraphs for classification; e.g., MSVBL outperforms state-of-the-art multiview graph classification and multiview multi-instance learning approaches.
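The problem setting described above can be sketched in code. The snippet below is a minimal illustration (not the authors' MSVBL algorithm): it represents an object as a bag of graphs drawn from multiple structure views, with a label attached only at the bag level, and encodes the bag by which candidate substructures appear in at least one of its graphs. The class names, the edge-set graph representation, and the edge-set candidates are all assumptions made for illustration; the actual method mines discriminative subgraphs jointly across views.

```python
# Illustrative sketch of the multigraph, multi-structure-view setting
# (hypothetical names; not the MSVBL implementation itself).
from dataclasses import dataclass

@dataclass
class Graph:
    view: str              # which structure view this graph comes from
    edges: frozenset       # graph structure as a set of (u, v) edges

@dataclass
class Bag:
    graphs: list           # several graphs, possibly from different views
    label: int             # +1 / -1, known only at the bag level

def substructure_features(bag, candidate_subgraphs):
    """Encode a bag by which candidate substructures (here, simple edge
    sets) occur in at least one of its graphs -- a simplified stand-in
    for the subgraph features explored across structure views."""
    return [int(any(sub <= g.edges for g in bag.graphs))
            for sub in candidate_subgraphs]

# Toy example: one object seen from two structure views, bag-level label only.
bag = Bag(
    graphs=[Graph("view-A", frozenset({(1, 2), (2, 3)})),
            Graph("view-B", frozenset({(1, 3)}))],
    label=+1,
)
candidates = [frozenset({(1, 2)}), frozenset({(4, 5)})]
print(substructure_features(bag, candidates))  # [1, 0]
```

The bag-level feature vector makes the multi-instance nature of the setting concrete: a substructure contributes to the bag even if only one graph, from one view, contains it.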

Original language: English
Pages (from-to): 3236-3251
Number of pages: 16
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 7
Publication status: Published - Jul 2018
Externally published: Yes


  • Graph
  • graph classification
  • multiview learning
  • subgraph mining
