Deep neighbor-aware embedding for node clustering in attributed graphs

Chun Wang, Shirui Pan, Celina P. Yu, Ruiqi Hu, Guodong Long, Chengqi Zhang

Research output: Contribution to journal › Article › Research › peer-review

36 Citations (Scopus)


Node clustering aims to partition the vertices of a graph into multiple groups or communities. Existing studies have mostly focused on deep learning approaches that learn a latent representation of nodes, to which simple clustering methods such as k-means are then applied. These two-step frameworks are difficult to tune and usually yield suboptimal performance, mainly because the graph embedding is not goal-directed, i.e., not designed for the specific clustering task. In this paper, we propose a clustering-directed deep learning approach, Deep Neighbor-aware Embedded Node Clustering (DNENC for short), for clustering graph data. Our method targets attributed graphs so as to exploit both the structural and the content information they carry. It encodes the topological structure and node content of a graph into a compact representation via a neighbor-aware graph autoencoder, which progressively absorbs information from neighbors through a convolutional or attentional encoder. Multiple neighbor-aware encoders are stacked to build a deep architecture, followed by an inner-product decoder that reconstructs the graph structure. Furthermore, soft labels are generated to supervise a self-training process, which iteratively refines the node clustering results. The self-training process is jointly learned and optimized with the graph embedding in a unified framework, so that the two components benefit each other. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our framework.
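The pipeline the abstract describes (neighbor-aware encoding, inner-product reconstruction, and soft-label self-training) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the two-layer convolutional encoder, the ReLU nonlinearity, and the Student's t-kernel soft assignment are common choices in this line of work (the attentional variant and the joint training loop are omitted), and all weight shapes are illustrative.

```python
import numpy as np

def normalize_adj(A):
    # Symmetrically normalized adjacency: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def encode(A_norm, X, W1, W2):
    # Each stacked layer absorbs information from a node's neighbors
    H = np.maximum(A_norm @ X @ W1, 0.0)  # ReLU graph-convolution layer
    return A_norm @ H @ W2                # compact node embeddings Z

def decode(Z):
    # Inner-product decoder: reconstructed edge probabilities sigmoid(Z Z^T)
    return 1.0 / (1.0 + np.exp(-Z @ Z.T))

def soft_assign(Z, centers, alpha=1.0):
    # Soft cluster assignments q via a Student's t-kernel to cluster centers
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    # Sharpened soft labels p that supervise the self-training refinement
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

# Toy usage: a 4-node graph with 3-dimensional node attributes
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 2))

Z = encode(normalize_adj(A), X, W1, W2)
A_recon = decode(Z)                         # used for the reconstruction loss
q = soft_assign(Z, rng.normal(size=(2, 2))) # 2 clusters, illustrative centers
p = target_distribution(q)                  # self-training targets
```

In the unified framework, the reconstruction loss on `A_recon` and a KL divergence between `q` and `p` would be minimized jointly, so the embedding and the clustering refine each other.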

Original language: English
Article number: 108230
Number of pages: 13
Journal: Pattern Recognition
Publication status: Published - Feb 2022


  • Attributed graph
  • Graph attention network
  • Graph convolutional network
  • Network representation
  • Node clustering
