MCGNet+: an improved motor imagery classification based on cosine similarity

Yan Li, Ning Zhong, David Taniar, Haolan Zhang

Research output: Contribution to journal › Article › Research › peer-review

12 Citations (Scopus)

Abstract

Solving the motor imagery classification problem has long been a challenge in the brain informatics area. Accuracy and efficiency have been the major obstacles to motor imagery analysis over the past decades, since computational capability and algorithmic availability could not keep pace with complex brain signal analysis. In recent years, the rapid development of machine learning (ML) methods has enabled researchers to tackle the motor imagery classification problem more efficiently. Among various ML methods, graph neural networks (GNNs) have shown their efficiency and accuracy in dealing with inter-related complex networks, and their use opens new possibilities for feature extraction from brain structural connectivity. In this paper, we propose a new model called MCGNet+, which improves on the performance of our previous model, MutualGraphNet. In this latest model, the mutual information between the input columns forms the initial adjacency matrix, and the cosine similarity between columns is then computed to generate a new adjacency matrix in each iteration. This dynamic adjacency matrix, combined with the spatial temporal graph convolution network (ST-GCN), performs better than a model with a fixed adjacency matrix. The experimental results indicate that MCGNet+ is robust enough to learn interpretable features and outperforms the current state-of-the-art methods.
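The sketch below (not the authors' code) illustrates the adjacency-matrix scheme the abstract describes: an initial adjacency built from pairwise mutual information between input columns (here, EEG channels as graph nodes), refreshed each iteration with the cosine similarity of the current node features before a graph-convolution step. The shapes, the histogram-based mutual-information estimate, the clipping of negative similarities, and the single-layer update rule are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a mutual-information initial adjacency + per-iteration
# cosine-similarity adjacency update, followed by a standard GCN-style layer.
# All sizes and helper names here are hypothetical.
import numpy as np


def mutual_info_adjacency(x, bins=16):
    """Pairwise histogram-based mutual information between rows of x (channels x time)."""
    n = x.shape[0]
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            joint, _, _ = np.histogram2d(x[i], x[j], bins=bins)
            p_xy = joint / joint.sum()
            p_x = p_xy.sum(axis=1, keepdims=True)
            p_y = p_xy.sum(axis=0, keepdims=True)
            mask = p_xy > 0
            a[i, j] = np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask]))
    return a


def cosine_adjacency(h):
    """Cosine similarity between rows of the current node features h (negatives clipped)."""
    h_unit = h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)
    return np.maximum(h_unit @ h_unit.T, 0.0)  # clipping is an assumption for stability


def normalize(a):
    """Symmetric normalisation D^{-1/2} (A + I) D^{-1/2}, as in standard GCNs."""
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


rng = np.random.default_rng(0)
x = rng.standard_normal((22, 256))           # toy data: 22 EEG channels, 256 time samples
w = rng.standard_normal((256, 64)) * 0.1     # illustrative graph-conv weights

adj = mutual_info_adjacency(x)               # iteration 0: mutual-information adjacency
h = x
for _ in range(3):                           # each iteration: graph conv, then refresh adjacency
    h = np.tanh(normalize(adj) @ h @ w)      # one graph-convolution layer
    adj = cosine_adjacency(h)                # dynamic adjacency from cosine similarity
    w = rng.standard_normal((h.shape[1], 64)) * 0.1  # fresh toy weights for the next layer
print(h.shape)                               # (22, 64) node embeddings
```

In MCGNet+ this dynamic adjacency feeds an ST-GCN, which additionally convolves along the temporal axis; the sketch keeps only the spatial (graph) part to show how the adjacency changes across iterations.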

Original language: English
Article number: 3
Number of pages: 11
Journal: Brain Informatics
Volume: 9
Issue number: 1
DOIs
Publication status: Published - 1 Feb 2022

Keywords

  • Brain–computer interfaces (BCI)
  • Electroencephalography (EEG)
  • Graph convolutional networks
