Prototype-guided memory replay for continual learning

Stella Ho, Ming Liu, Lan Du, Longxiang Gao, Yong Xiang

Research output: Contribution to journal › Article › Research › peer-review

18 Citations (Scopus)

Abstract

Continual learning (CL) is a machine learning paradigm that accumulates knowledge while learning sequentially. The main challenge in CL is catastrophic forgetting of previously seen tasks, which arises from shifts in the data distribution across tasks. To retain knowledge, existing CL models often save some past examples and revisit them while learning new tasks. As a result, the number of saved samples grows dramatically as more data are seen. To address this issue, we introduce an efficient CL method that achieves strong performance while storing only a few samples. Specifically, we propose a dynamic prototype-guided memory replay (PMR) module, where synthetic prototypes serve as knowledge representations and guide the sample selection for memory replay. This module is integrated into an online meta-learning (OML) model for efficient knowledge transfer. We conduct extensive experiments on benchmark CL text classification datasets and examine the effect of training set order on the performance of CL models. The experimental results demonstrate the superiority of our approach in terms of accuracy and efficiency.
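
The abstract describes the PMR module only at a high level: synthetic prototypes summarize each class and guide which samples are kept for replay. As a rough, non-authoritative sketch of that idea (not the paper's actual algorithm), the Python class below maintains a per-class mean-embedding prototype and retains, under a fixed budget, the samples whose embeddings lie closest to it; the names (PrototypeMemory, per_class_budget) and the running-mean prototype update are assumptions for illustration only.

import numpy as np

class PrototypeMemory:
    """Illustrative sketch of prototype-guided replay selection.

    Assumptions (not from the paper): each class prototype is a mean
    embedding, and the replay memory keeps the per_class_budget samples
    closest to their class prototype.
    """

    def __init__(self, per_class_budget=5):
        self.per_class_budget = per_class_budget
        self.prototypes = {}  # class label -> prototype vector
        self.memory = {}      # class label -> list of (embedding, sample)

    def update(self, embeddings, samples, labels):
        # embeddings: (n, d) numpy array; samples: list of n items;
        # labels: (n,) numpy array of class labels.
        for label in np.unique(labels):
            mask = labels == label
            class_embs = embeddings[mask]
            class_samples = [s for s, m in zip(samples, mask) if m]
            # Prototype as the mean embedding of the class in this batch,
            # blended with the previous prototype (assumed update rule).
            proto = class_embs.mean(axis=0)
            if label in self.prototypes:
                proto = 0.5 * (self.prototypes[label] + proto)
            self.prototypes[label] = proto
            # Re-rank old and new candidates by distance to the prototype
            # and keep only the closest ones within the budget.
            pool = self.memory.get(label, []) + list(zip(class_embs, class_samples))
            pool.sort(key=lambda pair: np.linalg.norm(pair[0] - proto))
            self.memory[label] = pool[: self.per_class_budget]

    def replay_batch(self):
        # Flatten stored samples for interleaving with the current task.
        return [s for pairs in self.memory.values() for _, s in pairs]

In a replay-based training loop, update() would be called on each incoming batch and replay_batch() would supply the stored examples to mix into subsequent training steps; how PMR interleaves these with the OML model is specified in the paper itself.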

Original language: English
Pages (from-to): 10973-10983
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 8
DOIs
Publication status: Published - Aug 2024

Keywords

  • Adaptation models
  • Class-incremental learning (CIL)
  • continual learning (CL)
  • Data models
  • Memory management
  • online meta-learning (OML)
  • Prototypes
  • prototypical network
  • Task analysis
  • Text categorization
  • text classification
  • Training
