On the impact of sample duplication in machine-learning-based Android malware detection

Yanjie Zhao, Li Li, Haoyu Wang, Haipeng Cai, Tegawendé F. Bissyandé, Jacques Klein, John Grundy

Research output: Contribution to journal › Article › Research › peer-review

59 Citations (Scopus)

Abstract

Malware detection at scale in the Android realm is often carried out using machine learning techniques. State-of-the-art approaches such as DREBIN and MaMaDroid are reported to yield high detection rates when assessed against well-known datasets. Unfortunately, such datasets may include a large portion of duplicated samples, which may bias the recorded experimental results and the insights derived from them. In this article, we perform extensive experiments to measure the performance gap that arises when datasets are de-duplicated. Our experimental results reveal that duplication in published datasets has a limited impact on supervised malware classification models. This observation contrasts with the findings of Allamanis on the general case of machine-learning bias for big code. Our experiments, however, show that sample duplication affects unsupervised learning models (e.g., malware family clustering) more substantially. Nevertheless, we argue that our fellow researchers and practitioners should always take sample duplication into consideration when performing machine-learning-based Android malware detection, whether via supervised or unsupervised learning, no matter how significant the impact might be.
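At its simplest, the de-duplication step the abstract refers to amounts to hashing each sample and keeping one representative per digest. The sketch below is a minimal Python illustration of that idea, not the paper's actual pipeline; the `samples/` directory and the function names are hypothetical, and a real study may also de-duplicate at a finer granularity (e.g., per extracted feature vector) rather than per APK byte stream.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def deduplicate(apk_dir: str) -> list[Path]:
    """Keep one representative per unique digest; drop byte-identical duplicates."""
    seen: set[str] = set()
    unique: list[Path] = []
    for apk in sorted(Path(apk_dir).glob("*.apk")):
        h = sha256_of(apk)
        if h not in seen:
            seen.add(h)
            unique.append(apk)
    return unique


if __name__ == "__main__":
    kept = deduplicate("samples/")  # hypothetical dataset directory
    print(f"{len(kept)} unique samples retained")
```

Exact-hash matching of this kind only removes byte-identical copies; repackaged or re-signed variants of the same app would survive it, which is one reason duplication criteria vary across studies.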

Original language: English
Article number: 40
Number of pages: 38
Journal: ACM Transactions on Software Engineering and Methodology
Volume: 30
Issue number: 3
DOIs
Publication status: Published - May 2021

Keywords

  • Android
  • dataset
  • duplication
  • machine learning
  • malware detection
