GradMDM: Adversarial attack on dynamic networks

Jianhong Pan, Lin Geng Foo, Qichen Zheng, Zhipeng Fan, Hossein Rahmani, Qiuhong Ke, Jun Liu

Research output: Contribution to journal › Article › Research › peer-review

2 Citations (Scopus)

Abstract

Dynamic neural networks can greatly reduce computation redundancy without compromising accuracy by adapting their structures based on the input. In this paper, we explore the robustness of dynamic neural networks against energy-oriented attacks, which aim to reduce their efficiency. Specifically, we attack dynamic models with our novel algorithm GradMDM, a technique that adjusts both the direction and the magnitude of the gradients to effectively find a small perturbation for each input that activates more computational units of dynamic models during inference. We evaluate GradMDM on multiple datasets and dynamic models, where it outperforms previous energy-oriented attack techniques, significantly increasing computation complexity while reducing the perceptibility of the perturbations. Code is available at https://github.com/lingengfoo/GradMDM.
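The abstract describes a gradient-based, energy-oriented attack: a small per-input perturbation is optimized to switch on more computational units of a dynamic model. The sketch below illustrates this general idea only; it is not the GradMDM algorithm itself (whose specific gradient direction/magnitude adjustments are detailed in the paper). The `energy_attack` function, the assumption that the model returns its gate activations alongside its logits, and all hyperparameter values are hypothetical.

```python
import torch

def energy_attack(model, x, epsilon=8 / 255, alpha=1 / 255, steps=10):
    """Hypothetical sketch of an energy-oriented attack.

    Performs sign-gradient ascent on a proxy "computation" objective so
    that the dynamic model activates more of its gated units. `model` is
    assumed to return (logits, list_of_gate_activations), where each gate
    activation lies in [0, 1] and higher values mean more computation.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        _, gates = model(x + delta)
        # Proxy objective: push every gate toward "on" (more computation).
        loss = torch.stack([g.mean() for g in gates]).sum()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascent step on the proxy loss
            delta.clamp_(-epsilon, epsilon)     # keep the perturbation small
            delta.grad.zero_()
    return (x + delta).detach()
```

The L∞ clamp keeps the perturbation low-perceptibility, mirroring the paper's goal of raising computation cost without visible changes; the actual GradMDM gradient reshaping replaces the plain `sign()` step used here.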

Original language: English
Pages (from-to): 11374-11381
Number of pages: 8
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 45
Issue number: 9
DOIs
Publication status: Published - 1 Sept 2023

Keywords

  • Computational efficiency
  • Computational modeling
  • Computer architecture
  • Logic gates
  • Neural networks
  • Perturbation methods
  • Robustness
