Towards characterizing adversarial defects of deep learning software from the lens of uncertainty

Xiyue Zhang, Xiaofei Xie, Lei Ma, Xiaoning Du, Qiang Hu, Yang Liu, Jianjun Zhao, Meng Sun

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

5 Citations (Scopus)

Abstract

Over the past decade, deep learning (DL) has been successfully applied to many industrial domain-specific tasks. However, current state-of-the-art DL software still suffers from quality issues, which raises great concern, especially in safety- and security-critical scenarios. Adversarial examples (AEs), inputs on which DL software makes incorrect decisions, represent a typical and important type of defect that urgently needs to be addressed. Such defects arise either from intentional attacks or from physical-world noise perceived by input sensors, and they potentially hinder further industrial deployment. The intrinsic uncertainty in deep learning decisions can be a fundamental cause of this incorrect behavior. Although testing, adversarial attack, and defense techniques have recently been proposed, a systematic study uncovering the relationship between AEs and DL uncertainty is still lacking. In this paper, we conduct a large-scale study towards bridging this gap. We first investigate the capability of multiple uncertainty metrics to differentiate benign examples (BEs) from AEs, which enables us to characterize the uncertainty patterns of input data. We then identify and categorize the uncertainty patterns of BEs and AEs, and find that while BEs and AEs generated by existing methods do follow common uncertainty patterns, some other uncertainty patterns are largely missed. Based on this, we propose an automated testing technique that generates multiple types of uncommon AEs and BEs largely missed by existing techniques. Further evaluation reveals that the uncommon data generated by our method are hard for existing defense techniques to handle, reducing the average defense success rate by 35%. Our results call attention to the need for more diverse data when evaluating quality assurance solutions for DL software.
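The study evaluates multiple uncertainty metrics for differentiating BEs from AEs. As a minimal illustration (not the authors' implementation; the toy model and parameter choices below are hypothetical), the sketch computes one widely used metric of this kind: predictive entropy estimated via Monte Carlo dropout, where dropout stays active at inference time and the entropy of the softmax output averaged over several stochastic forward passes serves as the uncertainty score.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier; any network containing dropout layers works.
class ToyClassifier(nn.Module):
    def __init__(self, in_dim: int = 32, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),  # the source of stochasticity for MC sampling
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def mc_dropout_entropy(model: nn.Module, x: torch.Tensor,
                       n_samples: int = 30) -> torch.Tensor:
    """Predictive entropy per input; higher values indicate more uncertainty.

    Dropout is left enabled (model.train()), so every forward pass samples a
    different sub-network and the averaged softmax approximates the
    predictive distribution.
    """
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )  # shape: (n_samples, batch, n_classes)
    mean_probs = probs.mean(dim=0)
    return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)

if __name__ == "__main__":
    model = ToyClassifier()
    inputs = torch.randn(8, 32)  # stand-ins for benign or adversarial inputs
    print(mc_dropout_entropy(model, inputs))
```

Intuitively, AEs often sit near decision boundaries and so tend to receive higher scores under such metrics; the observation that some AEs and BEs deviate from these common patterns is what motivates the paper's categorization and generation of uncommon data.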

Original language: English
Title of host publication: Proceedings - 2020 ACM/IEEE 42nd International Conference on Software Engineering, ICSE 2020
Editors: Jane Cleland-Huang, Darko Marinov
Place of Publication: New York, NY, USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 739-751
Number of pages: 13
ISBN (Electronic): 9781450371216
DOIs
Publication status: Published - 2020
Externally published: Yes
Event: International Conference on Software Engineering 2020 - Virtual, Online, Korea, Republic of (South)
Duration: 27 Jun 2020 – 19 Jul 2020
Conference number: 42nd
https://dl.acm.org/doi/proceedings/10.1145/3377811 (Proceedings)
https://conf.researchr.org/home/icse-2020 (Website)

Publication series

Name: Proceedings - International Conference on Software Engineering
Publisher: The Association for Computing Machinery
ISSN (Print): 0270-5257

Conference

Conference: International Conference on Software Engineering 2020
Abbreviated title: ICSE 2020
Country: Korea, Republic of (South)
City: Virtual, Online
Period: 27/06/20 – 19/07/20
Internet address

Keywords

  • Adversarial attack
  • Deep learning
  • Software testing
  • Uncertainty
