DeepGauge

multi-granularity testing criteria for deep learning systems

Lei Ma, Felix Juefei-Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Chunyang Chen, Ting Su, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

22 Citations (Scopus)

Abstract

Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network from a set of training data. DL has seen wide adoption in many safety-critical scenarios. However, a plethora of studies have shown that state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by its accuracy on test data. Given the limited availability of high-quality test data, good accuracy on test data alone can hardly provide confidence in the testing adequacy and generality of DL systems. Unlike traditional software systems, which have clear and controllable logic and functionality, a DL system's lack of interpretability makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. We demonstrate an in-depth evaluation of the proposed testing criteria on two well-known datasets, five DL systems, and four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.
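The record itself does not spell out the individual criteria, so a minimal sketch of one of them may help. Among DeepGauge's neuron-level criteria is k-multisection neuron coverage (KMNC), which profiles each neuron's activation range on the training data, splits that range into k sections, and measures how many sections the test inputs reach. The Python below is an illustrative sketch only; the array shapes, the function name, and the use of random numbers in place of real model activations are assumptions, not the paper's implementation.

    import numpy as np

    def k_multisection_neuron_coverage(train_acts, test_acts, k=10):
        # train_acts, test_acts: (num_inputs, num_neurons) arrays of neuron outputs.
        # For each neuron, the [low, high] range observed on the training data is
        # split into k equal sections; KMNC is the fraction of all neuron sections
        # that at least one test input falls into. Out-of-range activations are
        # ignored here (the paper treats those with separate boundary and
        # strong-activation criteria).
        low = train_acts.min(axis=0)
        high = train_acts.max(axis=0)
        num_neurons = train_acts.shape[1]
        width = np.where(high > low, high - low, 1.0)  # avoid division by zero
        covered = np.zeros((num_neurons, k), dtype=bool)
        for acts in test_acts:
            in_range = (acts >= low) & (acts <= high)
            sections = np.clip(((acts - low) / width * k).astype(int), 0, k - 1)
            covered[np.arange(num_neurons)[in_range], sections[in_range]] = True
        return covered.sum() / float(num_neurons * k)

    # Toy usage with random values standing in for a real model's activations.
    rng = np.random.default_rng(0)
    train = rng.normal(size=(1000, 50))
    test = rng.normal(size=(200, 50))
    print(f"KMNC: {k_multisection_neuron_coverage(train, test, k=10):.3f}")

In the paper, neuron-level criteria of this kind are complemented by layer-level criteria (top-k neuron coverage and top-k neuron patterns), which the same activation matrices would support.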

Original language: English
Title of host publication: ASE'18 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering
Subtitle of host publication: September 3–7, 2018, Montpellier, France
Editors: Gordon Fraser, Christian Kastner
Place of Publication: New York NY USA
Publisher: Association for Computing Machinery (ACM)
Pages: 120-131
Number of pages: 12
ISBN (Electronic): 9781450359375
DOIs: https://doi.org/10.1145/3238147.3238202
Publication status: Published - 2018
Event: Automated Software Engineering Conference 2018 - Corum Conference Center, Montpellier, France
Duration: 3 Sep 2018 – 7 Sep 2018
Conference number: 33rd
http://www.ase2018.com/

Conference

Conference: Automated Software Engineering Conference 2018
Abbreviated title: ASE 2018
Country: France
City: Montpellier
Period: 3/09/18 – 7/09/18
Internet address: http://www.ase2018.com/

Keywords

  • Deep learning
  • Deep neural networks
  • Software testing
  • Testing criteria

Cite this

Ma, L., Juefei-Xu, F., Zhang, F., Sun, J., Xue, M., Li, B., ... Wang, Y. (2018). DeepGauge: multi-granularity testing criteria for deep learning systems. In G. Fraser, & C. Kastner (Eds.), ASE'18 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering: September 3–7, 2018 Montpellier, France (pp. 120-131). New York NY USA: Association for Computing Machinery (ACM). https://doi.org/10.1145/3238147.3238202