Beyond Accuracy: An Empirical Study on Unit Testing in Open-source Deep Learning Projects

Han Wang, Sijia Yu, Chunyang Chen, Burak Turhan, Xiaodong Zhu

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Deep Learning (DL) models have advanced rapidly, with development focused on achieving high performance by testing model accuracy and robustness. However, it is unclear whether DL projects, as software systems, are thoroughly tested and functionally correct, even though they need to be treated and tested like other software systems. We therefore empirically study the unit tests in open-source DL projects, analyzing 9,129 projects from GitHub. We find that: (1) unit-tested DL projects correlate positively with open-source project metrics and have a higher acceptance rate of pull requests; (2) 68% of the sampled DL projects are not unit tested at all; (3) the layer and utilities (utils) modules of DL models have the most unit tests. Based on these findings and previous research outcomes, we build a taxonomy mapping unit tests to faults in DL projects. We discuss the implications of our findings for developers and researchers and highlight the need for unit testing in open-source DL projects to ensure their reliability and stability. The study contributes to the community by raising awareness of the importance of unit testing in DL projects and encouraging further research in this area.

Original language: English
Article number: 104
Number of pages: 22
Journal: ACM Transactions on Software Engineering and Methodology
Volume: 33
Issue number: 4
DOIs
Publication status: Published - 18 Apr 2024

Keywords

  • Deep learning
  • unit testing