Authors: M. K. Ahuja, A. Gotlieb and H. Spieker
Title: Testing Deep Learning Models: A First Comparative Study of Multiple Testing Techniques
Affiliation: Software Engineering
Project(s): Department of Validation Intelligence for Autonomous Software Systems, Testing of Learning Robots (T-LARGO)
Status: Published
Publication Type: Proceedings, refereed
Year of Publication: 2022
Conference Name: Artificial Intelligence in Software Testing @ 2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)
Publisher: IEEE
ISBN Number: 978-1-6654-9628-5
Other Numbers: arXiv:2202.12139
Abstract

Deep Learning (DL) has revolutionized the capabilities of vision-based systems (VBS) in critical applications such as autonomous driving, robotic surgery, critical infrastructure surveillance, and air and maritime traffic control. By analyzing images, voice, videos, or other complex signals, DL has considerably increased the situation awareness of these systems. At the same time, as VBS rely more and more on trained DL models, their reliability and robustness have been challenged, and it has become crucial to thoroughly test these models to assess their capabilities and potential errors. To discover faults in DL models, existing software testing methods have been adapted and refined accordingly. In this article, we provide an overview of these software testing methods, namely differential, metamorphic, mutation, and combinatorial testing, as well as adversarial perturbation testing, and review some challenges in their deployment for boosting perception systems used in VBS. We also provide a first experimental comparative study on a classical benchmark used in VBS and discuss its results.
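To illustrate one of the surveyed techniques, here is a minimal sketch of metamorphic testing, in which a model's prediction is checked for invariance under an input transformation rather than against a ground-truth label. The `classify` model below is a hypothetical stand-in (not from the paper); a real study would apply the same relation to a trained DL model.

```python
def classify(image):
    # Hypothetical stand-in classifier: labels an image by mean brightness.
    # A real metamorphic test would call a trained DL model here instead.
    mean = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    return "bright" if mean > 0.5 else "dark"

def horizontal_flip(image):
    # Metamorphic relation: flipping an image left-right should not
    # change the predicted class.
    return [list(reversed(row)) for row in image]

def metamorphic_test(model, image):
    # No ground-truth label is needed: the test only checks that the
    # prediction is invariant under the transformation.
    return model(image) == model(horizontal_flip(image))

image = [[0.9, 0.8], [0.7, 0.6]]
print(metamorphic_test(classify, image))
```

Other relations (rotation, brightness shifts, added noise) follow the same pattern; a violation of any relation signals a potential fault in the model without requiring labeled test data.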

URL: https://ieeexplore.ieee.org/abstract/document/9787976
DOI: 10.1109/ICSTW55395.2022.00035
Citation Key: 33066
