|Authors||M. K. Ahuja, A. Gotlieb and H. Spieker|
|Title||Testing Deep Learning Models: A First Comparative Study of Multiple Testing Techniques|
|Project(s)||Department of Validation Intelligence for Autonomous Software Systems, Testing of Learning Robots (T-LARGO)|
|Publication Type||Proceedings, refereed|
|Year of Publication||2022|
|Conference Name||Artificial Intelligence in Software Testing @ 2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)|
Deep Learning (DL) has revolutionized the capabilities of vision-based systems (VBS) in critical applications such as autonomous driving, robotic surgery, critical infrastructure surveillance, and air and maritime traffic control. By analyzing images, voice, videos, or other complex signals, DL has considerably increased the situational awareness of these systems. At the same time, as VBS rely more and more on trained DL models, their reliability and robustness have been challenged, and it has become crucial to thoroughly test these models to assess their capabilities and potential errors. To discover faults in DL models, existing software testing methods have been adapted and refined accordingly. In this article, we provide an overview of these software testing methods, namely differential, metamorphic, mutation, and combinatorial testing, as well as adversarial perturbation testing, and review some challenges in their deployment for strengthening the perception systems used in VBS. We also provide a first experimental comparative study on a classical benchmark used in VBS and discuss its results.
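To illustrate one of the techniques surveyed in the paper, the following is a minimal sketch of metamorphic testing for an image classifier. The classifier here is a toy stand-in (not a model from the study), and the metamorphic relation, that a horizontal flip should not change the predicted class, is an illustrative assumption, not one taken from the paper:

```python
import numpy as np

def toy_classifier(image: np.ndarray) -> int:
    # Hypothetical stand-in for a trained DL model:
    # predicts label 1 if the mean pixel intensity exceeds 0.5, else 0.
    return int(image.mean() > 0.5)

def metamorphic_flip_test(model, image: np.ndarray) -> bool:
    # Metamorphic relation: horizontally flipping the input
    # should not change the predicted class.
    return model(image) == model(np.fliplr(image))

# Run the relation over a batch of random test inputs; any
# disagreement between the original and flipped prediction is
# reported as a metamorphic violation (a potential fault).
rng = np.random.default_rng(0)
images = [rng.random((8, 8)) for _ in range(100)]
failures = [img for img in images
            if not metamorphic_flip_test(toy_classifier, img)]
print(f"{len(failures)} metamorphic violations out of {len(images)} images")
```

Note that the toy classifier depends only on the mean intensity, which a flip preserves, so it trivially satisfies the relation; a real DL model can violate such relations, and each violation flags an input worth inspecting, without requiring a labeled test oracle.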