Document Type
Conference Proceeding
Publication Date
8-22-2024
Publication Title
2024 Silicon Valley Cybersecurity Conference (SVCC)
Pages
1-7
Publisher Name
IEEE
Abstract
Rapid advancements in deep learning are accelerating its adoption across a wide variety of applications, including safety-critical applications such as self-driving vehicles, drones, robots, and surveillance systems. These advancements include applying variations of sophisticated techniques to improve model performance. However, such models are not immune to adversarial manipulations, which can cause a system to misbehave while remaining unnoticed by experts. The frequency of modifications to existing deep learning models necessitates thorough analysis of their impact on model robustness. In this work, we present an experimental evaluation of the effects of model modifications on deep learning model robustness using adversarial attacks. Our methodology examines the robustness of model variations against a range of adversarial attacks. Through these experiments, we aim to shed light on the critical issue of maintaining the reliability and safety of deep learning models in safety- and security-critical applications. Our results indicate a pressing need for in-depth assessment of the effects of model changes on model robustness.
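For readers unfamiliar with this style of evaluation, the following is a minimal, illustrative sketch (not the authors' code) of measuring a model's robust accuracy under the Fast Gradient Sign Method (FGSM), one of the standard adversarial attacks used in evaluations of this kind. The PyTorch model, data loader, and epsilon value are placeholder assumptions.

    # Illustrative sketch only: FGSM robustness evaluation for a classifier.
    # Assumes a PyTorch model and a test DataLoader; epsilon is a sample value.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft FGSM examples: x_adv = x + epsilon * sign(grad_x loss)."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0, 1).detach()  # keep inputs in valid pixel range

    def robust_accuracy(model, loader, epsilon=0.03):
        """Fraction of test samples still classified correctly under attack."""
        model.eval()
        correct, total = 0, 0
        for x, y in loader:
            x_adv = fgsm_attack(model, x, y, epsilon)
            with torch.no_grad():
                pred = model(x_adv).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
        return correct / total

Comparing robust_accuracy across model variants (e.g., different architectures or training modifications) and across attack strengths is one simple way to quantify the kind of robustness differences the paper studies.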
Recommended Citation
Juraev, Firuz; Abuhamad, Mohammed; Woo, Simon S.; Thiruvathukal, George K.; and Abuhmed, Tamer. The Impact of Model Variations on the Robustness of Deep Learning Models in Adversarial Settings. 2024 Silicon Valley Cybersecurity Conference (SVCC): 1-7, 2024. Retrieved from Loyola eCommons, Computer Science: Faculty Publications and Other Works, http://dx.doi.org/10.1109/SVCC61185.2024.10637362
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright Statement
© IEEE, 2024.
Author Manuscript
This is a pre-publication author manuscript of the final, published article.

Comments
Author Posting © IEEE, 2024. This is the authors' version of the work. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The definitive version of this work was published in the proceedings of the 2024 IEEE Silicon Valley Cybersecurity Conference (August 22, 2024), https://doi.org/10.1109/SVCC61185.2024.10637362.