Untargeted white-box adversarial attack to break into deep learning-based COVID-19 monitoring face mask detection system

Multimed Tools Appl. 2023 May 5:1-27. doi: 10.1007/s11042-023-15405-x. Online ahead of print.

Abstract

The face mask detection system has been a valuable tool in combating COVID-19 by preventing its rapid transmission. This article demonstrated that present deep learning-based face mask detection systems are vulnerable to adversarial attacks. We proposed a framework for a robust face mask detection system that resists such attacks. We first developed a face mask detection system by fine-tuning the MobileNetV2 model on a custom-built dataset. The model performed exceptionally well, achieving 95.83% accuracy on test data. The model's performance was then assessed on adversarial images generated with the fast gradient sign method (FGSM). The FGSM attack reduced the classification accuracy from 95.83% to 14.53%, indicating that the attack severely degraded the model's performance. Finally, we illustrated that the proposed robust framework enhanced the model's resistance to adversarial attacks: although the robust model's accuracy on unseen clean data dropped from 95.83% to 92.79%, its accuracy on adversarial data improved from 14.53% to 92%. We expect our research to heighten awareness of adversarial attacks on COVID-19 monitoring systems and to inspire others to protect healthcare systems from similar attacks.
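The FGSM referenced above perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇_x J(θ, x, y)). Below is a minimal sketch of such an attack, assuming a Keras classifier with pixel values scaled to [0, 1]; the function name fgsm_perturb, the ε value, and the use of categorical cross-entropy are illustrative assumptions, not details taken from the paper:

    import tensorflow as tf

    def fgsm_perturb(model, x, y, epsilon=0.01):
        # Illustrative FGSM sketch; epsilon and the loss choice are assumptions.
        x = tf.convert_to_tensor(x, dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x)  # track gradients with respect to the input image
            loss = tf.keras.losses.categorical_crossentropy(y, model(x))
        grad = tape.gradient(loss, x)        # gradient of the loss w.r.t. x
        x_adv = x + epsilon * tf.sign(grad)  # step in the sign direction
        return tf.clip_by_value(x_adv, 0.0, 1.0)  # keep pixels in valid range

One common way to obtain the robustness described in the abstract is adversarial training, i.e., re-training the classifier on a mix of clean and FGSM-perturbed images; the paper does not name its defense here, but the reported trade-off between clean accuracy and adversarial accuracy is characteristic of that approach.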

Keywords: Adversarial attacks; Adversarial example; COVID-19; Deep learning; Face mask recognition; Robustness.