PlumX Metrics

Enhancing the adversarial robustness in medical image classification: exploring adversarial machine learning with vision transformers-based models

Neural Computing and Applications, ISSN: 1433-3058
2024
  • Citations: 0
  • Usage: 0
  • Captures: 1
  • Mentions: 0
  • Social Media: 0


Article Description

Despite their remarkable achievements in computer-aided medical image analysis tasks, including detection, segmentation, and classification, deep learning techniques remain vulnerable to imperceptible adversarial attacks, which could lead to misdiagnosis in clinical applications. Adversarial attacks and their defenses in deep medical diagnosis systems have therefore witnessed remarkable progress in recent years. Although the importance of transformers in various medical applications has grown immensely, their susceptibility to adversarial attacks, a critical concern for reliability and security, has not yet been sufficiently investigated. Furthermore, many studies in ViT-based adversarial machine learning focus mainly on the pure ViT architecture. To this end, this paper provides a comprehensive evaluation, comparison, and analysis of state-of-the-art ViT-based models, such as ViT, DeiT, Swin Transformer, and PVTv2, for their robustness against FGSM and PGD adversarial machine learning attacks, and investigates the impact of the k-step PGD adversarial training defense mechanism across various medical imaging tasks. The findings indicate that ViT-based models are vulnerable to adversarial attacks even at small perturbation degrees. The significant drop in accuracy from around 90.0% underlines the vulnerability of these models and highlights the urgent need for robust defenses. We also conclude that ViT-based models gain significant robustness through adversarial training, i.e., the defense strategy achieves improved classification accuracy, close to the clean-image accuracy. Through its quantitative analysis, this study aims to fill the gap in research on the robustness of ViT-based models to adversarial machine learning attacks in medical image analysis and to highlight future research directions.
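The FGSM and k-step PGD attacks evaluated in the paper can be sketched as follows. This is a minimal NumPy illustration on a toy logistic classifier, not the paper's ViT setup; the model, weights, and attack parameters here are illustrative assumptions.

```python
import numpy as np

# Toy binary "classifier": logistic regression on a flattened image.
# All names and values below are illustrative, not from the paper.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # weights of the toy model
b = 0.1
x = rng.uniform(size=16)  # a "clean image" with pixels in [0, 1]
y = 1.0                   # true label

def loss_grad(x_in):
    """Gradient of the binary cross-entropy loss w.r.t. the input pixels."""
    z = w @ x_in + b
    p = 1.0 / (1.0 + np.exp(-z))
    return (p - y) * w

def fgsm(x_in, eps):
    """One-step FGSM: move each pixel by eps in the sign of the loss gradient."""
    return np.clip(x_in + eps * np.sign(loss_grad(x_in)), 0.0, 1.0)

def pgd(x_in, eps, alpha, steps):
    """k-step PGD: iterated FGSM with projection back into the eps-ball."""
    x_adv = x_in.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))
        x_adv = np.clip(x_adv, x_in - eps, x_in + eps)  # L_inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)                # keep valid pixel range
    return x_adv

x_fgsm = fgsm(x, eps=0.03)
x_pgd = pgd(x, eps=0.03, alpha=0.01, steps=10)
```

In k-step PGD adversarial training, such `pgd` examples would be generated on the fly during training and the model fit on them, which is the defense mechanism the paper investigates.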
