Enhancing the adversarial robustness in medical image classification: exploring adversarial machine learning with vision transformers-based models
Neural Computing and Applications, ISSN: 1433-3058
2024
Metrics Details
- Captures: 1
- Readers: 1
Article Description
Despite their remarkable achievements in computer-aided medical imaging tasks, including detection, segmentation, and classification, deep learning techniques remain vulnerable to imperceptible adversarial attacks, which could lead to misdiagnosis in clinical applications. Consequently, research on adversarial attacks and their defenses in deep medical diagnosis systems has made remarkable progress in recent years. Although transformers have become increasingly important in medical applications, their susceptibility to adversarial attacks, a critical concern for reliability and security, has not yet been sufficiently investigated. Furthermore, many studies on ViT-based adversarial machine learning focus mainly on the pure ViT architecture. To this end, this paper provides a comprehensive evaluation, comparison, and analysis of state-of-the-art ViT-based models, namely ViT, DeiT, Swin Transformer, and PVTv2, with respect to their robustness against FGSM and PGD adversarial machine learning attacks, and investigates the impact of the k-step PGD adversarial training defense mechanism across various medical imaging tasks. The findings indicate that ViT-based models are vulnerable to adversarial attacks even at small perturbation levels. The significant drop in accuracy from around 90.0% on clean images underlines this vulnerability and highlights the urgent need for robust defenses. We also conclude that ViT-based models gain significant robustness through adversarial training, i.e., the defense strategy achieves classification accuracy close to that on clean images. Through its quantitative analysis, this study aims to fill the gap in research on the robustness of ViT-based models to adversarial machine learning attacks in medical image analysis and highlights future research directions.
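For readers unfamiliar with the attacks and defense evaluated in the abstract, the following is a minimal PyTorch sketch of FGSM, k-step PGD, and a single PGD adversarial-training update. It is not the authors' implementation; the function names, the assumed [0, 1] pixel range, and the eps/alpha/steps defaults are illustrative assumptions only.

```python
# Minimal sketch (not the paper's code): FGSM, k-step PGD, and one
# PGD adversarial-training step for an image classifier.
# Assumes inputs are in [0, 1]; eps/alpha/steps values are illustrative.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps):
    """Single-step FGSM: perturb inputs along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + eps * grad.sign()
    return adv.clamp(0, 1).detach()

def pgd_attack(model, images, labels, eps, alpha, steps):
    """k-step PGD: iterative signed-gradient steps projected back into the
    L_inf ball of radius eps around the clean images, with a random start."""
    images = images.clone().detach()
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)  # project to eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()

def pgd_adversarial_training_step(model, optimizer, images, labels,
                                  eps=8 / 255, alpha=2 / 255, steps=7):
    """One Madry-style adversarial-training update: craft k-step PGD examples,
    then minimize the classification loss on them."""
    model.eval()  # freeze dropout/normalization statistics while attacking
    adv = pgd_attack(model, images, labels, eps, alpha, steps)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, a ViT-family backbone could be loaded via timm (e.g., `timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=...)`) and passed in as `model`; the defaults `eps=8/255`, `alpha=2/255`, `steps=7` are common L_inf settings in the adversarial-training literature, not the values reported in this paper.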
Bibliographic Details
Publisher: Springer Science and Business Media LLC