PlumX Metrics

Imperceptible and Reliable Adversarial Attack

Communications in Computer and Information Science, ISSN: 1865-0937, Vol: 1558 CCIS, Page: 49-62
2022
  • Citations: 2
  • Usage: 0
  • Captures: 0
  • Mentions: 0
  • Social Media: 0


Conference Paper Description

Deep neural networks are vulnerable to adversarial examples, which can fool classifiers through small added perturbations. Various adversarial attack methods have been proposed in recent years, and most add the perturbation in a "sparse" or "global" way. Since the number of pixels perturbed by the "sparse" method and the per-pixel perturbation intensity added by the "global" method are both small, the adversarial property can easily be destroyed, which makes both the adversarial attack and adversarial training based on these samples unreliable. To address this issue, we present a "pixel-wise" method that lies between the "sparse" and "global" approaches. It rests on two observations: first, the human eye perceives errors differently in different image regions; second, image processing methods affect different areas of an image differently. Based on these two considerations, we propose an imperceptible and reliable adversarial attack method that projects the perturbation onto different areas differently. Extensive experiments demonstrate that our method preserves attack ability while maintaining good visual quality. More importantly, the proposed projection can be combined with existing attack methods to yield a stronger generation algorithm that improves the robustness of adversarial examples. Based on the proposed method, the reliability of adversarial attacks can be greatly improved.
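The abstract does not give the paper's actual projection rule, but the core idea — scale the perturbation per region according to how perceptible it is there — can be sketched. A minimal illustration, assuming a simple local-standard-deviation texture measure as the regional weight (the function and parameter names here are hypothetical, not the authors' method):

```python
import numpy as np

def texture_weight_mask(image, patch=8, eps=1e-8):
    """Hypothetical regional weight: each patch is weighted by its local
    standard deviation, so textured regions (where errors are less visible
    to the human eye) receive larger weights than smooth regions."""
    h, w = image.shape
    mask = np.zeros_like(image, dtype=np.float64)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = image[i:i + patch, j:j + patch]
            mask[i:i + patch, j:j + patch] = block.std()
    return mask / (mask.max() + eps)  # normalize weights to [0, 1]

def project_perturbation(perturbation, image, eps_max=8 / 255):
    """Scale a raw perturbation region by region, then clip to an
    L-infinity budget, so smooth areas get smaller, less perceptible changes."""
    weights = texture_weight_mask(image)
    return np.clip(perturbation * weights, -eps_max, eps_max)

# Toy example: left half of the image is flat (smooth), right half is noisy.
rng = np.random.default_rng(0)
img = np.concatenate([np.full((16, 16), 0.5),
                      rng.uniform(0.0, 1.0, (16, 16))], axis=1)
raw = np.full_like(img, 8 / 255)        # a uniform "global" perturbation
proj = project_perturbation(raw, img)
# The smooth half is damped far more than the textured half.
print(proj[:, :16].mean(), proj[:, 16:].mean())
```

In a real attack loop this projection would be applied after each gradient step (e.g. inside FGSM/PGD iterations), which is how a region-aware projection can be combined with existing attack methods as the abstract describes.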
