Imperceptible and Reliable Adversarial Attack
Communications in Computer and Information Science, ISSN: 1865-0937, Vol: 1558 CCIS, Page: 49-62
2022
- 2 Citations
Metrics Details
- Citations: 2
- Citation Indexes: 2
Conference Paper Description
Deep neural networks are vulnerable to adversarial examples, which can fool classifiers by adding small perturbations. Various adversarial attack methods have been proposed in the past several years, and most of them add the perturbation in a “sparse” or “global” way. Since the number of pixels perturbed by the “sparse” method and the per-pixel perturbation intensity added by the “global” method are both small, the adversarial property can be destroyed easily. As a result, both the adversarial attack and adversarial training based on these samples become unreliable. To address this issue, we present a “pixel-wise” method that lies between the “sparse” and “global” approaches. First, human eyes perceive errors differently in different image regions. Second, image processing methods affect different areas of an image differently. Based on these two considerations, we propose an imperceptible and reliable adversarial attack method that projects the perturbation onto different areas differently. Extensive experiments demonstrate that our method preserves attack ability while maintaining good visual quality. More importantly, the proposed projection can be combined with existing attack methods to yield a stronger generation algorithm that improves the robustness of adversarial examples. Based on the proposed method, the reliability of adversarial attacks can be greatly improved.
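The abstract does not spell out the projection itself, but one common way to realize region-dependent perturbation is to weight the noise by local texture, exploiting the fact that human eyes are less sensitive to errors in busy regions. Below is a minimal PyTorch sketch along those lines; the weighting by local standard deviation, the function names, and the ε budget are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def texture_weight_map(image: torch.Tensor, kernel_size: int = 7) -> torch.Tensor:
    """Per-pixel weight in [0, 1] from local standard deviation.

    Smooth regions (low local variance) are perceptually sensitive, so they
    get weights near 0; heavily textured regions get weights near 1.
    Assumes `image` has shape [B, 3, H, W] with values in [0, 1].
    """
    gray = image.mean(dim=1, keepdim=True)  # crude luminance, [B, 1, H, W]
    pad = kernel_size // 2
    mean = F.avg_pool2d(gray, kernel_size, stride=1, padding=pad)
    sq_mean = F.avg_pool2d(gray * gray, kernel_size, stride=1, padding=pad)
    std = (sq_mean - mean * mean).clamp(min=0.0).sqrt()
    # Normalize per image so the weights span [0, 1].
    return std / (std.amax(dim=(2, 3), keepdim=True) + 1e-8)


def project_perturbation(delta: torch.Tensor, image: torch.Tensor,
                         eps_max: float = 8 / 255) -> torch.Tensor:
    """Scale a raw perturbation pixel-wise by the texture weight, then clip
    to the overall budget, concentrating noise where it is least visible."""
    return (delta * texture_weight_map(image)).clamp(-eps_max, eps_max)


# Illustrative FGSM-style use: after `loss.backward()` on some classifier,
#   raw_delta = eps_max * images.grad.sign()
#   adv = (images + project_perturbation(raw_delta, images)).clamp(0.0, 1.0)
```

Because the projection only rescales an existing perturbation, it can be dropped in after any gradient-based attack step (FGSM, PGD, etc.), which matches the abstract's claim that the method combines with existing attacks.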
Bibliographic Details
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85126232760&origin=inward
- http://dx.doi.org/10.1007/978-981-19-0523-0_4
- https://link.springer.com/10.1007/978-981-19-0523-0_4
- https://dx.doi.org/10.1007/978-981-19-0523-0_4
- https://link.springer.com/chapter/10.1007/978-981-19-0523-0_4
Springer Science and Business Media LLC