TY - GEN
T1 - Imperceptible Adversarial Attack on S Channel of HSV Colorspace
AU - Zhu, Tong
AU - Yin, Zhaoxia
AU - Lyu, Wanli
AU - Zhang, Jiefei
AU - Luo, Bin
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Deep neural network models are vulnerable to subtle adversarial perturbations that alter the model's predictions. Adversarial perturbations are typically computed for RGB images and are therefore evenly distributed among the RGB channels. Compared with RGB images, HSV images express hue, saturation, and value (brightness) more intuitively. We find that adversarial perturbation confined to the S channel still ensures a high attack success rate, while the perturbation remains small and the visual quality of the adversarial examples is good. Based on this finding, we propose an attack method, SPGD, that improves the visual quality of adversarial examples by generating perturbations only on the S channel. Following the attack principle of the PGD method, the RGB image is converted into an HSV image; the gradient computed by the model with respect to the S channel is superimposed on the S channel, which is then recombined with the unmodified H and V channels and converted back to an RGB image. The iteration stops when the attack succeeds. We compare SPGD with existing state-of-the-art attack methods. The results show that SPGD minimizes pixel perturbation while maintaining a high attack success rate, and achieves the best results in terms of structural similarity, imperceptibility, fewest iterations, and shortest run time.
AB - Deep neural network models are vulnerable to subtle adversarial perturbations that alter the model's predictions. Adversarial perturbations are typically computed for RGB images and are therefore evenly distributed among the RGB channels. Compared with RGB images, HSV images express hue, saturation, and value (brightness) more intuitively. We find that adversarial perturbation confined to the S channel still ensures a high attack success rate, while the perturbation remains small and the visual quality of the adversarial examples is good. Based on this finding, we propose an attack method, SPGD, that improves the visual quality of adversarial examples by generating perturbations only on the S channel. Following the attack principle of the PGD method, the RGB image is converted into an HSV image; the gradient computed by the model with respect to the S channel is superimposed on the S channel, which is then recombined with the unmodified H and V channels and converted back to an RGB image. The iteration stops when the attack succeeds. We compare SPGD with existing state-of-the-art attack methods. The results show that SPGD minimizes pixel perturbation while maintaining a high attack success rate, and achieves the best results in terms of structural similarity, imperceptibility, fewest iterations, and shortest run time.
KW - HSV
KW - adversarial attack
KW - imperceptibility
UR - https://www.scopus.com/pages/publications/85169550853
U2 - 10.1109/IJCNN54540.2023.10191049
DO - 10.1109/IJCNN54540.2023.10191049
M3 - Conference contribution
AN - SCOPUS:85169550853
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2023 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 International Joint Conference on Neural Networks, IJCNN 2023
Y2 - 18 June 2023 through 23 June 2023
ER -