TY - JOUR
T1 - Efficient and transferable reversible adversarial attacks utilizing YUV color space
AU - Fan, Yucheng
AU - Yin, Zhaoxia
AU - Chen, Jiawei
AU - Lyu, Wanli
N1 - Publisher Copyright:
© 2025 Elsevier B.V.
PY - 2025/11/1
Y1 - 2025/11/1
N2 - Adversarial attacks, which involve adding subtle perturbations to images, pose a significant threat to the secure deployment of deep neural networks. However, when integrated with reversible data hiding (RDH) technology, generated adversarial examples (AEs) can both prevent malicious identification and enable error-free recovery of the original image. This technique is known as error-free reversible adversarial attack. Despite its potential, existing error-free reversible adversarial attack methods primarily focus on feasibility, attack success rate, and image quality, neglecting cross-model transferability and ineffective perturbations, such as embedding-overwritten and generation-redundant perturbations. These issues result in relatively slow operational speeds and limit their applicability to unknown models. To address these challenges, a novel error-free reversible adversarial attack method based on the YUV color space is proposed. By separating the luminance and chrominance channels, this space allows for more efficient image processing. Our method adopts a dual-strategy design: Y-channel attacks (e.g., YFGSM, YI-FGSM, YPGD) are used to eliminate generation-redundant perturbations, while the embedding of perturbation information into the UV channels avoids overwriting, thereby enhancing both transferability and computational efficiency. Furthermore, an ensemble-based attack strategy is introduced to further improve cross-model performance. Experimental results demonstrate that our method not only enables error-free recovery of the original image but also maintains high visual quality, achieves high operational speed, and exhibits strong transferability across multiple models.
AB - Adversarial attacks, which involve adding subtle perturbations to images, pose a significant threat to the secure deployment of deep neural networks. However, when integrated with reversible data hiding (RDH) technology, generated adversarial examples (AEs) can both prevent malicious identification and enable error-free recovery of the original image. This technique is known as error-free reversible adversarial attack. Despite its potential, existing error-free reversible adversarial attack methods primarily focus on feasibility, attack success rate, and image quality, neglecting cross-model transferability and ineffective perturbations, such as embedding-overwritten and generation-redundant perturbations. These issues result in relatively slow operational speeds and limit their applicability to unknown models. To address these challenges, a novel error-free reversible adversarial attack method based on the YUV color space is proposed. By separating the luminance and chrominance channels, this space allows for more efficient image processing. Our method adopts a dual-strategy design: Y-channel attacks (e.g., YFGSM, YI-FGSM, YPGD) are used to eliminate generation-redundant perturbations, while the embedding of perturbation information into the UV channels avoids overwriting, thereby enhancing both transferability and computational efficiency. Furthermore, an ensemble-based attack strategy is introduced to further improve cross-model performance. Experimental results demonstrate that our method not only enables error-free recovery of the original image but also maintains high visual quality, achieves high operational speed, and exhibits strong transferability across multiple models.
KW - Ensemble model
KW - Reversible adversarial attack
KW - Transferability
KW - Y-channel attack
KW - YUV color space
UR - https://www.scopus.com/pages/publications/105011849004
U2 - 10.1016/j.neucom.2025.131088
DO - 10.1016/j.neucom.2025.131088
M3 - Article
AN - SCOPUS:105011849004
SN - 0925-2312
VL - 652
JO - Neurocomputing
JF - Neurocomputing
M1 - 131088
ER -