TY - GEN
T1 - Medical Image Classification Attack Based on Texture Manipulation
AU - Gu, Yunrui
AU - Kong, Cong
AU - Yin, Zhaoxia
AU - Wang, Yan
AU - Li, Qingli
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - The security of artificial intelligence systems has received great attention over the past few years, especially in the field of smart medical diagnosis. To enhance the security of smart medical systems, it is important to study adversarial attack methods so as to improve defense performance; the central aspect of adversarial attacks lies in crafting effective strategies that can integrate covert malicious behaviors within the system. However, due to the diversity of medical imaging modes and dimensions, creating a unified attack approach that produces imperceptible examples with high content similarity and applies across various medical image classification systems presents significant challenges. Most existing attack methods target natural image classification models; they inevitably add global noise to the image, making the attack more visible, and they fail to take into account that medical image classification tasks rely more heavily on texture information. To address this issue, we propose a new adversarial attack method based on changing texture information that utilizes the CycleGAN approach, while also incorporating AdvGAN to ensure a high attack success rate. Our method can attack a variety of medical image classification tasks. Our experiments use two public medical image datasets, a chest X-ray dataset and a melanoma dermoscopy dataset, which differ in imaging mode and dimensions. The results indicate that our model outperforms other state-of-the-art adversarial attack methods when attacking medical image classification tasks across different imaging modes and dimensions.
AB - The security of artificial intelligence systems has received great attention over the past few years, especially in the field of smart medical diagnosis. To enhance the security of smart medical systems, it is important to study adversarial attack methods so as to improve defense performance; the central aspect of adversarial attacks lies in crafting effective strategies that can integrate covert malicious behaviors within the system. However, due to the diversity of medical imaging modes and dimensions, creating a unified attack approach that produces imperceptible examples with high content similarity and applies across various medical image classification systems presents significant challenges. Most existing attack methods target natural image classification models; they inevitably add global noise to the image, making the attack more visible, and they fail to take into account that medical image classification tasks rely more heavily on texture information. To address this issue, we propose a new adversarial attack method based on changing texture information that utilizes the CycleGAN approach, while also incorporating AdvGAN to ensure a high attack success rate. Our method can attack a variety of medical image classification tasks. Our experiments use two public medical image datasets, a chest X-ray dataset and a melanoma dermoscopy dataset, which differ in imaging mode and dimensions. The results indicate that our model outperforms other state-of-the-art adversarial attack methods when attacking medical image classification tasks across different imaging modes and dimensions.
KW - Adversarial attack
KW - Medical diagnosis
KW - Texture
UR - https://www.scopus.com/pages/publications/85212308340
U2 - 10.1007/978-3-031-78198-8_3
DO - 10.1007/978-3-031-78198-8_3
M3 - Conference contribution
AN - SCOPUS:85212308340
SN - 9783031781971
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 33
EP - 48
BT - Pattern Recognition - 27th International Conference, ICPR 2024, Proceedings
A2 - Antonacopoulos, Apostolos
A2 - Chaudhuri, Subhasis
A2 - Chellappa, Rama
A2 - Liu, Cheng-Lin
A2 - Bhattacharya, Saumik
A2 - Pal, Umapada
PB - Springer Science and Business Media Deutschland GmbH
T2 - 27th International Conference on Pattern Recognition, ICPR 2024
Y2 - 1 December 2024 through 5 December 2024
ER -