TY - GEN
T1 - Transferable Adversarial Examples with Bayesian Approach
AU - Fan, Mingyuan
AU - Chen, Cen
AU - Zhou, Wenmeng
AU - Wang, Yinggui
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2025/8/24
Y1 - 2025/8/24
N2 - The vulnerability of deep neural networks (DNNs) to black-box adversarial attacks is one of the most actively studied topics in trustworthy AI. In such attacks, the attacker operates without any insider knowledge of the target model, making the cross-model transferability of adversarial examples critical. Although adversarial examples can in principle be effective across various models, those crafted for a specific model often exhibit poor transferability in practice. In this paper, we explore the transferability of adversarial examples through the lens of the Bayesian approach. Specifically, we leverage the Bayesian approach to probe transferability and then study what constitutes a transferability-promoting prior. Building on this analysis, we design two concrete transferability-promoting priors, along with an adaptive dynamic weighting strategy for instances sampled from these priors. Combining these techniques, we present BayAtk. Extensive experiments demonstrate the effectiveness of BayAtk in crafting more transferable adversarial examples against both undefended and defended black-box models compared to existing state-of-the-art attacks.
AB - The vulnerability of deep neural networks (DNNs) to black-box adversarial attacks is one of the most actively studied topics in trustworthy AI. In such attacks, the attacker operates without any insider knowledge of the target model, making the cross-model transferability of adversarial examples critical. Although adversarial examples can in principle be effective across various models, those crafted for a specific model often exhibit poor transferability in practice. In this paper, we explore the transferability of adversarial examples through the lens of the Bayesian approach. Specifically, we leverage the Bayesian approach to probe transferability and then study what constitutes a transferability-promoting prior. Building on this analysis, we design two concrete transferability-promoting priors, along with an adaptive dynamic weighting strategy for instances sampled from these priors. Combining these techniques, we present BayAtk. Extensive experiments demonstrate the effectiveness of BayAtk in crafting more transferable adversarial examples against both undefended and defended black-box models compared to existing state-of-the-art attacks.
KW - Adversarial examples
KW - Deep neural networks
KW - Transferability
UR - https://www.scopus.com/pages/publications/105015982395
U2 - 10.1145/3708821.3710827
DO - 10.1145/3708821.3710827
M3 - Conference contribution
AN - SCOPUS:105015982395
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 517
EP - 529
BT - ACM ASIA CCS 2025 - Proceedings of the 20th ACM ASIA Conference on Computer and Communications Security
PB - Association for Computing Machinery
T2 - 20th ACM ASIA Conference on Computer and Communications Security, ASIA CCS 2025
Y2 - 25 August 2025 through 29 August 2025
ER -