TY - JOUR
T1 - Exploiting Pre-Trained Models and Low-Frequency Preference for Cost-Effective Transfer-based Attack
AU - Fan, Mingyuan
AU - Chen, Cen
AU - Wang, Chengyu
AU - Huang, Jun
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2025/2/14
Y1 - 2025/2/14
N2 - The transferability of adversarial examples enables practical transfer-based attacks. However, existing theoretical analyses cannot effectively reveal what factors contribute to cross-model transferability. Furthermore, the assumption that the target model's dataset is available, together with the high cost of training proxy models, limits practicality. We first propose a novel frequency perspective to study transferability and then identify two factors that impair it: an unchangeable intrinsic difference term and a controllable perturbation-related term. To enhance transferability, we formulate an optimization task with a constraint that reduces the impact of the perturbation-related term, and design an approximate solution to address the intractability of the Fourier expansion. To address the second issue, we suggest employing pre-trained models, which are freely available, as proxy models. Leveraging these advancements, we introduce the cost-effective transfer-based attack (CTA), which solves the optimization task on pre-trained models. CTA can be launched against a broad range of applications, at any time, with minimal effort and nearly zero cost to attackers. This feature makes CTA an effective, versatile, and fundamental tool for attacking and understanding a wide range of target models, regardless of their architecture or training dataset. Extensive experiments show the impressive attack performance of CTA across various models trained in seven black-box domains, highlighting its broad applicability and effectiveness.
AB - The transferability of adversarial examples enables practical transfer-based attacks. However, existing theoretical analyses cannot effectively reveal what factors contribute to cross-model transferability. Furthermore, the assumption that the target model's dataset is available, together with the high cost of training proxy models, limits practicality. We first propose a novel frequency perspective to study transferability and then identify two factors that impair it: an unchangeable intrinsic difference term and a controllable perturbation-related term. To enhance transferability, we formulate an optimization task with a constraint that reduces the impact of the perturbation-related term, and design an approximate solution to address the intractability of the Fourier expansion. To address the second issue, we suggest employing pre-trained models, which are freely available, as proxy models. Leveraging these advancements, we introduce the cost-effective transfer-based attack (CTA), which solves the optimization task on pre-trained models. CTA can be launched against a broad range of applications, at any time, with minimal effort and nearly zero cost to attackers. This feature makes CTA an effective, versatile, and fundamental tool for attacking and understanding a wide range of target models, regardless of their architecture or training dataset. Extensive experiments show the impressive attack performance of CTA across various models trained in seven black-box domains, highlighting its broad applicability and effectiveness.
KW - Adversarial Examples
KW - Black-box Adversarial Attacks
KW - Deep Neural Networks
KW - Transferability
UR - https://www.scopus.com/pages/publications/86000575351
U2 - 10.1145/3680553
DO - 10.1145/3680553
M3 - Article
AN - SCOPUS:86000575351
SN - 1556-4681
VL - 19
JO - ACM Transactions on Knowledge Discovery from Data
JF - ACM Transactions on Knowledge Discovery from Data
IS - 2
M1 - 52
ER -