TY - GEN
T1 - Learning Extremely Lightweight and Robust Model with Differentiable Constraints on Sparsity and Condition Number
AU - Wei, Xian
AU - Xu, Yangyu
AU - Huang, Yanhui
AU - Lv, Hairong
AU - Lan, Hai
AU - Chen, Mingsong
AU - Tang, Xuan
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Learning lightweight and robust deep learning models is an enormous challenge for safety-critical devices with limited computing and memory resources, since robustness against adversarial attacks is proportional to network capacity. The community has extensively explored combining adversarial training with model compression techniques such as weight pruning. However, lightweight models obtained by heavily pruning over-parameterized networks suffer sharp drops in both robust and natural accuracy. It has been observed that the parameters of such models lie in an ill-conditioned weight space, i.e., the condition number of the weight matrices tends to be large enough that the model is not robust. In this work, we propose a framework for building extremely lightweight models, which combines the tensor product with differentiable constraints for reducing the condition number and promoting sparsity. Moreover, the proposed framework is incorporated into adversarial training with the min-max optimization scheme. We evaluate the proposed approach on VGG-16 and the Visual Transformer. Experimental results on datasets such as ImageNet, SVHN, and CIFAR-10 show that we can achieve an overwhelming advantage at a high compression ratio, e.g., 200×.
AB - Learning lightweight and robust deep learning models is an enormous challenge for safety-critical devices with limited computing and memory resources, since robustness against adversarial attacks is proportional to network capacity. The community has extensively explored combining adversarial training with model compression techniques such as weight pruning. However, lightweight models obtained by heavily pruning over-parameterized networks suffer sharp drops in both robust and natural accuracy. It has been observed that the parameters of such models lie in an ill-conditioned weight space, i.e., the condition number of the weight matrices tends to be large enough that the model is not robust. In this work, we propose a framework for building extremely lightweight models, which combines the tensor product with differentiable constraints for reducing the condition number and promoting sparsity. Moreover, the proposed framework is incorporated into adversarial training with the min-max optimization scheme. We evaluate the proposed approach on VGG-16 and the Visual Transformer. Experimental results on datasets such as ImageNet, SVHN, and CIFAR-10 show that we can achieve an overwhelming advantage at a high compression ratio, e.g., 200×.
KW - Adversarial robustness
KW - Condition number
KW - Convolutional neural networks
KW - Lightweight model
KW - Tensor product
KW - Visual transformer
UR - https://www.scopus.com/pages/publications/85142704243
U2 - 10.1007/978-3-031-19772-7_40
DO - 10.1007/978-3-031-19772-7_40
M3 - Conference contribution
AN - SCOPUS:85142704243
SN - 9783031197710
T3 - Lecture Notes in Computer Science
SP - 690
EP - 707
BT - Computer Vision – ECCV 2022: 17th European Conference, Proceedings
A2 - Avidan, Shai
A2 - Brostow, Gabriel
A2 - Cissé, Moustapha
A2 - Farinella, Giovanni Maria
A2 - Hassner, Tal
PB - Springer Science and Business Media Deutschland GmbH
T2 - 17th European Conference on Computer Vision, ECCV 2022
Y2 - 23 October 2022 through 27 October 2022
ER -