TY - GEN
T1 - DisCo
T2 - 17th European Conference on Computer Vision, ECCV 2022
AU - Gao, Yuting
AU - Zhuang, Jia Xin
AU - Lin, Shaohui
AU - Cheng, Hao
AU - Sun, Xing
AU - Li, Ke
AU - Shen, Chunhua
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
AB - While Self-Supervised Learning (SSL) has received widespread attention from the community, recent research argues that its performance often drops sharply when the model size decreases. Since current SSL methods mainly rely on contrastive learning to train the network, we propose a simple yet effective method termed Distilled Contrastive Learning (DisCo) to ease this issue. Specifically, we find that the final inherent embedding of mainstream SSL methods contains the most important information, and propose to distill this final embedding to maximally transmit a teacher’s knowledge to a lightweight model by constraining the last embedding of the student to be consistent with that of the teacher. In addition, we find that there exists a phenomenon termed Distilling BottleNeck and propose to enlarge the embedding dimension to alleviate this problem. Since the MLP only exists during the SSL phase, our method does not introduce any extra parameters to lightweight models for downstream-task deployment. Experimental results demonstrate that our method surpasses the state-of-the-art on many lightweight models by a large margin. In particular, when ResNet-101/ResNet-50 is used as the teacher for EfficientNet-B0, the linear-evaluation result of EfficientNet-B0 on ImageNet improves by 22.1% and 19.7%, respectively, which is very close to ResNet-101/ResNet-50 with far fewer parameters. Code is available at https://github.com/Yuting-Gao/DisCo-pytorch.
KW - Distillation
KW - Self-supervised learning
UR - https://www.scopus.com/pages/publications/85142682433
U2 - 10.1007/978-3-031-19809-0_14
DO - 10.1007/978-3-031-19809-0_14
M3 - Conference contribution
AN - SCOPUS:85142682433
SN - 9783031198083
T3 - Lecture Notes in Computer Science
SP - 237
EP - 253
BT - Computer Vision – ECCV 2022 - 17th European Conference, Proceedings
A2 - Avidan, Shai
A2 - Brostow, Gabriel
A2 - Cissé, Moustapha
A2 - Farinella, Giovanni Maria
A2 - Hassner, Tal
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 23 October 2022 through 27 October 2022
ER -