TY - GEN
T1 - Task-Level Self-Supervision for Cross-Domain Few-Shot Learning
AU - Yuan, Wang
AU - Zhang, Zhizhong
AU - Wang, Cong
AU - Song, Haichuan
AU - Xie, Yuan
AU - Ma, Lizhuang
N1 - Publisher Copyright:
Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2022/6/30
Y1 - 2022/6/30
N2 - Learning with limited labeled data is a long-standing problem. Among various solutions, episodic training progressively classifies a series of few-shot tasks and is thereby assumed to be beneficial for improving the model's generalization ability. However, recent studies show that it is even inferior to the baseline model when facing domain shift between base and novel classes. To tackle this problem, we propose a domain-independent task-level self-supervised (TL-SS) method for cross-domain few-shot learning. The TL-SS strategy promotes the general idea of label-based instance-level supervision to task-level self-supervision by augmenting multiple views of tasks. Two regularizations on task consistency and a correlation metric are introduced to remarkably stabilize the training process and endow the prediction model with generalization ability. We also propose a high-order associated encoder (HAE) that is adaptive to various tasks. By utilizing a 3D convolution module, HAE is able to generate proper parameters and enables the encoder to flexibly adapt to any unseen tasks. The two modules complement each other and experimentally show great improvement over state-of-the-art methods. Finally, we design a generalized task-agnostic test, where our intriguing findings highlight the need to rethink the generalization ability of existing few-shot approaches.
AB - Learning with limited labeled data is a long-standing problem. Among various solutions, episodic training progressively classifies a series of few-shot tasks and is thereby assumed to be beneficial for improving the model's generalization ability. However, recent studies show that it is even inferior to the baseline model when facing domain shift between base and novel classes. To tackle this problem, we propose a domain-independent task-level self-supervised (TL-SS) method for cross-domain few-shot learning. The TL-SS strategy promotes the general idea of label-based instance-level supervision to task-level self-supervision by augmenting multiple views of tasks. Two regularizations on task consistency and a correlation metric are introduced to remarkably stabilize the training process and endow the prediction model with generalization ability. We also propose a high-order associated encoder (HAE) that is adaptive to various tasks. By utilizing a 3D convolution module, HAE is able to generate proper parameters and enables the encoder to flexibly adapt to any unseen tasks. The two modules complement each other and experimentally show great improvement over state-of-the-art methods. Finally, we design a generalized task-agnostic test, where our intriguing findings highlight the need to rethink the generalization ability of existing few-shot approaches.
UR - https://www.scopus.com/pages/publications/85147361395
U2 - 10.1609/aaai.v36i3.20230
DO - 10.1609/aaai.v36i3.20230
M3 - Conference contribution
AN - SCOPUS:85147361395
T3 - Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022
SP - 3215
EP - 3223
BT - AAAI-22 Technical Tracks 3
PB - Association for the Advancement of Artificial Intelligence
T2 - 36th AAAI Conference on Artificial Intelligence, AAAI 2022
Y2 - 22 February 2022 through 1 March 2022
ER -