TY - GEN
T1 - PSP
T2 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2024
AU - Ge, Qingqing
AU - Zhao, Zeyuan
AU - Liu, Yiding
AU - Cheng, Anfeng
AU - Li, Xiang
AU - Wang, Shuaiqiang
AU - Yin, Dawei
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - Graph Neural Networks (GNNs) are powerful in learning the semantics of graph data. Recently, a new paradigm, “pre-train & prompt”, has shown promising results in adapting GNNs to various tasks with less supervised data. The success of such a paradigm can be attributed to the more consistent objectives of pre-training and task-oriented prompt tuning, where the pre-trained knowledge can be effectively transferred to downstream tasks. Most existing methods are based on the class prototype vector framework. However, in few-shot scenarios with little labeled data, class prototype vectors are difficult to accurately construct or learn. Meanwhile, the structure information of a graph is usually exploited during pre-training for learning node representations, but neglected in the prompt tuning stage for learning more accurate prototype vectors. In addition, existing methods generally ignore the impact of heterophilous neighborhoods on node representations and are not suitable for heterophilous graphs. To bridge these gaps, we propose a novel pre-training and structure prompt tuning framework for GNNs, namely PSP, which consistently exploits structure information in both the pre-training and prompt tuning stages. In particular, PSP 1) employs dual-view contrastive learning to align the latent semantic spaces of node attributes and graph structure, and 2) incorporates structure information into the prompted graph to construct more accurate prototype vectors and elicit more pre-trained knowledge in prompt tuning. We conduct extensive experiments on node classification and graph classification tasks to evaluate the effectiveness of PSP. We show that PSP can lead to superior performance in few-shot scenarios on both homophilous and heterophilous graphs. The implemented code is available at https://github.com/gqq1210/PSP.
AB - Graph Neural Networks (GNNs) are powerful in learning the semantics of graph data. Recently, a new paradigm, “pre-train & prompt”, has shown promising results in adapting GNNs to various tasks with less supervised data. The success of such a paradigm can be attributed to the more consistent objectives of pre-training and task-oriented prompt tuning, where the pre-trained knowledge can be effectively transferred to downstream tasks. Most existing methods are based on the class prototype vector framework. However, in few-shot scenarios with little labeled data, class prototype vectors are difficult to accurately construct or learn. Meanwhile, the structure information of a graph is usually exploited during pre-training for learning node representations, but neglected in the prompt tuning stage for learning more accurate prototype vectors. In addition, existing methods generally ignore the impact of heterophilous neighborhoods on node representations and are not suitable for heterophilous graphs. To bridge these gaps, we propose a novel pre-training and structure prompt tuning framework for GNNs, namely PSP, which consistently exploits structure information in both the pre-training and prompt tuning stages. In particular, PSP 1) employs dual-view contrastive learning to align the latent semantic spaces of node attributes and graph structure, and 2) incorporates structure information into the prompted graph to construct more accurate prototype vectors and elicit more pre-trained knowledge in prompt tuning. We conduct extensive experiments on node classification and graph classification tasks to evaluate the effectiveness of PSP. We show that PSP can lead to superior performance in few-shot scenarios on both homophilous and heterophilous graphs. The implemented code is available at https://github.com/gqq1210/PSP.
KW - Few-shot
KW - Graph Neural Networks
KW - Pre-training
KW - Prompt
UR - https://www.scopus.com/pages/publications/105023422655
U2 - 10.1007/978-3-031-70362-1_25
DO - 10.1007/978-3-031-70362-1_25
M3 - Conference contribution
AN - SCOPUS:105023422655
SN - 9783031703614
T3 - Lecture Notes in Computer Science
SP - 423
EP - 439
BT - Machine Learning and Knowledge Discovery in Databases. Research Track - European Conference, ECML PKDD 2024, Proceedings
A2 - Bifet, Albert
A2 - Davis, Jesse
A2 - Krilavičius, Tomas
A2 - Kull, Meelis
A2 - Ntoutsi, Eirini
A2 - Žliobaitė, Indrė
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 9 September 2024 through 13 September 2024
ER -