TY - GEN
T1 - Semantic consistency for graph representation learning
AU - Huang, Jincheng
AU - Li, Pin
AU - Zhang, Kai
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In graph learning, it is fundamental to integrate features from the graph structure and the node attributes. To this end, graph convolution techniques have been devised on the premise that the similarity of node attributes between two nodes is semantically consistent with their topological proximity. However, many real-world networks exhibit semantic inconsistency, i.e., the phenomenon that directly connected nodes are dissimilar in their attributes. This work addresses two related questions: how can we quantitatively measure the semantic consistency between node attributes and graph structure, and can we leverage this information to facilitate graph representation? To answer these questions, we first introduce a novel metric to evaluate the semantic consistency of a graph, and then identify a set of key designs that encode local semantic consistency information into a type of ego node feature. We then fuse this new node feature with the original node attributes by concatenating the two parts, using the semantic consistency metric as the weighting factor. Experiments on real-world datasets show that a linear classifier (e.g., a multilayer perceptron) based on our unsupervised feature learning scheme achieves strong performance across the datasets, especially on those with low semantic consistency, compared to popular supervised GCNs and other competitive unsupervised graph representation learning models.
AB - In graph learning, it is fundamental to integrate features from the graph structure and the node attributes. To this end, graph convolution techniques have been devised on the premise that the similarity of node attributes between two nodes is semantically consistent with their topological proximity. However, many real-world networks exhibit semantic inconsistency, i.e., the phenomenon that directly connected nodes are dissimilar in their attributes. This work addresses two related questions: how can we quantitatively measure the semantic consistency between node attributes and graph structure, and can we leverage this information to facilitate graph representation? To answer these questions, we first introduce a novel metric to evaluate the semantic consistency of a graph, and then identify a set of key designs that encode local semantic consistency information into a type of ego node feature. We then fuse this new node feature with the original node attributes by concatenating the two parts, using the semantic consistency metric as the weighting factor. Experiments on real-world datasets show that a linear classifier (e.g., a multilayer perceptron) based on our unsupervised feature learning scheme achieves strong performance across the datasets, especially on those with low semantic consistency, compared to popular supervised GCNs and other competitive unsupervised graph representation learning models.
UR - https://www.scopus.com/pages/publications/85140748621
U2 - 10.1109/IJCNN55064.2022.9892167
DO - 10.1109/IJCNN55064.2022.9892167
M3 - Conference contribution
AN - SCOPUS:85140748621
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 International Joint Conference on Neural Networks, IJCNN 2022
Y2 - 18 July 2022 through 23 July 2022
ER -