TY - GEN
T1 - HiCLRE: A Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction
T2 - Findings of the Association for Computational Linguistics: ACL 2022
AU - Li, Dongyang
AU - Zhang, Taolin
AU - Hu, Nan
AU - Wang, Chengyu
AU - He, Xiaofeng
N1 - Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
AB - Distant supervision assumes that any sentence containing the same entity pair reflects identical relationships. Previous works on the distantly supervised relation extraction (DSRE) task generally focus on sentence-level or bag-level denoising techniques independently, neglecting explicit cross-level interaction. In this paper, we propose a Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction (HiCLRE) to reduce noisy sentences, which integrates global structural information and local fine-grained interaction. Specifically, we propose a three-level hierarchical learning framework to interact across levels, generating de-noising context-aware representations by adapting the existing multi-head self-attention, named Multi-Granularity Recontextualization. Meanwhile, pseudo positive samples are also provided at the specific level for contrastive learning via a dynamic gradient-based data augmentation strategy, named Dynamic Gradient Adversarial Perturbation. Experiments demonstrate that HiCLRE significantly outperforms strong baselines on various mainstream DSRE datasets.
UR - https://www.scopus.com/pages/publications/85140422272
U2 - 10.18653/v1/2022.findings-acl.202
DO - 10.18653/v1/2022.findings-acl.202
M3 - Conference contribution
AN - SCOPUS:85140422272
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 2567
EP - 2578
BT - ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022
A2 - Muresan, Smaranda
A2 - Nakov, Preslav
A2 - Villavicencio, Aline
PB - Association for Computational Linguistics (ACL)
Y2 - 22 May 2022 through 27 May 2022
ER -