TY - GEN
T1 - Dual-Encoder Attention Fusion Model for Aspect Sentiment Triplet Extraction
AU - Zhang, Yunqi
AU - Li, Songda
AU - Lan, Yuquan
AU - Zhao, Hui
AU - Zhao, Gang
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Aspect sentiment triplet extraction (ASTE) is a crucial sub-task of aspect-based sentiment analysis, which aims to extract each aspect term along with its opinion term and sentiment polarity. Prior works accomplish ASTE by jointly modeling its two sub-tasks, i.e., term extraction and sentiment classification. However, they ignore that different features have different importance to the two sub-tasks, resulting in feature confusion and insufficient feature fusion. To address this, we propose a dual-encoder attention fusion model (DualAF) for ASTE, consisting of a term extraction module and a sentiment classification module. First, we adopt a grid tagging scheme to model word-to-word interactions within word pairs. Second, we employ a dual-encoder framework to obtain BERT-style grid multi-features for term extraction and contextualized features for sentiment classification, thus alleviating feature confusion. Third, deep fusion networks are applied to refine word-level and span-level features. A convolutional neural network (CNN)-based self-attention network deeply fuses word-level grid multi-features to capture 2D structure information and long-distance dependencies. Moreover, attention pooling aggregates contextualized features into span-level features, which helps capture span-to-span interactions between aspect term spans and opinion term spans. The experimental results show that our model outperforms previous state-of-the-art methods on four English and two Chinese datasets across various domains.
AB - Aspect sentiment triplet extraction (ASTE) is a crucial sub-task of aspect-based sentiment analysis, which aims to extract each aspect term along with its opinion term and sentiment polarity. Prior works accomplish ASTE by jointly modeling its two sub-tasks, i.e., term extraction and sentiment classification. However, they ignore that different features have different importance to the two sub-tasks, resulting in feature confusion and insufficient feature fusion. To address this, we propose a dual-encoder attention fusion model (DualAF) for ASTE, consisting of a term extraction module and a sentiment classification module. First, we adopt a grid tagging scheme to model word-to-word interactions within word pairs. Second, we employ a dual-encoder framework to obtain BERT-style grid multi-features for term extraction and contextualized features for sentiment classification, thus alleviating feature confusion. Third, deep fusion networks are applied to refine word-level and span-level features. A convolutional neural network (CNN)-based self-attention network deeply fuses word-level grid multi-features to capture 2D structure information and long-distance dependencies. Moreover, attention pooling aggregates contextualized features into span-level features, which helps capture span-to-span interactions between aspect term spans and opinion term spans. The experimental results show that our model outperforms previous state-of-the-art methods on four English and two Chinese datasets across various domains.
KW - aspect sentiment triplet extraction
KW - deep fusion network
KW - dual-encoder framework
KW - grid tagging
UR - https://www.scopus.com/pages/publications/85169602444
U2 - 10.1109/IJCNN54540.2023.10191487
DO - 10.1109/IJCNN54540.2023.10191487
M3 - Conference contribution
AN - SCOPUS:85169602444
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2023 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 International Joint Conference on Neural Networks, IJCNN 2023
Y2 - 18 June 2023 through 23 June 2023
ER -