TY - GEN
T1 - CACV-tree
T2 - 2019 International Conference on Big Data Engineering, BDE 2019
AU - Wang, Jingwei
AU - Hu, Wenxin
AU - Wu, Wen
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/6/11
Y1 - 2019/6/11
N2 - Sentence similarity modeling plays an important role in Natural Language Processing (NLP) tasks and has therefore received much attention. In recent years, building on the success of word embedding, neural network methods have achieved sentence embedding with attractive performance. Nevertheless, most of these methods focus on learning semantic information and modeling it as a continuous vector, while the syntactic information of sentences has not been fully exploited. On the other hand, prior works have shown the benefits of structured trees that include syntactic information, yet few methods in this branch have exploited the advantages of sentence compression. This paper makes the first attempt to combine these strengths by merging the techniques into a unified structure, dubbed the CACV-tree (Compression Attention Constituency Vector-tree). Experimental results on 14 widely used datasets demonstrate that our model is effective and competitive compared against state-of-the-art models.
AB - Sentence similarity modeling plays an important role in Natural Language Processing (NLP) tasks and has therefore received much attention. In recent years, building on the success of word embedding, neural network methods have achieved sentence embedding with attractive performance. Nevertheless, most of these methods focus on learning semantic information and modeling it as a continuous vector, while the syntactic information of sentences has not been fully exploited. On the other hand, prior works have shown the benefits of structured trees that include syntactic information, yet few methods in this branch have exploited the advantages of sentence compression. This paper makes the first attempt to combine these strengths by merging the techniques into a unified structure, dubbed the CACV-tree (Compression Attention Constituency Vector-tree). Experimental results on 14 widely used datasets demonstrate that our model is effective and competitive compared against state-of-the-art models.
KW - Attention weighting mechanism
KW - Semantic information
KW - Sentence compression
KW - Sentence similarity
KW - Syntactic structure
UR - https://www.scopus.com/pages/publications/85071533873
U2 - 10.1145/3341620.3341627
DO - 10.1145/3341620.3341627
M3 - Conference contribution
AN - SCOPUS:85071533873
T3 - ACM International Conference Proceeding Series
SP - 79
EP - 84
BT - Proceedings of the 2019 International Conference on Big Data Engineering, BDE 2019
PB - Association for Computing Machinery
Y2 - 11 June 2019 through 13 June 2019
ER -