Social rumor detection based on multilayer transformer encoding blocks

Lijun Lin, Zhiyun Chen

Research output: Contribution to journal › Article › peer-review

15 Scopus citations

Abstract

The propagation of rumors on social media has been identified as a critical problem in recent years, causing social panic or, to some extent, economic turmoil, thereby giving rise to the need for faster identification. With the advancement of deep learning, research based on neural networks has become popular. Most existing methods extensively adopt recurrent neural networks (RNNs), such as the gated recurrent unit and long short-term memory. This results in a significant degradation in the concurrency performance of the models, implying increased consumption of time and resources. This study proposes a model with multilayer transformer encoding blocks for detecting rumors. The self-attention mechanism in the transformer encoding blocks provides better concurrency to the proposed model and improves its performance. By using little or no recurrence, the proposed model executes faster than models based on RNNs. Experiments on two real social datasets verified that our model achieves significantly better results than baseline methods: the accuracy rate increased by 1.1% and 2.2% on the Weibo and PHEME datasets, respectively. Compared with methods that use RNNs as the feature extractor, the training duration of the proposed model was reduced by 16% on Weibo and 70% on PHEME.
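The concurrency advantage the abstract attributes to self-attention comes from computing all token interactions with a few matrix products instead of a sequential recurrence. The following is a minimal NumPy sketch of scaled dot-product self-attention, the core operation inside a transformer encoding block; the dimensions, weight matrices, and single-head setup are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) token embeddings. Every position attends to
    every other position via one pair of matrix products, so the whole
    sequence is processed in parallel -- no step-by-step recurrence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (seq_len, seq_len)
    # Numerically stable row-wise softmax over attention scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                # (seq_len, d_k)

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

In an RNN, the hidden state at position t cannot be computed before position t-1, so the sequence dimension is inherently serial; here it is a batched matrix multiplication, which is what allows the reported reductions in training time.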

Original language: English
Article number: e6083
Journal: Concurrency and Computation: Practice and Experience
Volume: 33
Issue number: 6
DOIs
State: Published - 25 Mar 2021

Keywords

  • rumor detection
  • semantic feature extraction
  • transformer
