TY - JOUR
T1 - Sequential citation counts prediction enhanced by dynamic contents
AU - He, Guoxiu
AU - Gu, Sichen
AU - Xue, Zhikai
AU - Duan, Yufeng
AU - Zhu, Xiaomin
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/5
Y1 - 2025/5
N2 - The assessment of the impact of scholarly publications has garnered significant attention among researchers, particularly in predicting the future sequence of citation counts. However, current studies predominantly regard academic papers as static entities, failing to acknowledge that the focus of their fixed content can shift over time. To this end, we implement dynamic representations of the content to mirror chronological changes within the given paper, facilitating the sequential prediction of citation counts. Specifically, we propose a novel deep neural network called DynamIc Content-aware TrAnsformer (DICTA). The proposed model incorporates a dynamic content module that leverages the power of a sequential module to effectively capture the evolving focus information within each paper. To account for dependencies between the historical and future citation counts, our model utilizes a transformer-based framework as the backbone. With the encoder-decoder structure, it can effectively handle previous citation accumulations and then predict future citation potentials. Extensive experiments conducted on two scientific datasets demonstrate that DICTA achieves impressive performance and outperforms all baseline approaches. Further analyses underscore the significance of the dynamic content module. The code is available at https://github.com/ECNU-Text-Computing/DICTA
KW - Deep learning
KW - Dynamic content
KW - Sentence-BERT
KW - Sequential citation prediction
UR - https://www.scopus.com/pages/publications/85217398763
U2 - 10.1016/j.joi.2025.101645
DO - 10.1016/j.joi.2025.101645
M3 - Article
AN - SCOPUS:85217398763
SN - 1751-1577
VL - 19
JO - Journal of Informetrics
JF - Journal of Informetrics
IS - 2
M1 - 101645
ER -