TY - GEN
T1 - Triple-Feature Transformer with Sparsity Regularization
AU - Zhou, Xun
AU - Song, Haichuan
AU - Kang, Kai
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/9/21
Y1 - 2025/9/21
N2 - The RecSys Challenge 2025 focuses on developing robust recommendation systems capable of generalizing across multiple tasks in a highly sparse and dynamic user-behavior dataset. Designing unified user representations that generalize across multiple recommendation tasks under extreme sparsity remains a key challenge. In this paper, we present TFT-SR (Triple-Feature Transformer with Sparsity Regularization), a unified framework for behavioral modeling in the RecSys Challenge 2025. Our method fuses three complementary types of features - statistical descriptors, quantized temporal patterns, and hashed high-cardinality IDs - into a unified user vector. A dual-path neural encoder separately extracts dense and sparse representations, with the sparse branch regularized by an L1 penalty to promote interpretability and efficiency. Multi-task optimization [13, 15, 18] is performed through loss weighting, ensuring balanced learning across tasks. We participated in the competition under the team name 'xunzhou', achieving 11th place on the final leaderboard and 5th place on the academic leaderboard. Keywords: TFT-SR, universal user representation, transformer, sparse regularization, multi-task learning. The source code of this project is open-sourced on GitHub: https://github.com/fenglenchiqing/RecSys2025-TFT-SR.git
AB - The RecSys Challenge 2025 focuses on developing robust recommendation systems capable of generalizing across multiple tasks in a highly sparse and dynamic user-behavior dataset. Designing unified user representations that generalize across multiple recommendation tasks under extreme sparsity remains a key challenge. In this paper, we present TFT-SR (Triple-Feature Transformer with Sparsity Regularization), a unified framework for behavioral modeling in the RecSys Challenge 2025. Our method fuses three complementary types of features - statistical descriptors, quantized temporal patterns, and hashed high-cardinality IDs - into a unified user vector. A dual-path neural encoder separately extracts dense and sparse representations, with the sparse branch regularized by an L1 penalty to promote interpretability and efficiency. Multi-task optimization [13, 15, 18] is performed through loss weighting, ensuring balanced learning across tasks. We participated in the competition under the team name 'xunzhou', achieving 11th place on the final leaderboard and 5th place on the academic leaderboard. Keywords: TFT-SR, universal user representation, transformer, sparse regularization, multi-task learning. The source code of this project is open-sourced on GitHub: https://github.com/fenglenchiqing/RecSys2025-TFT-SR.git
UR - https://www.scopus.com/pages/publications/105020573615
U2 - 10.1145/3758126.3758138
DO - 10.1145/3758126.3758138
M3 - Conference contribution
AN - SCOPUS:105020573615
T3 - Proceedings of the Workshop on the ACM RecSys Challenge 2025
SP - 56
EP - 60
BT - Proceedings of the Workshop on the ACM RecSys Challenge 2025
PB - Association for Computing Machinery, Inc
T2 - Workshop on the 19th ACM Conference on Recommender Systems, RecSysChallenge 2025
Y2 - 22 September 2025 through 26 September 2025
ER -