TY - GEN
T1 - Multi-Scale Distribution Deep Variational Autoencoder for Explanation Generation
AU - Cai, Ze Feng
AU - Wang, Linlin
AU - de Melo, Gerard
AU - Sun, Fei
AU - He, Liang
N1 - Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
N2 - Generating explanations for recommender systems is essential for improving their transparency, as users often wish to understand the reason for receiving a specified recommendation. Previous methods mainly focus on improving the generation quality, but often produce generic explanations that fail to incorporate specific details of users and items. To resolve this problem, we present the Multi-Scale Distribution Deep Variational Autoencoder (MVAE), a deep hierarchical VAE with a prior network that eliminates noise while retaining meaningful signals in the input, coupled with a recognition network serving as the source of information to guide the learning of the prior network. Further, a Multi-scale distribution Learning Framework (MLF) along with a Target Tracking Kullback-Leibler divergence (TKL) mechanism is proposed to employ multiple KL divergences at different scales for more effective learning. Extensive empirical experiments demonstrate that our methods can generate explanations with concrete input-specific contents.
AB - Generating explanations for recommender systems is essential for improving their transparency, as users often wish to understand the reason for receiving a specified recommendation. Previous methods mainly focus on improving the generation quality, but often produce generic explanations that fail to incorporate specific details of users and items. To resolve this problem, we present the Multi-Scale Distribution Deep Variational Autoencoder (MVAE), a deep hierarchical VAE with a prior network that eliminates noise while retaining meaningful signals in the input, coupled with a recognition network serving as the source of information to guide the learning of the prior network. Further, a Multi-scale distribution Learning Framework (MLF) along with a Target Tracking Kullback-Leibler divergence (TKL) mechanism is proposed to employ multiple KL divergences at different scales for more effective learning. Extensive empirical experiments demonstrate that our methods can generate explanations with concrete input-specific contents.
UR - https://www.scopus.com/pages/publications/85135758080
U2 - 10.18653/v1/2022.findings-acl.7
DO - 10.18653/v1/2022.findings-acl.7
M3 - Conference contribution
AN - SCOPUS:85135758080
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 68
EP - 78
BT - ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022
A2 - Muresan, Smaranda
A2 - Nakov, Preslav
A2 - Villavicencio, Aline
PB - Association for Computational Linguistics (ACL)
T2 - Findings of the Association for Computational Linguistics: ACL 2022
Y2 - 22 May 2022 through 27 May 2022
ER -