Coherency Improved Explainable Recommendation via Large Language Model

Shijie Liu, Ruixing Ding, Weihai Lu, Jun Wang, Mo Yu, Xiaoming Shi, Wei Zhang*
*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review


Abstract

Explainable recommender systems are designed to explain the reasoning behind each recommendation, enabling users to understand the underlying logic. Previous works perform rating prediction and explanation generation in a multi-task manner; however, they suffer from incoherence between the predicted ratings and the generated explanations. To address this issue, we propose a novel framework that employs a large language model (LLM) to generate a rating, transforms it into a rating vector, and finally generates an explanation conditioned on the rating vector and user-item information. Moreover, we propose using publicly available LLMs and pre-trained sentiment analysis models to automatically evaluate coherence without human annotations. Extensive experiments on three explainable-recommendation datasets show that the proposed framework is effective, outperforming state-of-the-art baselines with improvements of 7.3% in explainability and 4.4% in text quality.
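A minimal sketch of the pipeline the abstract describes, assuming a PyTorch-style implementation: a rating is predicted first, quantized into an index over a learned rating-vector table, and the resulting vector is concatenated with user and item embeddings to condition the explanation generator. All module and parameter names here are illustrative assumptions, not the authors' actual code.

```python
import torch
import torch.nn as nn

class RatingConditionedExplainer(nn.Module):
    """Hypothetical module: predict a rating, map it to a rating vector,
    and build a context that would condition an LLM explanation decoder."""

    def __init__(self, n_users, n_items, d_model=256, n_rating_levels=5):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, d_model)
        self.item_emb = nn.Embedding(n_items, d_model)
        # Rating head: scalar rating from concatenated user-item features.
        self.rating_head = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, 1),
        )
        # Lookup table turning a quantized rating into a dense "rating vector".
        self.rating_vec = nn.Embedding(n_rating_levels, d_model)

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)
        v = self.item_emb(item_ids)
        rating = self.rating_head(torch.cat([u, v], dim=-1)).squeeze(-1)
        # Quantize the predicted rating (assumed 1..5 scale) to an index.
        idx = (rating.detach().round().clamp(1, self.rating_vec.num_embeddings) - 1).long()
        r_vec = self.rating_vec(idx)
        # These vectors would be passed to an LLM decoder (omitted) as prefix
        # embeddings so the generated explanation agrees with the rating.
        context = torch.stack([u, v, r_vec], dim=1)  # (batch, 3, d_model)
        return rating, context
```

Similarly, a hedged sketch of the automatic coherence evaluation: an off-the-shelf sentiment classifier scores the generated explanation, and its polarity is compared against the predicted rating. The default pipeline model and the scale midpoint are assumptions.

```python
from transformers import pipeline

# Off-the-shelf sentiment classifier (default model is an assumption here).
sentiment = pipeline("sentiment-analysis")

def is_coherent(explanation: str, rating: float, midpoint: float = 3.0) -> bool:
    """Treat the pair as coherent when a rating above the scale midpoint
    pairs with a positively worded explanation, and vice versa."""
    label = sentiment(explanation)[0]["label"]  # "POSITIVE" or "NEGATIVE"
    return (label == "POSITIVE") == (rating >= midpoint)
```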

Original language: English
Pages (from-to): 12201-12209
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 11
DOIs
State: Published - 11 Apr 2025
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: 25 Feb 2025 - 4 Mar 2025
