TY - GEN
T1 - DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models
T2 - 2024 Findings of the Association for Computational Linguistics, EMNLP 2024
AU - Chen, Kedi
AU - Chen, Qin
AU - Zhou, Jie
AU - He, Yishen
AU - He, Liang
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
N2 - Though large language models (LLMs) have achieved significant success in recent years, the hallucination issue remains a challenge, and numerous benchmarks have been proposed for hallucination detection. Nevertheless, some of these benchmarks are not naturally generated by LLMs but are intentionally induced. Also, many merely focus on factuality hallucination while ignoring faithfulness hallucination. Additionally, although the dialogue pattern is widely utilized in the era of LLMs, current benchmarks only concentrate on sentence-level and passage-level hallucination. In this study, we propose DiaHalu, to our knowledge the first dedicated dialogue-level hallucination evaluation benchmark for LLMs. Initially, we integrate the collected topics into system prompts and facilitate a dialogue between two LLMs. Subsequently, we manually modify the contents that do not adhere to human language conventions and then have the LLMs re-generate, simulating authentic human-machine interaction scenarios. Finally, professional scholars annotate all the samples in the dataset. DiaHalu covers four common multi-turn dialogue domains and five hallucination subtypes, extended from factuality and faithfulness hallucination. Experiments with well-known LLMs and detection methods show that DiaHalu is a challenging benchmark, holding significant value for further research.
AB - Though large language models (LLMs) have achieved significant success in recent years, the hallucination issue remains a challenge, and numerous benchmarks have been proposed for hallucination detection. Nevertheless, some of these benchmarks are not naturally generated by LLMs but are intentionally induced. Also, many merely focus on factuality hallucination while ignoring faithfulness hallucination. Additionally, although the dialogue pattern is widely utilized in the era of LLMs, current benchmarks only concentrate on sentence-level and passage-level hallucination. In this study, we propose DiaHalu, to our knowledge the first dedicated dialogue-level hallucination evaluation benchmark for LLMs. Initially, we integrate the collected topics into system prompts and facilitate a dialogue between two LLMs. Subsequently, we manually modify the contents that do not adhere to human language conventions and then have the LLMs re-generate, simulating authentic human-machine interaction scenarios. Finally, professional scholars annotate all the samples in the dataset. DiaHalu covers four common multi-turn dialogue domains and five hallucination subtypes, extended from factuality and faithfulness hallucination. Experiments with well-known LLMs and detection methods show that DiaHalu is a challenging benchmark, holding significant value for further research.
UR - https://www.scopus.com/pages/publications/85217618244
U2 - 10.18653/v1/2024.findings-emnlp.529
DO - 10.18653/v1/2024.findings-emnlp.529
M3 - Conference contribution
AN - SCOPUS:85217618244
T3 - EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2024
SP - 9057
EP - 9079
BT - EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2024
A2 - Al-Onaizan, Yaser
A2 - Bansal, Mohit
A2 - Chen, Yun-Nung
PB - Association for Computational Linguistics (ACL)
Y2 - 12 November 2024 through 16 November 2024
ER -