
TreeEval: Benchmark-Free Evaluation of Large Language Models through Tree Planning

  • Xiang Li
  • Yunshi Lan*
  • Chao Yang

*Corresponding author of this work

East China Normal University
Shanghai AI Laboratory

Research output: Contribution to journal › Conference article › Peer-reviewed

Abstract

Recently, numerous new benchmarks have been established to evaluate the performance of large language models (LLMs), either by computing a holistic score or by employing another LLM as a judge. However, these approaches suffer from data leakage, owing to the open access of the benchmarks, and from an inflexible evaluation process. To address this issue, we introduce TreeEval, a benchmark-free evaluation method for LLMs that lets a high-performance LLM host an irreproducible evaluation session, essentially avoiding data leakage. Moreover, this LLM acts as an examiner, raising a series of questions under a topic with a tree planning strategy that considers the current evaluation status to decide the next question to generate, ensuring the completeness and efficiency of the evaluation process. We evaluate 6 models of different parameter sizes, including 7B, 13B, and 33B, and ultimately achieve the highest correlation coefficient with AlpacaEval2.0 using only around 45 questions. We also conduct further analysis to show the robustness and reliability of TreeEval.
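The abstract describes the evaluation loop only at a high level. Below is a minimal Python sketch of what such a tree-planning session might look like; it is not the authors' implementation. The callables `ask`, `subtopics`, `answer_a`, `answer_b`, and `judge` are hypothetical stand-ins for the examiner LLM, the two candidate LLMs, and the judge LLM, and the descend-on-tie heuristic, depth limit, and 45-question budget are assumptions drawn loosely from the abstract.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Node:
    topic: str
    depth: int = 0

def tree_eval(
    root_topic: str,
    ask: Callable[[str], str],              # examiner LLM: topic -> question
    subtopics: Callable[[str], List[str]],  # examiner LLM: topic -> finer subtopics
    answer_a: Callable[[str], str],         # candidate model A: question -> answer
    answer_b: Callable[[str], str],         # candidate model B: question -> answer
    judge: Callable[[str, str, str], str],  # judge LLM: returns "A", "B", or "tie"
    max_depth: int = 3,
    budget: int = 45,
) -> Dict[str, int]:
    """Run one tree-planning evaluation session under a root topic.

    Questions are generated on the fly, so no fixed benchmark is exposed;
    a branch is explored more deeply only while the judge cannot yet
    separate the two candidate models on it.
    """
    wins = {"A": 0, "B": 0, "tie": 0}
    stack = [Node(root_topic)]
    asked = 0
    while stack and asked < budget:
        node = stack.pop()
        question = ask(node.topic)
        verdict = judge(question, answer_a(question), answer_b(question))
        asked += 1
        wins[verdict] += 1
        # Tree planning step: expand into subtopics only when the current
        # question failed to discriminate; decisive branches are pruned.
        if verdict == "tie" and node.depth < max_depth:
            stack.extend(Node(s, node.depth + 1) for s in subtopics(node.topic))
    return wins
```

Given the win counts, the session's outcome can be aggregated into a preference score for one model over the other; the budget cap mirrors the roughly 45 questions the paper reports using.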

Original language: English
Pages (from-to): 24485-24493
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 23
DOI
Publication status: Published - 11 Apr 2025
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: 25 Feb 2025 → 4 Mar 2025
