TreeEval: Benchmark-Free Evaluation of Large Language Models through Tree Planning

  • Xiang Li
  • Yunshi Lan*
  • Chao Yang

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Recently, numerous new benchmarks have been established to evaluate the performance of large language models (LLMs), either by computing a holistic score or by employing another LLM as a judge. However, these approaches suffer from data leakage, due to the open access of the benchmarks, and from an inflexible evaluation process. To address these issues, we introduce TreeEval, a benchmark-free evaluation method for LLMs that lets a high-performance LLM host an irreproducible evaluation session, essentially avoiding data leakage. Moreover, this LLM acts as an examiner that raises a series of questions under a topic with a tree-planning strategy, which considers the current evaluation status when deciding which question to generate next, ensuring the completeness and efficiency of the evaluation process. We evaluate six models of different parameter sizes, including 7B, 13B, and 33B, and achieve the highest correlation coefficient with AlpacaEval2.0 using only around 45 questions. We also conduct further analysis to show the robustness and reliability of TreeEval.
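The abstract only sketches the tree-planning loop at a high level. Below is a minimal illustrative sketch, not the authors' implementation, of how an examiner-driven evaluation tree might work: the `Node` class, the `next_question` and `judge` callables, and the tie-triggered expansion rule are all assumptions made for illustration.

```python
# Hypothetical sketch of a tree-planning evaluation session.
# Assumptions (not from the paper): the examiner is exposed as a
# `next_question` callable, verdicts are "A"/"B"/"tie", and the tree
# expands only where the two models are hard to distinguish.
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Node:
    question: str
    depth: int
    verdict: str | None = None            # "A", "B", or "tie" after judging
    children: list["Node"] = field(default_factory=list)


def evaluate(topic: str,
             next_question: Callable[[str, list[Node]], str],
             judge: Callable[[str], str],
             max_depth: int = 3,
             branch: int = 2) -> float:
    """Run one evaluation session under `topic`; return model A's win rate.

    `next_question(topic, asked)` would prompt the examiner LLM for a new
    question conditioned on everything asked and judged so far, and
    `judge(question)` would have the examiner compare the two models'
    answers to that question.
    """
    root = Node(next_question(topic, []), depth=0)
    frontier, asked = [root], []
    while frontier:
        node = frontier.pop(0)
        node.verdict = judge(node.question)
        asked.append(node)
        # Expand the tree only where the verdict is ambiguous, so the
        # session stays short (the paper reports around 45 questions).
        if node.verdict == "tie" and node.depth < max_depth:
            for _ in range(branch):
                child = Node(next_question(topic, asked), depth=node.depth + 1)
                node.children.append(child)
                frontier.append(child)
    wins = sum(n.verdict == "A" for n in asked)
    return wins / len(asked)
```

In the actual system both callables would be backed by a single high-performance examiner LLM; the "tie means expand" criterion here is merely one plausible way to make question generation depend on the current evaluation status.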

Original language: English
Pages (from-to): 24485-24493
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 23
DOIs
State: Published - 11 Apr 2025
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: 25 Feb 2025 - 4 Mar 2025
