Training an LLM-as-a-Judge Model: Pipeline, Insights, and Practical Lessons

  • Renjun Hu*
  • Yi Cheng
  • Libin Meng
  • Jiaxin Xia
  • Yi Zong
  • Xing Shi
  • Wei Lin
  • *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Scopus citations

Abstract

The rapid advancement of large language models (LLMs) has opened new possibilities for their adoption as evaluative judges. This paper introduces Themis, a fine-tuned LLM judge that delivers sophisticated context-aware evaluations. We provide a comprehensive overview of the development pipeline for Themis, highlighting its scenario-dependent evaluation prompts and two novel methods for controlled instruction generation. These designs enable Themis to effectively distill evaluative skills from teacher models, while retaining flexibility for continuous development. We introduce two human-labeled benchmarks for meta-evaluation, demonstrating that Themis can achieve high alignment with human preferences in an economical manner. Additionally, we explore insights into the LLM-as-a-judge paradigm, revealing nuances in performance and the varied effects of reference answers. Notably, we observe that pure knowledge distillation from strong LLMs, though common, does not guarantee performance improvement through scaling. We propose a mitigation strategy based on instruction-following difficulty. Furthermore, we provide practical guidelines covering data balancing, prompt customization, multi-objective training, and metric aggregation. We aim for our method and findings, along with the fine-tuning data, benchmarks, and model checkpoints, to support future research and development in this area.

Original language: English
Title of host publication: WWW Companion 2025 - Companion Proceedings of the ACM Web Conference 2025
Publisher: Association for Computing Machinery, Inc
Pages: 228-237
Number of pages: 10
ISBN (Electronic): 9798400713316
DOIs
State: Published - 23 May 2025
Event: 34th ACM Web Conference, WWW Companion 2025 - Sydney, Australia
Duration: 28 Apr 2025 - 2 May 2025

Publication series

Name: WWW Companion 2025 - Companion Proceedings of the ACM Web Conference 2025

Conference

Conference: 34th ACM Web Conference, WWW Companion 2025
Country/Territory: Australia
City: Sydney
Period: 28/04/25 - 2/05/25

Keywords

  • LLM evaluation
  • LLM-as-a-judge
  • Large language models
