TY - JOUR
T1 - EvoContext
T2 - Evolving Contextual Examples by Genetic Algorithm for Enhanced Hyperparameter Optimization Capability in Large Language Models
AU - Xu, Yutian
AU - Qin, Guozhong
AU - Wang, Yanhao
AU - Chen, Panfeng
AU - Wang, Xibin
AU - Zhou, Wei
AU - Chen, Mei
AU - Li, Hui
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/6
Y1 - 2025/6
N2 - Hyperparameter Optimization (HPO) is an important and challenging problem in machine learning. Traditional HPO methods require substantial evaluations to search for superior configurations. Recent Large Language Model (LLM)-based approaches leverage domain knowledge and few-shot learning proficiency to discover promising configurations with minimal human effort. However, the repetition issue causes LLMs to generate configurations similar to in-context examples, which may confine the optimization process to local regions. Moreover, since LLMs rely on the examples they generate for few-shot learning, a self-reinforcing loop is formed, hindering LLMs from escaping local optima. In this work, we propose EvoContext, which intentionally generates configurations that differ significantly from existing examples via external interventions, actively breaking the self-reinforcing effect for a more efficient approximation of the global optimum. EvoContext involves two phases: (i) initial example generation through cold or warm starting and (ii) iterative optimization that applies genetic operations to update examples, enhancing global exploration capability. It also employs LLMs' in-context learning to generate configurations based on competitive examples for local refinement. Experiments on several real-world datasets show that EvoContext outperforms traditional and other LLM-driven approaches on HPO.
AB - Hyperparameter Optimization (HPO) is an important and challenging problem in machine learning. Traditional HPO methods require substantial evaluations to search for superior configurations. Recent Large Language Model (LLM)-based approaches leverage domain knowledge and few-shot learning proficiency to discover promising configurations with minimal human effort. However, the repetition issue causes LLMs to generate configurations similar to in-context examples, which may confine the optimization process to local regions. Moreover, since LLMs rely on the examples they generate for few-shot learning, a self-reinforcing loop is formed, hindering LLMs from escaping local optima. In this work, we propose EvoContext, which intentionally generates configurations that differ significantly from existing examples via external interventions, actively breaking the self-reinforcing effect for a more efficient approximation of the global optimum. EvoContext involves two phases: (i) initial example generation through cold or warm starting and (ii) iterative optimization that applies genetic operations to update examples, enhancing global exploration capability. It also employs LLMs' in-context learning to generate configurations based on competitive examples for local refinement. Experiments on several real-world datasets show that EvoContext outperforms traditional and other LLM-driven approaches on HPO.
KW - few-shot learning
KW - genetic algorithm
KW - hyperparameter optimization
KW - large language model
UR - https://www.scopus.com/pages/publications/105007792388
U2 - 10.3390/electronics14112253
DO - 10.3390/electronics14112253
M3 - Article
AN - SCOPUS:105007792388
SN - 2079-9292
VL - 14
JO - Electronics (Switzerland)
JF - Electronics (Switzerland)
IS - 11
M1 - 2253
ER -