TY - GEN
T1 - Fine-Grained Derivative-Free Simultaneous Optimistic Optimization with Local Gaussian Process
AU - Song, Junhao
AU - Zhang, Yangwenhui
AU - Qian, Hong
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Derivative-free optimization has achieved remarkable success across a variety of applications where an explicit formulation of the objective function is inaccessible. Learning an accurate surrogate model from solutions and their function values is crucial for derivative-free optimization. Methods that construct global surrogate models, such as Bayesian optimization (BO), suffer from high learning cost, which impairs optimization efficiency. To split the entire search domain into smaller regions, a series of domain partition methods have been proposed, such as simultaneous optimistic optimization (SOO). SOO has demonstrated notable effectiveness in derivative-free optimization but still leaves room for improvement due to its relatively coarse-grained partition strategy. To this end, this paper proposes a fine-grained simultaneous optimistic optimization (FGSOO) method with a local Gaussian process. Specifically, FGSOO designs a fine-grained partition strategy that endows SOO with the capability of cross-height comparison, and employs a local Gaussian process to make each node's potential more representative, thereby reducing the number of solutions required to learn surrogate models. Compared with BO, FGSOO reduces the learning cost; compared with SOO, it avoids unnecessary partitions. Experimental results on real-world tasks, such as trajectory optimization and molecule substructure optimization, verify that FGSOO surpasses the compared methods in efficiency while maintaining effectiveness.
UR - https://www.scopus.com/pages/publications/85217881599
DO - 10.1109/SMC54092.2024.10831128
M3 - Conference contribution
AN - SCOPUS:85217881599
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 2561
EP - 2566
BT - 2024 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2024
Y2 - 6 October 2024 through 10 October 2024
ER -