TY - GEN
T1 - Asynchronous classification-based optimization
AU - Liu, Yu-Ren
AU - Hu, Yi-Qi
AU - Qian, Hong
AU - Yu, Yang
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/10/13
Y1 - 2019/10/13
N2 - Asynchronous parallelization is an effective way to accelerate optimization. Although asynchronous parallelization can destroy the sequential structure of optimization algorithms, some algorithms have, counter-intuitively, been proven to preserve their performance under it, including stochastic gradient descent for first-order optimization of differentiable functions and Pareto optimization for zeroth-order optimization in binary space. Following this direction, in this paper, we show that classification-based optimization, a recently developed framework for zeroth-order optimization in continuous space, can also benefit from asynchronous parallelization. We implement ASRacos, an asynchronous version of the classification-based optimization algorithm SRacos, to accelerate optimization through asynchronous parallelization. We theoretically derive the query complexity of ASRacos and further show that, under certain conditions, ASRacos can achieve better performance than SRacos even when using the same number of evaluations. Experiments on synthetic functions and control tasks in OpenAI Gym demonstrate that ASRacos achieves almost linear speedup while preserving good solution quality.
AB - Asynchronous parallelization is an effective way to accelerate optimization. Although asynchronous parallelization can destroy the sequential structure of optimization algorithms, some algorithms have, counter-intuitively, been proven to preserve their performance under it, including stochastic gradient descent for first-order optimization of differentiable functions and Pareto optimization for zeroth-order optimization in binary space. Following this direction, in this paper, we show that classification-based optimization, a recently developed framework for zeroth-order optimization in continuous space, can also benefit from asynchronous parallelization. We implement ASRacos, an asynchronous version of the classification-based optimization algorithm SRacos, to accelerate optimization through asynchronous parallelization. We theoretically derive the query complexity of ASRacos and further show that, under certain conditions, ASRacos can achieve better performance than SRacos even when using the same number of evaluations. Experiments on synthetic functions and control tasks in OpenAI Gym demonstrate that ASRacos achieves almost linear speedup while preserving good solution quality.
KW - Derivative-free optimization
KW - Distributed optimization
KW - Hyper-parameter optimization
KW - Non-convex optimization
UR - https://www.scopus.com/pages/publications/85075417642
U2 - 10.1145/3356464.3357709
DO - 10.1145/3356464.3357709
M3 - Conference contribution
AN - SCOPUS:85075417642
T3 - ACM International Conference Proceeding Series
BT - Proceedings of the 1st International Conference on Distributed Artificial Intelligence, DAI 2019
PB - Association for Computing Machinery
T2 - 1st International Conference on Distributed Artificial Intelligence, DAI 2019
Y2 - 13 October 2019 through 15 October 2019
ER -