Interpretable policy derivation for reinforcement learning based on evolutionary feature synthesis

  • East China Normal University

Research output: Contribution to journal › Article › peer-review

Abstract

Reinforcement learning based on deep neural networks has attracted much attention and has been widely used in real-world applications. However, the black-box property limits its application in high-stakes areas such as manufacturing and healthcare. To deal with this problem, some researchers resort to interpretable control-policy generation algorithms. The basic idea is to use an interpretable model, such as tree-based genetic programming, to extract the policy from a black-box model such as a neural network. Following this idea, in this paper we apply another form of genetic programming, evolutionary feature synthesis, to extract a control policy from the neural network. We also propose an evolutionary method that automatically optimizes the operator set of the control policy for each specific problem. Moreover, a policy simplification strategy is introduced. We conduct experiments on four reinforcement learning environments. The experimental results show that evolutionary feature synthesis achieves better performance than tree-based genetic programming at extracting policies from neural networks, with comparable interpretability.
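The extraction idea described above can be sketched in a few lines: query the black-box policy for target actions on sampled states, then evolve features from an operator set whose fitted linear combination imitates those actions. The toy policy, the two-operator set, and all function names below are illustrative assumptions for this sketch, not the paper's actual algorithm or code:

```python
import random

# Stand-in for a trained neural-network policy on a 2-D state space
# (purely illustrative; any state -> action mapping would do).
def black_box_policy(s):
    return 2.0 * s[0] * s[1] + 0.5 * s[0]

# A tiny operator set; the paper evolves this set per problem.
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def random_feature(rng):
    # A feature applies one operator to two state dimensions.
    return (rng.choice(sorted(OPS)), rng.randrange(2), rng.randrange(2))

def feature_value(feat, s):
    op, i, j = feat
    return OPS[op](s[i], s[j])

def feature_mse(feat, states, targets):
    # Closed-form best scalar coefficient for this single feature,
    # then the mean squared error against the black-box actions.
    fv = [feature_value(feat, s) for s in states]
    denom = sum(v * v for v in fv) or 1e-12
    c = sum(v * t for v, t in zip(fv, targets)) / denom
    return sum((c * v - t) ** 2 for v, t in zip(fv, targets)) / len(states)

def evolve(states, targets, pop_size=12, gens=20, seed=0):
    rng = random.Random(seed)
    pop = [random_feature(rng) for _ in range(pop_size)]
    best = min(pop, key=lambda f: feature_mse(f, states, targets))
    for _ in range(gens):
        nxt = [best]  # elitism keeps the error monotonically non-increasing
        while len(nxt) < pop_size:
            # Binary tournament selection followed by point mutation.
            a, b = rng.sample(pop, 2)
            parent = min(a, b, key=lambda f: feature_mse(f, states, targets))
            child = list(parent)
            k = rng.randrange(3)
            child[k] = random_feature(rng)[k]  # mutate one gene
            nxt.append(tuple(child))
        pop = nxt
        best = min(pop, key=lambda f: feature_mse(f, states, targets))
    return best, feature_mse(best, states, targets)

states = [(x / 3.0, y / 3.0) for x in range(-3, 4) for y in range(-3, 4)]
targets = [black_box_policy(s) for s in states]
best_feat, best_err = evolve(states, targets)
```

The evolved feature plus its fitted coefficient is the interpretable surrogate policy; in the paper's full setting, multiple such features are combined and then simplified.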

Original language: English
Pages (from-to): 741-753
Number of pages: 13
Journal: Complex and Intelligent Systems
Volume: 6
Issue number: 3
DOI
Publication status: Published - Oct 2020
