TY - GEN
T1 - Adversarial Discriminative Feature Separation for Generalization in Reinforcement Learning
AU - Liu, Yong
AU - Wu, Chunwei
AU - Xi, Xidong
AU - Li, Yan
AU - Cao, Guitao
AU - Cao, Wenming
AU - Wang, Hong
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Improving the generalization ability of an agent is an important and challenging task in deep reinforcement learning (RL). Procedurally generated environments are an important benchmark for testing generalization in deep RL. In this benchmark, each game consists of multiple levels, and each level is an algorithmically created environment instance with a unique configuration of its factors of variation. Existing methods (e.g., regularization, data augmentation) for improving the generalization of RL agents do not learn invariant representations across multiple levels well. Moreover, existing methods for learning invariant representations in RL via adversarial training can only learn invariant information across two levels. To solve this problem, we propose Adversarial Discriminative Feature Separation (ADFS). First, ADFS designs a new discriminator that distinguishes whether two observations belong to the same level, so the policy encoder is encouraged to learn invariant information across multiple levels. Second, it separates the representation of an observation into level-invariant features and level-discriminative features, thereby correcting the optimization direction of the discriminator. The discriminative features are learned by reducing the similarity of specific features within levels and increasing it across levels. Experimental results demonstrate that our method is quite competitive with existing state-of-the-art methods on the Procgen Benchmark.
AB - Improving the generalization ability of an agent is an important and challenging task in deep reinforcement learning (RL). Procedurally generated environments are an important benchmark for testing generalization in deep RL. In this benchmark, each game consists of multiple levels, and each level is an algorithmically created environment instance with a unique configuration of its factors of variation. Existing methods (e.g., regularization, data augmentation) for improving the generalization of RL agents do not learn invariant representations across multiple levels well. Moreover, existing methods for learning invariant representations in RL via adversarial training can only learn invariant information across two levels. To solve this problem, we propose Adversarial Discriminative Feature Separation (ADFS). First, ADFS designs a new discriminator that distinguishes whether two observations belong to the same level, so the policy encoder is encouraged to learn invariant information across multiple levels. Second, it separates the representation of an observation into level-invariant features and level-discriminative features, thereby correcting the optimization direction of the discriminator. The discriminative features are learned by reducing the similarity of specific features within levels and increasing it across levels. Experimental results demonstrate that our method is quite competitive with existing state-of-the-art methods on the Procgen Benchmark.
KW - adversarial training
KW - discriminator
KW - generalization
KW - level-discriminative features
UR - https://www.scopus.com/pages/publications/85140745965
U2 - 10.1109/IJCNN55064.2022.9892539
DO - 10.1109/IJCNN55064.2022.9892539
M3 - Conference contribution
AN - SCOPUS:85140745965
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 International Joint Conference on Neural Networks, IJCNN 2022
Y2 - 18 July 2022 through 23 July 2022
ER -