Reward-free offline reinforcement learning: Optimizing behavior policy via action exploration

Zhenbo Huang, Shiliang Sun, Jing Zhao

Research output: Contribution to journal › Article › peer-review

Abstract

Offline reinforcement learning (RL) aims to learn a policy from pre-collected data, avoiding costly or risky interactions with the environment. In the offline setting, the inherent problem of distribution shift leads to extrapolation error, which can cause policy learning to fail. Conventional offline RL methods tackle this by reducing the value estimates of unseen actions or by imposing policy constraints. However, these methods confine the agent's actions to the data manifold, limiting its capacity to gain new knowledge from actions beyond the dataset's scope. To address this, we propose a novel offline RL method that incorporates action exploration, called EoRL. We partition policy learning into behavior learning and exploration learning: exploration learning empowers the agent to discover novel actions, while behavior learning approximates the behavior policy. Specifically, in exploration learning, we define the deviation between decision actions and dataset actions as the action novelty, replacing the traditional reward with an assessment of the policy's cumulative novelty. Additionally, the behavior policy restricts actions to the vicinity of dataset-supported actions, and the two parts of policy learning share parameters. We demonstrate that EoRL can explore a larger action space while controlling policy shift, and its reward-free learning model is better suited to realistic task scenarios. Experimental results demonstrate the outstanding performance of our method on MuJoCo locomotion and 2D maze tasks.
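As a rough illustration of the action-novelty idea described in the abstract (a minimal sketch, not the paper's exact formulation), one could measure a decision action's novelty as its distance to the nearest dataset action and score a trajectory by its discounted cumulative novelty in place of an environment reward. The function names and the choice of Euclidean distance here are assumptions for illustration only:

```python
import numpy as np

def action_novelty(action, dataset_actions):
    """Novelty of a decision action, taken here as the Euclidean
    distance to the nearest action in the offline dataset.
    (Hypothetical stand-in for EoRL's deviation measure.)"""
    dists = np.linalg.norm(dataset_actions - action, axis=1)
    return float(dists.min())

def cumulative_novelty(trajectory_actions, dataset_actions, gamma=0.99):
    """Discounted cumulative novelty of a trajectory's actions,
    used in place of the environment reward (reward-free learning)."""
    return sum(
        gamma**t * action_novelty(a, dataset_actions)
        for t, a in enumerate(trajectory_actions)
    )

# Example: an in-dataset action has zero novelty; an out-of-dataset
# action's novelty grows with its deviation from the dataset.
data = np.array([[0.0, 0.0], [1.0, 0.0]])
print(action_novelty(np.array([0.0, 0.0]), data))  # 0.0
print(action_novelty(np.array([2.0, 0.0]), data))  # 1.0
```

Under this sketch, maximizing cumulative novelty drives exploration beyond the dataset, while the behavior-learning component (not shown) would keep actions near the dataset's support.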

Original language: English
Article number: 112018
Journal: Knowledge-Based Systems
Volume: 299
DOIs
State: Published - 5 Sep 2024

Keywords

  • Action exploration
  • Offline reinforcement learning
  • Reward-free learning

