Off-policy evaluation for tabular reinforcement learning with synthetic trajectories

Weiwei Wang, Yuqiang Li, Xianyi Wu

Research output: Contribution to journal › Article › peer-review


Abstract

This paper addresses the problem of offline evaluation in tabular reinforcement learning (RL). We propose a novel method that leverages synthetic trajectories constructed from the available data by sampling with replacement, combining the advantages of model-based and Monte Carlo policy evaluation. The method is accompanied by theoretically derived finite-sample upper error bounds, offering performance guarantees and allowing a trade-off between statistical efficiency and computational cost. Computational experiments demonstrate that our method consistently achieves tighter upper error bounds and lower relative mean square errors than Importance Sampling, Doubly Robust methods, and other existing approaches. Furthermore, it achieves these results in significantly shorter running times than traditional model-based approaches. These findings highlight the effectiveness and efficiency of the synthetic trajectory method for accurate offline policy evaluation in RL.
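To make the general idea concrete, the sketch below illustrates one plausible reading of off-policy evaluation with resampled ("sampling with replacement") synthetic trajectories in a tabular setting: logged transitions are grouped by (state, action), rollouts under the target policy are generated by resampling observed outcomes, and the Monte Carlo returns of these synthetic trajectories are averaged. This is a minimal illustrative sketch only, not the authors' exact algorithm; all function names, data formats, and parameters here are assumptions.

```python
# Illustrative sketch of off-policy evaluation via resampled synthetic
# trajectories (assumed setup, not the paper's exact method).
import random
from collections import defaultdict


def evaluate_with_synthetic_trajectories(
    logged_transitions,   # list of (s, a, r, s_next, done) from the behavior policy
    target_policy,        # dict: state -> dict(action -> probability)
    start_states,         # list of possible initial states
    gamma=0.99,
    n_trajectories=1000,
    horizon=100,
):
    # Group observed outcomes by (state, action) so they can be resampled.
    outcomes = defaultdict(list)
    for s, a, r, s_next, done in logged_transitions:
        outcomes[(s, a)].append((r, s_next, done))

    returns = []
    for _ in range(n_trajectories):
        s = random.choice(start_states)
        g, discount = 0.0, 1.0
        for _ in range(horizon):
            # Draw an action from the target (evaluation) policy.
            probs = target_policy[s]
            a = random.choices(list(probs), weights=list(probs.values()))[0]
            if (s, a) not in outcomes:
                break  # no logged data for this pair; truncate the rollout
            # "Sampling with replacement": reuse one observed outcome for (s, a).
            r, s_next, done = random.choice(outcomes[(s, a)])
            g += discount * r
            discount *= gamma
            if done:
                break
            s = s_next
        returns.append(g)

    # Average Monte Carlo return over synthetic trajectories estimates the
    # target policy's value from the start-state distribution.
    return sum(returns) / len(returns)
```

In this reading, resampling observed transitions plays the role of an empirical model (as in model-based evaluation), while averaging rollout returns mirrors Monte Carlo evaluation; the number of synthetic trajectories would govern the trade-off between statistical efficiency and computational cost mentioned in the abstract.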

Original language: English
Article number: 41
Journal: Statistics and Computing
Volume: 34
Issue number: 1
DOIs
State: Published - Feb 2024

Keywords

  • Importance sampling
  • Markov decision process
  • Off-policy evaluation
  • Reinforcement learning
  • Synthetic trajectories

