Off-policy evaluation for tabular reinforcement learning with synthetic trajectories

Research output: Contribution to journal › Article › peer review

Abstract

This paper addresses the problem of offline policy evaluation in tabular reinforcement learning (RL). We propose a novel method that leverages synthetic trajectories, constructed from the available data by sampling with replacement, combining the advantages of model-based and Monte Carlo policy evaluation. The method is accompanied by theoretically derived finite-sample upper error bounds, offering performance guarantees and allowing a trade-off between statistical efficiency and computational cost. Results from computational experiments demonstrate that our method consistently achieves lower upper error bounds and relative mean square errors than importance sampling, doubly robust methods, and other existing approaches. Furthermore, it achieves these results in significantly shorter running times than traditional model-based approaches. These findings highlight the effectiveness and efficiency of the synthetic trajectory method for accurate offline policy evaluation in RL.
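
The record contains no code, but a minimal sketch of the idea the abstract describes may help: pool the observed (reward, next state) outcomes for each (state, action) pair in the offline dataset, generate synthetic trajectories under the target policy by sampling from these pools with replacement, and take the Monte Carlo average of the discounted returns as the value estimate. All names here (`dataset`, `target_policy`, `pools`) and details such as truncating at an unvisited state-action pair are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

def evaluate_with_synthetic_trajectories(
    dataset,          # list of (state, action, reward, next_state, done) transitions
    target_policy,    # assumed deterministic: dict mapping state -> action
    start_states,     # states from which synthetic trajectories begin
    num_trajectories=1000,
    horizon=100,
    gamma=0.99,
):
    """Monte Carlo value estimate from synthetic trajectories built by
    sampling observed outcomes with replacement (illustrative sketch)."""
    # Pool the observed (reward, next_state, done) outcomes per (state, action).
    pools = defaultdict(list)
    for s, a, r, s_next, done in dataset:
        pools[(s, a)].append((r, s_next, done))

    returns = []
    for _ in range(num_trajectories):
        s = random.choice(start_states)
        g, discount = 0.0, 1.0
        for _ in range(horizon):
            a = target_policy[s]
            outcomes = pools.get((s, a))
            if not outcomes:
                break  # assumed truncation rule for unvisited (s, a) pairs
            r, s, done = random.choice(outcomes)  # sample with replacement
            g += discount * r
            discount *= gamma
            if done:
                break
        returns.append(g)
    return sum(returns) / len(returns)
```

Sampling outcomes with replacement from these pools amounts to simulating the empirical (maximum-likelihood) model of the MDP, which is where the model-based flavor enters; averaging discounted returns over many synthetic trajectories is the Monte Carlo part.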

Original language: English
Article number: 41
Journal: Statistics and Computing
Volume: 34
Issue: 1
DOI
Publication status: Published - Feb 2024
