Empirical Gittins index strategies with ε-explorations for multi-armed bandit problems

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

The machine learning and statistics literature has so far largely considered multi-armed bandit (MAB) problems in which the rewards from every arm are assumed independent and identically distributed. For more general MAB models in which every arm evolves according to a rewarded Markov process, it is well known that the optimal policy is to pull an arm with the highest Gittins index. When the underlying distributions are unknown, an empirical Gittins index rule with ε-exploration (abbreviated as the empirical ε-Gittins index rule) is proposed to solve such MAB problems. The procedure combines ε-exploration (for exploration) with empirical Gittins indices (for exploitation), the latter computed by applying the Largest-Remaining-Index algorithm to the estimated underlying distribution. Convergence of the empirical Gittins indices to the true Gittins indices, and of the expected discounted total rewards of the empirical ε-Gittins index rule to those of the oracle Gittins index rule, is established. A numerical simulation study illustrates the behavior of the proposed policies, and their performance against the ε-mean reward rule is discussed.
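The two ingredients of the procedure can be sketched in code. The snippet below is an illustrative reconstruction, not the paper's implementation: it gives the standard Largest-Remaining-Index (Varaiya–Walrand–Buyukkoc) computation of Gittins indices for a finite-state discounted Markov reward chain, plus an ε-exploration arm-selection step. All function names and the toy example are assumptions; in the paper the transition matrix would be the estimated one.

```python
import numpy as np

def gittins_indices(P, r, beta):
    """Gittins indices of a finite-state Markov reward chain via the
    Largest-Remaining-Index algorithm. P: transition matrix, r: state
    rewards, beta: discount factor in (0, 1). Illustrative sketch."""
    P = np.asarray(P, dtype=float)
    r = np.asarray(r, dtype=float)
    n = len(r)
    idx = np.empty(n)
    order = []                      # states ranked so far, decreasing index
    remaining = list(range(n))
    while remaining:
        if order:
            C = np.array(order)     # continuation set: higher-index states
            M = np.linalg.inv(np.eye(len(C)) - beta * P[np.ix_(C, C)])
            D = M @ r[C]            # discounted reward accrued while in C
            T = M @ np.ones(len(C)) # discounted time spent in C
        vals = {}
        for i in remaining:
            if order:
                # continue through C, stop on exiting it
                num = r[i] + beta * P[i, C] @ D
                den = 1.0 + beta * P[i, C] @ T
            else:
                num, den = r[i], 1.0   # top index equals the largest reward
            vals[i] = num / den
        best = max(vals, key=vals.get)
        idx[best] = vals[best]
        order.append(best)
        remaining.remove(best)
    return idx

def epsilon_gittins_arm(est_indices, current_states, eps, rng):
    """Pick an arm: explore uniformly with probability eps; otherwise
    exploit the arm whose current state has the largest empirical index."""
    k = len(current_states)
    if rng.random() < eps:
        return int(rng.integers(k))
    return int(np.argmax([est_indices[a][s]
                          for a, s in enumerate(current_states)]))
```

As a sanity check on the index computation: for a two-state chain with rewards (1, 0), uniform transitions, and β = 0.9, the top index is the maximal reward 1.0, and the second index works out to 0.45.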

Original language: English
Article number: 107610
Journal: Computational Statistics and Data Analysis
Volume: 180
DOIs
State: Published - Apr 2023

Keywords

  • Empirical Gittins index
  • Gittins index
  • Multi-armed bandit problem
  • Reinforcement learning
  • Rewarded Markov process
