Incentivized Exploration for Multi-Armed Bandits under Reward Drift
Journal Article



  • We study incentivized exploration for the multi-armed bandit (MAB) problem, in which players receive compensation for exploring arms other than the greedy choice and may provide biased feedback on rewards. We seek to understand the impact of this drifted reward feedback by analyzing the performance of three instantiations of the incentivized MAB algorithm: UCB, ε-Greedy, and Thompson Sampling. Our results show that all three achieve O(log T) regret and compensation under drifted rewards, and are therefore effective in incentivizing exploration. Numerical examples are provided to complement the theoretical analysis.
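The abstract's setting can be illustrated with a minimal simulation. The sketch below is NOT the paper's algorithm or analysis; it is a hypothetical illustration, assuming compensation equal to the empirical mean gap between the greedy arm and the pulled arm, and reward feedback "drifted" (inflated) in proportion to the compensation paid, with a UCB1-style index for arm selection:

```python
import math
import random


def incentivized_ucb(true_means, T, drift_coef=0.1, seed=0):
    """Illustrative sketch only (not the paper's exact scheme): UCB1 where
    pulling a non-greedy arm pays compensation equal to the empirical mean
    gap, and the player's reported reward is biased upward in proportion
    to that compensation (reward drift)."""
    rng = random.Random(seed)
    K = len(true_means)
    counts = [0] * K
    means = [0.0] * K          # empirical means of (possibly drifted) feedback
    total_comp = 0.0
    regret = 0.0
    best = max(true_means)
    for t in range(1, T + 1):
        if t <= K:
            arm = t - 1        # initialization: pull each arm once
            comp = 0.0
        else:
            greedy = max(range(K), key=lambda i: means[i])
            ucb = [means[i] + math.sqrt(2 * math.log(t) / counts[i])
                   for i in range(K)]
            arm = max(range(K), key=lambda i: ucb[i])
            # compensation covers the empirical gap to the greedy arm
            comp = max(0.0, means[greedy] - means[arm])
        total_comp += comp
        # true Bernoulli reward, then drifted (biased) feedback
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        feedback = reward + drift_coef * comp
        counts[arm] += 1
        means[arm] += (feedback - means[arm]) / counts[arm]
        regret += best - true_means[arm]
    return regret, total_comp
```

Under the paper's result, both the regret and the total compensation of such an incentivized scheme grow only logarithmically in T; in this toy simulation one can check empirically that both quantities flatten out as T grows.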

publication date

  • April 3, 2020

Date in CU Experts

  • November 8, 2020 12:06 PM

Full Author List

  • Liu Z; Wang H; Shen F; Liu K; Chen L

author count

  • 5

International Standard Serial Number (ISSN)

  • 2159-5399

Electronic International Standard Serial Number (EISSN)

  • 2374-3468

Additional Document Info

start page

  • 4981

end page

  • 4988


volume

  • 34

issue

  • 04