Policy Synthesis and Reinforcement Learning for Discounted LTL (Chapter)

Overview

abstract

  • The difficulty of manually specifying reward functions has led to an interest in using linear temporal logic (LTL) to express objectives for reinforcement learning (RL). However, LTL has the downside that it is sensitive to small perturbations in the transition probabilities, which prevents probably approximately correct (PAC) learning without additional assumptions. Time discounting provides a way of removing this sensitivity, while retaining the high expressivity of the logic. We study the use of discounted LTL for policy synthesis in Markov decision processes with unknown transition probabilities, and show how to reduce discounted LTL to discounted-sum reward via a reward machine when all discount factors are identical.
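For readers unfamiliar with reward machines, the sketch below is a minimal, generic illustration of the kind of object the abstract refers to: a finite-state machine that emits scalar rewards in response to atomic propositions, combined with a single discount factor in a discounted-sum objective. It is not the reduction constructed in the chapter; all identifiers (RewardMachine, discounted_return, the "goal" proposition) are hypothetical.

```python
# Illustrative sketch only: a generic reward machine paired with a
# discounted-sum return, not the chapter's construction.
from dataclasses import dataclass
from typing import Dict, FrozenSet, List, Tuple

@dataclass
class RewardMachine:
    # delta maps (machine state, set of true atomic propositions)
    # to (next machine state, scalar reward).
    delta: Dict[Tuple[int, FrozenSet[str]], Tuple[int, float]]
    initial_state: int = 0

    def run(self, label_sequence: List[FrozenSet[str]]) -> List[float]:
        """Feed a sequence of proposition labels and collect the emitted rewards."""
        state, rewards = self.initial_state, []
        for labels in label_sequence:
            state, r = self.delta[(state, labels)]
            rewards.append(r)
        return rewards

def discounted_return(rewards: List[float], gamma: float) -> float:
    """Standard discounted-sum objective: sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Toy usage: reward 1 the first time proposition "goal" holds, 0 afterwards.
labels = [frozenset(), frozenset({"goal"}), frozenset({"goal"})]
rm = RewardMachine(delta={
    (0, frozenset()): (0, 0.0),
    (0, frozenset({"goal"})): (1, 1.0),
    (1, frozenset()): (1, 0.0),
    (1, frozenset({"goal"})): (1, 0.0),
})
print(discounted_return(rm.run(labels), gamma=0.9))  # prints 0.9
```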

publication date

  • January 1, 2023

has restriction

  • closed

Full Author List

  • Alur R; Bastani O; Jothimurugan K; Perez M; Somenzi F; Trivedi A

author count

  • 6

International Standard Book Number (ISBN) 13

  • 9783031377051

Additional Document Info

start page

  • 415

end page

  • 435