Temporal logic motion control using actor–critic methods (Journal Article)

Overview

abstract

  • This paper considers the problem of deploying a robot from a specification given as a temporal logic statement about some properties satisfied by the regions of a large, partitioned environment. We assume that the robot has noisy sensors and actuators and model its motion through the regions of the environment as a Markov decision process (MDP). The robot control problem becomes finding the control policy which maximizes the probability of satisfying the temporal logic task on the MDP. For a large environment, obtaining transition probabilities for each state–action pair, as well as solving the necessary optimization problem for the optimal policy, are computationally intensive. To address these issues, we propose an approximate dynamic programming framework based on a least-squares temporal difference learning method of the actor–critic type. This framework operates on sample paths of the robot and optimizes a randomized control policy with respect to a small set of parameters. The transition probabilities are obtained only when needed. Simulations confirm that convergence of the parameters translates to an approximately optimal policy.
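
Below is a minimal, self-contained sketch of the kind of method the abstract describes: a randomized (softmax) control policy over a small set of parameters, improved with a least-squares temporal difference (LSTD) critic driven only by sampled paths. It is not the authors' implementation. The 5x5 grid MDP, the slip probability, the one-hot features, the episode counts, and the step size are all illustrative assumptions, and the temporal logic task is reduced to a plain reachability objective ("eventually reach the goal while avoiding the trap") as a stand-in for the paper's full temporal-logic construction.

```python
# A minimal actor-critic sketch with an LSTD critic, in the spirit of the
# approach summarized above (not the authors' code). The grid MDP, slip
# probability, features, and step sizes are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

N = 5                                   # grid is N x N; a state is (row, col)
GOAL, TRAP = (4, 4), (2, 2)             # reach GOAL, avoid TRAP (absorbing)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
SLIP = 0.1                              # actuator noise: random action instead
GAMMA = 0.95
D = N * N * len(ACTIONS)                # feature dimension

def step(s, a_idx):
    """Noisy transition; reward 1 only when the goal region is entered."""
    if rng.random() < SLIP:
        a_idx = int(rng.integers(len(ACTIONS)))
    dr, dc = ACTIONS[a_idx]
    s2 = (min(max(s[0] + dr, 0), N - 1), min(max(s[1] + dc, 0), N - 1))
    if s2 == GOAL:
        return s2, 1.0, True
    if s2 == TRAP:
        return s2, 0.0, True
    return s2, 0.0, False

def features(s, a_idx):
    """One-hot state-action features (an illustrative choice)."""
    phi = np.zeros(D)
    phi[(s[0] * N + s[1]) * len(ACTIONS) + a_idx] = 1.0
    return phi

def policy_probs(theta, s):
    """Boltzmann (softmax) randomized policy over the four actions."""
    logits = np.array([theta @ features(s, a) for a in range(len(ACTIONS))])
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

theta = np.zeros(D)                     # actor parameters

for it in range(200):
    # Critic: LSTD estimate of Q^pi from sample paths of the current policy.
    A = 1e-3 * np.eye(D)                # regularized LSTD matrix
    b = np.zeros(D)
    samples = []                        # (score, phi) pairs for the actor step
    for _ in range(30):
        s, done, t = (0, 0), False, 0
        while not done and t < 60:
            p = policy_probs(theta, s)
            a = int(rng.choice(len(ACTIONS), p=p))
            s2, rwd, done = step(s, a)
            phi = features(s, a)
            if done:
                phi2 = np.zeros(D)      # absorbing: successor value is zero
            else:
                p2 = policy_probs(theta, s2)
                a2 = int(rng.choice(len(ACTIONS), p=p2))
                phi2 = features(s2, a2)
            A += np.outer(phi, phi - GAMMA * phi2)
            b += rwd * phi
            # Score function grad log pi(a|s; theta) for the actor update.
            score = phi - sum(pa * features(s, i) for i, pa in enumerate(p))
            samples.append((score, phi))
            s, t = s2, t + 1
    w = np.linalg.solve(A, b)           # critic weights: Q_hat(s,a) = w . phi
    # Actor: ascend the policy gradient estimated with the LSTD critic.
    g = sum(score * (w @ phi) for score, phi in samples) / len(samples)
    theta += 0.5 * g
```

As in the paper's framework, transitions are only ever exercised along simulated sample paths (the step function); no transition matrix is enumerated. The one-hot features make the critic exact in this toy setting; the point of the LSTD actor–critic machinery is that the same loop works with far fewer features than states.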

publication date

  • September 1, 2015

open access status

  • green

Full Author List

  • Wang J; Ding X; Lahijanian M; Paschalidis IC; Belta CA

author count

  • 5

International Standard Serial Number (ISSN)

  • 0278-3649

Electronic International Standard Serial Number (EISSN)

  • 1741-3176

Additional Document Info

start page

  • 1329

end page

  • 1344

volume

  • 34

issue

  • 10