

Preprint 22/2021
The Geometry of Memoryless Stochastic Policy Optimization in Infinite-Horizon POMDPs
Johannes Müller and Guido Montúfar
Submission date: 14 Oct 2021
MSC-Numbers: 90C40, 93E20, 49M37, 90C23
Keywords and phrases: POMDPs, Memoryless Policies, Critical points, State-action frequencies, Algebraic degree
Abstract:
We consider the problem of finding the best memoryless stochastic policy for an infinite-horizon partially observable Markov decision process (POMDP) with finite state and action spaces, with respect to either the discounted or the mean reward criterion. We show that the (discounted) state-action frequencies and the expected cumulative reward are rational functions of the policy, where the degree is determined by the degree of partial observability. We then describe the optimization problem as a linear optimization problem in the space of feasible state-action frequencies subject to polynomial constraints that we characterize explicitly. This allows us to address the combinatorial and geometric complexity of the optimization problem using recent tools from polynomial optimization. In particular, we demonstrate how the partial observability constraints can lead to multiple smooth and non-smooth local optimizers, and we estimate the number of critical points.
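To make the rationality claim concrete: for a memoryless stochastic policy, the discounted state-action frequencies can be obtained by solving a linear system whose coefficient matrix is affine in the policy, which is why each frequency is a rational function of the policy parameters. The following is a minimal numerical sketch of this computation, not code from the paper; the kernels T and B, the policy pi, the problem sizes, and the uniform initial distribution are all placeholder assumptions.

```python
import numpy as np

# Hypothetical small POMDP: sizes and discount factor are placeholders.
rng = np.random.default_rng(0)
nS, nA, nO, gamma = 3, 2, 2, 0.9

# Transition kernel T[s, a, s'] = P(s' | s, a), rows normalized.
T = rng.random((nS, nA, nS))
T /= T.sum(axis=2, keepdims=True)

# Observation kernel B[s, o] = P(o | s).
B = rng.random((nS, nO))
B /= B.sum(axis=1, keepdims=True)

# Memoryless stochastic policy pi[o, a] = pi(a | o).
pi = rng.random((nO, nA))
pi /= pi.sum(axis=1, keepdims=True)

# Effective state policy tau[s, a] = sum_o B[s, o] * pi[o, a];
# note tau is linear in the policy parameters.
tau = B @ pi

# State transition matrix under the policy:
# P_pi[s, s'] = sum_a tau[s, a] * T[s, a, s'].
P_pi = np.einsum('sa,sap->sp', tau, T)

# Initial state distribution (uniform, as an assumption).
mu = np.full(nS, 1.0 / nS)

# Discounted state frequencies:
# rho = (1 - gamma) * (I - gamma * P_pi^T)^{-1} mu.
rho = (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * P_pi.T, mu)

# Discounted state-action frequencies eta[s, a] = rho[s] * tau[s, a];
# the entries of eta sum to one.
eta = rho[:, None] * tau
print(eta, eta.sum())
```

Since np.linalg.solve here inverts a matrix whose entries are affine in pi, Cramer's rule shows that every entry of eta is a ratio of polynomials in the policy parameters, illustrating the rational dependence stated in the abstract.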