Talk

Outcome-Driven Reinforcement Learning via Variational Inference

  • Tim G. J. Rudner (University of Oxford)

Abstract

While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we discuss a new perspective on reinforcement learning, recasting it as the problem of inferring actions that achieve desired outcomes, rather than a problem of maximizing rewards. To solve the resulting outcome-directed inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator reminiscent of the standard Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to design reward functions and leads to effective goal-directed behaviors.
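To make the "inference of desired outcomes" framing more concrete, the toy sketch below implements a generic goal-conditioned, TD-style backup in which the per-step reward is the log-likelihood of the desired outcome under a simple Gaussian model of the next state. The function names, the Gaussian outcome model, and all constants are illustrative assumptions; this is a sketch of the general control-as-inference idea, not the probabilistic Bellman backup operator derived in the paper.

```python
import numpy as np

def log_outcome_likelihood(next_state, goal, sigma=0.5):
    """Log N(goal | next_state, sigma^2 I): an illustrative stand-in 'reward'
    measuring how likely the desired outcome is given the next state.
    (Assumed Gaussian model for the sketch; not taken from the paper.)"""
    diff = np.atleast_1d(np.asarray(goal, dtype=float) - np.asarray(next_state, dtype=float))
    k = diff.size
    return -0.5 * float(np.dot(diff, diff)) / sigma**2 - 0.5 * k * np.log(2.0 * np.pi * sigma**2)

def goal_conditioned_backup(q, state, action, next_state, goal,
                            next_action_values, gamma=0.99, lr=0.1):
    """One tabular TD(0)-style update toward a goal-conditioned target.
    `q` maps (state, action, goal) keys to scalar value estimates."""
    reward = log_outcome_likelihood(next_state, goal)          # reward = outcome log-likelihood
    target = reward + gamma * max(next_action_values)          # bootstrapped target
    key = (state, action, goal)
    q[key] = (1.0 - lr) * q.get(key, 0.0) + lr * target        # soft update of the estimate
    return q

# Toy usage on a 1-D chain: states and the goal are integers, actions move +/-1.
q = {}
q = goal_conditioned_backup(q, state=2, action=+1, next_state=3, goal=5,
                            next_action_values=[0.0, 0.0])
print(q)
```

The point of the sketch is only that, once the reward is defined as an outcome likelihood rather than hand-designed, a standard off-policy backup can be applied unchanged; the paper's contribution is deriving such a reward and the corresponding backup operator from a variational objective.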

Tim G. J. Rudner is a PhD Candidate in the Department of Computer Science at the University of Oxford, supervised by Yarin Gal and Yee Whye Teh. His research interests span Bayesian deep learning, reinforcement learning, and variational inference. He holds a master’s degree in statistics from the University of Oxford and an undergraduate degree in mathematics and economics from Yale University. Tim is also an AI Fellow at Georgetown University's Center for Security and Emerging Technology (CSET), a Fellow of the German National Academic Foundation, and a Rhodes Scholar.

Seminar: Math Machine Learning seminar MPI MIS + UCLA
Location: MPI for Mathematics in the Sciences, Live Stream
Contact: Katharina Matschke, MPI for Mathematics in the Sciences (contact via mail)

Upcoming events of this seminar: 5/2/24, 5/16/24