The statistical foundations of learning to control
- Benjamin Recht (University of California, Berkeley)
Abstract
Given the dramatic successes in machine learning over the past half decade, there has been a resurgence of interest in applying learning techniques to continuous control problems in robotics, self-driving cars, and unmanned aerial vehicles. Though such applications appear to be straightforward generalizations of what is known as reinforcement learning, few fundamental baselines have been established prescribing how well one must know a system in order to control it. In this talk, I will discuss how one might merge techniques from statistical learning theory with robust control to derive baselines for such continuous control problems. I will further describe how these simple baselines give us insight into the shortcomings of existing reinforcement learning methodology. I will close by listing several exciting open problems that must be solved before we can build robust, safe learning systems that interact with an uncertain physical environment.
Joint work with Sarah Dean, Aurelia Guy, Horia Mania, Nikolai Matni, and Stephen Tu.