Summary of the talk on 27.01.2022 (17:00), Math Machine Learning seminar MPI MIS + UCLA
Daniel Russo (Columbia University)
Adaptivity and Confounding in Multi-armed Bandit Experiments
See also the video of this talk.
Multi-armed bandit algorithms minimize the experimentation cost required to converge on optimal behavior. They do so by rapidly shifting experimentation effort away from poorly performing actions as feedback is observed. But this desirable feature makes them sensitive to confounding, which is the primary concern underlying classical randomized controlled trials. We highlight, for instance, that popular bandit algorithms cannot address the problem of identifying the best action when day-of-week effects may confound inferences. In response, this paper proposes deconfounded Thompson sampling, which makes simple but critical modifications to the way Thompson sampling is usually applied. Theoretical guarantees suggest the algorithm strikes a delicate balance between adaptivity and robustness to confounding. It attains asymptotic lower bounds on the number of samples required to confidently identify the best action (suggesting optimal adaptivity), but also satisfies strong performance guarantees in the presence of day-of-week effects and delayed observations (suggesting unusual robustness). At the core of the paper is a new, very general model of contextual bandit experiments in which issues of delayed learning and distribution shift arise organically.
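To make the adaptivity being discussed concrete, here is a minimal sketch of *standard* Beta-Bernoulli Thompson sampling, the baseline the paper modifies. This is not the paper's deconfounded variant; the arm means and horizon are illustrative assumptions. It shows the mechanism the abstract refers to: posterior sampling concentrates pulls on the best arm as feedback accumulates, which is exactly what makes the method sample-efficient yet vulnerable to confounders such as day-of-week effects.

```python
import random

def thompson_sampling(true_means, horizon=2000, seed=0):
    """Generic Beta-Bernoulli Thompson sampling (illustrative sketch,
    not the paper's deconfounded variant)."""
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # Beta(1, 1) uniform prior for each arm
    beta = [1] * k
    pulls = [0] * k
    for _ in range(horizon):
        # Posterior sampling: draw one plausible mean per arm,
        # then play the arm whose draw is largest.
        draws = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: draws[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# Hypothetical three-arm instance: effort should concentrate on arm 2.
pulls = thompson_sampling([0.3, 0.5, 0.7])
```

Note how allocation depends on all past feedback: if rewards are systematically higher on some days (a confounder the loop ignores), the posterior comparisons between arms observed on different days become biased, which is the failure mode deconfounded Thompson sampling is designed to fix.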