Abstract for the talk on 26.08.2021 (17:00 h)

Math Machine Learning seminar MPI MIS + UCLA

Guodong Zhang (University of Toronto)
Differentiable Game Dynamics: Hardness and Complexity of Equilibrium Learning
See the video of this talk.

Often, the training of a machine learning model can be formulated as a single-objective optimization (minimization) problem, which gradient-based methods solve efficiently. However, a growing number of models involve multiple interacting objectives. A differentiable game generalizes the standard single-objective optimization framework, allowing us to model multiple players and objectives. Solving differentiable games, however, raises new issues and challenges: the standard gradient descent-ascent algorithm can converge to non-equilibrium fixed points, or converge only slowly when it does find an equilibrium. In this talk, I will present some of my work attacking both problems. In addition, I will introduce a unified and systematic framework for the global convergence analysis of first-order methods in solving strongly-monotone games.
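A minimal sketch of the convergence issue the abstract refers to, on the classic two-player bilinear game min_x max_y f(x, y) = x*y, whose unique equilibrium is (0, 0). This toy example is illustrative only and is not taken from the talk: simultaneous gradient descent-ascent spirals away from the equilibrium, while the extragradient method (a standard fix that evaluates gradients at a lookahead point) spirals toward it.

```python
import math

def gda(x, y, eta, steps):
    # Simultaneous gradient descent-ascent on f(x, y) = x*y:
    # x descends df/dx = y, y ascends df/dy = x.
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def extragradient(x, y, eta, steps):
    # Extragradient: take a lookahead half-step, then update
    # using the gradients evaluated at the lookahead point.
    for _ in range(steps):
        xh, yh = x - eta * y, y + eta * x
        x, y = x - eta * yh, y + eta * xh
    return x, y

x0, y0 = 1.0, 1.0
xg, yg = gda(x0, y0, eta=0.1, steps=100)
xe, ye = extragradient(x0, y0, eta=0.1, steps=100)
# Distance to the equilibrium (0, 0): GDA's grows, extragradient's shrinks.
print(math.hypot(xg, yg), math.hypot(xe, ye))
```

On this game each GDA step multiplies the distance to the equilibrium by sqrt(1 + eta^2) > 1, whereas each extragradient step multiplies it by sqrt(1 - eta^2 + eta^4) < 1, which is the qualitative gap between the two methods that motivates the convergence-rate analysis mentioned above.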

 
