Abstract for the talk on 05.11.2020 (5:00 p.m.)

Math Machine Learning seminar MPI MIS + UCLA

Kenji Kawaguchi (MIT)
Deep learning: theoretical results on optimization and mixup

Deep neural networks have achieved significant empirical success in many fields, including computer vision, machine learning, and artificial intelligence. Alongside this empirical success, deep learning has been shown theoretically to be attractive in terms of its expressive power. However, the theory of expressive power does not guarantee that, during the training of a neural network, we can efficiently find a solution that is good in terms of optimization, robustness, and generalization. In this talk, I will discuss some theoretical results on optimization and on the effect of mixup on robustness and generalization.
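For readers unfamiliar with mixup (Zhang et al., 2018), the augmentation referenced in the abstract trains on convex combinations of pairs of inputs and labels rather than on the raw examples. Below is a minimal sketch of that augmentation step; the function name mixup_batch, the parameter alpha, and the within-batch pairing are illustrative choices, not details taken from the talk.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup augmentation (Zhang et al., 2018): mix pairs of examples.

    x: inputs, shape (batch, ...); y: one-hot labels, shape (batch, classes).
    Returns convex combinations of randomly paired examples and labels.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))          # random pairing within the batch (illustrative choice)
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed
```

A model is then trained on (x_mixed, y_mixed) instead of (x, y); the talk concerns the theoretical effect of this kind of training on robustness and generalization.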

 
