Abstract for the talk on 11.06.2020 (17:00 h), Math Machine Learning seminar MPI MIS + UCLA
Poorya Mianjy (Johns Hopkins University)
Understanding the Algorithmic Regularization due to Dropout
See the video of this talk.
Algorithmic regularization provides deep learning models with capacity control that helps them generalize. In this talk, we focus on understanding the capacity control induced by dropout training in several machine learning models, including deep linear networks, matrix sensing, and two-layer ReLU networks. In particular, by characterizing the regularizer induced by dropout training, we give concrete generalization error bounds for dropout training in these models.
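The abstract does not spell out the dropout mechanism itself. As a minimal sketch (not taken from the talk), the standard "inverted" dropout used during training of a two-layer ReLU network might look as follows; all weights and dimensions here are hypothetical:

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: zero each unit with probability p, rescale survivors by 1/(1-p)
    so the layer's expected output matches its value at test time."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

# Forward pass of a two-layer ReLU network with dropout on the hidden layer
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8))   # hypothetical first-layer weights
W2 = rng.standard_normal((1, 16))   # hypothetical second-layer weights
x = rng.standard_normal(8)          # hypothetical input

h = np.maximum(W1 @ x, 0.0)         # ReLU hidden activations
h = dropout(h, p=0.5, rng=rng)      # applied only during training
y = W2 @ h                          # network output
```

Because of the 1/(1-p) rescaling, the dropout layer is unbiased in expectation, which is what makes it possible to analyze the regularizer it induces on the network's weights.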