Abstract for the talk on 29.04.2021 (17:00 h)

Math Machine Learning seminar MPI MIS + UCLA

Mehrdad Farajtabar (DeepMind)
Catastrophic Forgetting in Continual Learning of Neural Networks
29.04.2021, 17:00 h, only video broadcast

Artificial neural networks achieve state-of-the-art, and sometimes superhuman, performance on learning tasks across a variety of domains. Whenever these problems require learning in a continual or sequential manner, however, neural networks suffer from catastrophic forgetting: they forget how to solve previous tasks after being trained on a new task, despite having the capacity to solve both tasks had they been trained on both simultaneously. In this talk, we introduce this phenomenon and propose several methods that address the issue from a variety of angles.
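The forgetting effect described above can be seen even in a toy model. The following sketch (my own illustration, not taken from the talk) trains a single scalar weight by gradient descent on task A, then on task B with a conflicting target; after the second phase, the loss on task A is large again:

```python
import numpy as np  # imported for convention; the toy uses plain floats

# Toy setup: one scalar weight w, squared loss against a task's ideal weight.
# Task A has ideal weight +1.0, task B has ideal weight -1.0.

def loss(w, target):
    # squared error of the current weight against the task's ideal weight
    return (w - target) ** 2

def train(w, target, steps=100, lr=0.1):
    # gradient descent on (w - target)^2; gradient is 2 * (w - target)
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

w = 0.0
w = train(w, +1.0)              # learn task A: w converges near +1
loss_A_before = loss(w, +1.0)   # essentially zero
w = train(w, -1.0)              # then learn task B: w is dragged near -1
loss_A_after = loss(w, +1.0)    # now large: task A has been "forgotten"
print(loss_A_before < 1e-6, loss_A_after > 3.9)  # → True True
```

With both tasks trained sequentially, the single weight cannot retain the first solution; the methods discussed in the talk aim to mitigate exactly this kind of interference.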

Bio:

Mehrdad Farajtabar is a research scientist at DeepMind, working on machine learning and its applications. His recent research interests include continual learning of neural networks, learning under evolving data distributions, and reinforcement learning. Before joining DeepMind, he received his PhD in Computational Science and Engineering from Georgia Tech in 2018; he holds M.Sc. and B.Sc. degrees in Artificial Intelligence and Software Engineering from Sharif University of Technology.

If you want to participate in this video broadcast, please register using this special form. The (Zoom) link for the video broadcast will be sent to your email address one day before the seminar.

19.04.2021, 10:24