Workshop

Explaining the Decisions of Deep Neural Networks

  • Grégoire Montavon (Machine Learning, Technische Universität Berlin)
E1 05 (Leibniz-Saal)

Abstract

ML models such as deep neural networks (DNNs) are capable of producing complex real-world predictions. To gain insight into the workings of the model and to verify that it is not overfitting the data, it is often desirable to explain its predictions. For linear and mildly nonlinear models, simple techniques based on Taylor expansions can be used. For highly nonlinear DNN models, however, the task of explanation becomes more difficult.
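For the linear case mentioned above, the Taylor-based explanation is exact: expanding f(x) = w·x at the root point x̃ = 0 decomposes the prediction into per-feature contributions R_i = w_i·x_i. A minimal sketch (illustrative weights and input, not from the talk):

```python
import numpy as np

# Linear model f(x) = w . x: a first-order Taylor expansion at the
# root point x~ = 0 is exact, so the prediction decomposes into
# per-feature contributions R_i = w_i * x_i.
w = np.array([2.0, -1.0, 0.5])   # illustrative weights
x = np.array([1.0, 3.0, 4.0])    # illustrative input

R = w * x                        # per-feature relevance scores

# The contributions sum exactly to the model's prediction.
print(R)            # [ 2.  -3.   2. ]
print(R.sum())      # 1.0
print(w @ x)        # 1.0
```

For highly nonlinear models no single root point yields such a clean decomposition, which is the difficulty the talk addresses.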

In this talk, we first discuss some motivations for explaining predictions and specific challenges in producing such explanations. We then introduce the layer-wise relevance propagation (LRP) technique, which produces explanations by reverse-propagating the prediction through the network using a set of engineered propagation rules. The reverse propagation procedure can be interpreted as a ‘deep Taylor decomposition’, where the explanation is the outcome of a sequence of Taylor expansions performed at each layer of the DNN model.
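The per-layer redistribution described above can be sketched for a single dense ReLU layer using the LRP-ε rule, one of the standard engineered propagation rules. This is a minimal illustration with made-up values, not the talk's implementation:

```python
import numpy as np

# One dense ReLU layer: a_out = relu(W @ a_in), bias omitted for simplicity.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))       # 3 output neurons, 4 input neurons
a_in = rng.random(4)              # non-negative input activations
a_out = np.maximum(0.0, W @ a_in)

def lrp_epsilon(a_in, W, R_out, eps=1e-6):
    """LRP-epsilon rule: redistribute output relevance R_out onto the inputs."""
    z = W @ a_in                              # pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0) # stabilizer avoids division by zero
    s = R_out / z                             # relevance per unit of pre-activation
    return a_in * (W.T @ s)                   # relevance of each input neuron

R_out = a_out                     # start from the layer's own output as relevance
R_in = lrp_epsilon(a_in, W, R_out)

# Relevance is (approximately) conserved across the layer.
print(R_in.sum(), R_out.sum())
```

Stacking this step layer by layer, from the output back to the input, yields the pixel-level (or feature-level) explanation; the deep Taylor view interprets each such step as a Taylor expansion of the layer's relevance function.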

Links

Valeria Hünniger

Max-Planck-Institut für Mathematik in den Naturwissenschaften Contact via Mail

Guido Montúfar

Max Planck Institute for Mathematics in the Sciences