Abstract for the talk on 07.05.2018 (15:30 h)

Special Seminar

David Spivak (MIT - Department of Mathematics, USA)
Backprop as a Functor: a compositional perspective on supervised learning

Neural networks can be trained to perform tasks such as classifying images. The usual description of this process involves keywords like neural architecture, activation function, cost function, backpropagation, training data, weights and biases, and weight-tying.

In this talk we will define a symmetric monoidal category Learn, in which objects are sets and morphisms are roughly "functions that adapt to training data". The backpropagation algorithm can then be viewed as a strong monoidal functor from a category of parameterized functions between Euclidean spaces to our category Learn.
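
To make "functions that adapt to training data" slightly more concrete, here is a minimal Haskell sketch of one standard way to package such a morphism: a parameter set together with an implementation map, an update rule, and a "request" map that passes a corrected input back upstream. The names and the exact packaging are illustrative and may differ from the definitions used in the talk.

{-# LANGUAGE ExistentialQuantification #-}

-- A sketch of a morphism A -> B in Learn: some parameter set p, hidden
-- behind an existential, together with implement/update/request maps.
data Learner a b = forall p. Learner
  p                   -- current parameter value
  (p -> a -> b)       -- implement: run the model at these parameters
  (p -> a -> b -> p)  -- update: adapt the parameters toward a training pair (a, b)
  (p -> a -> b -> a)  -- request: pass a corrected input back upstream

-- Sequential composition: run the first learner, train the second on its
-- output, and use the second learner's request as the first one's target.
compose :: Learner b c -> Learner a b -> Learner a c
compose (Learner q j v s) (Learner p i u r) =
  Learner
    (p, q)
    (\(p', q') a   -> j q' (i p' a))
    (\(p', q') a c -> (u p' a (s q' (i p' a) c), v q' (i p' a) c))
    (\(p', q') a c -> r p' a (s q' (i p' a) c))

-- A toy instance (purely illustrative): a bias-only learner on Doubles that
-- takes one gradient-descent step on squared error, with learning rate lr.
biasLearner :: Double -> Learner Double Double
biasLearner lr = Learner 0.0
  (\w a   -> a + w)
  (\w a b -> w - lr * ((a + w) - b))
  (\w a b -> a - lr * ((a + w) - b))

Sequential composition already has the flavor of backpropagation: the downstream learner's request plays the role of the error signal handed back to the upstream learner, and this is the kind of compositional pattern that the functor described above organizes for gradient descent on parameterized maps between Euclidean spaces.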

This presentation is algebraic, not algorithmic; in particular, it does not give immediate insight into improving the speed or accuracy of neural networks. The point of the talk is simply to articulate the various structures that one observes in this subject (including all the keywords mentioned above) and thereby get a categorical foothold for further study. For example, by articulating the structure in this way, we find functorial connections to the well-established category of lenses in database theory and to the much more recent category of compositional economic games.
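
For readers unfamiliar with lenses, the classical get/put shape from the database and bidirectional-programming literature is easy to state; the sketch below uses illustrative names. Its get/put pair has the same shape as a learner's implement/request pair with a trivial parameter set, which is one informal way to see the connection mentioned above.

-- A lens in the database sense: "get" reads a view out of a source and
-- "put" writes an updated view back into it.
data Lens a b = Lens
  (a -> b)       -- get: extract a view of type b from a source of type a
  (a -> b -> a)  -- put: merge an updated view back into the source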

 
