Workshop

Basic principles of supervised and unsupervised learning: toward understanding deep learning

  • Shun-ichi Amari (RIKEN, Japan)
E1 05 (Leibniz-Saal)

Abstract

Both supervised and unsupervised techniques are used in deep learning. We present a simple model of self-organization (Hebbian learning) to understand how new representations of signals are formed. We then show self-organization in restricted Boltzmann machines and autoencoders (recurrently connected neural networks), which reveals how information is represented in a hierarchical system. The multilayer perceptron is used for supervised learning; however, its parameter space contains many singularities, which make learning difficult. We analyze the dynamics of supervised learning near such singularities and demonstrate that natural gradient descent greatly improves the efficiency of learning.
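The abstract names two concrete techniques: Hebbian self-organization and natural gradient descent. The sketch below is not from the talk; it is a minimal, self-contained illustration of (i) Oja's Hebbian rule, a classical self-organization model that extracts the leading principal component of its input, and (ii) a natural gradient update for fitting a one-dimensional Gaussian, where the Fisher information matrix is known in closed form. All function names, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

def oja_learning(X, eta=0.01, epochs=20, rng=None):
    """Oja's Hebbian rule: w <- w + eta * y * (x - y * w), with y = w.x.
    The weight vector self-organizes toward the leading principal
    component of the data, giving a new representation of the signals."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = X.shape
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X[rng.permutation(n)]:
            y = w @ x                      # Hebbian output
            w += eta * y * (x - y * w)     # Hebbian growth with Oja's decay term
    return w

def natural_gradient_gaussian(x, steps=200, eta=0.1):
    """Fit (mu, sigma) of a 1-D Gaussian by natural gradient descent.
    The Fisher information of N(mu, sigma^2) is diag(1/sigma^2, 2/sigma^2),
    so the natural gradient just rescales the ordinary gradient by its inverse."""
    mu, sigma = 0.0, 1.0
    for _ in range(steps):
        # ordinary gradient of the average negative log-likelihood
        g_mu = -np.mean(x - mu) / sigma**2
        g_sigma = 1.0 / sigma - np.mean((x - mu) ** 2) / sigma**3
        # precondition by the inverse Fisher matrix (diagonal here)
        mu -= eta * sigma**2 * g_mu
        sigma -= eta * (sigma**2 / 2.0) * g_sigma
        sigma = max(sigma, 1e-6)           # keep the scale parameter positive
    return mu, sigma

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.3])
    print("Oja direction:", oja_learning(X))
    print("Natural-gradient Gaussian fit:", natural_gradient_gaussian(rng.normal(2.0, 1.5, 1000)))
```

In the Gaussian example the Fisher preconditioning makes the update invariant to how the parameters are scaled, which is the same mechanism by which natural gradient learning avoids the slow plateaus that plain gradient descent exhibits near singularities of the parameter space.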

Marion Lange

Stuttgart University / TU Berlin, Germany

Nihat Ay

Max Planck Institute for Mathematics in the Sciences (Leipzig), Germany

Marc Toussaint

Stuttgart University, Germany