Talk

Shaping concepts in the dynamics of recurrent neural networks

  • Herbert Jäger (Jacobs University Bremen)
A3 01 (Sophus-Lie room)

Abstract

Humans can process concepts. They can learn, store and retrieve discrete memory items, connect them by logical operations, classify sensory input into categories, attach symbolic labels to represented items, and carry out many more fascinating "high-level" information-processing operations. Humans do this with their brains, and these brains are dynamical systems of supreme complexity: nonlinear, high-dimensional, stochastic, multiscale and adaptive, all at once. For decades this has fuelled a scientific quest to understand how neurodynamical systems can support conceptual information processing. This question has been approached from many angles, using a wealth of methods and levels of description and analysis. In my talk I will outline yet another approach to modelling how conceptual information processing can arise in the dynamics of recurrent neural networks. The core of this approach is to employ certain linear operators, called conceptors, which constrain the evolving dynamics of a recurrent neural network. These operators can be identified with "concepts" represented in the ongoing neural dynamics. Conceptors can be morphed and combined with Boolean operations. This endows recurrent neural networks with mechanisms to store and retrieve, generate, logically combine, morph, abstract and focus dynamical patterns. I will give an intuitive introduction to the formal theory of conceptor dynamics and present a set of exemplary simulation studies.
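As a rough illustration of the idea sketched in the abstract, the following is a minimal numerical sketch, assuming the conceptor definition C = R (R + α⁻² I)⁻¹ from Jaeger's published work on conceptors (R being the correlation matrix of collected reservoir states, α the aperture) together with the associated Boolean operations; the exact formulation used in the talk may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 100, 500  # reservoir size, number of collected state vectors (illustrative values)

def compute_conceptor(X, aperture=10.0):
    """Conceptor of a state collection X with shape (N, L): C = R (R + aperture^-2 I)^-1."""
    R = X @ X.T / X.shape[1]                              # state correlation matrix
    return R @ np.linalg.inv(R + aperture**-2 * np.eye(R.shape[0]))

# Boolean operations on conceptors (soft projection-like matrices).
def NOT(C):
    return np.eye(C.shape[0]) - C

def AND(C1, C2, eps=1e-10):
    # (C1^-1 + C2^-1 - I)^-1, with a small regularizer so near-singular conceptors stay invertible.
    I = np.eye(C1.shape[0])
    return np.linalg.inv(np.linalg.inv(C1 + eps * I) + np.linalg.inv(C2 + eps * I) - I)

def OR(C1, C2):
    return NOT(AND(NOT(C1), NOT(C2)))

# Two synthetic "pattern-driven" state clouds standing in for reservoir responses to two patterns.
X1 = rng.standard_normal((N, L)) * np.linspace(1.0, 0.01, N)[:, None]
X2 = rng.standard_normal((N, L)) * np.linspace(0.01, 1.0, N)[:, None]
C1, C2 = compute_conceptor(X1), compute_conceptor(X2)

# Constraining the dynamics: the conceptor is inserted into the state update,
# x <- C tanh(W x + b), filtering the evolving state into its "concept" subspace.
W = rng.standard_normal((N, N)) / np.sqrt(N)
b = 0.2 * rng.standard_normal(N)
x = rng.standard_normal(N)
for _ in range(100):
    x = C1 @ np.tanh(W @ x + b)
```

Replacing `C1` in the update loop with `OR(C1, C2)` or `AND(C1, C2)` would constrain the dynamics to the union or intersection of the two represented subspaces, which is the sense in which conceptors can be logically combined.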

Contact: Katharina Matschke, MPI for Mathematics in the Sciences