Recurrent Neural Networks - Information Processing and Self-Organization at the Edge of Chaos

  • Oliver Obst (CSIRO, Sydney, Australia)
A3 02 (Seminar room)


We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer operates close to the boundary between stable and unstable dynamics, the so-called edge of chaos. The reasons for this maximized performance, however, are not completely understood. We adopt an information-theoretic framework and, for the first time, quantify information processing between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer peak close to this phase transition, which explains why guiding the recurrent layer towards the edge of chaos is computationally useful. As a consequence, our work suggests self-organized ways of improving performance in recurrent neural networks, driven by both the input data and the learning goal. This contrasts with other self-organized approaches to adapting the recurrent layer, such as intrinsic plasticity, which do not take the learning goal into account.
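To make the setting concrete, the sketch below (an illustration of the general setup, not the speaker's code) builds a randomly connected recurrent layer of the kind described in the abstract. The function names, sizes, and the tanh update rule are assumptions for illustration; the key idea shown is that rescaling the recurrent weight matrix to a chosen spectral radius places the network closer to or further from the stability boundary, with a spectral radius near 1 commonly taken as the edge of chaos for such networks.

```python
import numpy as np

def make_reservoir(n=100, spectral_radius=0.95, seed=0):
    """Random recurrent weight matrix, rescaled to a target spectral radius.

    A spectral radius below 1 gives stable (fading-memory) dynamics; above 1,
    dynamics tend toward chaos. Values just under 1 sit near the edge of chaos.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n))
    # Rescale so the largest eigenvalue magnitude equals spectral_radius.
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def run_reservoir(W, inputs, input_scale=0.1, seed=1):
    """Drive the recurrent layer with a 1-D input signal, collecting states."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    w_in = rng.normal(scale=input_scale, size=n)  # random input weights
    x = np.zeros(n)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)  # standard tanh recurrent update
        states.append(x.copy())
    return np.array(states)
```

Sweeping `spectral_radius` across 1.0 while measuring information transfer and storage between units in the collected states is, in spirit, the kind of experiment the abstract describes.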