Information Flows in Causal Networks
- Nihat Ay (Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany)
Abstract
Mathematical information theory provides an important framework for understanding cognitive processes. It has been successfully applied to neural systems with feedforward structures. The analysis of recurrent structures turns out to be more subtle, mainly because the corresponding information-theoretic quantities admit both causal and associational interpretations, and these readings can diverge. In order to understand information flows in recurrent networks, one therefore has to distinguish clearly between causal and associational effects. In collaboration with Daniel Polani, I have addressed this problem using the causality theory developed by Judea Pearl and his coworkers. I will discuss some possible applications of this work to complexity theory.
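As a rough illustration of the distinction at issue (a sketch in Pearl's notation, not the exact definitions used in the talk), compare the associational mutual information, which is built from observed conditionals, with a causal analogue in which the observational conditional p(y|x) is replaced by the interventional distribution p(y|do(x)):

  I(X;Y) \;=\; \sum_{x,y} p(x)\, p(y\mid x)\, \log \frac{p(y\mid x)}{\sum_{x'} p(x')\, p(y\mid x')},

  I(X\to Y) \;=\; \sum_{x,y} p(x)\, p(y\mid \mathrm{do}(x))\, \log \frac{p(y\mid \mathrm{do}(x))}{\sum_{x'} p(x')\, p(y\mid \mathrm{do}(x'))}.

When X shares no common causes with Y (for instance, when X is an exogenous input to a feedforward structure), p(y|do(x)) = p(y|x) and the two quantities coincide; in recurrent networks with feedback they can differ, which is why the causal reading has to be made explicit.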