Integrated Information Theory (IIT) and its variants have been proposed as theories of consciousness that link information integration with phenomenological properties of consciousness. A key ingredient of the theory is the complexity measure "Phi", loosely associated with "levels" of consciousness. However, due to its abstract and non-specific formulation, it has been challenging to compute Phi for realistic biological or robotic systems. In this talk, we discuss a modified complexity measure that can be analytically computed and evaluated for large networks. We then discuss how measures such as Phi might be part of a larger class of complexity measures necessary to understand cognitive systems and their internal representations. Next, we move to conceptual structures, which have been introduced in recent versions of IIT. These suggest yet another direction for extending IIT so as to capture the semantics of mental representations. How the brain generates meaning and makes sense of the world is crucial for any theory of cognition and consciousness. To formalize semantics, we generalize the notion of meaning used in natural language via a functorial mapping between syntax and semantics in concept space. As an application, we show how this formal construction can be used for artificial perception systems.
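As a rough orientation (this is a schematic form of the classical measure, not necessarily the modified measure discussed in the talk), Phi is often presented as the effective information a system generates over and above its minimum information partition:
\[
\Phi(S) \;=\; \operatorname{EI}\bigl(S \to \mathrm{MIP}(S)\bigr),
\qquad
\mathrm{MIP}(S) \;=\; \arg\min_{P}\,\frac{\operatorname{EI}(S \to P)}{N_P},
\]
where $\operatorname{EI}(S \to P)$ measures the information lost when the system $S$ is cut along a partition $P$, and $N_P$ is a normalization factor depending on the partition sizes. The combinatorial minimization over all partitions $P$ is the main obstacle to computing Phi for large networks.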
First, I will give a quick presentation of some notions of the Calculus of Variations in $L^{\infty}$ for scalar-valued functions: the Aronsson equation and absolute minimizers, with the infinity-Laplacian as the main example. Then I will turn to the case of vector-valued maps, where the Aronsson system is incomplete; I will present the full Aronsson system and a recent type of weak solution to this full system.
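For reference, in the scalar model case the functional is $u \mapsto \operatorname{ess\,sup}\,|Du|$, and the associated Aronsson equation is the infinity-Laplace equation
\[
\Delta_{\infty} u \;:=\; \sum_{i,j=1}^{n} u_{x_i}\, u_{x_j}\, u_{x_i x_j} \;=\; 0,
\]
a degenerate elliptic, non-divergence-form PDE whose solutions (in the viscosity sense) characterize absolute minimizers of the $L^{\infty}$ functional.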