Conceptual and Mathematical Foundations of Embodied Intelligence

Abstracts of the talks

Nihat Ay
Max Planck Institute for Mathematics in the Sciences, Germany
On the role of mathematics within the field of embodied intelligence

I will review prime concepts of the field of embodied intelligence and their far-reaching implications for our understanding of intelligence. In order to incorporate these concepts within a unifying mathematical structure, I will propose a formal model of the sensori-motor loop. I will demonstrate its generality by addressing various important subjects of the field in a mathematically rigorous way and thereby outline corresponding research directions.

Randall D. Beer
Indiana University, USA
Information and dynamics in brain-body-environment systems

Within cognitive science, there has been considerable debate about the relative merits of information-processing vs. dynamical approaches to understanding cognitive processes. This talk will adopt the position that the mathematical theories underlying these two approaches - information theory and dynamical systems theory, respectively - are best viewed as distinct mathematical lenses through which one can examine the operation of any system of interest. Thus, the concern should not be which approach to cognition is "right", but rather the different sorts of explanations that each lens reveals and the interrelationships between these explanations when both lenses are applied to the same cognitive system.

In order to explore these issues, I will describe the analysis of a model agent evolved to solve a relational categorization task. In this task, an agent presented with two falling objects of different sizes in sequence must catch the second object if it is smaller than the first and avoid it otherwise. Interestingly, both largely disembodied and strongly embodied strategies evolve to make this relational judgement. After a brief review of separate dynamical and information-theoretic analyses of the operation of these agents, I will focus on examining the ways in which these two explanations connect. This talk describes joint work with Paul Williams.

Paul Bourgine
École Polytechnique, France
Paradigms and models of embodied intelligence

The first part of the talk is about the paradigm of embodied intelligence. Starting from the main paradigms of cognition, a unified paradigm of cognition will be proposed using insights from Jean Piaget, Charles Sanders Peirce and Francisco Varela. This unified paradigm fits the notion of embodied intelligence very well. In particular, the criterion of success for all kinds of embodied intelligence is viability. Then, important known qualitative facts about the neural network and its relation to the body will be used to propose different paradigmatic levels of embodied intelligence.

The second part of the talk thus proposes three mathematical models of embodied intelligence: the reactive, the emotional and the predictive embodied agent. The first model concerns a purely reactive embodied agent constrained to remain in its viability domain under perturbations of the environment: it can be modelled as a random dynamical system in a constrained domain. The second model enlarges the first with an emotional embodied agent able to use reinforcement learning to select its strategies without any model of its own sensorimotor dynamics: it can be modelled as a random dynamical inclusion in a viability tube, in the sense of viability theory. The third model enlarges this further with a predictive embodied agent able to construct models of its own sensorimotor dynamics together with their uncertainties, and to handle the exploration/exploitation compromise when inventing new strategies in difficult and noisy environments.
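The first of these levels can be sketched in a few lines. The following is a minimal, hypothetical illustration (toy dynamics of our own, not a model from the talk): a scalar agent state driven by random environmental perturbations, with a fixed reactive rule that steers back towards the interior of a viability domain.

```python
import random

# Hypothetical illustration of the first (reactive) level: a scalar agent
# state under random perturbations, with a fixed reactive rule keeping it
# inside the viability domain [LOW, HIGH]. All dynamics are invented here.

LOW, HIGH = 0.0, 10.0  # viability domain: the agent is not viable outside it

def reactive_control(x):
    # purely reactive rule: steer towards the centre; no model, no memory
    return 0.5 * ((LOW + HIGH) / 2.0 - x)

def step(x, rng):
    perturbation = rng.uniform(-1.0, 1.0)  # random environment
    return x + reactive_control(x) + perturbation

def viable_for(steps, x0=5.0, seed=0):
    """Number of steps the agent survives inside its viability domain."""
    rng = random.Random(seed)
    x = x0
    for t in range(steps):
        x = step(x, rng)
        if not (LOW <= x <= HIGH):
            return t  # left the viability domain at time t
    return steps
```

With this gain, the update x' = 0.5 x + 2.5 + p maps [0, 10] into [1.5, 8.5] for any |p| ≤ 1, so the reactive rule keeps the agent viable indefinitely under these perturbations.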

The conclusion summarizes these three levels of embodied intelligence: reactive, emotional and predictive. It furthermore looks at the link between embodied intelligence and collective intelligence at each of these levels.

Paradigm of Embodied Intelligence

  • Cheryl J. Misak, editor. The Cambridge companion to Peirce. Cambridge University Press, Cambridge, U.K., 2004.
  • Jean Piaget. Genetic epistemology. Norton, New York, 1971.
  • Francisco J. Varela, Evan Thompson, and Eleanor Rosch. The embodied mind: cognitive science and human experience. MIT Press, Cambridge, Mass., 1991.

Models of Embodied Intelligence

  • Teuvo Kohonen and Timo Honkela. Kohonen network. Scholarpedia journal, 2(1):1568, 2007.
  • Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. Dissertation, King's College, 1989.
  • Shun'ichi Amari and Hiroshi Nagaoka. Methods of information geometry, volume 191 of Translations of mathematical monographs. American Mathematical Society, Providence, RI, 2000.
  • Jean-Pierre Aubin. Viability theory. Systems & control: foundations & applications. Birkhäuser, Boston, 1991.
  • Boris Hasselblatt, editor. Handbook of dynamical systems. Elsevier, Amsterdam, 2002.

Karl Friston
University College London, United Kingdom
Embodied inference and free energy

How much about our interactions with - and experience of - our world can be deduced from basic principles? This talk reviews recent attempts to understand the self-organised behaviour of embodied agents, like ourselves, as satisfying basic imperatives for sustained exchanges with the environment. In brief, one simple driving force appears to explain many aspects of perception, action and the perception of action.
This driving force is the minimisation of surprise or prediction error that - in the context of perception - corresponds to Bayes-optimal predictive coding (that suppresses exteroceptive prediction errors) and - in the context of action - reduces to classical motor reflexes (that suppress proprioceptive prediction errors). In what follows, we look at some of the phenomena that emerge from this single principle; such as the perceptual encoding of spatial trajectories that can both generate movement (of self) and recognise the movements (of others). These emergent behaviours rest upon prior beliefs about itinerant states of the world - but where do these beliefs come from?

We will focus on recent proposals about the nature of prior beliefs and how they underwrite the active sampling of a spatially extended sensorium. Put simply, to minimise surprising states of the world, it is necessary to sample inputs that minimise uncertainty about the causes of sensory input. When this minimisation is implemented via prior beliefs about how we sample the world, the resulting behaviour is remarkably reminiscent of the searches seen in exploration, or measured in visual search with saccadic eye movements.
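As a hedged illustration of this single driving force (a toy model of our own, not Friston's formulation), both perception and action can be written as gradient descent on one and the same squared prediction error:

```python
# Toy sketch: perception updates the belief mu towards the sensed input,
# while action changes the sensed input towards the belief -- both suppress
# the same prediction error. All quantities and step sizes are invented
# purely for illustration.

def simulate(hidden_cause=3.0, mu=0.0, action=0.0, lr=0.1, steps=200):
    """Agent predicting its sensory input, and acting to fulfil the prediction."""
    for _ in range(steps):
        sensed = hidden_cause + action   # acting changes what is sensed
        error = sensed - mu              # prediction error (prediction = mu)
        mu += lr * error                 # perception: revise the belief
        action -= lr * error             # action: suppress the same error
    return mu, hidden_cause + action

belief, sensation = simulate()
# after enough steps, belief and sensation agree: the error is quenched
```

The error shrinks geometrically (here by a factor 1 - 2·lr per step), so belief and sensation converge to agreement without either ever "knowing" the hidden cause.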

Keyan Ghazi-Zahedi
Max Planck Institute for Mathematics in the Sciences, Germany
Quantifying morphological computation

The field of embodied intelligence emphasises the importance of the morphology and environment with respect to the behaviour of a cognitive system. The contribution of the morphology to the behaviour, commonly known as morphological computation, is well-recognised in this community. We believe that the field would benefit from a formalisation of this concept as we would like to ask how much the morphology and the environment contribute to an embodied agent's behaviour, or how an embodied agent can maximise the exploitation of its morphology within its environment.

In my talk, I will present first steps towards a quantification of morphological computation in the context of embodied intelligence. I will propose various corresponding measures which are validated in simple experiments, thereby identifying their individual strengths and weaknesses. This is joint work with Nihat Ay.
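As a hedged illustration of what such a quantification could look like (an assumption on our part, not necessarily one of the measures proposed in the talk), one natural candidate is the conditional mutual information I(W'; W | A) between consecutive world states given the action: it is zero when the action alone determines the outcome, and grows with the contribution of body and environment.

```python
from collections import Counter
from math import log2

# Hedged candidate measure (our assumption): I(W'; W | A), the conditional
# mutual information between the next world state W' and the current world
# state W, given the action A.

def conditional_mi(triples):
    """Plug-in estimate of I(W'; W | A) in bits from (w_next, w, a) samples."""
    n = len(triples)
    c_full = Counter(triples)
    c_wa = Counter((w, a) for _, w, a in triples)
    c_wna = Counter((wn, a) for wn, _, a in triples)
    c_a = Counter(a for _, _, a in triples)
    mi = 0.0
    for (wn, w, a), c in c_full.items():
        # p(wn,w,a) * log [ p(wn,w,a) p(a) / (p(w,a) p(wn,a)) ], via raw counts
        mi += (c / n) * log2(c * c_a[a] / (c_wa[(w, a)] * c_wna[(wn, a)]))
    return mi

# e.g. wn = w XOR a: the outcome depends maximally on the world state (1 bit),
# whereas wn = a: the action alone determines the outcome (0 bits).
```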

David Krakauer
University of Wisconsin-Madison, USA
Agents and their artefacts: a natural history of ex-bodiment

The evolution of cognition is often presented as a series of anatomical and inferential adaptations within a lineage: vision, motor control, etc. Here I shall review how the environment can be used to outsource mental representations and to simplify problems of inference. This "exbodied cognition" is widespread in nature, but reaches an extreme form in our own species, without which we would have evolved no material culture. To speak of human cognition without written symbols, and increasingly, mechanical mental buttresses such as calculating devices, is to ignore the most prominent feature of H. sapiens' evolutionary and cultural history.

Georg Martius
Max Planck Institute for Mathematics in the Sciences, Germany
Information driven self-organization of complex robotic behaviors

Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development.

Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate for robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics are derived analytically, and in this way we link information theory and dynamical systems.

Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate restricts the search space automatically to the physically relevant dimensions. Its effectiveness will be presented in various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.
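As a rough illustration of the kind of quantity being maximised (a crude Gaussian stand-in, not the TiPI derivation from the talk), the one-step predictive information of a scalar sensor series can be estimated from its lag-1 autocorrelation:

```python
import math
import random

# Crude stand-in: for a scalar, roughly Gaussian sensor series, the one-step
# predictive information I(s_t; s_{t+1}) is approximately -0.5*log2(1 - r**2),
# where r is the lag-1 autocorrelation. Not the time-local TiPI of the talk.

def predictive_information(series):
    """Gaussian estimate of one-step predictive information, in bits."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    r = cov / math.sqrt(vx * vy)
    return -0.5 * math.log2(1.0 - r * r)

# white noise carries almost no predictive information; a smooth, slowly
# varying sensor signal carries a lot -- exploration dynamics maximising
# this quantity therefore favour coordinated, predictable behaviour.
rng = random.Random(0)
noisy = [rng.random() for _ in range(2000)]
smooth = [math.sin(0.05 * i) for i in range(2000)]
```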

Thomas Metzinger
Johannes Gutenberg-Universität Mainz, Germany
Body-representation and self-consciousness: from embodiment to minimal phenomenal selfhood

As a philosopher, I am interested in the relationship between body representation and the deep structure of self-consciousness. How, precisely, does one describe the grounding relations holding between different levels of embodiment? In analogy to the "symbol grounding problem" one might also call this the "self grounding problem", the problem of describing the principles as well as the mechanics by which a system's phenomenal self-model (PSM; cf. Metzinger 2003; 2007a) is anchored in low-level physical dynamics. My specific epistemic goal in this lecture will be the simplest form of phenomenal self-consciousness: What exactly are the essential non-conceptual, pre-reflexive layers in conscious self-representation? What constitutes a minimal phenomenal self? Conceptually, I will defend the claim that agency is not part of the metaphysically necessary supervenience-basis for bodily self-consciousness. Empirically, I will draw on recent research focusing on out-of-body experiences (OBEs) and full-body illusions (FBIs). I will then proceed to sketch a new research program and advertise a new research target: "Minimal Phenomenal Selfhood", ending with an informal argument for the thesis that agency or "global control", phenomenologically as well as functionally, is not a necessary condition for self-consciousness.

  • Thomas Metzinger. Being no one: the self-model theory of subjectivity. A Bradford book. MIT Press, Cambridge, Mass., 2003.
  • Thomas Metzinger. The ego tunnel: the science of the mind and the myth of the self. Basic Books, New York, 2009.
  • Thomas Metzinger. Self Models. Scholarpedia journal, 2(10):4174, 2007.
  • Bigna Lenggenhager, Tej Tadi, Thomas Metzinger, and Olaf Blanke. Video ergo sum: manipulating bodily self-consciousness. Science, 317(5841):1096-1099, 2007.
  • Olaf Blanke and Thomas Metzinger. Full-body illusions and minimal phenomenal selfhood. Trends in cognitive sciences, 13(1):7-13, 2008.
  • Thomas Metzinger. Empirische Perspektiven aus Sicht der Selbstmodell-Theorie der Subjektivität: eine Kurzdarstellung mit Beispielen. Technical report, 2012.

Kevin O'Regan
Centre National de la Recherche Scientifique, France
(joint work with Alban Laflaquière, Alexander Terekhov)
A theoretical basis for how artificial or biological agents can construct the basic notion of space

The brain sitting inside its bony cavity sends and receives myriads of sensory inputs and outputs. A problem that must be solved either in ontogeny or phylogeny is to extract the particular characteristics within this "blooming buzzing confusion" that signal the existence and nature of physical space, with structured objects immersed in it, among them the agent's body. We show how a biological (or artificial) agent with arbitrary sensors can discover the existence of one important aspect of space, namely rigid displacements, without any prior knowledge about the structure of its sensors, its body, or of the world. Following an idea of Henri Poincaré, the method involves examining the compensable relations between the sensorimotor contingencies linking sensory and motor variables. Once acquired, the notion of rigid displacement will allow the agent to manifest apparently spatial knowledge in its behaviours.
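Poincaré's compensability idea can be illustrated with a toy example (an invented setup, not the authors' algorithm): the agent knows nothing about its sensor function, yet it can notice that certain motor commands exactly undo certain environmental shifts, restoring the entire sensory state. The set of such compensating commands reveals the rigid displacements.

```python
import random

# Toy setup: sensors are arbitrary, uninterpreted functions of position,
# unknown to the agent. The agent only compares whole sensory states.

def make_sensor(n_sensors=5, seed=0):
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_sensors)]  # unknown to the agent
    def sense(position):
        # arbitrary, uninterpreted sensor readings at a given position
        return tuple((w * position) % 1.0 for w in weights)
    return sense

sense = make_sensor()
baseline = sense(0.0)

def compensates(env_shift, motor_command):
    """Does the motor command restore the full sensory state after the shift?"""
    return sense(env_shift + motor_command) == baseline

# From sensory identity alone, the agent discovers that the command -0.3
# compensates an environmental shift of 0.3: it has found a rigid displacement
# without any prior knowledge of its sensors, its body, or the world.
```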

Frank Pasemann
Universität Osnabrück, Germany
Neurodynamics in the sensorimotor loop

Against the background of evolutionary robotics techniques, certain aspects of neurodynamics in the sensorimotor loop are presented that are related to the observed behaviour of autonomous robots. Referring to standard discrete-time neural networks, we will discuss, among other topics, the role of bifurcation theory and feedback control, dynamical equivalence and structural stability of neurodynamical systems, attractor morphing, behaviour representation by basins of attraction, and the possible role of chaos.

Daniel Polani
University of Hertfordshire, United Kingdom
The role of information in formation of cognitive organization

Understanding the emergence of complex cognitive architectures during evolution, morphogenesis (evolution's "little sister") and lifetime adaptation faces a dilemma: our understanding of the local rules that lead to the success of various schemes is growing quickly, yet the big picture of how and why these local processes work together seamlessly is still very murky.

In recent years, Shannon's information theory has demonstrated great value in characterizing constraints and bounds on decision making and cognitive tasks. This is partly due to the fact that decision making is subject to informational limitations which can be quantitatively characterized. It also relies on a high-level hypothesis that, in a quasi-equilibrium, and all other constraints kept equal, evolution tends to optimize an organism's cognitive processing in a suitable way.

Various incarnations of this generic idea are being studied, such as "information parsimony", "predictive information maximization", "empowerment maximization", or "compression progress" (the latter in a Kolmogorov setting). The talk will discuss a selection of such principles as to how they can provide a "top-down"-like understanding of the emergence of cognitive organizations which is not tied to a bottom-up understanding of the mechanisms which would implement them.

Helge Ritter
Universität Bielefeld, Germany
Manual intelligence and embodiment

Our hands belong to our most remarkable interfaces: they are the basis for getting "into touch with the world" and mastering the developmental steps that range from initial contact to skillful physical control of a huge variety of objects. This involves a rich shaping of physical contacts to enable active sensing, haptic recognition, purposeful manipulation, and ultimately emotional expression through gesture and physical contact. Most of these skills are very strongly shaped by the embodiment of our hand-arm-eye system and the pervading role of touch sensing for the control of physical contact patterns.

A deeper understanding of the "manual intelligence" manifested in these capabilities is likely to require the integration of insights from many disciplines. The present talk will provide a perspective biased towards robotics and approach manual intelligence from the side of its technical synthesis: what does it take to replicate parts of manual intelligence on anthropomorphic robot hands, what insights can we gain from such attempts, and what are useful cross-connections with disciplines that range from physics and mathematics to brain science? Embodiment will be an overarching aspect, and we will take the view that manual intelligence is the exploitation of the interactional possibilities that arise from the encounter of two different embodiments: that of the hand-arm system, and that of the to-be-handled part of our environment. We will present a range of examples starting from simplified rigid-body situations and leading up to manual skills involving the control of even non-rigid objects, such as the folding of paper. We will conclude with some challenges for future research.

Jürgen Schmidhuber
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, Switzerland, and Technische Universität München, Germany
Optimal AI - neural network ReNNaissance - theory of fun

Our fast, deep/recurrent neural networks have many biologically plausible, non-linear processing stages. They won eight recent international pattern recognition competitions, and set records in many vision benchmarks, contributing to the ongoing second Neural Network ReNNaissance. We are starting to use them in active, unsupervised, curious, creative systems of a type we introduced in 1990. They learn to sequentially shift attention towards informative inputs, not only solving externally posed tasks, but also their own self-generated tasks designed to improve their understanding of the world according to our Formal Theory of Fun and Creativity, which requires two interacting modules:

  1. an adaptive (possibly neural) predictor or compressor or model of the growing data history as the agent is interacting with its environment, and
  2. a (possibly neural) reinforcement learner.

The learning progress of (1.) is the FUN or intrinsic reward of (2.). That is, (2.) is motivated to invent skills leading to interesting or surprising novel patterns that (1.) does not yet know but can easily learn (until they become boring). We discuss how this simple principle explains science & art & music & humor. Time permitting, I'll also briefly discuss the recent theoretically optimal universal problem solvers pioneered in our lab, such as Gödel machines and the asymptotically fastest algorithm for all well-defined problems.
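The interplay of the two modules can be sketched in a few lines (a toy running-average predictor of our own devising, not one of the systems from the talk):

```python
# Module (1) predicts a number stream; module (2) receives as intrinsic
# reward the *progress* of (1): the drop in prediction error caused by
# learning. A learnable pattern yields positive reward ("fun") until it
# becomes boring, i.e. fully predicted.

class Predictor:
    def __init__(self, lr=0.2):
        self.estimate, self.lr = 0.0, lr
    def error(self, x):
        return (x - self.estimate) ** 2
    def learn(self, x):
        self.estimate += self.lr * (x - self.estimate)

def intrinsic_rewards(stream):
    model = Predictor()
    rewards = []
    for x in stream:
        before = model.error(x)
        model.learn(x)
        after = model.error(x)
        rewards.append(before - after)  # learning progress = intrinsic reward
    return rewards

# a novel but learnable pattern: large fun at first, boredom later
rewards = intrinsic_rewards([5.0] * 20)
```

An already-predicted stream yields no reward, and pure noise yields no lasting progress either; only patterns that are not yet known but learnable are "interesting" under this scheme.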


Naftali Tishby
The Hebrew University, Israel
Information flow in sensation & action and the emergence of [reverse] hierarchies

Starting from Large Deviation Theory (Sanov's theorem), we can obtain the connection between the reward rate and the control and sensing information capacities for systems in "metabolic information equilibrium" with stationary stochastic environments (Tishby & Polani, 2010). This result can be considered an equilibrium characterisation of systems that have achieved a certain value through interactions with the environment but do no new learning (e.g. "stupid" cleaning robots). The effect of learning can be considered by revisiting the sub-extensivity of predictive information in stationary environments (Bialek, Nemenman & Tishby 2002) and combining it with the requirement of computational tractability of planning. We argue that planning is possible if the information flow terms remain proportional to the reward terms on the one hand, but stay bounded by the sub-extensive predictive information on the other.

I will discuss the possible implications of this new computational principle to the emergence of hierarchical representations and discounting of rewards in our generalised Bellman equation.
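One common way to formalise such a proportionality between information flow and reward (a hedged sketch in the spirit of the cited Tishby & Polani framework, not necessarily the exact generalised Bellman equation of the talk) is to charge the policy for the information its actions carry about the states:

```latex
% Trade-off objective: maximise expected reward minus an information cost,
% with the inverse temperature \beta setting the price of information.
\max_{\pi}\;\; \mathbb{E}\!\left[\textstyle\sum_t r(s_t, a_t)\right]
  \;-\; \frac{1}{\beta}\, I(S; A)

% A Bellman-like recursion for the resulting free-energy-like value F:
F(s) \;=\; \max_{\pi(\cdot\mid s)} \sum_a \pi(a\mid s)
  \left[\, r(s,a) \;-\; \frac{1}{\beta} \log \frac{\pi(a\mid s)}{p(a)}
  \;+\; \gamma \sum_{s'} p(s'\mid s,a)\, F(s') \right]
```

Here the log-ratio term is the per-step information cost of the policy; keeping it proportional to the reward term is what makes planning tractable in the sense described above.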


Date and Location

February 27 - March 01, 2013
Max Planck Institute for Mathematics in the Sciences
Inselstraße 22
04103 Leipzig

Scientific Organizers

Nihat Ay
Max Planck Institute for Mathematics in the Sciences

Ralf Der
Max Planck Institute for Mathematics in the Sciences

Keyan Ghazi-Zahedi
Max Planck Institute for Mathematics in the Sciences

Georg Martius
Max Planck Institute for Mathematics in the Sciences

Administrative Contact

Antje Vandenberg
Max Planck Institute for Mathematics in the Sciences
