In the last few decades, an overwhelming number of case studies has produced evidence that the intelligent behavior of naturally evolved agents efficiently exploits the embodiment as part of the underlying control process. Nowadays, there is no question that the exploration and exploitation of the embodiment represent important mechanisms of cognition. The shift from the classical view to the modern embodied view, also referred to as the cognitive turn, not only framed a novel way of thinking about intelligence but also identified a number of fundamental principles that intelligent systems obey. Well-known examples are the principle of cheap design, morphological computation, and information self-structuring. Although there is general consensus on the intuitive meaning of such principles, the field of embodied intelligence currently lacks a formal theory. We think that the mathematical foundations of the core concepts have to be advanced and unified in order to realize and better understand cognitive systems that exploit their embodiment in an autonomous and completely intrinsic way. Information theory, dynamical systems theory, and information geometry have already turned out to be useful in this regard. However, there is much more, and also much more to do.
In summary, the goal of the workshop is to identify the core concepts of embodied intelligence and to advance their theoretical foundations.
I will review prime concepts of the field of embodied intelligence and their far-reaching implications for our understanding of intelligence. In order to incorporate these concepts within a unifying mathematical structure, I will propose a formal model of the sensori-motor loop. I will demonstrate its generality by addressing various important subjects of the field in a mathematically rigorous way and thereby outline corresponding research directions.
As a philosopher, I am interested in the relationship between body representation and the deep structure of self-consciousness. How, precisely, does one describe the grounding relations holding between different levels of embodiment? In analogy to the "symbol grounding problem" one might also call this the "self grounding problem", the problem of describing the principles as well as the mechanics by which a system's phenomenal self-model (PSM; cf. Metzinger 2003; 2007a) is anchored in low-level physical dynamics. My specific epistemic goal in this lecture will be the simplest form of phenomenal self-consciousness: What exactly are the essential non-conceptual, pre-reflexive layers in conscious self-representation? What constitutes a minimal phenomenal self? Conceptually, I will defend the claim that agency is not part of the metaphysically necessary supervenience-basis for bodily self-consciousness. Empirically, I will draw on recent research focusing on out-of-body experiences (OBEs) and full-body illusions (FBIs). I will then proceed to sketch a new research program and advertise a new research target: "Minimal Phenomenal Selfhood", ending with an informal argument for the thesis that agency or "global control", phenomenologically as well as functionally, is not a necessary condition for self-consciousness.
Thomas Metzinger. Being no one: the self-model theory of subjectivity. A Bradford book. MIT Press, Cambridge, Mass., 2003.
Thomas Metzinger. The ego tunnel: the science of the mind and the myth of the self. Basic Books, New York, 2009.
Thomas Metzinger. Self Models. http://www.scholarpedia.org/article/Self_Models, Scholarpedia journal, 2(10):4174, 2007.
Bigna Lenggenhager, Tej Tadi, Thomas Metzinger, and Olaf Blanke. Video ergo sum: manipulating bodily self-consciousness. Science, 317(5841):1096-1099, 2007.
Olaf Blanke and Thomas Metzinger. Full-body illusions and minimal phenomenal selfhood. Trends in cognitive sciences, 13(1):7-13, 2008.
Thomas Metzinger. Empirische Perspektiven aus Sicht der Selbstmodell-Theorie der Subjektivität: eine Kurzdarstellung mit Beispielen. 2012.
The first part of the talk is about the paradigm of embodied intelligence. Starting from the main paradigms of cognition, a unified paradigm of cognition will be proposed using insights from Jean Piaget, Charles Sanders Peirce and Francisco Varela. This unified paradigm of cognition fits the notion of embodied intelligence very well. In particular, the criterion of success for all kinds of embodied intelligence is viability. Then, important known qualitative facts about the neural network and its relation to the body will be used to propose different paradigmatic levels of embodied intelligence.
The second part of the talk thus proposes three mathematical models of embodied intelligence: the reactive, the emotional and the predictive embodied agent. The first mathematical model concerns a purely reactive embodied agent constrained to remain in its viability domain under perturbations of the environment: it can be modelled as a random dynamical system in a constrained domain. The second mathematical model enlarges the previous one to an emotional embodied agent able to use reinforcement learning for selecting its strategies without any model of its own sensorimotor dynamics: it can be modelled as an aged random dynamical inclusion in a viability tube, in the sense of viability theory. The third mathematical model further enlarges the previous one to a predictive embodied agent able to construct models of its own sensorimotor dynamics with their uncertainties, dealing with the exploration/exploitation compromise to invent new strategies in difficult and noisy environments.
The conclusion summarizes these three levels of embodied intelligence: reactive, emotional and predictive. It furthermore examines the link between embodied intelligence and collective intelligence at these three levels.
Paradigm of Embodied Intelligence
Cheryl J. Misak, editor. The Cambridge companion to Peirce. Cambridge University Press, Cambridge, U.K., 2004.
Jean Piaget. Genetic epistemology. Norton, New York, 1971.
Francisco J. Varela, Evan Thompson, and Eleanor Rosch. The embodied mind: cognitive science and human experience. MIT Press, Cambridge, Mass., 1991.
Models of Embodied Intelligence
Teuvo Kohonen and Timo Honkela. Kohonen network. Scholarpedia journal, 2(1):1568, 2007.
Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. Dissertation, King's College, 1989.
Shun'ichi Amari and Hiroshi Nagaoka. Methods of information geometry, volume 191 of Translations of mathematical monographs. American Mathematical Society, Providence, RI, 2000.
Jean-Pierre Aubin. Viability theory. Systems & control: foundations & applications. Birkhäuser, Boston, 1991.
Boris Hasselblatt, editor. Handbook of dynamical systems. Elsevier, Amsterdam, 2002.
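The reactive level, a random dynamical system confined to a viability domain, can be illustrated with a minimal sketch. All concrete choices below (the one-dimensional dynamics, the feedback gain, the noise level, the domain K = [-1, 1]) are hypothetical illustrations, not taken from the talk:

```python
import random

def reactive_controller(x, gain=0.5):
    """Purely reactive policy: push the state back toward the centre of
    the viability domain, using only the current sensor value x."""
    return -gain * x

def run(T=1000, controlled=True, seed=0):
    """Simulate x_{t+1} = x_t + u_t + noise and count how often the
    state leaves the viability domain K = [-1, 1]."""
    rng = random.Random(seed)
    x, violations = 0.0, 0
    for _ in range(T):
        u = reactive_controller(x) if controlled else 0.0
        x = x + u + rng.gauss(0.0, 0.1)
        if not -1.0 <= x <= 1.0:
            violations += 1
            x = max(-1.0, min(1.0, x))  # re-project; a real agent would 'die'
    return violations
```

Even this toy example makes the point of the first model: under the same noisy dynamics, the uncontrolled trajectory leaves K repeatedly, while simple reactive feedback keeps it inside the viability domain.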
The brain sitting inside its bony cavity sends and receives myriads of sensory inputs and outputs. A problem that must be solved either in ontogeny or phylogeny is to extract the particular characteristics within this "blooming buzzing confusion" that signal the existence and nature of physical space, with structured objects immersed in it, among them the agent's body. We show how a biological (or artificial) agent with arbitrary sensors can discover the existence of one important aspect of space, namely rigid displacements, without any prior knowledge about the structure of its sensors, its body, or of the world. Following an idea of Henri Poincaré, the method involves examining the compensable relations between the sensorimotor contingencies linking sensory and motor variables. Once acquired, the notion of rigid displacement will allow the agent to manifest apparently spatial knowledge in its behaviours.
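Poincaré's compensability idea from the abstract can be sketched concretely: a change of the environment is a rigid displacement exactly when some motor command restores the original sensor values. The toy one-dimensional world, the Gaussian receptor model and the motor search grid below are illustrative assumptions, not the authors' implementation:

```python
import math

def sensor_reading(world, sensor_pos, n_sensors=5, width=0.5):
    """Gaussian-tuned receptors, rigidly attached to the agent, sampling
    point features of a 1-D world."""
    readings = []
    for i in range(n_sensors):
        rx = sensor_pos + i  # receptor locations move with the agent
        readings.append(sum(math.exp(-((f - rx) / width) ** 2) for f in world))
    return readings

def compensable(world, change, motor_range, tol=1e-6):
    """A change of the environment is 'compensable' if some motor command
    restores the original sensor values (Poincare's criterion)."""
    before = sensor_reading(world, 0.0)
    changed = change(world)
    for k in range(-motor_range * 10, motor_range * 10 + 1):
        m = k / 10.0  # candidate motor command on a coarse grid
        after = sensor_reading(changed, m)
        if max(abs(a - b) for a, b in zip(after, before)) < tol:
            return True
    return False

world = [0.3, 1.7, 2.2]
shift = lambda w: [f + 1.0 for f in w]    # rigid displacement of the world
stretch = lambda w: [2.0 * f for f in w]  # non-rigid deformation
```

The rigid shift is compensated by moving the sensor array by the same amount, whereas no rigid move of the sensors can undo the stretch; this asymmetry is what lets a naïve agent single out rigid displacements.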
Embodied intelligence within manual interaction: multimodality, decomposition, recognition
Alexandra Barchunova
Bielefeld University, Germany (joint work with Jan Moringen, Robert Haschke, Helge Ritter)
For humans, manual interaction with surrounding objects and its recognition is an essential cognitive ability, significant for survival. When we observe how others interact with objects, we usually see continuous movements of the fingers, accompanied in some cases by acoustic noise. When we carry out a joint manual action, e.g. moving furniture, we also sense the pressure caused by the force applied by the interaction partners. Nevertheless, we are capable of integrating different sensory modalities, splitting the continuous low-level observations into chunks, and assigning them to semantic categories such as "grasping", "holding", "pouring", "cutting", or "shaking".
Motivated by the latest psychological and neuroscientific findings, in our work we pursue decomposition and recognition of multimodal bi-manual time series on a semantic level. The conceptual basis of our work is inspired by Activity Theory, which presents interaction on three levels of complexity: action primitives, actions and activities. Addressing the hotly debated question of how to identify action primitives, we propose a two-stage approach.
In the first stage, inspired by the findings of Hemeren and Thill (2011), we conduct a decomposition of interaction into action primitives based on the detection of change in multimodal data. To this end, we present the first application of a Bayesian algorithm for multiple change detection, introduced by Fearnhead (2006), to the decomposition of multimodal interaction time series into action primitives. For this purpose we propose an approach that integrates simple stochastic models (autoregressive, constant and threshold models), representing unimodal segments, into a multimodal representation of action primitives. The great advantage of the proposed method is that it needs neither pre-training, nor action-specific template knowledge, nor interaction-specific segmentation heuristics. In the second stage, we conduct supervised and unsupervised learning of the resulting action primitive segments based on ordered means models (Großekathöfer and Lingner, 2004).
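The segmentation stage can be illustrated with a toy sketch. This is not Fearnhead's algorithm (an exact online Bayesian method over segmentations) but a simpler offline dynamic-programming relative with a single Gaussian constant-mean segment model and a made-up penalty, chosen only to show how change detection splits a signal into primitives:

```python
def segment_cost(xs):
    """Negative log-likelihood (up to constants) of one segment under a
    constant-mean, unit-variance Gaussian model."""
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs)

def optimal_partition(xs, penalty=4.0):
    """Exact dynamic-programming segmentation: F[t] is the best cost of
    x[:t], each segment paying its model cost plus a fixed penalty.
    Returns the list of changepoint indices."""
    n = len(xs)
    F = [0.0] + [float("inf")] * n
    last = [0] * (n + 1)
    for t in range(1, n + 1):
        for s in range(t):
            c = F[s] + segment_cost(xs[s:t]) + penalty
            if c < F[t]:
                F[t], last[t] = c, s
    cps, t = [], n
    while t > 0:            # backtrack through the segment starts
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)

# Toy signal with two mean shifts, at indices 10 and 20:
signal = [0.0] * 10 + [3.0] * 10 + [-2.0] * 10
```

The full approach replaces this single Gaussian cost with the abstract's autoregressive, constant and threshold models per modality, combined into a multimodal segment likelihood.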
Within the experimental scenario, the multimodal manual interaction data are represented by the applied force, the audio signal and the kinematic trajectories of both hands recorded during action execution. The sequence includes representative actions such as "grasp", "hold", and "screw". In order to acquire ground truth automatically, we present an alternative to hand labeling of the observations: the audio-cue schedule.
Altogether, with the proposed method we aspire to a generic approach to the recognition of interaction, applicable in a wide range of scenarios and integrating different modalities.
Sensorimotor navigation and the role of embodiment in spatial memory
José R. Donoso
Bernstein Center for Computational Neuroscience (Berlin), Germany
One of the aims of neuroethology is to pinpoint the neural structures and mechanisms implementing a specific function defined in the behavioral domain. Consistent with a cognitivist paradigm, mainstream research in spatial memory is founded on the idea of the acquisition of a cognitive map: a neural substrate that during exploration encodes the topological properties of the environment for subsequent retrieval. Such a representation requires a process that allows other "modules" within the agent to make use of this information in order to plan current and future behavior. However, a biologically plausible read-out mechanism has remained elusive, making it difficult to connect the behavioral nature of spatial navigation with a plausible neural mechanism within a purely cognitivist framework. Here I show how simple principles of embodiment and sensorimotor associations can provide a bridge between the behavioral and biological levels. By means of a simple embodied-connectionist model, I illustrate how these concepts can account for the relatively complex behavior involved in a navigational task. I discuss the limitations of the model and possible extensions that could provide insights into the neural mechanisms underlying spatial memory in the light of up-to-date experimental findings in rodents.
Some properties at the root of embodied intelligence
Andrée Ehresmann
Université de Picardie Jules Verne (Amiens), France
What are the properties enabling a cognitive system to develop embodied intelligence? The problem is studied using the theory of Memory Evolutive Systems (Ehresmann & Vanbremeersch, 2007).
MES give a mathematical model, based on Category Theory, for multi-scale systems with a tangled hierarchy of components varying over time; their dynamics are modulated by a network of internal agents with different rhythms and functions, with the help of a flexible long-term 'memory' allowing for learning and adaptation.
A Neuro-Bio System is represented by a MES which takes account of the different levels of the entire organism and of its biological, neural, cognitive and mental processes. This model points out three properties essential for embodied intelligence:
a kind of 'flexible redundancy' (Multiplicity Principle);
Synchronicity Laws to be respected by agents of different levels;
formation of a central "Archetypal Core", which integrates an internal model of the organism and its environment, and acts as a driving force for developing embodied intelligence.
An application is given to construct intelligent cognitive systems, in particular Neuro-Bio-ICT systems where a Neuro-Bio system is coupled with an artificial cognitive system ("Exocortex"), to enhance human capacities by integrating, self-structuring and exploiting multiple sources of information.
Ammon (von) R. Ubiquitous Complex Event Processing (U-CEP). Submission to FET/Flagship 2010.
Edelman, G.M. The Remembered Present; Basic Books: New York, NY, USA, 1989.
Ehresmann, A.C.; von Ammon, R.; Iakovidis, D.K.; Hunter, A. Ubiquitous complex events processing in Exocortex applications and mathematical approaches, 2012.
Ehresmann, A.C.; Vanbremeersch, J.-P. Memory Evolutive Systems: Hierarchy, Emergence, Cognition; Elsevier: Amsterdam, The Netherlands, 2007.
Hagmann, P.; Cammoun, L.; Gigandet, X.; Meuli, R.; Honey, C.J.; Wedeen, Van J.; Sporns, O. Mapping the Structural Core of Human Cerebral Cortex. PLoS Biol. 2008, 6, 1479-1493.
Kan, D.M. Adjoint Functors. Trans. Am. Math. Soc. 1958, 89, 294-329.
Mac Lane, S. Categories for the working mathematician; Springer, 1971.
Creativity and constraint in self-structuring systems
Stefan Leijnen
Radboud University Nijmegen, Netherlands (joint work with Pim Haselager)
Under some definitions, creativity is an intrinsically unformalizable process.
Yet, by aiming for a formal description of creativity, we address exactly those difficult problems that seemingly elude the current computational paradigm: the origins of structure, the nature of cognitive embodiment, and the relation between the signal of information and its object.
Creativity is often associated with freedom, unboundedness and the availability of a wide array of choices. Here, creativity is somewhat paradoxically defined as its apparent opposite: a process aimed at incessantly generating constraints (Leijnen, 2011). In a process of self-limitation through self-organization, semi-stable structures arise, which in turn may affect the very same processes that underlie them (Juarrero, 1999; Gonzalez & Haselager, 2005; Deacon, 2012). In time, a higher-order loop may emerge, in which the system is no longer bounded by these self-limiting processes; rather, these (now creative) processes enable the invention of ever more varied and specialized structures. In this ongoing research project, the steps that build up to this hierarchical logic are analyzed, described, and will ultimately be formalized. Importantly, with respect to the embodiment paradigm, this approach allows informational concepts to arise from physical constraints, and thereby forms an explanation for how cognition may come about, rather than assuming an already in-place structure.
Deacon, T.W. (2012). Incomplete Nature: How Mind Emerged from Matter. New York: W. W. Norton and Company.
Gonzalez, M.E.Q. & Haselager, W.F.G. (2005). Creativity: Surprise and abductive reasoning. Semiotica 153, 1/4, 325-341.
Juarrero, A.J. (1999). Dynamics in Action: Intentional Behavior as a Complex System. Cambridge: MIT Press.
Leijnen, S. (2011). Thinking Outside the Box: Creativity in Self-Programming Systems. Workshop on Self-Programming in AGI Systems, Fourth Conference on Artificial General Intelligence, August 3-7, 2011, Mountain View, CA.
Morphomotion: morphology independent locomotion controller for modular robots
Avinash Ranganath
Universidad Carlos III de Madrid (Leganes), Spain (joint work with Luis Moreno Lorente)
A locomotion gait in an animal, which comes about as a result of repetitive and coordinated movement of limbs/joints, can be seen as a collection of oscillations, with the phase relation between such oscillators determining the emerged gait. Similarly, in a modular robotic organism made up of several independent unit modules with 1 DOF each [1], a variety of locomotion gaits can be achieved by applying simple phase-shifted sinusoidal oscillators to the unit modules [2]. The phase difference between oscillating modules can either be predetermined [3], or the modules can explicitly communicate with each other to converge to an optimal phase relation [4]. Since a modular robot is an embodied system made up of physically connected unit modules, there exist inter-modular (or intra-configuration) forces among the modules of a given modular robotic configuration, which can be seen as implicit communication among modules. Using these forces, modules in a given configuration can converge and settle into a steady phase difference, resulting in a stable locomotion gait.
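A minimal sketch of such an open-loop sinusoidal oscillator controller follows; the amplitude, period and inter-module phase lag are made-up values for illustration (real gaits tune or evolve them per configuration):

```python
import math

def module_angle(t, i, amplitude=40.0, period=2.0, phase_diff=120.0):
    """Joint angle (degrees) commanded to module i at time t: a sinusoid
    whose phase lags by a fixed amount from module to module, so the
    body of the chain carries a travelling wave."""
    omega = 2.0 * math.pi / period
    return amplitude * math.sin(omega * t + math.radians(phase_diff) * i)

def gait_snapshot(t, n_modules=4):
    """Angles of all modules of a chained robot at time t."""
    return [module_angle(t, i) for i in range(n_modules)]
```

With a 120-degree lag, modules three apart oscillate in phase, and the resulting wave shape (and hence the gait) is set entirely by the phase relation, which is the quantity the implicit inter-modular forces stabilise in the abstract above.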
We have developed a distributed, homogeneous, adaptive neural controller for the unit modules, based on implicit inter-modular communication and resulting in stable locomotion gaits [5]. The controller parameters are optimised using a Genetic Algorithm, individually for each of the five distinct modular robotic organisms we have experimented with. The adaptability of the controller can be determined by cross-evaluating the controllers evolved for each organism on the rest of the organisms. Cross-evaluation experiments, in most cases, resulted in stable locomotion gaits closely resembling the organism's original locomotion gait, implying the influence of an organism's morphology on the emerged behaviour.
[1] http://www.iearobotics.com/wiki/index.php?title=M%C3%B3dulos_Y1
[2] Gonzalez-Gomez, J. Modular Robotics and Locomotion: Application to Limbless Robots. PhD thesis, EPS, UAM, Madrid, Spain, November 2008.
[3] Gonzalez-Gomez, J., Boemo, E. Motion of Minimal Configurations of a Modular Robot: Sinusoidal, Lateral Rolling and Lateral Shift. Proc. of the 8th International Conference on Climbing and Walking Robots, CLAWAR, London, September 2005, pp. 667-674.
[4] Shen, W.-M., Salemi, B., Will, P. Hormone-inspired adaptive communication and distributed control for CONRO self-reconfigurable robots. IEEE Transactions on Robotics and Automation, 2002.
[5] A. Ranganath, J. González-Gómez, L. Moreno. Morphology Dependent Distributed Controller for Locomotion in Modular Robots. Proceedings of the Post-Graduate Conference on Robotics and Development of Cognition, Lausanne, Switzerland, September 2012.
Empowerment and state-dependent noise
Christoph Salge
University of Hertfordshire (Hatfield), United Kingdom (joint work with Cornelius Glackin and Daniel Polani)
Empowerment offers a goal-independent utility function based on the embodiment of an agent and the dynamics of the world the agent is situated in.
Recently we demonstrated that empowerment in the continuous domain can be computed significantly faster if the world dynamics are approximated as multiple co-dependent linear Gaussian channels, assuming constant, state-independent Gaussian noise. Modelling the channel as a more generic Gaussian Process, possibly obtained via a Gaussian Process learner, now allows us to determine the actual noise levels for a specific state.
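The linear Gaussian channel picture can be sketched as follows. Treating each sensor dimension as an independent channel, and the specific gains, power constraint and noise variances, are simplifying assumptions for illustration only (the actual method handles co-dependent channels):

```python
import math

def gaussian_channel_capacity(gain, power, noise_var):
    """Capacity (bits) of a linear Gaussian channel s' = gain * a + noise
    under the action power constraint E[a^2] <= power:
    C = 1/2 * log2(1 + gain^2 * power / noise_var)."""
    return 0.5 * math.log2(1.0 + gain ** 2 * power / noise_var)

def empowerment(gains, power, noise_vars):
    """Crude sketch: sum the capacities of per-dimension channels
    (ignoring the co-dependence the real method accounts for)."""
    return sum(gaussian_channel_capacity(g, power, n)
               for g, n in zip(gains, noise_vars))

# Near an unpredictable agent the effective (state-dependent) noise is
# larger, so empowerment drops:
quiet = empowerment([1.0, 0.5], power=1.0, noise_vars=[0.1, 0.1])
crowded = empowerment([1.0, 0.5], power=1.0, noise_vars=[1.0, 1.0])
```

The comparison of `quiet` and `crowded` shows the mechanism discussed next: raising the noise variance in a state lowers the channel capacity there, and hence the empowerment of that state.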
This allows new insights into the relationship to other agents, since the co-inhabitation of a shared environment implies that several agents have an effect on the same environmental parameters. If the actions of another agent cannot be predicted, they become a source of noise, reducing the empowerment. Empowerment maximisation then leads to interesting behaviour, such as avoiding collisions with other agents, since the outcome is highly dependent on the other agent's actions and therefore hard to predict.
Information flow in a quadruped running robot quantified by transfer entropy
Nico Schmidt
University of Zurich, Switzerland (joint work with Matěj Hoffmann, Kohei Nakajima)
Animals and humans engage in an enormous variety of behaviors which are orchestrated through a complex interaction of physical and informational processes. The physical interaction of the bodies with the environment is intimately coupled with informational processes in the animal's brain. A crucial step toward the mastery of all these behaviors seems to be to understand the flows of information in the sensorimotor networks. In this study, we have performed a quantitative analysis in an artificial agent (a running quadruped robot with multiple sensory modalities) using tools from information theory (transfer entropy and its recently proposed decomposition). Starting from no prior knowledge, through systematic variation of control signals and environment, we show how the agent can discover the structure of its sensorimotor space. We propose that the agent could utilize this knowledge to: (i) drive learning of new behaviors; (ii) identify sensors that are sensitive to environmental changes; (iii) discover a primitive body schema.
Visual exploration and predictive information
Henry Schütze
Universität zu Lübeck, Germany
The autonomous exploration of the environment is a crucial behavioral task of autonomous robots.
It has been shown that predictive information (PI) in sensor space is a useful measure for assessing the quality of exploration behavior. In this work we apply PI to the exploration of static visual scenes, i.e. images. We model autonomous visual exploration by a small region of interest (ROI) which repositions itself in a larger image (the ROI defines the sensor, the repositioning the actuator). In contrast to, e.g., a simple two-wheel embodied robot, we did not assume a specific linear coupling between sensors and actuators. We present two simple behavioral rules which generate a sequence of sensor values that, in one case, maximize and, in the other case, minimize the PI in sensor space. However, both strategies lead to (practically) the same visual exploration behavior: on synthetic and natural images, regions containing rare sensory configurations such as edges and corners are visited more frequently than homogeneous regions; a strategy that makes sense since edges and corners are more salient (i.e. they attract human gaze). This result is interesting since, in this scenario, the same behavior is characterized by completely different values of predictive information.
Discovering rigid displacements by a naïve agent
Alexander Terekhov
University of Pierre and Marie Curie (Paris), France (joint work with Kevin O'Regan)
The laws of rigid displacements are the most basic and fundamental characteristics of spatial knowledge. Rigid displacements are implicitly assumed to be known to the agent in the majority of existing algorithms for self-organization and calibration. But consider a naïve agent that stares at "the blooming buzzing confusion" of its sensory inputs and motor outputs and has no idea about the nature of the information its sensations carry: how can such an agent learn the laws of rigid displacements? In the current poster we give a partial answer to this question.
Following Poincaré, we assume that the key aspect of rigid displacements is that they are laws shared between the objects the agent perceives and the agent itself. As a consequence, the agent can always perform an action that nullifies the changes of the sensory inputs caused by a rigid displacement. We simulated an agent performing translational motions in a plane while looking at the starry sky above it. The agent has a retina with a few randomly placed photoreceptors with highly non-local Gaussian tuning curves. The agent can displace the retina without rotations and measure its position with randomly placed proprioceptive neurons, again having non-local Gaussian tuning curves. Without knowing it explicitly, the agent can learn the mappings of proprioception into itself which in reality correspond to rigid displacements. Using these mappings, the agent can pass the most basic tests of spatial knowledge: (1) when shown two different patterns of stars, the agent can say whether they are the same pattern or not; (2) when displaced along two multi-segment paths under different skies, the agent can say whether the end points of the two paths coincide (assuming that the starting points do). We conclude that the laws of rigid displacements can be learned by a naïve agent and that these laws allow the most basic aspects of spatial knowledge to be manifested.
What can Friston's free-energy principle tell us about embodied cognition?
Wanja Wiese
Johannes Gutenberg-Universität (Mainz), Germany
Well-known ways in which Friston's free-energy principle is related to embodied cognition (EC) include the propositions (i) that an agent embodies a model of its environment and of its own body as related to that environment [1]; and (ii) that there is an intimate conceptual connection between action and perception [2].
The poster claims that the principle can contribute to research on EC in a more radical way: since the free-energy principle suggests that information processing in the brain relies essentially on generative models, asking in which sense such models are representational becomes even more relevant for the explanatory force of EC, in particular regarding (i) the disputed need to refer to amodal mental representations, and (ii) the explanatory value of positing representations in general.
In order to establish that a generative model is representational in an interesting sense, it must on the one hand be shown that the content of a generative model can be determined in a way that allows for misrepresentation. Two such ways are provided by structural theories of representational content (as suggested by Andy Clark [3, p. 85]) and refined statistical theories (as proposed by Jakob Hohwy [4, ch. 8]).
On the other hand, it must also be the case that neurally implemented generative models actually fulfill a functional role that renders them representational. The main positive contribution of this poster is to show how the free-energy principle, by positing a hierarchical generative model in the brain, suggests a middle ground between representationalism and anti-representationalism in at least two respects:
Neural populations towards the lower end of the hierarchy fulfill a role that renders them less representational, while populations towards the upper end tend to be representational in a more robust sense.
By emphasizing the role of non-linear coupling between levels of the hierarchy, it is also suggested that any division between tasks that do not require representations and those that do is ultimately untenable.
Friston, K. (2011). Embodied inference: or "I think therefore I am, if I am what I think". In W. Tschacher & C. Bergomi (eds.), The Implications of Embodiment. Cognition and Communication. Imprint Academic.
Friston, K., Daunizeau, J., Kilner, J., & Kiebel, S. (2010). Action and behavior: a free-energy formulation. Biological Cybernetics. doi: 10.1007/s00422-010-0364-z
Clark, A. (forthcoming). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral & Brain Sciences.
Hohwy, J. (forthcoming). The Predictive Mind. Oxford: Oxford University Press.
Our hands belong to our most remarkable interfaces: they are the basis for getting "into touch with the world" and mastering the developmental steps that range from initial contact to skillful physical control of a huge variety of objects. This involves a rich shaping of physical contacts to enable active sensing, haptic recognition, purposeful manipulation, and ultimately emotional expression through gesture and physical contact. Most of these skills are very strongly shaped by the embodiment of our hand-arm-eye system and the pervading role of touch sensing for the control of physical contact patterns.
A deeper understanding of the "manual intelligence" manifested in these capabilities is likely to require the integration of insights from many disciplines. The present talk will provide a perspective biased towards robotics and will approach manual intelligence from the side of its technical synthesis: what does it take to replicate parts of manual intelligence on anthropomorphic robot hands, what insights can we gain from such attempts, and what are useful cross-connections with disciplines ranging from physics and mathematics to brain science? Embodiment will be an overarching aspect, and we will take the view that manual intelligence is the exploitation of the interactional possibilities that arise from the encounter of two different embodiments: that of the hand-arm system and that of the to-be-handled part of our environment. We will present a range of examples, starting from simplified rigid-body situations and leading up to manual skills involving the control even of non-rigid objects, such as the folding of paper. We will conclude with some challenges for future research.
Starting from Large Deviation Theory (Sanov's theorem), we can obtain the connection between the reward rate and the control and sensing information capacities for systems in "metabolic information equilibrium" with stationary stochastic environments (Tishby & Polani, 2010). This result can be considered an equilibrium characterisation of systems that have achieved a certain value through interactions with the environment but exhibit no new learning (e.g. "stupid" cleaning robots). The effect of learning can be considered by revisiting the sub-extensivity of predictive information in stationary environments (Bialek, Nemenman & Tishby, 2002) and combining it with the requirement of computational tractability of planning. We argue that planning is possible if the information-flow terms remain proportional to the reward terms on the one hand, but are still bounded by the sub-extensive predictive information on the other.
I will discuss the possible implications of this new computational principle to the emergence of hierarchical representations and discounting of rewards in our generalised Bellman equation.
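One way to make the flavour of such a generalised Bellman equation concrete is a "soft" backup in which the agent pays, through a temperature 1/beta, for deviating from a uniform prior policy. This is only an illustrative sketch in the spirit of the reward-information trade-off, not the talk's actual formulation; the two-state world and all numbers are hypothetical:

```python
import math

def soft_value_iteration(R, T, beta=200.0, gamma=0.9, iters=300):
    """Free-energy-style Bellman backup on a deterministic toy MDP:
    V(s) = (1/beta) * log( mean_a exp(beta * (R[s][a] + gamma * V[T[s][a]])) ).
    As beta -> infinity this approaches the standard Bellman optimality
    backup; finite beta charges the agent for informative (non-uniform)
    action choices, lowering the achievable value."""
    n = len(R)
    V = [0.0] * n
    for _ in range(iters):
        new_V = []
        for s in range(n):
            qs = [beta * (R[s][a] + gamma * V[T[s][a]]) for a in range(len(R[s]))]
            m = max(qs)  # log-sum-exp with max subtraction for stability
            new_V.append((m + math.log(sum(math.exp(q - m) for q in qs) / len(qs))) / beta)
        V = new_V
    return V

# Two-state toy world: action 0 stays, action 1 switches state.
R = [[0.0, 1.0], [0.5, 0.0]]   # rewards (hypothetical)
T = [[0, 1], [1, 0]]           # deterministic transitions
V = soft_value_iteration(R, T)
```

For these rewards the hard Bellman optimum is V = [5.5, 5.0]; the soft values sit slightly below it, and lowering beta (tightening the information budget) pushes them further down.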
Understanding the emergence of complex cognitive architectures during evolution, morphogenesis (evolution's "little sister") and life-time adaptation is faced with a dilemma: there is a quickly increasing understanding of the local rules that lead to the success of various schemes. However, the big picture of how and why these local processes work together seamlessly is still very murky.
In the last years, Shannon's information theory has demonstrated great value in characterizing constraints and bounds on decision making and cognitive tasks. This is partly due to the fact that decision making is subject to informational limitations which can be quantitatively characterized. It also relies on a high-level hypothesis that, in a quasi-equilibrium, and all other constraints kept equal, evolution tends to optimize an organism's cognitive processing in a suitable way.
Various incarnations of this generic idea are being studied, such as "information parsimony", "predictive information maximization", "empowerment maximization", or "compression progress" (the latter in a Kolmogorov setting). The talk will discuss a selection of such principles as to how they can provide a "top-down"-like understanding of the emergence of cognitive organizations which is not tied to a bottom-up understanding of the mechanisms which would implement them.
The field of embodied intelligence emphasises the importance of the morphology and environment with respect to the behaviour of a cognitive system. The contribution of the morphology to the behaviour, commonly known as morphological computation, is well-recognised in this community. We believe that the field would benefit from a formalisation of this concept as we would like to ask how much the morphology and the environment contribute to an embodied agent's behaviour, or how an embodied agent can maximise the exploitation of its morphology within its environment.
In my talk, I will present first steps towards a quantification of morphological computation in the context of embodied intelligence. I will propose various corresponding measures which are validated in simple experiments, thereby identifying their individual strengths and weaknesses. This is joint work with Nihat Ay.
Zahedi K, Ay N. Quantifying Morphological Computation, Entropy, 2013, 15(5):1887-1915.
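To make the idea of such measures concrete: one family of quantifications in this spirit is based on conditional mutual information, asking how much the next world state W' depends on the current world state W beyond what the action A explains. The following plug-in estimator for discrete samples is our own minimal sketch (function and variable names are illustrative, not the paper's exact definitions):

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(w_next, w, a):
    """Plug-in estimate of I(W'; W | A) in bits from discrete samples.

    High values indicate that the next world state W' depends on the current
    world state W beyond what the action A explains -- one way to quantify
    the morphology's contribution to behaviour."""
    n = len(w)
    p_wwa = Counter(zip(w_next, w, a))   # counts of (w', w, a)
    p_wa = Counter(zip(w, a))            # counts of (w, a)
    p_wna = Counter(zip(w_next, a))      # counts of (w', a)
    p_a = Counter(a)                     # counts of a
    cmi = 0.0
    for (wn, wc, ac), c in p_wwa.items():
        cmi += (c / n) * np.log2((c / n) * (p_a[ac] / n)
                                 / ((p_wa[(wc, ac)] / n) * (p_wna[(wn, ac)] / n)))
    return cmi
```

If the next world state is fully determined by the action, the estimate is near zero; if the world state persists independently of the action, it approaches the entropy of W, signalling a large morphological contribution.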
Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development.
Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate for robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and in this way we link information theory and dynamical systems.
Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses, and current knowledge of the dynamical interaction with the environment can be used directly to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. The effectiveness of the approach will be demonstrated in various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.
Georg Martius, Ralf Der, and Nihat Ay. Information driven self-organization of complex robotic behaviors. Submitted, 2013, MPI MiS preprint 15/2013.
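To make the central quantity concrete, here is a minimal plug-in estimator of the one-step predictive information I(S_t; S_{t+1}) of a discrete sensor sequence. This is our own sketch: the TiPI used in the talk works on a time-local window with continuous states, which this simplification does not capture.

```python
import numpy as np
from collections import Counter

def predictive_information(s):
    """Plug-in estimate (in bits) of the one-step predictive information
    I(S_t; S_{t+1}) of a discrete sensor sequence s."""
    past, future = s[:-1], s[1:]
    n = len(past)
    joint = Counter(zip(past, future))
    p_past, p_future = Counter(past), Counter(future)
    pi = 0.0
    for (x, y), c in joint.items():
        pi += (c / n) * np.log2((c / n) / ((p_past[x] / n) * (p_future[y] / n)))
    return pi
```

A perfectly predictable alternating sequence yields one bit, while an i.i.d. random sequence yields (up to estimator bias) zero; behaviours that are both varied and predictable score highest, which is what makes the quantity a useful exploration objective.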
Against the background of evolutionary robotics techniques, certain aspects of neurodynamics in the sensorimotor loop are presented which are related to the observed behaviour of autonomous robots. Referring to standard discrete-time neural networks, we will discuss, among other things, the role of bifurcation theory and feedback control, dynamical equivalence and structural stability of neurodynamical systems, attractor morphing, behaviour representation by basins of attraction, and the possible role of chaos.
Embodied intelligence within manual interaction: multimodality, decomposition, recognition
Alexandra Barchunova
Bielefeld University, Germany (joint work with Jan Moringen, Robert Haschke, Helge Ritter)

For humans, manual interaction with surrounding objects and its recognition is an essential cognitive ability, significant for survival. When we observe how others interact with objects, we usually see continuous movements of the fingers, accompanied in some cases by acoustic noise. When we carry out a joint manual action, e.g. moving furniture, we also sense the pressure caused by the force applied by our interaction partners. Nevertheless, we are capable of integrating the different sensory modalities, splitting the continuous low-level observations into chunks, and assigning them to semantic categories such as "grasping", "holding", "pouring", "cutting", or "shaking".
Motivated by recent psychological and neuroscientific findings, we pursue decomposition and recognition of multimodal bi-manual time series on a semantic level. The conceptual basis of our work is inspired by Activity Theory, which presents interaction on three levels of complexity: action primitives, actions, and activities. Addressing the much-debated question of how to identify action primitives, we propose a two-stage approach.
In the first stage, inspired by the findings of Hemeren and Thill (2011), we decompose interaction into action primitives based on the detection of change in multimodal data. To this end, we present the first application of a Bayesian algorithm for multiple change-point detection, introduced by Fearnhead (2006), to the decomposition of multimodal interaction time series into action primitives. For this purpose we propose an approach that integrates simple stochastic models (autoregressive, constant, and threshold models), representing unimodal segments, into a multimodal representation of action primitives. The great advantage of the proposed method is that it requires neither pre-training, nor action-specific template knowledge, nor interaction-specific segmentation heuristics. In the second stage, we conduct supervised and unsupervised learning of the resulting action primitive segments based on ordered means models (Großekathöfer and Lingner, 2004).
In the experimental scenario, the multimodal manual interaction data comprises the applied force, the audio signal, and the kinematic trajectories of both hands, recorded during action execution. The sequence includes representative actions such as "grasp", "hold", and "screw". In order to acquire ground truth automatically, we present an alternative to hand-labelling the observations: the audio-cue schedule.
Altogether, with the proposed method we aspire to a generic approach to the recognition of interaction, applicable in a wide range of scenarios and integrating different modalities.

Sensorimotor navigation and the role of embodiment in spatial memory
José R. Donoso
Bernstein Center for Computational Neuroscience (Berlin), Germany

One of the aims of neuroethology is to pinpoint the neural structures and mechanisms implementing a specific function defined in the behavioral domain. Consistent with a cognitivist paradigm, mainstream research in spatial memory is founded on the idea of the acquisition of a cognitive map: a neural substrate that during exploration encodes the topological properties of the environment for subsequent retrieval. Such a representation requires a process that allows other "modules" within the agent to make use of this information in order to plan current and future behavior. However, a biologically plausible read-out mechanism has remained elusive, making it difficult to connect the behavioral nature of spatial navigation with a plausible neural mechanism within a purely cognitivist framework. Here I show how simple principles of embodiment and sensorimotor associations can provide a bridge between the behavioral and biological levels. By means of a simple embodied-connectionist model, I illustrate how these concepts can account for the relatively complex behavior involved in a navigational task. I discuss the limitations of the model and possible extensions that could provide insights into the neural mechanisms underlying spatial memory in the light of up-to-date experimental findings in rodents.

Some properties at the root of embodied intelligence
Andrée Ehresmann
Université de Picardie Jules Verne (Amiens), France

What are the properties enabling a cognitive system to develop embodied intelligence? The problem is studied using the theory of Memory Evolutive Systems (Ehresmann & Vanbremeersch, 2007).
MES provide a mathematical model, based on Category Theory, for multi-scale systems with a tangled hierarchy of components varying over time; their dynamics are modulated by a network of internal agents with different rhythms and functions, with the help of a flexible long-term 'memory' allowing for learning and adaptation.
A Neuro-Bio System is represented by a MES which takes account of the different levels of the entire organism and of its biological, neural, cognitive and mental processes. This model points out three properties essential for embodied intelligence:
- a kind of 'flexible redundancy' (Multiplicity Principle);
- Synchronicity Laws to be respected by agents of different levels;
- the formation of a central "Archetypal Core", which integrates an internal model of the organism and its environment, and acts as a driving force for developing embodied intelligence.
An application is given to the construction of intelligent cognitive systems, in particular Neuro-Bio-ICT systems in which a Neuro-Bio system is coupled with an artificial cognitive system ("Exocortex") to enhance human capacities by integrating, self-structuring, and exploiting multiple sources of information.

Ammon (von), R. Ubiquitous Complex Event Processing (U-CEP). Submission to FET/Flagship, 2010.
Edelman, G.M. The Remembered Present. Basic Books: New York, NY, USA, 1989.
Ehresmann, A.C.; von Ammon, R.; Iakovidis, D.K.; Hunter, A. Ubiquitous complex event processing in Exocortex applications and mathematical approaches, 2012.
Ehresmann, A.C.; Vanbremeersch, J.-P. Memory Evolutive Systems: Hierarchy, Emergence, Cognition. Elsevier: Amsterdam, The Netherlands, 2007.
Hagmann, P.; Cammoun, L.; Gigandet, X.; Meuli, R.; Honey, C.J.; Wedeen, Van J.; Sporns, O. Mapping the Structural Core of Human Cerebral Cortex. PLoS Biol. 2008, 6, 1479-1493.
Kan, D.M. Adjoint Functors. Trans. Am. Math. Soc. 1958, 89, 294-329.
Mac Lane, S. Categories for the Working Mathematician. Springer, 1971.

Creativity and constraint in self-structuring systems
Stefan Leijnen
Radboud University Nijmegen, Netherlands (joint work with Pim Haselager)

Under some definitions, creativity is an intrinsically unformalizable process.
Yet, by aiming for a formal description of creativity, we address exactly those difficult problems that seemingly elude the current computational paradigm: the origins of structure, the nature of cognitive embodiment, and the relation between the signal of information and its object.
Creativity is often associated with freedom, unboundedness, and the availability of a wide array of choices. Here, creativity is somewhat paradoxically defined as its apparent opposite: a process aimed at incessantly generating constraints (Leijnen, 2011). In a process of self-limitation through self-organization, semi-stable structures arise, which in turn may affect the very processes that underlie them (Juarrero, 1999; Gonzalez & Haselager, 2005; Deacon, 2012). In time, a higher-order loop may emerge in which the system is no longer bounded by these self-limiting processes; rather, these (now creative) processes enable the invention of ever more varied and specialized structures. In this ongoing research project, the steps that build up to this hierarchical logic are analyzed, described, and will ultimately be formalized. Importantly, with respect to the embodiment paradigm, this approach allows informational concepts to arise from physical constraints, and thereby offers an explanation for how cognition may come about, rather than assuming an already in-place structure.

Deacon, T.W. (2012). Incomplete Nature: How Mind Emerged from Matter. New York: W. W. Norton and Company.
Gonzalez, M.E.Q. & Haselager, W.F.G. (2005). Creativity: Surprise and abductive reasoning. Semiotica 153, 1/4, 325-341.
Juarrero, A.J. (1999). Dynamics in Action: Intentional Behavior as a Complex System. Cambridge: MIT Press.
Leijnen, S. (2011). Thinking Outside the Box: Creativity in Self-Programming Systems. Workshop on Self-Programming in AGI Systems.
Fourth Conference on Artificial General Intelligence, August 3-7, 2011, Mountain View, CA.

Morphomotion: morphology independent locomotion controller for modular robots
Avinash Ranganath
Universidad Carlos III de Madrid (Leganés), Spain (joint work with Luis Moreno Lorente)

A locomotion gait in an animal, which comes about as a result of the repetitive and coordinated movement of limbs/joints, can be seen as a collection of oscillations, with the phase relation between such oscillators determining the emergent gait. Similarly, in a modular robotic organism made up of several independent unit modules with 1 DOF each [1], a variety of locomotion gaits can be achieved by applying simple phase-differed sinusoidal oscillators to the unit modules [2]. The phase difference between oscillating modules can either be predetermined [3], or modules can explicitly communicate among each other to converge to an optimal phase relation [4]. Since a modular robot is an embodied system made up of physically connected unit modules, there exist inter-modular (or intra-configuration) forces among modules in a given modular robotic configuration, which can be seen as implicit communication among modules. Using these forces, modules in a given configuration can converge and settle into a steady phase difference, resulting in a stable locomotion gait.
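The phase-differed sinusoidal oscillator scheme described above can be sketched in a few lines. This is an illustrative toy, not the authors' controller; the function name, parameters, and defaults are ours:

```python
import math

def module_angle(t, i, amplitude=45.0, period=2.0, phase_diff=120.0, offset=0.0):
    """Joint angle (degrees) of unit module i at time t, driven by a
    phase-differed sinusoidal oscillator:

        theta_i(t) = offset + A * sin(2*pi*t / T + i * dphi)

    The phase difference dphi between neighbouring modules determines
    which locomotion gait emerges."""
    return offset + amplitude * math.sin(2.0 * math.pi * t / period
                                         + math.radians(i * phase_diff))
```

Sweeping `phase_diff` while keeping the other parameters fixed is what moves the same body between gaits such as sinusoidal crawling and lateral rolling.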
We have developed a distributed, homogeneous, adaptive neural controller for unit modules, based on implicit inter-modular communication, resulting in stable locomotion gaits [5]. The controller parameters are optimised using a genetic algorithm, individually for each of the five distinct modular robotic organisms we have experimented with. The adaptability of the controller can be determined by cross-evaluating the controllers evolved for each organism on the rest of the organisms. The cross-evaluation experiments, in most cases, resulted in stable locomotion gaits closely resembling the organism's original locomotion gait, implying the influence of an organism's morphology on the emergent behaviour.

[1] http://www.iearobotics.com/wiki/index.php?title=M%C3%B3dulos_Y1
[2] Gonzalez-Gomez, J. Modular Robotics and Locomotion: Application to Limbless Robots. PhD thesis, EPS, UAM, Madrid, Spain, November 2008.
[3] Gonzalez-Gomez, J., Boemo, E. Motion of Minimal Configurations of a Modular Robot: Sinusoidal, Lateral Rolling and Lateral Shift. Proc. of the 8th International Conference on Climbing and Walking Robots (CLAWAR), London, September 2005, pp. 667-674.
[4] Shen, W.-M., Salemi, B., Will, P. Hormone-inspired adaptive communication and distributed control for CONRO self-reconfigurable robots. IEEE Transactions on Robotics and Automation, 2002.
[5] Ranganath, A., González-Gómez, J., Moreno, L. Morphology Dependent Distributed Controller for Locomotion in Modular Robots. Proceedings of the Post-Graduate Conference on Robotics and Development of Cognition, Lausanne, Switzerland, September 2012.

Empowerment and state-dependent noise
Christoph Salge
University of Hertfordshire (Hatfield), United Kingdom (joint work with Cornelius Glackin and Daniel Polani)

Empowerment offers a goal-independent utility function based on the embodiment of an agent and the dynamics of the world the agent is situated in.
Recently we demonstrated that empowerment in the continuous domain can be computed significantly faster if the world dynamics are approximated as multiple, co-dependent, linear Gaussian channels, assuming constant, state-independent Gaussian noise. Modelling the channel as a more generic Gaussian process, possibly obtained via a Gaussian process learner, now allows us to determine the actual noise levels for a specific state.
This allows new insights into the relationship to other agents, since co-inhabiting a shared environment implies that several agents have an effect on the same environmental parameters. If the actions of another agent cannot be predicted, they become a source of noise, reducing the empowerment. Empowerment maximisation then leads to interesting behaviour, such as avoiding collisions with other agents, as the outcome is highly dependent on the other agent's actions and therefore hard to predict.

Information flow in a quadruped running robot quantified by transfer entropy
Nico Schmidt
University of Zurich, Switzerland (joint work with Matěj Hoffmann, Kohei Nakajima)

Animals and humans engage in an enormous variety of behaviors which are orchestrated through a complex interaction of physical and informational processes. The physical interaction of the bodies with the environment is intimately coupled with informational processes in the animal's brain. A crucial step toward the mastery of all these behaviors seems to be to understand the flows of information in the sensorimotor networks. In this study, we have performed a quantitative analysis in an artificial agent - a running quadruped robot with multiple sensory modalities - using tools from information theory (transfer entropy and its recently proposed decomposition). Starting from no prior knowledge, through systematic variation of control signals and environment, we show how the agent can discover the structure of its sensorimotor space. We propose that the agent could utilize this knowledge to: (i) drive learning of new behaviors; (ii) identify sensors that are sensitive to environmental changes; (iii) discover a primitive body schema.

Visual exploration and predictive information
Henry Schütze
Universität zu Lübeck, Germany

The autonomous exploration of the environment is a crucial behavioral task of autonomous robots.
It has been shown that predictive information (PI) in sensor space is a useful measure for assessing the quality of exploration behavior. In this work we apply PI to the exploration of static visual scenes, i.e. images. We model autonomous visual exploration by a small region of interest (ROI) which repositions itself within a larger image (the ROI defines the sensor, and the repositioning the actuator). In contrast to, e.g., a simple two-wheeled embodied robot, we did not assume a specific linear coupling between sensors and actuators. We present two simple behavioral rules which generate a sequence of sensor values that, in one case, maximizes and, in the other case, minimizes the PI in sensor space. However, both strategies lead to (practically) the same visual exploration behavior: on synthetic and natural images, regions containing rare sensorial configurations such as edges and corners are visited more frequently than homogeneous regions; a strategy that makes sense, since edges and corners are more salient (i.e. they attract human gaze). This result is interesting since, in this scenario, the same behavior is characterized by completely different values of predictive information.

Discovering rigid displacements by a naïve agent
Alexander Terekhov
University of Pierre and Marie Curie (Paris), France (joint work with Kevin O'Regan)

The laws of rigid displacements are the most basic and fundamental characteristics of spatial knowledge. Rigid displacements are implicitly assumed to be known to an agent in the majority of the existing algorithms for self-organization and calibration. But consider a naïve agent that stares at "the blooming buzzing confusion" of its sensory inputs and motor outputs and has no idea about the nature of the information the sensations carry - how can such an agent learn the laws of rigid displacements? In this poster we give a partial answer to this question.
Following Poincaré, we assume that the key aspect of rigid displacements is that they obey laws shared between the objects the agent perceives and the agent itself. As a consequence, the agent can always perform an action that nullifies the changes of the sensory inputs caused by rigid displacements. We simulated an agent performing translational motions in a plane while looking at the starry sky above it. The agent has a retina with a few randomly placed photoreceptors with highly non-local Gaussian tuning curves. The agent can displace the retina without rotations and measure its position with randomly placed proprioceptive neurons, again with non-local Gaussian tuning curves. Without knowing it explicitly, the agent can learn the mappings of proprioception into itself which in reality correspond to rigid displacements. Using these mappings the agent can pass the most basic tests of spatial knowledge: (1) when shown two patterns of stars, the agent can say whether they are the same pattern or not; (2) when displaced along two multi-segment paths under different skies, the agent can say whether the end points of the two paths coincide (assuming that the starting points do). We conclude that the laws of rigid displacements can be learned by a naïve agent and that these laws allow the most basic aspects of spatial knowledge to be manifested.

What can Friston's free-energy principle tell us about embodied cognition?
Wanja Wiese
Johannes Gutenberg-Universität (Mainz), Germany

Well-known ways in which Friston's free-energy principle is related to embodied cognition (EC) include the propositions (i) that an agent embodies a model of its environment and of its own body as related to that environment [1]; and (ii) that there is an intimate conceptual connection between action and perception [2].
The poster claims that the principle can contribute to research on EC in a more radical way: since the free-energy principle suggests that information processing in the brain relies essentially on generative models, asking in which sense such models are representational acquires even greater relevance for the explanatory force of EC, in particular regarding (i) the disputed need to refer to amodal mental representations, and (ii) the explanatory value of positing representations in general.
In order to establish that a generative model is representational in an interesting sense, it must on the one hand be shown that the content of a generative model can be determined in a way that allows for misrepresentation. Two such ways are provided by structural theories of representational content (as suggested by Andy Clark [3, p. 85]) and refined statistical theories (as proposed by Jakob Hohwy [4, ch. 8]).
On the other hand, it must also be the case that neurally implemented generative models actually fulfill a functional role that renders them representational. The main positive contribution of this poster is to show how the free-energy principle, by positing a hierarchical generative model in the brain, suggests a middle ground between representationalism and anti-representationalism in at least two respects:
- Neural populations towards the lower end of the hierarchy fulfill a role that renders them less representational, while populations towards the upper end tend to be representational in a more robust sense.
- By emphasizing the role of non-linear coupling between levels of the hierarchy, it is also suggested that any division between tasks that do not require representations and those that do is ultimately untenable.

Friston, K. (2011). Embodied inference: or "I think therefore I am, if I am what I think". In W. Tschacher & C. Bergomi (eds.), The Implications of Embodiment: Cognition and Communication. Imprint Academic.
Friston, K., Daunizeau, J., Kilner, J., & Kiebel, S. (2010). Action and behavior: a free-energy formulation. Biological Cybernetics. doi: 10.1007/s00422-010-0364-z
Clark, A. (forthcoming). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral & Brain Sciences.
Hohwy, J. (forthcoming). The Predictive Mind. Oxford: Oxford University Press.
Within cognitive science, there has been considerable debate about the relative merits of information-processing vs. dynamical approaches to understanding cognitive processes. This talk will adopt the position that the mathematical theories underlying these two approaches - information theory and dynamical systems theory, respectively - are best viewed as distinct mathematical lenses through which one can examine the operation of any system of interest. Thus, the concern should not be which approach to cognition is "right", but rather the different sorts of explanations that each lens reveals and the interrelationships between these explanations when both lenses are applied to the same cognitive system.
In order to explore these issues, I will describe the analysis of a model agent evolved to solve a relational categorization task. In this task, an agent presented with two falling objects of different sizes in sequence must catch the second object if it is smaller than the first and avoid it otherwise. Interestingly, both largely disembodied and strongly embodied strategies evolve to make this relational judgement. After a brief review of separate dynamical and information-theoretic analyses of the operation of these agents, I will focus on examining the ways in which these two explanations connect. This talk describes joint work with Paul Williams.
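The information-theoretic lens can be made concrete with transfer entropy, the directed measure also used in the quadruped study above. The following plug-in estimator for discrete time series with history length one is our own minimal sketch, not the analysis actually performed in the talk:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (bits) of TE(X -> Y) = I(Y_{t+1}; X_t | Y_t):
    how much the source X helps predict Y beyond Y's own past
    (discrete series, history length 1)."""
    y_next, x_past, y_past = y[1:], x[:-1], y[:-1]
    n = len(y_next)
    joint = Counter(zip(y_next, x_past, y_past))   # p(y', x, y)
    p_xy = Counter(zip(x_past, y_past))            # p(x, y)
    p_yy = Counter(zip(y_next, y_past))            # p(y', y)
    p_y = Counter(y_past)                          # p(y)
    te = 0.0
    for (yn, xp, yp), c in joint.items():
        te += (c / n) * np.log2((c / n) * (p_y[yp] / n)
                                / ((p_xy[(xp, yp)] / n) * (p_yy[(yn, yp)] / n)))
    return te
```

Because the measure is directional, TE(X -> Y) and TE(Y -> X) can differ, which is precisely what lets such analyses distinguish driver from driven variables in a sensorimotor network.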
Our fast, deep/recurrent neural networks have many biologically plausible, non-linear processing stages. They won eight recent international pattern recognition competitions and set records in many vision benchmarks, contributing to the ongoing second Neural Network ReNNaissance. We are starting to use them in active, unsupervised, curious, creative systems of a type we introduced in 1990. They learn to sequentially shift attention towards informative inputs, not only solving externally posed tasks, but also their own self-generated tasks designed to improve their understanding of the world according to our Formal Theory of Fun and Creativity, which requires two interacting modules: (1) an adaptive (possibly neural) predictor or compressor or model of the growing data history as the agent interacts with its environment, and (2) a (possibly neural) reinforcement learner. The learning progress of (1) is the FUN or intrinsic reward of (2). That is, (2) is motivated to invent skills leading to interesting or surprising novel patterns that (1) does not yet know but can easily learn (until they become boring). We discuss how this simple principle explains science & art & music & humor. Time permitting, I will also briefly discuss the recent theoretically optimal universal problem solvers pioneered in our lab, such as Gödel machines and the asymptotically fastest algorithm for all well-defined problems.
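The two-module principle can be caricatured in a few lines. This is an illustrative toy of our own devising, with a table-lookup predictor standing in for the adaptive compressor: the intrinsic reward is the predictor's error reduction, which decays to zero once the pattern is learned, i.e. becomes boring.

```python
def curiosity_rewards(stream, lr=0.2):
    """Module (1): a table-lookup predictor of the next symbol given the last.
    Module (2)'s intrinsic 'fun' reward is (1)'s learning progress:
    the step-to-step reduction in squared prediction error."""
    pred = {}                      # last symbol -> predicted next value
    rewards, prev_err = [], None
    last = stream[0]
    for s in stream[1:]:
        guess = pred.get(last, 0.5)
        err = (s - guess) ** 2
        pred[last] = guess + lr * (s - guess)   # the predictor learns
        if prev_err is not None:
            rewards.append(prev_err - err)      # reward = error reduction
        prev_err, last = err, s
    return rewards
```

On a learnable pattern such as an alternating 0/1 stream, the total reward is positive but the per-step reward vanishes once the pattern is fully compressed; an unlearnable random stream would yield no sustained reward, capturing the "interesting until boring" dynamic.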
Overview pages with papers on the touched subjects:
http://www.idsia.ch/~juergen/creativity.html
http://www.idsia.ch/~juergen/interest.html
http://www.idsia.ch/~juergen/vision.html
http://www.idsia.ch/~juergen/handwriting.html
http://www.idsia.ch/~juergen/rnnbook.html
http://www.idsia.ch/~juergen/rl.html
http://www.idsia.ch/~juergen/evolution.html
http://www.idsia.ch/~juergen/ica.html
http://www.idsia.ch/~juergen/unilearn.html
http://www.idsia.ch/~juergen/goedelmachine.html
http://www.idsia.ch/~juergen/videos.html
How much about our interactions with - and experience of - our world can be deduced from basic principles? This talk reviews recent attempts to understand the self-organised behaviour of embodied agents, like ourselves, as satisfying basic imperatives for sustained exchanges with the environment. In brief, one simple driving force appears to explain many aspects of perception, action and the perception of action.
This driving force is the minimisation of surprise or prediction error that - in the context of perception - corresponds to Bayes-optimal predictive coding (that suppresses exteroceptive prediction errors) and - in the context of action - reduces to classical motor reflexes (that suppress proprioceptive prediction errors). In what follows, we look at some of the phenomena that emerge from this single principle; such as the perceptual encoding of spatial trajectories that can both generate movement (of self) and recognise the movements (of others). These emergent behaviours rest upon prior beliefs about itinerant states of the world - but where do these beliefs come from?
We will focus on recent proposals about the nature of prior beliefs and how they underwrite the active sampling of a spatially extended sensorium. Put simply, to minimise surprising states of the world, it is necessary to sample inputs that minimise uncertainty about the causes of sensory input. When this minimisation is implemented via prior beliefs - about how we sample the world - the resulting behaviour is remarkably reminiscent of searches of the sort seen in exploration or measured, in visual searches, with saccadic eye movements.
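For the perceptual half of this principle, the gradient descent on prediction error can be illustrated in a deliberately minimal single-level, linear-Gaussian sketch of our own (variable names and defaults are assumptions): the inferred state settles where precision-weighted sensory and prior prediction errors balance.

```python
def infer_state(obs, prior_mu, var_obs=1.0, var_prior=1.0, lr=0.1, steps=500):
    """Minimise the Gaussian free energy
        F(mu) = (obs - mu)^2 / (2*var_obs) + (mu - prior_mu)^2 / (2*var_prior)
    by gradient descent on mu, i.e. on precision-weighted prediction errors."""
    mu = prior_mu
    for _ in range(steps):
        eps_sens = (obs - mu) / var_obs           # sensory prediction error
        eps_prior = (mu - prior_mu) / var_prior   # prior prediction error
        mu += lr * (eps_sens - eps_prior)         # descend the free-energy gradient
    return mu
```

With equal precisions the estimate converges to the Bayes-optimal posterior mean halfway between prior and observation; increasing the sensory precision pulls the estimate towards the data, which is the same precision-weighting that, on the action side, lets proprioceptive predictions drive reflexes.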
The evolution of cognition is often presented as a series of anatomical and inferential adaptations within a lineage: vision, motor control, etc. Here I shall review how the environment can be used to outsource mental representations and to simplify problems of inference. This "exbodied cognition" is widespread in nature, but reaches an extreme form in our own species, without which we would have evolved no material culture. To speak of human cognition without written symbols and, increasingly, mechanical mental buttresses such as calculating devices, is to ignore the most prominent feature of H. sapiens' evolutionary and cultural history.