How does the brain encode episodes? Episodic memory has attracted scientific interest since clinical observations in the 1950s through the 1980s showed that damage to the hippocampus, in particular to the subregion named CA1, disrupts the formation of episodic memories. Associative memory, on the other hand, has been explored in various contexts, especially since Marr's theory of archicortex (including the hippocampus), in which Marr proposed that another hippocampal subregion, CA3, is responsible for associative memory. However, conventional mathematical models of associative memory guarantee only a single association; without an externally given rule, there is no ordering of successive associations. We hypothesize that this limitation stems from the lack of inhibitory neurons. Indeed, once inhibitory neurons are included, we obtain successive association of stored patterns, which can be regulated by the emergent chaotic activity of the neural network. A detailed observation of the architecture of CA3 confirms the presence of inhibitory neurons together with recurrent connections among excitatory neurons, the latter being necessary for a single association. We also constructed a model of CA1, which has far fewer recurrent connections but retains internal connections among excitatory and inhibitory neurons. We found a Cantor set in the output of CA1 neurons and clarified the functional significance of this set in relation to episodic memory. Our hypothesis is that CA1 is responsible for the formation of episodic memory in the form of Cantor coding of temporal patterns. Furthermore, to look for such Cantor sets in the real brain, we conducted experiments using rat hippocampal slices. We observed Cantor-like sets and affine transformations in the data, indicating that an IFS-like mechanism can actually operate in the process of episodic memory formation.
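As a minimal sketch of such an IFS-like mechanism (the maps, contraction ratio, and sequence length below are illustrative assumptions, not parameters of the CA1 model): each arriving pattern selects one contractive affine map, and iterating the maps over a temporal sequence drives the state onto a Cantor-like set whose hierarchical clusters encode pattern histories.

    from itertools import product

    # Minimal sketch of Cantor coding by an iterated function system (IFS).
    # Each input symbol (stored-pattern index) selects a contractive affine
    # map; iterating over a temporal sequence places the state on a
    # Cantor-like set whose position encodes the recent pattern history.
    # The two maps below are illustrative, not fitted to CA1 data.

    maps = {
        0: lambda x: 0.4 * x,          # image [0.0, 0.4]
        1: lambda x: 0.4 * x + 0.6,    # image [0.6, 1.0]; disjoint images
    }                                  # make the attractor a Cantor set

    def encode(sequence, x0=0.5):
        """Apply the affine map selected by each symbol in turn."""
        x = x0
        for s in sequence:
            x = maps[s](x)
        return x

    # States reached by all length-6 histories: distinct histories land in
    # distinct, hierarchically clustered subintervals.
    points = sorted(encode(seq) for seq in product((0, 1), repeat=6))
    print(points[:4])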
The Elman network is a discrete-time, three-layer neural network that can realize stochastic finite-state automata. In this study, the chaotic neuron model proposed by K. Aihara is introduced into its hidden layer, yielding what we call the chaotic Elman network; it includes the ordinary Elman network as a special case. For suitable parameter choices, the dynamics exhibits chaotic itinerancy among stored patterns. I will first present an analysis based on invariant subspaces, explaining the chaotic itinerancy as crisis-induced intermittency of periodic orbits. Second, we consider the case where Hebbian learning operates during the itinerancy. Using only local information about the neuron dynamics, the network can reshape attractor basins, simplifying and modifying the hierarchical structure of the invariant subspaces. This suggests an analogy with dialectic, which escapes from formal logic.
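For concreteness, here is a minimal sketch of such a hidden layer built from Aihara's chaotic neuron map (the weights, inputs, and parameter values are illustrative placeholders, not the trained network from the study):

    import numpy as np

    # Minimal sketch of a chaotic Elman hidden layer: each hidden unit
    # follows Aihara's chaotic neuron map, combining external and context
    # input with an exponentially decaying refractory term. With k = 0 and
    # alpha = 0 the unit reduces to an ordinary Elman hidden unit. Weights
    # and parameters here are placeholders, not trained values.

    rng = np.random.default_rng(0)
    n_in, n_hid = 3, 5
    W_in = rng.normal(scale=0.5, size=(n_hid, n_in))    # input -> hidden
    W_ctx = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context -> hidden

    k, alpha, a = 0.7, 1.0, 0.3   # decay, refractory strength, bias
    eps = 0.04                    # sigmoid steepness

    def f(y):
        return 1.0 / (1.0 + np.exp(-np.clip(y / eps, -500, 500)))

    x = np.zeros(n_hid)   # hidden outputs (fed back as context units)
    y = np.zeros(n_hid)   # internal states

    for t in range(100):
        u = rng.integers(0, 2, size=n_in)   # external binary input
        y = k * y - alpha * f(y) + W_in @ u + W_ctx @ x + a
        x = f(y)
    print(x)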
I will also present an intuitive view of analyzing chaotic itinerancy using information geometry, inspired by Nihat Ay's work.
We view information flow as a dynamical process on networks and discuss the convergence (or lack thereof) to equilibrium states. Based on concepts from graph theory and dynamical systems, we present a common framework that allows the analysis of several problems, such as random walks, ergodicity, synchronization, and opinion spread. Furthermore, this approach reveals some unifying principles that underlie seemingly different processes. Finally, we discuss the role of information transmission delays in the dynamics.
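As a simple instance of this viewpoint (the graph below is an arbitrary toy example), a random walk on a connected, aperiodic graph converges to the stationary distribution proportional to node degree:

    import numpy as np

    # Random walk on a small undirected graph, viewed as a dynamical
    # process. For a connected, aperiodic graph the distribution converges
    # to the stationary state pi(i) ~ degree(i). The graph is arbitrary,
    # chosen only for illustration.

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

    p = np.array([1.0, 0.0, 0.0, 0.0])     # walker starts at node 0
    for t in range(50):
        p = p @ P

    pi = A.sum(axis=1) / A.sum()           # stationary distribution ~ degree
    print(p, pi)                           # the two agree after convergence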
Mathematical information theory provides an important framework for understanding cognitive processes. It has been successfully applied to neural systems with feed-forward structures; the analysis of recurrent structures turns out to be more subtle, mainly because the corresponding information-theoretic quantities are ambiguous between causal and associational interpretations. To understand information flows in recurrent networks, one has to distinguish these two interpretations clearly. In collaboration with Daniel Polani, we addressed this problem using the causality theory developed by Judea Pearl and his coworkers. I will discuss some possible applications of this work to complexity theory.
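A toy example (not taken from the talk) makes the distinction concrete: when a hidden common cause drives two variables, the associational quantity P(Y|X) differs sharply from the causal quantity P(Y|do(X)):

    import numpy as np

    # A hidden common cause Z drives both X and Y, while X has no direct
    # effect on Y. Conditioning (association) shows a strong X-Y
    # dependence; intervening on X (causation) shows none.

    rng = np.random.default_rng(1)
    n = 200_000
    Z = rng.integers(0, 2, n)                         # hidden common cause
    X = (rng.random(n) < 0.1 + 0.8 * Z).astype(int)   # X driven by Z
    Y = (rng.random(n) < 0.1 + 0.8 * Z).astype(int)   # Y driven by Z, not X

    print("P(Y=1 | X=1) =", Y[X == 1].mean())         # ~0.82 (association)
    print("P(Y=1 | X=0) =", Y[X == 0].mean())         # ~0.18
    # Under do(X=1) the Z -> X arrow is cut and Y's mechanism is untouched,
    # so P(Y=1 | do(X=1)) = P(Y=1):
    print("P(Y=1 | do(X=1)) =", Y.mean())             # ~0.50 (causation)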
When many information sources interact adaptively in a network, it is natural to ask what the flow of information is. We present an extended form of game dynamics for adaptively interacting Markovian processes and study its behavior to address the problem of information flow. Examples of adaptive dynamics for two interacting biased coin-tossing processes are exhibited.
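The following sketch shows two adaptively interacting biased coins under an invented update rule (chosen only for illustration; it is not the game dynamics presented in the talk), so that each coin's statistics come to carry information about the other:

    import numpy as np

    # Coin A nudges its bias p toward B's last outcome, coin B nudges its
    # bias q away from A's; each process's statistics thus depend on the
    # other's. The update rule is an illustrative assumption.

    rng = np.random.default_rng(2)
    p, q = 0.9, 0.2    # initial biases of coins A and B
    eta = 0.01         # adaptation rate

    for t in range(5000):
        a = rng.random() < p    # A's toss
        b = rng.random() < q    # B's toss
        p = np.clip(p + eta * (float(b) - p), 0.01, 0.99)          # imitate B
        q = np.clip(q + eta * ((1.0 - float(a)) - q), 0.01, 0.99)  # oppose A

    print(p, q)   # both biases drift toward the joint fixed point 0.5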
Information is an essential and omnipresent resource; it has long been suspected to be a major factor shaping the emergence of intelligence in animals and a guideline for constructing artificial intelligent systems. In search of fundamental principles guiding the self-organization of neural networks, Linsker (1988) formulated a number of information-theoretic hypotheses. His model (and most of its successors) was purely passive. However, recent work by Touchette and Lloyd (2000), extending early work by Ashby (1953), as well as work by Polani et al. (2001), has shown that actions can be incorporated into the information-theoretic analysis.
As Klyubin et al. (2004) found, incorporating actions into an information-theoretic formalization of the perception-action loop of agents has dramatic consequences for the self-organization capabilities of the processing system. As opposed to Linsker's model, which required significant pre-structuring of its neural network, this new model makes only minimal assumptions about the information-processing architecture. The agent's "embodiment", i.e. the coupling of its sensors and actuators to the environment, is sufficient to give rise to structured pattern detectors driven by optimization principles applied to the information flow in the system.
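In this spirit, one can treat the action-to-sensor coupling as an information channel and maximize the information flow through it; the sketch below computes the capacity of a toy channel p(s|a) by Blahut-Arimoto iteration (the channel matrix is an assumed example, not a model from the talk):

    import numpy as np

    # Capacity of a toy action -> sensor channel p(s|a), quantifying how
    # much information an agent's actions can inject into its future
    # sensor readings. The channel matrix is an illustrative assumption.

    def kl_rows(P, q):
        """Row-wise KL divergence D( P[a] || q ) in bits."""
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(P > 0, P * np.log2(P / q), 0.0).sum(axis=1)

    def capacity(P, iters=300):
        p_a = np.full(P.shape[0], 1.0 / P.shape[0])   # uniform action policy
        for _ in range(iters):
            d = kl_rows(P, p_a @ P)
            p_a = p_a * np.exp2(d)                    # Blahut-Arimoto update
            p_a /= p_a.sum()
        return float((p_a * kl_rows(P, p_a @ P)).sum())  # I(A;S) in bits

    P = np.array([[0.9, 0.1, 0.0],    # action 0: reliable sensor response
                  [0.1, 0.8, 0.1],    # action 1: fairly reliable
                  [0.3, 0.3, 0.4]])   # action 2: noisy
    print(capacity(P))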
In the present talk, we will motivate Shannon information as a primary resource of information processing, introduce a model that allows agents to be considered purely in terms of information, and show how this model gives rise to the aforementioned observations. If time permits, the talk will also discuss the use of information-theoretic methods to structure information processing in real robot systems.