
Workshop

Latching dynamics as a model of recursion in language and beyond

  • Alessandro Treves (SISSA Trieste, Trieste, Italy)
G3 10 (Lecture hall)

Abstract

I will report research by Emilio Kropff and myself.

We study a Potts neural network as a model of the architecture and function of large cortical networks in the mammalian brain [1]. The basic assumption is that Hebbian associative plasticity informs connections both within local networks and at long-range synapses [2]. Such large-scale associative networks are proposed to participate in higher-order processes including semantic memory and language [3,4,5]. We discuss the modified Hebb rule that allows the network to deal with catastrophic overload, and estimate the network storage capacity in the presence of strong correlations. We then study the latching phenomenon: the capacity of such a network to spontaneously generate arbitrarily long and complex sequences, hopping from one memory retrieval state to the next based on their correlation [6]. We show that the complexity of the sequence of attractors can be controlled by an effective threshold parameter. The transition between recursive and non-recursive regimes appears to scale with the number of local attractor states, suggestive of a spontaneous evolution of infinite recursion in human cognition.
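As a rough illustration of the ingredients involved, and not a reproduction of the model in [1] or [6], the sketch below sets up a small Potts network with Hebbian tensor couplings between unit-state pairs and state-specific adaptive thresholds, the mechanism that can push the network out of one retrieved attractor and into a correlated one. The softmax update rule, the adaptation scheme, and all parameter values are assumptions chosen for the toy; whether a network this small actually latches rather than decaying to the null state depends on those choices.

```python
"""Toy sketch of latching dynamics in a Potts associative network.

Illustrative only: update rule, adaptation scheme and parameters are
assumptions, not the Kropff & Treves model.
"""
import numpy as np

rng = np.random.default_rng(0)

N, S, P = 100, 5, 8          # units, Potts states per unit, stored patterns
a = 0.3                      # sparseness: fraction of units active in a pattern
beta = 5.0                   # inverse temperature of the softmax update
tau_adapt = 10.0             # time constant of threshold adaptation
g_adapt = 0.6                # adaptation strength (drives the latching hops)
T_steps = 300

# Store random sparse Potts patterns:
# patterns[mu, i] in {0..S-1} for active units, -1 (null state) otherwise.
patterns = np.full((P, N), -1)
for mu in range(P):
    active = rng.choice(N, size=int(a * N), replace=False)
    patterns[mu, active] = rng.integers(0, S, size=active.size)

# Hebbian (covariance-style) tensor couplings:
# J[i, k, j, l] couples unit j in state l to unit i in state k.
J = np.zeros((N, S, N, S))
for mu in range(P):
    v = np.zeros((N, S))
    act = patterns[mu] >= 0
    v[act, patterns[mu, act]] = 1.0
    v -= a / S                       # subtract the mean activity per state
    J += np.einsum('ik,jl->ikjl', v, v)
J /= N
for i in range(N):
    J[i, :, i, :] = 0.0              # no self-coupling

# Dynamics: softmax Potts units with state-specific adaptive thresholds.
sigma = np.zeros((N, S))             # activation of each unit's S active states
theta = np.zeros((N, S))             # adaptive thresholds (fatigue variables)

# Cue the network with pattern 0.
act = patterns[0] >= 0
sigma[act, patterns[0, act]] = 1.0

overlaps = np.zeros((T_steps, P))
for t in range(T_steps):
    # Local fields from the recurrent couplings, reduced by adaptation.
    h = np.einsum('ikjl,jl->ik', J, sigma) - g_adapt * theta
    # Softmax over the S active states plus a null state with field 0.
    e = np.exp(beta * h)
    sigma = e / (e.sum(axis=1, keepdims=True) + 1.0)
    # Thresholds slowly track the activity of the states a unit occupies,
    # so a retrieved attractor eventually loses stability.
    theta += (sigma - theta) / tau_adapt
    # Overlap of the network state with each stored pattern.
    for mu in range(P):
        act = patterns[mu] >= 0
        overlaps[t, mu] = sigma[act, patterns[mu, act]].sum() / (a * N)

print("pattern closest to the network state at each step:",
      overlaps.argmax(axis=1))
```

In this kind of toy, the effective threshold (here the combination of beta and g_adapt) plays the role described in the abstract: weak adaptation leaves the network stuck in a single attractor, while stronger adaptation lets it hop to whichever stored pattern is most correlated with the one just retrieved.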

[1] Kropff E and Treves A, J. Stat. Mech. P08010 (2005)
[2] Braitenberg V and Schüz A, Anatomy of the Cortex: Statistics and Geometry (Berlin: Springer, 1991)
[3] Pulvermüller F, Prog. Neurobiol. 67:85 (2002)
[4] McRae K, de Sa V and Seidenberg M, J. Exp. Psychol. General 126:99 (1997)
[5] Tyler LK et al, The neural representation of nouns and verbs: PET studies, Brain 124:1619 (2003)
[6] Treves A, Cogn. Neuropsychol. 6:101 (2005)

Antje Vandenberg

Max-Planck-Institut für Mathematik in den Naturwissenschaften

Jürgen Jost

Max-Planck-Institut für Mathematik in den Naturwissenschaften

Henry Tuckwell

Max-Planck-Institut für Mathematik in den Naturwissenschaften