The principle of least effort in communications has been shown, by Ferrer i Cancho and Solé, to explain the emergence of power laws (e.g., Zipf's law) in human languages. This study sharpens the results of Ferrer i Cancho and Solé by explicitly solving the problem. The extended model contrasts Zipf's law, found in the vicinity of the transition between referentially useless systems and indexical reference systems, with a logarithmic law found at the transition itself. Arranging the codes according to a logarithmic law is observed to be the most representative optimal solution for maximising the referential power under the effort constraints. We also extend the principle and the information-theoretic model to multiple coding channels.
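A minimal Monte Carlo sketch of this kind of least-effort optimisation is given below. The cost form Omega(lam) = lam * H(R|S) + (1 - lam) * H(S) over a binary signal-object matrix, and the greedy flip search, are illustrative assumptions, not the explicit solution presented in the talk.

```python
# Least-effort signal-object model in the spirit of Ferrer i Cancho and Sole.
# Illustrative sketch only: cost function and search procedure are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def entropies(A):
    """Signal entropy H(S) and conditional entropy H(R|S) for a binary
    signal-object matrix A (rows: signals, columns: objects)."""
    # Joint p(s, r): objects are needed uniformly and the speaker picks
    # uniformly among the signals linked to the needed object.
    col = A.sum(axis=0).astype(float)
    col[col == 0] = 1.0                    # objects with no signal contribute nothing
    p_sr = A / col
    p_sr = p_sr / p_sr.sum()               # normalise to a proper distribution
    p_s = p_sr.sum(axis=1)
    H_S = -np.sum(p_s[p_s > 0] * np.log2(p_s[p_s > 0]))
    H_SR = -np.sum(p_sr[p_sr > 0] * np.log2(p_sr[p_sr > 0]))
    return H_S, H_SR - H_S                 # H(S) and H(R|S)

def cost(A, lam):
    H_S, H_RgS = entropies(A)
    return lam * H_RgS + (1.0 - lam) * H_S

def minimise(n=30, m=30, lam=0.41, steps=20000):
    A = rng.integers(0, 2, size=(n, m))
    best = cost(A, lam)
    for _ in range(steps):
        i, j = rng.integers(n), rng.integers(m)
        A[i, j] ^= 1                       # flip one signal-object link
        c = cost(A, lam)
        if c <= best:
            best = c                       # keep the flip
        else:
            A[i, j] ^= 1                   # reject it
    return A, best

if __name__ == "__main__":
    A, c = minimise()
    freq = np.sort(A.sum(axis=1))[::-1]    # signal usage ranked by degree
    print("cost:", round(c, 3), "signals in use:", int((freq > 0).sum()))
```

Varying lam in such a sketch moves the optimum from a single overloaded signal (referentially useless) towards a one-to-one code (indexical reference), with the interesting regime near the transition.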
The Monte Carlo method is a powerful instrument for analyzing complex systems. It has become a standard tool in almost all fields of science, ranging from high-energy physics to sociology. Its overall success conceals that the whole method rests on shaky ground: our ability to generate random numbers. In this talk I will review one of the most spectacular failures of the Monte Carlo method in statistical mechanics, the "Ferrenberg affair" of 1992, and how it has been resolved only recently. This and other disasters teach us that the manufacture of randomness needs to be promoted from black art to transparent science.
In this talk we present a multivariate method for the analysis of interrelations between data channels of an $M$-dimensional recording. We describe in detail how, and in which sense, genuine multivariate features of the data set are extracted, and we illustrate the performance of the method with the help of numerical examples. Since tools known from Random Matrix Theory (RMT) are used, a brief overview of the origin of this field is given and some technical aspects of the calculation of RMT measures are discussed. Finally we present several examples where this method has been applied successfully to the analysis of electroencephalographic recordings of epileptic patients.
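As a rough illustration of the kind of RMT comparison involved (an assumption made for concreteness, not the authors' implementation), the sketch below compares the eigenvalue spectrum of a channel-channel correlation matrix with the Marchenko-Pastur bulk expected for purely random correlations; eigenvalues escaping the bulk hint at genuine multivariate structure.

```python
# Illustrative RMT check on an M-channel recording (synthetic data).
import numpy as np

def correlation_spectrum(data):
    """data: array of shape (M, T) -- M channels, T samples."""
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    C = z @ z.T / data.shape[1]            # equal-time correlation matrix
    return np.sort(np.linalg.eigvalsh(C))

def marchenko_pastur_bounds(M, T):
    """Bulk edges predicted by RMT for uncorrelated channels with Q = T/M > 1."""
    q = T / M
    return (1 - np.sqrt(1 / q)) ** 2, (1 + np.sqrt(1 / q)) ** 2

if __name__ == "__main__":
    M, T = 32, 4096
    rng = np.random.default_rng(1)
    data = rng.standard_normal((M, T))
    data[:8] += 0.5 * rng.standard_normal(T)   # inject a common component
    lam = correlation_spectrum(data)
    lo, hi = marchenko_pastur_bounds(M, T)
    outside = lam[(lam < lo) | (lam > hi)]
    print(f"RMT bulk: [{lo:.2f}, {hi:.2f}], eigenvalues outside bulk: {outside.round(2)}")
```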
In this talk I first review the fact that all physical devices that perform observation, prediction, or recollection share an underlying mathematical structure. Devices with that structure are called ``inference devices''. I then present new existence and impossibility results concerning inference devices. These results have close connections with the mathematics of Turing Machines (TM's), e.g., some of the impossibility results for inference devices are related to the Halting theorem for TM's. Furthermore, one can define an analog of Universal TM's (UTM's) for inference devices, called ``strong inference devices''. Strong inference devices can be used to define the ``inference complexity'' of an inference task, which is the analog of the Kolmogorov complexity of computing a string. Whereas the Kolmogorov complexity of a string is arbitrary up to specification of the UTM, there is no such arbitrariness in the inference complexity of an inference task. I present some new results bounding inference complexity. Next I present some new graph-theoretic properties that govern any set of multiple inference devices. After this I present an extension of the framework to address physical devices that are used for control. I end with an extension of the framework to address probabilistic inference.
Joint work between Timo Ehrig and Louis H. Kauffman.
This lecture discusses and demonstrates a model that maps debates. From a system-theoretical perspective, the predictions of actors, and the mathematical models actors use to predict, are part of the very system they attempt to predict. This insight gave rise to work in complex systems science modelling stock-market situations with inductive decision rules in computer simulations (Arthur, Holland, et al. 1993). The framing landscapes model aims at mapping the logical and cognitive structure of future estimates in real-world situations. Future estimates can only be declared true or false in hindsight, and it is more precise to say that they succeeded or failed, since they are performative and actively influence the future. This performative dimension of debates is core to the model that we present. The modelling is not at the level of quantitative models such as the Black-Scholes model, but rather at the level of making an image of ongoing debates that have an effect on the evolution of the market itself. It begins in the web of uncertain and fluctuating propositions of the debate, takes a dynamic mathematical snapshot of this web, and allows the snapshot to evolve to extrema of a potential function, which we call its fixed points. The snapshots and the results of their evolution then amplify and affect the original debate and give information about the structure of the market at that time. Our current work is on the mathematical foundations of the model; we have not yet tested it in an empirical setting, but we will present an outline of an empirical experiment to follow soon.
An important problem in game theory is how to explain bounded rationality in general, and non-kin altruism in particular. Previous explanations have involved computational limitations of the players, repeated plays of the same game among the players, signaling among the players, networks determining which players play with one another, etc. As an alternative, I show how a simple extension to any conventional non-repeated game can make bounded rationality and/or non-kin altruism utility-maximizing in that game, even for a computationally unlimited player.
Say we have a game gamma with utility functions {u_j}. Before playing gamma, the players first play a "persona" game Gamma. Intuitively, in Gamma the move of each player i is the choice of a utility function u'_i to replace u_i when she plays gamma. The objective of player i in Gamma is to pick a "persona" u'_i such that, when gamma is played with the original utility functions replaced by the personas chosen by all the players in Gamma, the resultant Nash equilibrium maximizes expected u_i.
In certain cases, such an optimal u'_i differs from u_i. In these cases, player i's adopting the "bounded rationality" and/or "altruism" of persona u'_i actually maximizes expected u_i. As particular illustrations, we show how such persona games can explain some experimental observations concerning the prisoner's dilemma, the ultimatum game, and the traveler's dilemma game. We also show how phase transitions arise in some persona games, and discuss the possible implications of persona games for evolutionary biology, for the concept of social intelligence, and for distributed control of systems of systems.
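To make the construction concrete, here is a small hedged sketch for the prisoner's dilemma. The particular persona set used (purely selfish versus "fraternal", i.e. maximising the sum of payoffs) is an assumption chosen for illustration; the sketch only shows how persona profiles induce equilibria of the modified game that are then scored with the true utilities, not the talk's exact analysis or results.

```python
# Persona-game construction illustrated on the 2x2 prisoner's dilemma.
import itertools
import numpy as np

# True payoffs u_i; moves: 0 = Cooperate, 1 = Defect.
U1 = np.array([[3, 0], [5, 1]])
U2 = U1.T

def pure_nash(A, B):
    """Pure-strategy Nash equilibria of a 2x2 bimatrix game (A: row, B: column)."""
    eqs = []
    for r, c in itertools.product(range(2), range(2)):
        if A[r, c] >= A[1 - r, c] and B[r, c] >= B[r, 1 - c]:
            eqs.append((r, c))
    return eqs

# Personas: each maps the true payoff pair into the utility actually optimised.
personas = {
    "selfish":   lambda own, other: own,
    "fraternal": lambda own, other: own + other,
}

# For each persona profile, play gamma at a Nash equilibrium of the
# persona-modified game and score the outcome with the *true* utilities.
for p1, p2 in itertools.product(personas, repeat=2):
    A = personas[p1](U1, U2)
    B = personas[p2](U2, U1)
    eqs = pure_nash(A, B)
    outcomes = [(int(U1[e]), int(U2[e])) for e in eqs]
    print(f"{p1:9s} vs {p2:9s} -> equilibria {eqs}, true payoffs {outcomes}")
```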
Piecewise affine maps are a nice approximation to nonlinear dynamical systems. Seemingly simple, they can possess non-trivial complexity. In this talk I discuss entropy as a fine measure of complexity and report some new formulas and inequalities for topological and measure-theoretic entropy. As an application I consider the Zhang model of self-organized criticality, a paradigm popular for explaining an array of physical phenomena. The collapse of the entropy in the thermodynamic limit is closely related to the subexponential decay of correlations (criticality) of avalanche observables.
The lecture is mostly based on a collaboration with M. Rypdal.
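To make the Zhang model concrete, the following is a minimal sketch of the continuous-energy sandpile on an open-boundary lattice; the threshold, input distribution, and lattice size are illustrative choices, not the parameters analysed in the talk.

```python
# Minimal Zhang sandpile on an L x L lattice with open boundaries.
import numpy as np

rng = np.random.default_rng(0)

def zhang_avalanches(L=20, E_c=1.0, n_grains=5000):
    E = np.zeros((L, L))
    sizes = []
    for _ in range(n_grains):
        i, j = rng.integers(L, size=2)
        E[i, j] += rng.uniform(0.0, 0.5)        # random energy input
        size = 0
        unstable = list(zip(*np.where(E >= E_c)))
        while unstable:
            i, j = unstable.pop()
            if E[i, j] < E_c:
                continue
            share = E[i, j] / 4.0
            E[i, j] = 0.0                        # the site relaxes completely
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:  # energy leaves at the boundary
                    E[ni, nj] += share
                    if E[ni, nj] >= E_c:
                        unstable.append((ni, nj))
        sizes.append(size)
    return np.array(sizes)

if __name__ == "__main__":
    s = zhang_avalanches()
    print("mean avalanche size:", s.mean(), "largest:", s.max())
```

The avalanche sizes collected this way are the kind of observables whose correlation decay is tied to the entropy behaviour in the thermodynamic limit.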
We are interested in the possibility that quantum systems, such as molecules, store and process information. As a first step in exploring this, we introduce a class of quantum finite-state automata. To illustrate the power of these models we analyze several prototype quantum processes, emphasizing the difference between physical and computation-theoretic views of quantum behavior. The quantum automaton analysis reveals structure in behavior that the physical description fails to detect. We also compare the relative generative capabilities of quantum and classical systems.
http://arXiv.org/abs/quant-ph/0608206.
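To fix the notion of a quantum finite-state automaton, here is a generic measure-once sketch: one unitary transition operator per input symbol, a projective accepting subspace, and the acceptance probability of a word. The specific unitaries and the accepting projector are arbitrary illustrative choices, not the prototype processes analysed in the paper.

```python
# Generic measure-once quantum finite-state automaton on one qubit.
import numpy as np

def unitary(theta):
    """A one-qubit rotation used as the transition operator for a symbol."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

U = {"a": unitary(np.pi / 4), "b": unitary(np.pi / 3)}   # transition unitaries
psi0 = np.array([1.0, 0.0])                              # initial state |0>
P_acc = np.diag([0.0, 1.0])                              # accepting subspace |1>

def acceptance_probability(word):
    psi = psi0.copy()
    for symbol in word:
        psi = U[symbol] @ psi
    return float(np.vdot(psi, P_acc @ psi).real)

if __name__ == "__main__":
    for w in ["a", "ab", "abba", "aaaa"]:
        print(w, round(acceptance_probability(w), 3))
```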
Around the turn of the 20th century, Poincaré formulated the idea of studying nature via the qualitative, geometric study of the spaces of mappings we use to model nature. Since then, much of mathematical dynamics, as well as nonlinear dynamics in many applied fields, has worked toward partial solutions of this problem. In this talk I will begin by discussing a construction that provides a means of attacking Poincaré's original problem. In practice, this goal will be achieved using a function space (neural networks) that admits a measure. Using the chosen function space, a Monte Carlo analysis, relative to this measure, of the macroscopic geometric features will be presented. In particular, the geometric quantification will consist of analyzing a function that measures the number of positive Lyapunov exponents (and hence expanding directions) as parameters vary. This function is then rescaled to remove the dependence on dimension and on the number of parameters, so that an analysis can be performed in the asymptotic limit of a large number of dimensions.
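For orientation, the sketch below counts positive Lyapunov exponents of a simple random recurrent map x_{t+1} = tanh(s W x_t) as a gain parameter s varies, using the standard QR (Benettin) method. The network family, its Jacobian, and the parameter values are assumptions made for illustration, not the talk's exact construction.

```python
# Counting positive Lyapunov exponents of a random recurrent map via QR.
import numpy as np

rng = np.random.default_rng(0)

def lyapunov_spectrum(W, s, steps=2000, transient=500):
    d = W.shape[0]
    x = rng.standard_normal(d) * 0.1
    Q = np.eye(d)
    sums = np.zeros(d)
    for t in range(steps):
        x = np.tanh(s * (W @ x))
        J = (1.0 - x ** 2)[:, None] * (s * W)     # Jacobian of the map at x
        Q, R = np.linalg.qr(J @ Q)
        if t >= transient:
            sums += np.log(np.abs(np.diag(R)))
    return sums / (steps - transient)

if __name__ == "__main__":
    d = 16
    W = rng.standard_normal((d, d)) / np.sqrt(d)
    for s in (0.5, 1.0, 2.0, 4.0, 8.0):
        lam = lyapunov_spectrum(W, s)
        print(f"s = {s:4.1f}: positive exponents = {(lam > 0).sum()}")
```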
Random set graphs form a natural class of networks with high local clustering. Basic properties of set graphs and several applications will be presented. In the second half of the talk, threshold properties of generalized epidemic processes on various random graph spaces will be discussed.
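As a quick illustration of the construction (parameters chosen arbitrarily, not those of the talk), the sketch below builds a random set graph by giving each vertex a random subset of attributes and joining vertices whose subsets intersect, then estimates the global clustering coefficient.

```python
# Random set (intersection) graph and its global clustering coefficient.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def random_set_graph(n=200, m=50, p=0.05):
    """n vertices, m attributes; each vertex-attribute pair is present w.p. p."""
    sets = [set(np.where(rng.random(m) < p)[0]) for _ in range(n)]
    edges = {(u, v) for u, v in itertools.combinations(range(n), 2)
             if sets[u] & sets[v]}
    return sets, edges

def clustering(n, edges):
    """Global clustering coefficient: 3 * triangles / connected triples."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    triangles = sum(len(adj[u] & adj[v]) for u, v in edges) / 3
    triples = sum(len(a) * (len(a) - 1) / 2 for a in adj)
    return 3 * triangles / triples if triples else 0.0

if __name__ == "__main__":
    sets, edges = random_set_graph()
    print("edges:", len(edges), "clustering:", round(clustering(200, edges), 3))
```

Shared attributes create many triangles, which is why such graphs exhibit the high local clustering mentioned above.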
A significant challenge in condensed matter science is the discovery and characterization of structure in complex, disordered materials directly from their x-ray diffraction spectra. A broad class of layered materials, called polytypes, can exist in a wide range of both ordered and disordered stacking configurations. Examples of polytypes include micas and kaolins, and substances of technological importance, such as the wide band gap semiconductor silicon carbide. While standard crystallographic techniques can identify most ordered stacking structures, understanding the diffuse diffraction spectra arising from disordered specimens has proven more challenging. In this talk, I will briefly discuss the phenomenon of polytypism at a level suitable for a general scientific audience. I will introduce a novel technique for detecting and characterizing disordered stacking structure directly from x-ray diffraction spectra. The resulting expression for the structure is a directed graph. I will demonstrate the technique on x-ray diffraction spectra obtained from zinc sulphide crystals and show how it provides insight into the complex stacking structure of these crystals and allows for the calculation of material properties of physical import. The techniques introduced here are quite general, and are applicable to the problem of inferring structure (either spatial or temporal) given an experimental signal in the form of a power spectrum.
We derive a class of macroscopic differential equations that describe collective adaptation, starting from a discrete-time stochastic microscopic model. The behavior of each agent is a dynamic balance between adaptation that locally achieves the best action and memory loss that leads to randomized behavior. We show that, although individual adaptive agents interact with their environment and other agents in a purely self-interested way, macroscopic behavior can be interpreted as game dynamics. Application to several explicit interactions shows that the adaptation dynamics exhibits a diversity of collective behaviors. We also analyze the adaptation dynamics in terms of information theory, giving a novel view of collective adaptation.
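A hedged sketch of the flavour of such macroscopic dynamics is given below: replicator-type game dynamics for two agents plus an entropic memory-loss term, integrated by a simple Euler scheme on rock-paper-scissors. The exact functional form, the game, and all parameters here are assumptions made for illustration, not the equations derived in the work.

```python
# Replicator-style adaptation with a memory-loss term (illustrative only).
import numpy as np

# Two agents playing rock-paper-scissors; A is agent X's payoff matrix,
# and the game is zero-sum, so agent Y's payoffs are given by -A^T.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)

def step(x, y, alpha=0.01, beta=1.0, dt=0.01):
    """One Euler step: replicator (adaptation) term plus memory-loss term."""
    fx = A @ y
    fy = -A.T @ x
    mx = -np.log(x) + np.dot(x, np.log(x))     # memory-loss drift for X
    my = -np.log(y) + np.dot(y, np.log(y))
    dx = x * (beta * (fx - x @ fx) + alpha * mx)
    dy = y * (beta * (fy - y @ fy) + alpha * my)
    x, y = x + dt * dx, y + dt * dy
    return x / x.sum(), y / y.sum()             # renormalise against drift

if __name__ == "__main__":
    x = np.array([0.6, 0.3, 0.1])
    y = np.array([0.2, 0.3, 0.5])
    for _ in range(20000):
        x, y = step(x, y)
    print("x:", x.round(3), "y:", y.round(3))
```

With the memory-loss term switched off the strategies cycle, while a small amount of memory loss randomizes behavior and pulls the agents toward a mixed profile, which is the kind of qualitative change in collective behavior the talk examines.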
This seminar will begin with a discussion of a sequence of recent results from our work with high-dimensional scalar neural networks. For convenience of analysis, the neural networks can be partitioned into common dynamic types according to a control parameter. This stratification will be introduced, followed by a discussion of results regarding probable bifurcations from fixed points, both in general dynamical systems and in the specific neural networks we are employing. The discussion will also focus on a series of scaling laws that have been discovered with respect to an increase in the number of parameters and input dimensions of the neural networks. Finally, the seminar will include a discussion of how these scaling laws can be used to understand topological variation with parameter change in the neural networks as the number of parameters and input dimensions becomes large.