Computational history of chemistry is an emerging field leveraging the rather stable ontology of chemistry, represented by its substances and reactions, which constitute the chemical space. Over more than two centuries, chemists have annotated and curated information on the chemical space in a systematic manner, which is today available at our fingertips thanks to digitisation efforts. Here we show results of our explorations of the chemical space between 1800 and the present day. We found that chemists have expanded the space at a stable exponential rate, with the number of new chemicals doubling every 16 years. The stability of this expansion has not been affected in the long term even by social setbacks such as world wars. Using time series analysis methods, we found that the expansion of the space has occurred through a sequence of three statistical regimes, each characterised by a stable variability in the number of newly reported chemicals. Taken in chronological order, the variability drops from regime to regime, indicating a regularisation of the discovery process in chemistry. The three statistical regimes are clearly demarcated by two drastic transitions, one around 1860 and the other around 1980. The first transition was strongly related to the development and adoption of molecular structural theory; the second is presumably related to a technological shift allowing for the synthesis of far more complex compounds. We also found that chemists have followed a simple rule for combining chemicals to expand the chemical space, which we dubbed the fixed-substrate approach: chemical reactions combine no more than three starting materials, of which one is a well-known chemical, while the others are relative newcomers to the chemical space.
By focusing on the recent unfolding of the chemical space, we analysed the role of different countries and traced the surge of China in the early 2000s as the hegemon of the chemical space. We also explored the details of the unfolding of the rare-earth subspace, which is strongly associated with today's widespread electronic technologies. Our results indicate that China is not only the largest producer of rare earths, but also the country producing the most knowledge on these chemicals and on the chemical space in general.

When we turn on a radio or stream a playlist, we can usually recognize the instrument we hear, whether it’s a cello, a guitar, or an operatic voice. Such fidelity was not always true of radio. This lecture shows how the problem of broadcast fidelity pushed German scientists beyond the traditional bounds of their disciplines and led to the creation of one of the most important electronic instruments of the twentieth century.
In the early days of radio, acoustical distortions made it hard for even the most discerning musical ears to differentiate instruments and voices. The physicists and engineers of interwar Germany, with the assistance of leading composers and musicians, tackled this daunting technical challenge. Research led to the invention in 1930 of the trautonium, an early electronic instrument capable of imitating the timbres of numerous acoustical instruments and generating novel sounds for many musical genres. The talk charts the broader political and artistic trajectories of this instrument, tracing how it was embraced by the Nazis and subsequently used to subvert Nazi aesthetics after the war. It even became an important instrument for Hollywood and German films and television commercials.

Webs are graph-like objects appearing at the intersection of low dimensional geometry & topology, combinatorics, representation theory, and mathematical physics. These lectures will introduce webs and some of the tantalizing questions about them. We will focus on the mathematics related to the Lie group $SL(n)$ of n-by-n matrices with determinant one, putting emphasis on the cases n=2 and n=3. No prerequisites are expected.

A cylinder will roll down an inclined plane in a straight line. A cone will wiggle along a circle on that plane and then stop rolling. We ask the inverse question: for which curves drawn on the inclined plane $\mathbb{R}^2$ can one chisel a shape that will roll downhill following precisely this prescribed curve and its translationally repeated copies? This is a nice and easy-to-understand problem, but the solution is quite interesting. (Based on work mostly with Y. Sobolev and T. Tlusty, after the Nature paper "Solid-body trajectoids shaped to roll along desired pathways", August 2023.)

High-dimensional inference problems involving heterogeneous pairwise observations arise in a variety of applications, including covariance estimation, clustering, and community detection. In this talk I will present a unified approach for the analysis of these problems that yields exact formulas for both the fundamental and algorithmic limits. The high-level idea is to model the observations using a linear Gaussian channel whose input is the tensor product of the latent variables. The limits of this general model are then described by a finite-dimensional variational formula, which provides a decoupling between the prior information about the latent variables (usually a product measure) and the specific structure of the observations. I will also discuss some recent results on computationally efficient methods based on approximate message passing.
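The simplest instance of a linear Gaussian channel whose input is a tensor (here outer) product of the latent variable is the rank-one spiked Wigner model; the following toy sketch (my own illustration, with arbitrary dimensions, not the speaker's model or code) shows PCA recovering a Rademacher signal above the spectral threshold:

```python
import numpy as np

# Spiked Wigner model: Y = sqrt(lam/n) * x x^T + W, with x a latent
# Rademacher vector (a product-measure prior) and W a symmetric Gaussian
# noise matrix. For lam > 1 the top eigenvector of Y correlates with x,
# with squared overlap approaching 1 - 1/lam for large n.
rng = np.random.default_rng(4)
n, lam = 1_000, 4.0
x = rng.choice([-1.0, 1.0], n)                 # latent signal
W = rng.standard_normal((n, n))
W = (W + W.T) / np.sqrt(2)                     # symmetric, unit-variance noise
Y = np.sqrt(lam / n) * np.outer(x, x) + W      # noisy pairwise observations

eigval, eigvec = np.linalg.eigh(Y)             # eigenvalues in ascending order
v = eigvec[:, -1]                              # top eigenvector
overlap = abs(v @ x) / np.sqrt(n)              # normalized correlation with x
print(overlap)                                 # close to sqrt(1 - 1/lam) ~ 0.87
```

The exact asymptotic limits for such models (and their tensor generalisations) are what the variational formulas mentioned in the abstract describe.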

In recent years there has been a flurry of activity around singularity formation in the PDEs of fluid dynamics. It culminated in 2022 in computer-assisted proofs of singularity formation from smooth data for the incompressible Euler equation by Chen-Hou, and in related numerics for the Navier-Stokes equations.
It is thus quite possible that the train to the millennium prize is leaving soon, and we want at least to understand how its engine works. Our first station will be a careful reading of the milestone work of Elgindi. Then we can move towards understanding the analytical part of Chen-Hou.

We consider a penalty framework based on regularizing the squared distance to set-based constraints for several core statistical tasks. These distance-to-set penalties convert problems cast as constrained optimization problems to more tractable unconstrained forms, and are more flexible than many existing algebraic and regularization penalties. We will see that they often avoid drawbacks that arise from popular alternatives such as shrinkage. We discuss a general strategy for eliciting effective algorithms in this framework using majorization-minimization (MM), a principle that transfers difficult problems onto a sequence of more manageable subproblems through the use of surrogate functions. Methods derived from this perspective feature monotonicity, are often amenable to acceleration, and come with global convergence guarantees. We showcase new progress on classical problems including constrained generalized linear models and sparse covariance estimation using this approach, and discuss recent connections to Bayesian perspectives on constraint relaxation.
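The MM strategy for distance-to-set penalties can be sketched in a few lines. The following toy example is my own illustration (not the speaker's code): least squares with a nonnegativity constraint, where the squared distance to the constraint set is majorized by the squared distance to the projection of the current iterate, yielding a closed-form update and monotone descent:

```python
import numpy as np

def distance_penalty_mm(A, b, project, rho=1000.0, iters=500):
    """MM for min ||Ax - b||^2 + (rho/2) * dist(x, C)^2.
    `project` is the Euclidean projection onto the constraint set C.
    The surrogate replaces dist(x, C)^2 by ||x - P_C(x_k)||^2, which
    majorizes it and touches it at x_k, so the objective never increases."""
    n = A.shape[1]
    x = np.zeros(n)
    G = 2 * A.T @ A + rho * np.eye(n)      # fixed system matrix
    Atb2 = 2 * A.T @ b
    for _ in range(iters):
        y = project(x)                     # anchor point of the surrogate
        x = np.linalg.solve(G, Atb2 + rho * y)  # closed-form surrogate minimizer
    return x

# Example: soft nonnegativity via projection onto the nonnegative orthant.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
x = distance_penalty_mm(A, b, project=lambda v: np.maximum(v, 0.0))
print(x)  # negative entries are shrunk toward zero, at scale O(1/rho)
```

Unlike a hard projection method, the penalty enforces the constraint softly, with violations vanishing as rho grows; this flexibility is one of the points made in the abstract.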

In this talk we cover the evolution of modern natural language processing and the role that generative models play in this process. We discuss information decomposition for natural language processing, the concepts of semantic and stylistic information, and possible applications of these concepts. We illustrate the talk with several case studies from the field of computational creativity and outline possible further developments of the field. We also suggest why effective semantic representation could be a key bottleneck in transferring NLP ideas to the field of computational biology.

In this informal session, participants will be invited to share their latest thoughts on linear spaces of symmetric matrices. Emre Sertoez will start with geometry inspired by Question 46.

Linear spaces of symmetric matrices (LSSM) appear naturally in many branches of mathematics. They represent spaces of quadrics, special statistical models, partially symmetric tensors and more. In the first part of the talk we will focus on basic invariants of LSSM, presenting examples and showing possible difficulties. We will motivate the introduction of objects known from pure algebraic geometry, such as line bundles, divisors and cohomology rings. Guided by the fact that 'smoothness' and 'properness' are our best friends, we will arrive at the space of complete quadrics. In the second part of the talk we will show how the theory of symmetric functions and cohomology rings can provide answers to very basic questions about LSSM.

Differential Geometry Seminar. The Kato condition is a tool from the perturbation theory of Dirichlet forms used to control perturbed heat semigroups. Using it as a condition more general than $L^p$-bounds on the negative part of the Ricci curvature, I will discuss several recent results, in part obtained with Gilles Carron from Nantes, such as Lichnerowicz and isoperimetric-constant estimates for compact manifolds, as well as a very recent generalization of Myers' compactness theorem.

The (symmetric) rank of a (symmetric) tensor is the smallest length of an expression of the tensor as a linear combination of (symmetric) decomposable tensors. The border rank of a (symmetric) tensor is the smallest (symmetric) rank of the elements of a one-parameter family of (symmetric) tensors whose limit is equal to the given tensor. Computing the rank and border rank of an explicit tensor can be a very difficult task. Upper bounds are often found by providing explicit expressions of the tensor, but lower bounds require theoretical arguments and are usually more difficult to find. In the case of border rank, a lower bound is given by the so-called asymptotic rank of a tensor, which measures the growth of the rank when considering tensor powers of the tensor. This notion, which can be connected to classic works by Strassen, was recently defined in a paper by Christandl, Gesmundo and Jensen.
In this talk, I want to show how algebraic tools from apolarity theory can be used to compute these different notions of rank in the case of symmetric tensors, i.e., homogeneous polynomials. In particular, I will show how to apply these tools in the case of monomials. This talk is based on the recent pre-print arXiv:1907.03487 with Matthias Christandl and Fulvio Gesmundo (U. of Copenhagen).
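As a point of reference (stated here from the general literature, not taken from the preprint itself): for monomials, apolarity yields the closed formula of Carlini, Catalisano and Geramita for the Waring (symmetric) rank,

$$\operatorname{rk}\bigl(x_0^{a_0} x_1^{a_1} \cdots x_n^{a_n}\bigr) \;=\; \prod_{i=1}^{n} (a_i + 1), \qquad 0 < a_0 \le a_1 \le \cdots \le a_n,$$

so, for example, $\operatorname{rk}(x y^2 z^3) = 3 \cdot 4 = 12$.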

This talk will be an introduction to dimensionality reduction with random projections, their use in the area of compressed sensing, and connections to deep learning. In the first part, we begin by discussing distance-preserving linear embeddings as in the classical Johnson-Lindenstrauss lemma. Then, we introduce the area of compressed sensing (inverse problems with sparsity constraints) and show how it relies on random measurement matrices satisfying the so-called restricted isometry property (RIP) as a sufficient guarantee for sparse recovery. In the second part, we will relate the previously discussed techniques to neural networks, including topics such as random initialization, (sparse) signal recovery using neural networks, and compressed sensing with generative models.
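The Johnson-Lindenstrauss phenomenon is easy to check empirically; the sketch below (an illustration of my own, with arbitrarily chosen dimensions) projects 50 points from 10,000 to 1,000 dimensions with a scaled Gaussian matrix and inspects the distortion of all pairwise distances:

```python
import numpy as np

# Random Gaussian projection: rows scaled by 1/sqrt(k) so that
# E ||P x||^2 = ||x||^2. All pairwise distance ratios then concentrate
# near 1, at scale roughly sqrt(1/k).
rng = np.random.default_rng(1)
n, d, k = 50, 10_000, 1_000
X = rng.standard_normal((n, d))
P = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ P.T

def pairwise(Z):
    """Pairwise Euclidean distances via the Gram matrix (memory-friendly)."""
    g = Z @ Z.T
    sq = np.diag(g)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * g, 0.0))

D, Dp = pairwise(X), pairwise(Y)
iu = np.triu_indices(n, 1)          # each unordered pair once
ratio = Dp[iu] / D[iu]              # projected / original distance
print(ratio.min(), ratio.max())     # both close to 1
```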

Recent empirical research has revealed striking regularities in growth that apply from algae to elephants and across levels of organization from embryos to ecosystems. Across these very distinct systems, growth scales with mass raised to a power near ¾, suggestive of a universal dynamical process that is not well understood. I will review the ubiquity of these patterns and highlight their surprising implications in ecological and evolutionary theory. I will also outline some directions we are exploring to understand this pattern, including a simple model that could offer insight into the basic processes generating these scaling laws.

What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine to provide complementary information?
The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions; however, this structure was constructed using a much criticised measure of redundant information. Despite much research effort, no satisfactory replacement measure has been proposed. Pointwise partial information decomposition takes a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from the set of variables. In order to do this, one must overcome the difficulties associated with signed pointwise mutual information. This is done by applying the decomposition separately to the non-negative entropic components of the pointwise mutual information, which are referred to as the specificity and the ambiguity. Then, based upon an operational interpretation of redundancy, measures of redundant specificity and ambiguity are defined. It is shown that the decomposed specificity and ambiguity can be recombined to yield the sought-after partial information decomposition. The decomposition is applied to canonical examples from the literature and its various properties are discussed. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into interpreting the well-known two-bit copy example.
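The split of pointwise mutual information into the two non-negative parts named in the abstract is elementary to compute; the tiny example below (my own illustration, using the specificity/ambiguity terminology from the abstract rather than any code of the authors) verifies the identity on a binary joint distribution:

```python
import numpy as np

# Pointwise mutual information decomposes as i(s;x) = h(s) - h(s|x),
# where h(s) = -log2 p(s) is the specificity (informativeness of s)
# and h(s|x) = -log2 p(s|x) is the ambiguity (what x leaves unresolved).
# Both parts are non-negative even though i(s;x) itself can be negative.
p = np.array([[0.4, 0.1],          # joint distribution p(s, x), binary S and X
              [0.1, 0.4]])
ps = p.sum(axis=1, keepdims=True)  # marginal p(s)
px = p.sum(axis=0, keepdims=True)  # marginal p(x)

h_s = -np.log2(ps)                 # specificity: depends on s only
h_s_given_x = -np.log2(p / px)     # ambiguity: -log2 p(s|x)
i_pointwise = h_s - h_s_given_x    # pointwise mutual information

mi = float((p * i_pointwise).sum())  # expectation recovers I(S;X)
print(mi)                            # 1 - H(0.8) ~ 0.278 bits
```

The decomposition in the abstract applies the redundancy-lattice construction separately to these two non-negative components before recombining them.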

Chemists devote most of their efforts to transforming substances; therefore, knowledge about reactants, catalysts, solvents and other aspects of chemical reactions is of interest. Part of this knowledge is its history, which involves determining the aspects that have shaped chemical reactions into their current state. Given the exponential growth of reactions and substances, a computational approach to the history of chemical reactions is needed. Here we show the patterns behind the development and use of substances and reaction conditions. We explored a set of more than 45 million chemical reactions and uncovered historical patterns for the substances, catalysts, solvents, temperatures, pressures and yields of those reactions. These results can be regarded as the background for further studies on the temporal changes of reaction aspects, e.g. the network structure relating substances with reactions.

In this talk, I will explain why rank-one convexity implies quasiconvexity on the two-by-two upper-triangular matrices. This extends the result on diagonal matrices, proved by Müller in 1999. This is joint work with Bernd Kirchheim and Chun-Chi Lin.

Social dilemmas are situations in which individuals have an incentive to defect at the expense of other group members. Fortunately, when such situations occur repeatedly, reciprocal strategies like Tit-for-Tat can resolve the social dilemma. However, William Press and Freeman Dyson, a computer scientist and a theoretical physicist, have recently shown that repeated interactions also allow individuals to extort their peers. Using an extortionate strategy, an individual can force the co-player to cooperate, although the individual itself is not fully cooperative. In my talk, I will explain how these strategies work, and I will review recent theoretical and experimental results on when extortion pays. At the end of my talk, I will also briefly discuss the impact of a player's memory on the prospects of cooperation in repeated games.
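How an extortionate strategy works can be checked numerically. The sketch below (my own illustration; the payoff values and the opponent are arbitrary choices, while the strategy follows the standard Press-Dyson zero-determinant parametrization) verifies that the extortioner's surplus over the punishment payoff is pinned to a fixed multiple of the co-player's:

```python
import numpy as np

# Iterated prisoner's dilemma with memory-one strategies. The extortionate
# zero-determinant strategy p enforces s_X - P = chi * (s_Y - P) against
# ANY memory-one co-player q, for any admissible scaling phi > 0.
T, R, P, S = 5.0, 3.0, 1.0, 0.0            # temptation, reward, punishment, sucker
chi, phi = 3.0, 0.05                        # extortion factor and scaling
p = np.array([1 - phi * (chi - 1) * (R - P),            # P(coop | CC)
              1 - phi * ((P - S) + chi * (T - P)),      # P(coop | CD)
              phi * ((T - P) + chi * (P - S)),          # P(coop | DC)
              0.0])                                     # P(coop | DD)
q = np.array([0.7, 0.4, 0.6, 0.3])          # arbitrary memory-one opponent
qq = q[[0, 2, 1, 3]]                        # opponent sees CD and DC swapped

# Markov chain over last-round outcomes (CC, CD, DC, DD) and its stationary law.
M = np.array([[pi * qi, pi * (1 - qi), (1 - pi) * qi, (1 - pi) * (1 - qi)]
              for pi, qi in zip(p, qq)])
w, v = np.linalg.eig(M.T)
stat = np.real(v[:, np.argmin(np.abs(w - 1.0))])
stat /= stat.sum()

s_X = stat @ np.array([R, S, T, P])         # extortioner's long-run payoff
s_Y = stat @ np.array([R, T, S, P])         # co-player's long-run payoff
print(s_X - P, chi * (s_Y - P))             # the two sides coincide
```

The co-player can only raise its own payoff by raising the extortioner's three times as much, which is exactly the leverage the abstract describes.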

In neuroscience, predictive coding theory (PCT) arguably has become the most comprehensive theory of brain functioning, action and perception. PCT proposes that the brain exploits statistical regularities in its input to facilitate perception. Exploiting regularities is thought to happen (i) either by passing on sensory evidence matching internal predictions (reliability coding), (ii) or by passing on surprising sensory evidence not matching predictions (error coding). It is typically not known when and where in the brain a mismatch between sensory evidence and prediction occurs. Accordingly, when experimentally testing the two strategies, we cannot be certain whether an observed neural signal reflects a matching prediction or a prediction error. The interpretation of the neural signal thus often depends on the experimenter's a priori belief about which strategy the system uses. What is needed, therefore, is a way of testing the type of coding without relying on the semantics of the neural signals.
In this talk, I will present the framework of local information dynamics as one way to investigate neural coding in a semantics-free fashion. In this framework, local active information storage (LAIS) quantifies the predictable information in a time series, while local transfer entropy (LTE) quantifies the transferred information, for every sample. We applied these measures to existing spike train recordings from 17 pairs of retinal ganglion cells (RGC) and lateral geniculate nucleus (LGN) cells in the anesthetized cat. By evaluating the correlation between LAIS and LTE, we tested whether the synapse preferentially transferred predictable or surprising information, which allowed us to distinguish reliability coding from error coding. For computing the information theoretic measures we used discrete estimators implemented in the JIDT toolbox together with a localized bias correction. We found a positive correlation of LAIS and LTE in all cell pairs, which was stronger for pairs with a strong connection strength (contribution). This suggests that retinal inputs to LGN cells are preferentially passed on when reliable. We believe that our framework will be useful to improve the understanding of PCT and of neural information processing in general.

The talk summarizes results of https://arxiv.org/abs/1701.07805 (joint work with P. Banerjee, E. Olbrich, J. Jost, N. Bertschinger).
We consider the problem of quantifying the information shared by a pair of random variables X1, X2 about another variable S. We propose a new measure of shared information, called extractable shared information, that is left monotonic; that is, the information shared about S is bounded from below by the information shared about f(S) for any function f. We show that our measure leads to a new nonnegative decomposition of the mutual information I(S;X1X2) into shared, complementary and unique components. We study properties of this decomposition and show that a left monotonic shared information is not compatible with a Blackwell interpretation of unique information. We also discuss whether it is possible to have a decomposition in which both shared and unique information are left monotonic.

Recent approaches in applied complexity research use methods from network science, diversity measurement and econometrics to (1) reveal the relatedness between different knowledge fields and to (2) predict the path of knowledge diversification in complex socioeconomic systems (Hidalgo et al., 2007; Guevara et al., 2016a,b; Hartmann et al., 2017a,b). Here we show two applications of these methods to global maps of science (Guevara et al., 2016b) and labor market dynamics (Hartmann et al., 2017a,b). In the first case, we use data from over 12 million publications, 300,000 scholars and 300 scientific fields to evaluate the accuracy of the research space—a new map of science based on co-publications—to predict the knowledge diversification of scientists, universities and countries. In the second case, we make use of an occupational dataset on 40 million Brazilian employees to map the knowledge relatedness between 600 occupations and 670 industries, and use logistic regression to predict the regional diversification dynamics of 558 Brazilian micro-regions. Both applications show that new interdisciplinary methods from complexity research can help to move beyond simplifying equilibrium approaches towards a more scientifically accurate and more practical understanding of economies and societies as complex evolving systems.
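The relatedness measure underlying such maps can be sketched compactly. The following toy example (my own illustration on a made-up matrix, not the papers' data or code) computes the Hidalgo-et-al-style proximity between activities from a binary presence matrix:

```python
import numpy as np

# Proximity between two fields i and j: phi_ij = min( P(i|j), P(j|i) ),
# i.e. the minimum conditional probability that an actor active in one
# field is also active in the other. Linking fields with high proximity
# produces the "space" (research space, product space, industry space).
M = np.array([[1, 1, 0, 0],     # rows: actors (e.g. regions, scholars)
              [1, 1, 1, 0],     # cols: activities (e.g. fields, industries)
              [0, 1, 1, 1],
              [0, 0, 1, 1]])

co = M.T @ M                            # co-occurrence counts between fields
ubiquity = np.diag(co).astype(float)    # how many actors have each field
cond = co / ubiquity[None, :]           # P(i | j) = co-occurrence / ubiquity of j
phi = np.minimum(cond, cond.T)          # symmetrize with the minimum
np.fill_diagonal(phi, 0.0)
print(phi.round(2))
```

Diversification predictions then typically ask whether an actor's existing activities are proximate to a candidate new activity.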

Transfer entropy has been used to quantify the directed flow of information between source and target variables in many complex systems. Originally formulated in discrete time, we provide a framework for considering transfer entropy in continuous time systems. By appealing to a measure theoretic formulation we generalise transfer entropy, describing it in terms of Radon-Nikodym derivatives between measures of complete path realisations. The resulting formalism introduces and emphasises the idea that transfer entropy is an expectation of an individually fluctuating quantity along a path, in the same way we consider the expectation of physical quantities such as work and heat. We recognise that transfer entropy is a quantity accumulated over a finite time interval, whilst permitting an associated instantaneous transfer entropy rate. We use this approach to produce an explicit form for the transfer entropy for pure jump processes, and highlight the simplified form in the specific case of point processes (frequently used in neuroscience to model neural spike trains). We contrast our approach with previous attempts to formulate information flow between continuous time point processes within a discrete time framework, which incur issues that our continuous time approach naturally avoids. Finally, we present two synthetic spiking neuron model examples to exhibit the pertinent features of our formalism, namely that the information flow for point processes consists of discontinuous jump contributions (at spikes in the target) interrupting a continuously varying contribution (relating to waiting times between target spikes).
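For contrast with the continuous-time formulation discussed in the abstract, the standard discrete-time transfer entropy admits a simple plug-in estimate. The sketch below (my own illustration with history length 1 on a toy coupled binary process, not the authors' estimator) shows a clearly positive information flow in the driving direction only:

```python
from collections import Counter

import numpy as np

# Discrete-time transfer entropy with history length 1:
#   TE(X -> Y) = sum_{y', y, x} p(y', y, x) * log2[ p(y'|y, x) / p(y'|y) ].
# Y copies X with a one-step lag and 10% flip noise, so TE(X -> Y) should be
# about 1 - H(0.9) ~ 0.53 bits, while TE(Y -> X) should be near zero.
rng = np.random.default_rng(2)
n = 200_000
x = rng.integers(0, 2, n)
flip = rng.random(n) < 0.1
y = np.empty(n, dtype=int)
y[0] = 0
y[1:] = np.where(flip[1:], 1 - x[:-1], x[:-1])

def transfer_entropy(src, tgt):
    """Plug-in TE(src -> tgt) in bits, history length 1."""
    triples = Counter(zip(tgt[1:], tgt[:-1], src[:-1]))   # (y', y, x) counts
    n_tot = sum(triples.values())
    pairs_yx = Counter((yp, xp) for _, yp, xp in triples.elements())
    pairs_yy = Counter((ynew, yp) for ynew, yp, _ in triples.elements())
    marg_y = Counter(yp for _, yp, _ in triples.elements())
    te = 0.0
    for (ynew, yp, xp), c in triples.items():
        p_joint = c / n_tot
        p_full = c / pairs_yx[(yp, xp)]                   # p(y' | y, x)
        p_hist = pairs_yy[(ynew, yp)] / marg_y[yp]        # p(y' | y)
        te += p_joint * np.log2(p_full / p_hist)
    return te

print(transfer_entropy(x, y), transfer_entropy(y, x))
```

For point processes, as the abstract explains, forcing spike trains into such discrete bins is exactly what introduces the issues the continuous-time formalism avoids.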

The notion of typical sets in information theory is central to the design of efficient coding schemes for communication. We describe novel conceptual and mathematical links between this core information-theoretic notion with its associated asymptotic properties, and properties of genotypes as long sequences of polymorphic markers sampled from multiple populations. We demonstrate that a population assignment scheme based on set-typicality of genetic sequences, entropy and cross-entropy rates of populations, is theoretically viable, and may be of interest particularly in cases of ’noise’ introduced from small samples.
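The asymptotic property behind typicality is easy to demonstrate numerically; the sketch below (my own toy illustration, treating markers as i.i.d. Bernoulli alleles rather than real population data) shows the per-symbol log-probability of a long sequence concentrating at the source entropy:

```python
import numpy as np

# Asymptotic equipartition property: for an i.i.d. source,
# -(1/n) * log2 p(x^n) converges to the entropy H, so almost all long
# sequences are "typical" and carry probability about 2^(-n H).
rng = np.random.default_rng(3)
p1 = 0.3                                          # allele frequency
H = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
n = 100_000
seq = rng.random(n) < p1                          # one long genotype-like sequence
k = seq.sum()                                     # number of 1-alleles observed
log_p = k * np.log2(p1) + (n - k) * np.log2(1 - p1)
print(-log_p / n, H)                              # empirical rate vs entropy
```

A typicality-based assignment scheme essentially asks, for each candidate population, whether the sequence's empirical rate matches that population's entropy (or cross-entropy) rate.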

Understanding the normal and diseased human brain crucially depends on reliable knowledge of its anatomical microstructure and functional micro-organization (e.g., cortical layers and columns of 200-1000µm dimension). Even small changes in this microstructure can cause debilitating diseases. Until now, this microstructure could only be reliably determined using invasive methods, e.g. ex vivo histology. This limits neuroscience, clinical research and diagnosis.
I will discuss how an interdisciplinary approach developing novel MRI acquisition methods, image processing methods and integrated biophysical models aims to achieve quantitative histological measures of brain tissue, leading to the emerging field of in vivo histology using MRI. In particular, I will present recent methodological advances in quantitative MRI and related biophysical modelling. Examples will include: characterization of cortical myelination and its relation to function; mapping of the axonal g-ratio in a population; changes due to spinal cord injury; and age-related brain changes. The presentation will conclude with an outlook on future developments, applications and the potential impact of in vivo histology using MRI.

I will describe a neural-network architecture that we developed to simulate and explain, at the cortical level, word learning and language processes as they are believed to occur in motor and sensory primary, secondary and higher association areas of the left frontal and temporal lobes of the human brain. The model was built to closely reflect known anatomy and neurobiological features of the corresponding cortices, including sparse and patchy connectivity, Hebbian synaptic plasticity, and spontaneous neuronal firing. We simulated early stages of word learning by repeatedly confronting the network with correlated patterns of sensorimotor input; as a result of learning, memory traces for words spontaneously emerged in it, consisting of distributed, strongly connected perception-action circuits (or Hebbian "cell assemblies") that exhibited complex, non-linear dynamics.
In the second part I will attempt to show how the cortical distribution, functional behaviour and competitive interactions of these action-perception circuits, along with the underlying network’s connectivity structure, can go a long way in explaining a body of experimental data and phenomena. By way of example, I will describe the model’s mechanistic accounts of brain indexes of auditory change detection, neurophysiological responses to familiar words and unknown lexical items, the complex interactions of language and attention, and, finally, the emergence and cortical topography of neural processes underlying the spontaneous formation of an intention to speak. I will conclude by arguing for an approach to neuroscience research based on the theory-driven application of experimental methods in conjunction with, and grounded upon, biologically realistic neurocomputational modelling.

Biological networks are a result of evolution and natural selection. Biological network structure, to some extent, reflects historical contingency as evolution proceeds by random tinkering. However, biological networks have evolved to perform specific functions, and these functional constraints may give rise to general design principles. In this talk, I will summarize some design principles of metabolic and gene regulatory networks uncovered by my research.

Cognitive theory has decomposed human mental abilities into cognitive (sub-)systems, and cognitive neuroscience has succeeded in disclosing a host of relationships between cognitive systems and specific structures of the human brain. However, an explanation of why specific functions are located in specific places had still been missing, along with a neurobiological model that makes concrete the neuronal circuits that carry thoughts and meaning. Brain theory now offers an avenue towards explaining brain-mind relationships and spelling out cognition in terms of neuron circuits in a neuromechanistic sense. Central to this endeavor is the theoretical construct of an elementary functional neuronal unit above the level of individual neurons and below that of whole brain areas and systems: the distributed neuronal assembly (DNA) or thought circuit (TC). I will argue that the DNA/TC theory of cognition offers an integrated explanatory perspective on brain mechanisms underlying a range of cognitive processes, including perception, action, language, attention, memory, decision, and conceptual thought (1,2). DNAs are proposed to carry all of these functions, and their inner structure (e.g., core and halo subcomponents) along with their functional activation dynamics (e.g., ignition and reverberation processes) explains crucial questions about the cortical localization of cognitive function, including the question why memory and decisions draw on frontoparietal 'multi-demand' areas although memory formation is normally driven by information in the senses and in the motor system. We suggest that the ability to build DNAs/TCs spread out over different cortical areas is the key mechanism for a range of specifically human sensorimotor, linguistic and conceptual capacities, and that the cell-assembly mechanism of overlap reduction is crucial for differentiating a rich vocabulary of actions, symbols and concepts (3).
1. Pulvermüller, F. (2013). How neurons make meaning: Brain mechanisms for embodied and abstract-symbolic semantics. Trends in Cognitive Sciences, 17(9), 458-470.
2. Pulvermüller, F., & Fadiga, L. (2010). Active perception: Sensorimotor circuits as a cortical basis for language. Nature Reviews Neuroscience, 11(5), 351-360.
3. Pulvermüller, F., Garagnani, M., & Wennekers, T. (2014). Thinking in circuits: Towards neurobiological explanation in cognitive neuroscience. Biological Cybernetics, in press.

We discuss a nilsequence version of the classical Wiener-Wintner theorem on convergence of weighted ergodic averages, due to Host and Kra, and present a uniform version of this result. This is joint work with Pavel Zorin-Kranich.

In this talk, I will discuss newer insights from psychology and the normative belief revision literature with the goal of a better understanding of human inductive reasoning.
I follow Johnson-Laird's definition and call a reasoning process inductive if it adds information that is not in the factual premises. For instance, reasoning using analogies is inductive.
A motivating example of inductive reasoning is the formation of expectations for novel business opportunities, like Amazon.com from the perspective of 1998 or Google from the perspective of 2004. This example will be used throughout the talk, not least because the explorations are motivated by the attempt to develop an inductive decision theory for economics.
In inductive reasoning processes, inconsistencies can appear and need to be resolved. I will apply insights from cognitive science and psychology to understand how decision-makers resolve such inconsistencies. To come back to the example: for Amazon, some analysts in 1998 argued that its margins should be higher than the margins of large stores like Barnes and Noble, as Amazon is a direct seller with a model like Dell computers. Other analysts argued that Amazon has the properties of a physical retailer, and that large stores like Barnes and Noble should generate higher margins as they have greater purchasing power. Essentially, in my work I try to understand how decision-makers (should) deal with such inconsistent inductive explanations.
I will start with a (very rough) overview of the normative belief revision literature, which started with Gärdenfors' developments in the 1980s. I will present some elements of the model offered by Hans Rott (2009), as it is compatible with the older developments and offers notation that is much easier to grasp. I use the normative models of belief revision as a benchmark, in particular to discuss to what extent these models are a better basis for discussing human inductive thought than models of Bayesian updating. Put briefly, Harman (1986) already argued that it is implausible, both psychologically and philosophically, that humans entertain a huge space of consistent conditional probabilities, yet these would be required for Bayesian updating.
I will then discuss newer insights from mental model theory (Johnson-Laird) regarding whether and how actual humans detect inconsistencies and how they revise their beliefs. In the context of inductive reasoning, the unresolved and interesting question is why expert reasoners give up (or simply forget) one inductive explanation and take on another.
Overall, the talk will be rather psychological. It will not have an AI or machine learning focus, but the final goal is to understand human inductive reasoning. One aim is to work out psychological sources of differences in the skill of forming expectations for novel opportunities, which may serve as sources of advantage and potentially explain puzzles in economics and management science, such as the origins of competitive advantages. Three examples of such sources are: the skills of decision-makers to generalize, to detect and resolve inconsistencies, and to distinguish inductive explanations that affect an important consequence from those that do not.

A network of coupled limit-cycle oscillators with delayed interactions is considered. The parameters characterizing each oscillator's frequency and limit cycle are allowed to self-adapt. Adaptation is due to time-delayed state variables that mutually interact via a network. The self-adaptive mechanisms ultimately drive all coupled oscillators to a consensual cyclo-stationary state in which the parameter values are identical for all local systems and analytically expressible. The interplay between the spectral properties of the coupling matrix and the time delays determines the conditions under which convergence towards a consensual state takes place. Once reached, this consensual state subsists even if the interactions are removed. In our class of models, the consensual values of the parameters depend neither on the delays nor on the network's topology.

We introduce novel polynomial deformations of the 3-dimensional $A_1$ algebras, which give rise to an algebraization of a very general Hamiltonian of interest in atomic, molecular, nuclear and optical physics. We construct the unitary representations and the corresponding differential operator realizations of the polynomial algebras. This enables us to transform the Hamiltonian into a higher order differential operator which is quasi-exactly solvable. We solve the Hamiltonian differential equation by the functional Bethe ansatz, thus obtaining the exact solutions of the general Hamiltonian. This includes as special cases solutions of many interesting models, such as Bose-Einstein condensate models, the Lipkin-Meshkov-Glick model and the Tavis-Cummings model.

To understand the brain means to reconstruct the mutual influences its parts exert on each other. These influences may rely on a large number of different neural interaction mechanisms - many of which are nonlinear in nature. Hence, estimators for neural interactions that are free of an explicit interaction model promise to give a more comprehensive overview of all interactions in a network. A suitable metric of this kind is transfer entropy, an information-theoretic implementation of Wiener's principle of causality.
While this measure is a straightforward translation of Wiener's principle conceptually, several practical challenges have to be met when applying it to neural data, for example the handling of non-negligible interaction delays, difficulties in state space reconstruction and in embedding parameter estimation from noisy data.
More fundamental problems relate to non-stationarities in the data. Simulation results show that, for data based on repeating segments ('trials'), the number of data points fed into the estimator per segment can be reduced (to approach stationary pieces of data) if the number of segments is increased and a statistical comparison against surrogate data is used. Whether these results pertain only to specific simulations or represent a general principle is unknown.
Another fundamental problem is the estimation of the patterns of information flow for multivariate data. While multivariate transfer entropy can in principle be used to isolate direct influences, in practical settings finite data size usually prevents this use of the estimator, and conservative selection strategies for the necessary parameters are unknown. We suggest an approximate solution for this problem based on the potential of modified transfer entropy estimators to reconstruct the time lag of the interaction.
All influence measures based on Wiener's principle also show a false positive bias in the presence of crosstalk between source and target signals and the additional presence of unequal noise profiles. A heuristic approach to test for the presence of this bias based on time shifting data to transform instantaneous influences to time-lagged influences will be presented.
This talk aims at stimulating the discussion about future improvements in the use of information theoretic influence measures in neuroscience.
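To make the basic quantity concrete before any of these refinements, here is a minimal plug-in sketch of transfer entropy for discrete time series. The function name, the target history length of one, and the simple lag handling are illustrative choices made here, not the estimators discussed in the talk:

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, lag=1):
    """Plug-in transfer entropy estimate (in bits) from a discrete
    source series to a discrete target series, with target history
    length 1 and a configurable source->target interaction lag."""
    y_next = target[lag:]          # target future
    y_past = target[lag - 1:-1]    # target past (one step of history)
    x_past = source[:-lag]         # lagged source
    n = len(y_next)
    joint = Counter(zip(y_next, y_past, x_past))
    pair_yx = Counter(zip(y_past, x_past))
    pair_yy = Counter(zip(y_next, y_past))
    marg_y = Counter(y_past)
    te = 0.0
    for (yn, yp, xp), c in joint.items():
        p_joint = c / n
        # ratio p(y_next | y_past, x_past) / p(y_next | y_past)
        num = c / pair_yx[(yp, xp)]
        den = pair_yy[(yn, yp)] / marg_y[yp]
        te += p_joint * np.log2(num / den)
    return te
```

On a series that simply copies its source with a one-step delay, this estimator approaches one bit in the driving direction and stays near zero in the reverse direction, illustrating the directional asymmetry that Wiener's principle demands.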

Random Boolean networks (RBNs) have been used as models of gene regulatory networks. In this talk, we study the order-chaos phase transition in these networks. In particular, we seek to characterise the phase diagram in information-theoretic terms, focusing on the effect of the control parameters (activity level and connectivity). Fisher information, which measures how much system dynamics can reveal about the control parameters, offers a natural interpretation of the phase diagram in RBNs. We report that this measure is maximised near the order-chaos phase transition in RBNs, since this is the region where the system is most sensitive to its parameters.
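A minimal sketch of the underlying order-chaos diagnostic (not the Fisher-information machinery itself, which is more involved): a Derrida-style perturbation probe on a toy RBN, with all names and parameter values chosen here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rbn(n, k):
    """Random Boolean network: each node reads k random inputs
    through a random Boolean function (a random truth table)."""
    inputs = rng.integers(0, n, size=(n, k))
    tables = rng.integers(0, 2, size=(n, 2 ** k))
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: encode each node's k input bits as an
    index into its truth table."""
    k = inputs.shape[1]
    idx = np.zeros(len(state), dtype=int)
    for j in range(k):
        idx = (idx << 1) | state[inputs[:, j]]
    return tables[np.arange(len(state)), idx]

def perturbation_spread(n=200, k=2, steps=50):
    """Derrida-style probe: evolve two states differing in one bit and
    return the final normalized Hamming distance.  Growing distances
    signal the chaotic phase; shrinking distances signal order."""
    inputs, tables = make_rbn(n, k)
    a = rng.integers(0, 2, size=n)
    b = a.copy()
    b[0] ^= 1
    for _ in range(steps):
        a = step(a, inputs, tables)
        b = step(b, inputs, tables)
    return np.mean(a != b)
```

For unbiased random functions the transition sits at connectivity K = 2: averaging `perturbation_spread` over realizations gives near-zero spread for K = 1 and substantial spread for K = 4, bracketing the critical region where the talk reports Fisher information to be maximised.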

We first give sharp lower bound estimates for the Gaussian curvature of the level sets of harmonic functions on convex rings. Then we study the relation between the Gaussian curvature and the height of the harmonic function. Finally, we report our recent work on convexity for parabolic equations.

The talk summarizes three `case studies'. The first concerns economic forecasting, where we establish the primacy of location shifts in forecast failure. Equilibrium-correction models (EqCMs) face serious forecasting problems, but mechanistic corrections help compared to just retaining a pre-break estimated model, although an estimated model of the break process could outperform. This sets the scene for the rest of the talk. The second concerns model selection. Economies are high dimensional, and forecast failure reveals non-constancy, so many features of models cannot be derived by prior reasoning, intrinsically involving empirical discovery and theory evaluation. Fitting a pre-specified model limits discovery, but automatic methods can formulate much more general models with many variables, long lag lengths and non-linearities, allowing for outliers, data contamination, and parameter shifts; select congruent parsimonious-encompassing models even with more candidate variables than observations, while embedding the relevant theory; and then rigorously evaluate the selected models to ascertain their viability. The third concerns inter-temporal optimization and the formation of `rational expectations', where misleading results follow from present approaches when applied to realistic economies.

Most ancient cities in the Mediterranean show little above ground to testify to hundreds or even a thousand years of town life. It would take centuries to excavate even small parts of them, and today heritage legislation prohibits such intervention unless the city is at risk of modern destruction. As a result of these factors, archaeologists have been developing methods to reconstruct ancient settlements based on surface exploration. Systematic surface surveys allow field archaeologists to collect a huge quantity of data in a relatively short amount of time. Processing these data, however, is a time-consuming procedure. Also, organising them in a coherent and meaningful form that can be presented both to specialists and to the general public is a challenging task. This presentation will deal with these topics and will introduce the applications that CEEDS is exploring to better handle large archaeological datasets and to visualise them in innovative ways.

This talk covers our most recent results on coupled linear systems with multiple delays. In particular, we discuss the state-of-the-art in extracting stability maps in delay parameter space, and how the limitations in the existing studies can be circumvented with our new method, called Advanced Clustering with Frequency Sweeping (ACFS) (IEEE TAC, Feb 2011). This method is a non-trivial cross-fertilization of root clustering paradigms and frequency sweeping techniques, and allows us to take cross-sectional views of high-dimensional stability maps. The proposed method is based on algebra, is computationally efficient, and satisfies the necessary and sufficient conditions of stability.
In ACFS, we reveal that the upper and lower bounds of the frequency parameter we sweep can actually be calculated non-conservatively. This new result allows frequency sweeping techniques to sweep the frequency only within these bounds, simplifying existing practice. We then convert the bound calculations into a test of delay-independent stability. Using algebraic geometry, we build a mathematical approach that can test whether a linear system with multiple delays is delay-independent stable. This becomes possible primarily by identifying whether or not the frequency upper/lower bounds exist at all. Although the existence of such bounds would normally need to be confirmed by infinite-dimensional analysis, our delay-independent stability test requires only checking the roots of a finite number of single-variable polynomials. With such a dramatic simplification, the test becomes tractable and computationally efficient, while still remaining necessary and sufficient in terms of stability (IEEE TAC, accepted).
We next discuss our most recent results on the interplay between network topology, delays, and stability. On a benchmark consensus dynamics, we demonstrate the concept of the Responsible Eigenvalue, the single eigenvalue of the graph Laplacian that determines the delay margin of the entire consensus system, which otherwise poses an infinite-dimensional stability problem (IET Control Theory & Applications, accepted). With this simplification at hand, we are able to analyze the delay margins of large-scale consensus networks, to design the network structure so that the delay margin becomes larger (delay-tolerant topology design), and to construct controllers that tune the responsible eigenvalue in real time so that the consensus system attains autonomy.
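For the simplest benchmark, single-integrator consensus x' = -L x(t - tau) on an undirected graph, the classical delay margin is tau* = pi / (2 * lambda_max(L)), so the largest Laplacian eigenvalue plays exactly the role of a responsible eigenvalue. The sketch below illustrates that textbook special case (the talk's results are considerably more general); the helper names are ours:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A from an adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    return np.diag(adj.sum(axis=1)) - adj

def delay_margin(adj):
    """Delay margin of single-integrator consensus x' = -L x(t - tau)
    on an undirected graph: tau* = pi / (2 * lambda_max), where the
    largest Laplacian eigenvalue acts as the responsible eigenvalue."""
    lam = np.linalg.eigvalsh(laplacian(adj))
    return np.pi / (2 * lam[-1])

# Path 0 - 1 - 2 versus the complete graph on 4 nodes: denser coupling
# raises lambda_max and therefore shrinks the tolerable delay, which is
# the intuition behind delay-tolerant topology design.
path3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
k4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
```

The path graph has Laplacian eigenvalues 0, 1, 3, giving a margin of pi/6, while the complete graph's larger lambda_max = 4 yields the smaller margin pi/8.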

Adaptive (downhill) walks are a computationally convenient way of analyzing the geometric structure of fitness landscapes. Their inherently stochastic nature has limited their mathematical analysis, however. In this talk, a framework that interprets adaptive walks as deterministic trajectories in combinatorial vector fields will be presented.
These combinatorial vector fields are associated with weights that measure their steepness across the landscape.
It will be shown that the combinatorial vector fields and their weights have a product structure that is governed by the neutrality of the landscape. This product structure makes practical computations feasible. The framework presented here also provides an alternative, and mathematically more convenient, way of defining notions of valleys, saddle points, and barriers in landscapes.
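A toy version of the deterministic reading of adaptive walks: always follow the steepest downhill edge, so each genotype has a unique successor and the walk becomes a single trajectory of a combinatorial vector field. The random landscape and all names here are illustrative assumptions, not the talk's framework:

```python
import random

def adaptive_walk(fitness, genotype):
    """Deterministic 'steepest descent' adaptive walk on bit strings:
    always move to the one-bit-flip neighbour of lowest fitness, so
    every genotype has a unique successor and the walk traces a single
    trajectory, stopping at a local minimum."""
    path = [genotype]
    while True:
        nbrs = [genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]
                for i in range(len(genotype))]
        best = min(nbrs, key=fitness)
        if fitness(best) >= fitness(genotype):
            return path  # no strictly better neighbour: local minimum
        genotype = best
        path.append(genotype)

# A random fitness landscape on {0,1}^6, memoized so each genotype
# keeps a fixed value across repeated queries.
_rng = random.Random(1)
_cache = {}
def fitness(g):
    if g not in _cache:
        _cache[g] = _rng.random()
    return _cache[g]

walk = adaptive_walk(fitness, (1,) * 6)
```

Because ties have probability zero under a continuous fitness distribution, the steepest edge out of every non-minimum genotype is unique, which is what removes the stochasticity that hampers the analysis of ordinary adaptive walks.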

We present the key ideas behind the derivation of the generating function of RNA pseudoknot structures. The latter, despite being D-finite, is nonrecursive and therefore requires a novel idea, the reflection principle, in Weyl chambers. We discuss the combinatorics and analytic combinatorics of canonical RNA structures and shapes and show that the character of the generating function changes with the number of mutually crossing arcs. Finally, we present a linear-time algorithm generating RNA pseudoknot structures with uniform probability.

We will analyse the singular points, that is, points $X^j$ such that $u\notin C^{1,1}(B_r(X^j))$ for any $r>0$, of solutions to $$ \Delta u=-\chi_{\{u>0\}} \textrm{ in }B_1. $$ In $\mathbb{R}^3$ we will show that the singular set consists of isolated points and a part that is locally contained in a one-dimensional $C^1$ manifold. We will also classify the singular points in $\mathbb{R}^3$, show that there are only three kinds of such points, and give explicit asymptotics for the solution at such points.

Interaction networks in nature often exhibit highly inhomogeneous architectures. Examples are scale-free degree distributions in protein networks and metabolic networks. Often, the emergence of structural heterogeneity is explained by purely topology-based rules for network evolution, e.g. preferential attachment or node duplications.
Here, we study a different paradigm of network evolution in the context of discrete threshold networks: local, adaptive co-evolution of switching dynamics and interaction wiring close to a critical point. First, the scaling behavior of the critical order-disorder transition for random realizations of threshold networks with heterogeneous thresholds is investigated. It is shown that local correlations between a topological and a dynamical control parameter (in-degree of nodes vs. thresholds) can induce an order-disorder transition.
Second, we show that coupling local adaptations of both control parameters to local measurements of a dynamical order parameter leads to the emergence of broad in-degree distributions (approaching a power law in the limit of strong time-scale separation between rewiring and threshold changes), and to strong correlations between in-degrees and thresholds. In the limit of vanishing probability of threshold adaptations, symmetry breaking between two qualitatively different classes of self-organized networks is observed.
Finally, possible applications to problems in the context of the evolution of gene regulatory networks and development of neuronal networks are discussed.
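The two ingredients, threshold dynamics and an activity-dependent local adaptation move, can be sketched as follows. This is a toy in the spirit of frozen-nodes-gain-links, active-nodes-lose-links rules; the exact adaptation scheme and order parameter of the talk are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def run_threshold_net(w, h, s, steps):
    """Parallel update of a +/-1 threshold network:
    s_i <- sign(sum_j w_ij s_j - h_i), with ties sent to -1."""
    traj = [s.copy()]
    for _ in range(steps):
        field = w @ s - h
        s = np.where(field > 0, 1, -1)
        traj.append(s.copy())
    return np.array(traj)

def adapt(w, h, s, steps=30):
    """One local co-evolution move: run the dynamics, pick a random
    node, and rewire based on its observed activity.  A node whose
    state froze over the second half of the run gains a random input;
    an active node loses one."""
    traj = run_threshold_net(w, h, s, steps)
    i = rng.integers(len(h))                 # pick a random node
    frozen = np.all(traj[steps // 2:, i] == traj[-1, i])
    if frozen:                               # too ordered: add an input
        j = rng.integers(len(h))
        w[i, j] = rng.choice([-1, 1])
    else:                                    # too active: delete an input
        nz = np.flatnonzero(w[i])
        if nz.size:
            w[i, rng.choice(nz)] = 0
    return w
```

Iterating `adapt` pushes each node's effective in-degree up when its dynamics are frozen and down when they are chaotic, which is the basic feedback loop behind self-organization towards the order-disorder boundary.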

We study the effect of learning dynamics on network topology. A network of discrete dynamical systems is considered for this purpose, and the coupling strengths are made to evolve according to a temporal learning rule based on the paradigm of spike-timing-dependent plasticity. This incorporates the necessary competition between different edges. The final network we obtain is robust and scale-free.
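A hedged caricature of such a setup: coupled logistic maps whose weights are nudged by an asymmetric, STDP-like rule and then row-normalized, the normalization supplying the competition between edges. Every choice here (the lead criterion, the parameters, the map) is illustrative, not the talk's learning rule:

```python
import numpy as np

rng = np.random.default_rng(3)

def stdp_coupled_maps(n=20, steps=500, eps=0.3, delta=0.01):
    """Coupled logistic maps whose weights evolve under an asymmetric,
    STDP-like rule: the edge j -> i is strengthened when unit j 'leads'
    unit i (here: f(x_j) > f(x_i)) and weakened otherwise; row
    normalization then makes the incoming edges of each unit compete."""
    w = rng.random((n, n))
    np.fill_diagonal(w, 0)
    w /= w.sum(axis=1, keepdims=True)
    x = rng.random(n)
    f = lambda v: 4 * v * (1 - v)            # fully chaotic logistic map
    for _ in range(steps):
        x_new = (1 - eps) * f(x) + eps * (w @ f(x))
        lead = np.subtract.outer(f(x), f(x)) < 0   # lead[i, j]: f(x_j) > f(x_i)
        w = w + delta * np.where(lead, 1.0, -1.0)  # asymmetric update
        np.clip(w, 0, None, out=w)
        np.fill_diagonal(w, 0)
        row = w.sum(axis=1, keepdims=True)
        row[row == 0] = 1                    # guard against empty rows
        w /= row                             # competition between edges
        x = x_new
    return w
```

The additive-strengthen/subtractive-weaken asymmetry combined with normalization concentrates weight on a few edges per node, which is the qualitative mechanism by which such rules can generate heterogeneous, scale-free-like connectivity.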

Exponential families for modeling contingency tables are also known as hierarchical models. In this talk we will discuss a geometric approach to the theory of these statistical models. The key insight for the geometric view is that an exponential family is the set of solutions of binomial equations (such as p(00)*p(11) = p(10)*p(01) for independence) and is thereby an algebraic variety.
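The binomial description can be checked directly for the 2x2 independence model. A minimal sketch (the helper name is ours): a product table satisfies the binomial exactly, while a generic table does not.

```python
import numpy as np

def independence_residual(p):
    """Value of the binomial p00*p11 - p10*p01 on a 2x2 probability
    table; the independence model is exactly its zero set."""
    return p[0, 0] * p[1, 1] - p[1, 0] * p[0, 1]

# A product (independent) table lies on the variety...
px, py = np.array([0.3, 0.7]), np.array([0.6, 0.4])
independent = np.outer(px, py)
# ...while a table with correlated cells does not.
dependent = np.array([[0.4, 0.1], [0.1, 0.4]])
```

For the dependent table the residual is 0.4*0.4 - 0.1*0.1 = 0.15, a quantitative witness that the table lies off the independence variety.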

For any three finite sequences (over some alphabet) weighted by, say, 3/7, 2/7, and 2/7, there is a unique sequence z minimizing the weighted Hamming distance from the three sequences to z. This ternary "majority" operation, subject to a few equations, determines what is called a quasi-median algebra, which in the finite case can be displayed as a quasi-median network. These algebras, first studied in 1980 by Martyn Mulder in a graph-theoretic context and by John Isbell from a geometric point of view, fit into an algebraic scheme for which Heinrich Werner and Brian A. Davey developed natural dualities in a series of papers in 1982-1986. Ploščica then in 1992 concretely established a full duality between quasi-median algebras and a relational ("strong compatibility") structure of partitions of a set, which in a way anticipated some concepts and results of Dress et al. and Bandelt et al. concerning the combinatorial analysis of DNA data. This duality between networks and tables of aligned sequences governs a number of dual pairs of structural features and counting formulae, which can be used, implemented in computer programs, e.g. in the applied context of quality assessment of DNA sequences (joint work with colleagues from the Institut für Gerichtliche Medizin and the Institut für Mathematik, Innsbruck).
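The minimizer can be computed positionwise as a weighted plurality, which is a quick way to see both its existence and its uniqueness for these weights. A short sketch (the function name and example sequences are illustrative):

```python
from collections import defaultdict
from fractions import Fraction

def weighted_majority(seqs, weights):
    """Positionwise weighted plurality: returns the sequence z
    minimizing sum_i w_i * d_H(seq_i, z).  With three sequences
    weighted 3/7, 2/7, 2/7 the winner at each position is unique:
    two agreeing sequences carry weight >= 4/7, and if all three
    symbols differ the 3/7 sequence wins on its own."""
    out = []
    for column in zip(*seqs):
        score = defaultdict(Fraction)       # Fraction() == 0
        for symbol, w in zip(column, weights):
            score[symbol] += w
        out.append(max(score, key=score.get))
    return "".join(out)

z = weighted_majority(["ACGT", "ACGA", "AGGA"],
                      [Fraction(3, 7), Fraction(2, 7), Fraction(2, 7)])
```

Here z is "ACGA": positions where two or three sequences agree follow the agreeing pair, and a position where all three differ would follow the 3/7-weighted sequence.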