Waves of all kinds permeate our world. We are surrounded by light (electromagnetic waves), sound (acoustic waves), and mechanical vibrations. Quantum mechanics revealed that, at the atomic level, all matter has a wavelike character. And classical gravitational waves have recently been detected. At the cutting edge of today’s science, it has become possible to manipulate individual atoms. This provides us with precise measurements of a world that exhibits myriad irregularities — dimensional, structural, orientational, and geometric — simultaneously. For waves, such disorder changes everything. In complex, irregular, or random media, waves frequently exhibit astonishing and mysterious behavior known as ‘localization’: instead of propagating over extended regions, they remain confined to small portions of the original domain. The Nobel Prize–winning discovery of Anderson localization in 1958 is only one famous instance of this phenomenon. Yet, 60 years later, despite considerable advances in the subject, we still lack the tools to fully understand the localization of waves and its consequences. We will discuss the modern understanding of the subject, recent results, and the biggest open questions.
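As a concrete, minimal illustration of the localization phenomenon mentioned above (added for context, not part of the abstract): in the one-dimensional Anderson model, the discrete Schrödinger equation ψ_{k+1} = (E − V_k)ψ_k − ψ_{k−1} with random on-site energies V_k has solutions that grow exponentially, and the growth rate (the Lyapunov exponent) is the inverse localization length. The helper name `lyapunov_exponent` and its parameters are illustrative choices, not from the talk:

```python
import math
import random

def lyapunov_exponent(n_steps, disorder, energy=0.0, seed=0):
    """Estimate the Lyapunov exponent (inverse localization length) of the
    1D Anderson model psi_{k+1} = (E - V_k) psi_k - psi_{k-1}, with i.i.d.
    on-site energies V_k uniform in [-W/2, W/2] (W = disorder strength).
    A positive exponent signals exponential localization of eigenstates."""
    rng = random.Random(seed)
    psi_prev, psi = 1.0, 1.0
    log_norm = 0.0
    for _ in range(n_steps):
        v = disorder * (rng.random() - 0.5)
        psi_prev, psi = psi, (energy - v) * psi - psi_prev
        # Renormalize to avoid overflow, accumulating the log of the growth.
        scale = max(abs(psi), abs(psi_prev))
        if scale > 0:
            log_norm += math.log(scale)
            psi /= scale
            psi_prev /= scale
    return log_norm / n_steps

print(lyapunov_exponent(100_000, 0.0))  # ~0: no disorder, extended states
print(lyapunov_exponent(100_000, 3.0))  # positive: exponentially localized
```

Any positive disorder strength gives a strictly positive exponent here, which is the one-dimensional statement that arbitrarily weak randomness localizes all states.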
The mathematical theory of incompressible fluids, Ladyzhenskaya’s favorite topic, still poses challenges for us today. We briefly review the pioneering work of Leray on weak solutions for the Navier-Stokes equation (NSE), and the fundamental contributions of Ladyzhenskaya on the unique solvability of the NSE and the global attractor problem. We will then present some recent results on the regularity and large time behavior of solutions for the NSE. Connections between these findings and Kolmogorov’s turbulence theory will be discussed as well.
We will review some of the relations between the topological fundamental group (Riemann-Poincaré), Galois groups, the étale fundamental group (Grothendieck) and local systems of various kinds.
The question of quantum ergodicity deals with the localization or delocalization properties of eigenfunctions of the Laplacian on (compact) Riemannian manifolds. For manifolds with an ergodic geodesic flow, the Shnirelman theorem states that "most" eigenfunctions of the Laplacian become fully delocalized in the large eigenvalue limit. For manifolds of negative curvature, the Quantum Unique Ergodicity conjecture asks for a statement valid for all eigenfunctions. I will review related questions and results, and will compare with the case of spheres and flat tori. I will also describe some recent results on delocalization of eigenfunctions on large regular graphs, in particular a result which is a discrete analogue of the Shnirelman theorem.
The goal of this lecture is to present a derivation of the Boltzmann equation starting from the Hamiltonian dynamics of particles in the Boltzmann-Grad limit, i.e. when the number of particles N → ∞ and their size ϵ → 0 with Nϵ² = 1. We will especially discuss the origin of irreversibility and the phenomenon of relaxation towards equilibrium, which are apparently paradoxical properties of the limiting dynamics.
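The scaling in the Boltzmann-Grad limit can be motivated by a standard mean-free-path heuristic (added here for context; it is not spelled out in the abstract):

```latex
% In dimension d = 3, a tagged hard sphere of diameter \epsilon sweeps a
% collision tube of cross-section of order \epsilon^2 per unit length
% travelled, so among N particles per unit volume its mean free path is
\[
  \ell \;\sim\; \frac{1}{N\epsilon^{2}} .
\]
% The Boltzmann--Grad scaling N\epsilon^{2} = 1 therefore keeps \ell of
% order one: collisions occur at a finite rate even as N \to \infty and
% \epsilon \to 0, which is precisely the regime in which the Boltzmann
% equation is expected to describe the limiting dynamics.
```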
Riemann already contemplated the need to modify our conventional notions of the metric properties of space on scales that are "immeasurably small", a need that should be driven empirically by new insights gained in physics. Great strides have been made since in understanding the theoretical foundations of the physical world, in the form of special and general relativity, quantum theory and quantum field theory. Taken together they strongly suggest the existence of a theory of quantum gravity, which should provide a consistent and quantitative description of the nature of "quantum spacetime" on ultrashort, Planckian length scales. After decades of research, the problem of finding this theory is still outstanding. I will report on recent, unprecedented progress in a new formulation of quantum gravity, called Causal Dynamical Triangulation. It is based on performing a "sum over histories" by using an intrinsically geometric way of regularizing this quantum superposition in terms of triangulated, piecewise flat spacetimes. In two dimensions, evaluating the sum takes the form of a combinatorial problem, which can be solved explicitly. In the physically relevant case of four spacetime dimensions, nontrivial properties of the sum over spacetimes can be extracted with the help of numerical "experiments", yielding some intriguing results which confirm the highly nonclassical nature of spacetime geometry at the Planck scale, and the emergence from it of classical geometry on large scales.
As shown by Gromov's nonsqueezing theorem, symplectic embedding problems lie at the heart of symplectic topology. The four-dimensional problem is rather different from its higher-dimensional counterpart. Recently it has become possible to specify exactly when a four-dimensional ellipsoid embeds in a ball. The talk explains recent joint work with Schlenk on this question, exploring its unexpected connections to continued fractions, Fibonacci numbers, and lattice packing problems.
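One way to glimpse the Fibonacci connection (my paraphrase of the published McDuff-Schlenk result, not the abstract's wording): for small eccentricities the ellipsoid-into-ball embedding capacity function is a staircase whose steps are governed by odd-index Fibonacci numbers, and the staircase accumulates at the fourth power of the golden ratio. A small numerical check of that accumulation point:

```python
import math

def fib(n):
    """n-th Fibonacci number, with F(1) = F(2) = 1, by simple iteration."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Odd-index Fibonacci numbers 2, 5, 13, 34, 89, ...; the squared ratios of
# consecutive terms converge to tau^4, tau being the golden ratio.
odd_fibs = [fib(2 * k + 1) for k in range(1, 12)]
ratios = [(odd_fibs[i + 1] / odd_fibs[i]) ** 2
          for i in range(len(odd_fibs) - 1)]

tau = (1 + math.sqrt(5)) / 2
print(tau**4)      # ~6.8541, the accumulation point of the staircase
print(ratios[-1])  # ~6.8541, already very close after a few terms
```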
A new approach to portfolio management will be introduced. Complementing the traditional one, which is based on terminal-time criteria, this approach yields investment processes across all times and offers the flexibility to address important questions in portfolio management, such as, among others, the choice of investments for different benchmarks, market views, and time horizons. The investment performance process solves a stochastic partial differential equation. One of the novel elements in this new approach is the performance volatility process. The class of admissible volatilities will be discussed. Results on the solutions of the performance SPDE and the form of optimal investment policies will be presented.
In Analog-to-Digital conversion, continuously varying functions (e.g. the output of a microphone) are transformed into digital sequences from which one then hopes to be able to reconstruct a close approximation to the original function. The functions under consideration are typically band-limited (i.e. their Fourier transform is zero for frequencies higher than some given value, called the bandwidth); such functions are completely determined by samples taken at a rate determined by their bandwidth. These samples then have to be approximated by a finite binary representation. Surprisingly, in many practical applications one does not just replace each sample by a truncated binary expansion; for various technical reasons, it is more attractive to sample much more often and to replace each sample by just 1 or -1, chosen judiciously. In this talk, we shall see what the attractions are of this quantization scheme, and discuss several interesting mathematical questions suggested by this approach. This will be a review of work by many others as well as myself. It is also a case study of how continuous interaction with engineers helped to shape and change the problems as we tried to make them more precise.
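The "sample much more often and replace each sample by 1 or -1" scheme described above is Sigma-Delta modulation. Here is a minimal first-order sketch (function name and parameter choices are illustrative, not taken from the talk): the quantizer keeps a running error and picks whichever bit keeps that error bounded, so a simple moving average of the bit stream tracks the oversampled signal.

```python
import math

def sigma_delta_1bit(samples):
    """First-order Sigma-Delta quantizer: replace each sample by +1 or -1,
    chosen so that the accumulated error u stays bounded
    (|u| <= 1 whenever the samples satisfy |x| <= 1)."""
    u = 0.0                               # internal state: running quantization error
    bits = []
    for x in samples:
        q = 1.0 if u + x >= 0 else -1.0   # one-bit decision
        u = u + x - q                     # error recursion: u_n = u_{n-1} + x_n - q_n
        bits.append(q)
    return bits

# Heavily oversample a slow sine wave, quantize to +-1, and reconstruct one
# sample value by averaging the bit stream over a short window.
n = 2000
xs = [0.5 * math.sin(2 * math.pi * k / n) for k in range(n)]
bits = sigma_delta_1bit(xs)

w = 100
recon = sum(bits[500 - w:500]) / w        # box-filter average of the bits
print(xs[500], recon)                     # the average tracks the sample (~0.5)
```

The bounded internal error is what makes the scheme work: averaging over a window of length w leaves a reconstruction error of order 1/w, so oversampling more (larger w) buys accuracy despite the crude one-bit samples.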