More information about each lecture can be found on its detail page.

For rooms at the MPI MiS please note: use the entrance doors at Kreuzstr. 7a (rooms A3 01, A3 02) and Kreuzstr. 7c (room G3 10), both in the inner courtyard, and go to the 3rd floor. To reach the Leibniz-Saal (E1 05, 1st floor) and the Leon-Lichtenstein Room (E2 10, 2nd floor), use the main entrance at Inselstr. 22.

Please remember: the doors will be opened 15 minutes before each lecture starts and closed once the lecture has begun!

In the first part of this course I will introduce basic concepts and notions in discrete geometric analysis. This includes elementary graph theory, Cayley graphs, random walks and the discrete Laplace operator. After that I will discuss spectral properties and geometric bounds for the eigenvalues of the Laplace operator on finite and infinite graphs. If there is interest, I can also mention some applications of the spectral theory in physics and other fields.
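As a small numerical illustration of the objects in this first part (my own sketch, not course material): the discrete Laplacian L = D - A of the cycle graph C_n has the explicitly known spectrum 2 - 2 cos(2πk/n), k = 0, …, n-1, which a few lines of Python confirm.

```python
import numpy as np

# Cycle graph C_n: each vertex i is joined to i-1 and i+1 (mod n).
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A        # discrete Laplacian L = D - A

eigs = np.sort(np.linalg.eigvalsh(L))
exact = np.sort(2 - 2 * np.cos(2 * np.pi * np.arange(n) / n))
print(np.allclose(eigs, exact))       # True: spectrum is 2 - 2 cos(2*pi*k/n)
```

The smallest eigenvalue is 0 with constant eigenvector, reflecting that the graph is connected.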
In the second part of the course, I want to discuss more advanced topics and some recent developments in the field, including several discrete notions of curvature, and gradient and heat kernel estimates on graphs.

Date and time info: Friday 14.00 - 15.30
Keywords: discrete Laplace operator, graph theory, eigenvalue and heat kernel estimates, gradient estimates on graphs, discrete notions of curvature, random walks
Prerequisites: linear algebra, analysis; knowledge of graph theory, differential geometry and functional analysis is helpful but not necessary
Audience: MSc students, PhD students, Postdocs
Language: English

In continuum mechanics, many models of different accuracy are available for a single physical situation: for example, the behavior of a fluid may be described by particle models treating all molecules individually, by the Boltzmann equation, or by the Navier-Stokes equation. The latter, simpler models arise from the former, more complex ones by formal arguments that rely on additional modeling assumptions.
It is often a challenging task in applied analysis to justify such model simplifications rigorously; likewise, in applications it is often a difficult problem to decide which model simplifications are possible in a given situation without introducing an unacceptably large error in the solution.
In this lecture, we shall present a comparatively recent approach to modeling error estimation:
A posteriori modeling error estimates
use the information provided by the solution of the simplified model to obtain significantly improved bounds for the model simplification error. In certain situations, they are even the only known way of estimating the modeling error rigorously.

The contents of the lecture may include:
A posteriori error estimation strategies for the numerical error for PDEs
A posteriori estimates for the modeling error in periodic homogenization of elliptic PDEs
Bounds on the modeling error for (dimensionally reduced) plate models, as compared to full three-dimensional elasticity
Estimates for the modeling error caused by replacing the compressible Navier-Stokes equation with the incompressible Navier-Stokes equation for a slightly compressible fluid
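The homogenization item above can be probed numerically. The following is a hedged sketch (my own illustration; the coefficient a and all discretization parameters are assumptions, not taken from the lecture): in 1D, the solution of -(a(x/ε) u')' = 1 approaches the homogenized solution u*(x) = x(1-x)/(2a*), where a* is the harmonic mean of a, and the modeling error shrinks with ε.

```python
import numpy as np

# Solve -(a(x/eps) u')' = 1 on (0,1), u(0) = u(1) = 0, by finite
# differences, and compare with the homogenized solution whose effective
# coefficient a* is the harmonic mean of a (assumed toy coefficient below).
def modeling_error(eps, N=2000):
    a = lambda y: 2.0 + np.sin(2 * np.pi * y)
    x = np.linspace(0.0, 1.0, N + 1)
    h = 1.0 / N
    am = a((x[:-1] + 0.5 * h) / eps)           # coefficient at cell midpoints
    main = (am[:-1] + am[1:]) / h**2           # tridiagonal system for u_1..u_{N-1}
    off = -am[1:-1] / h**2
    M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.linalg.solve(M, np.ones(N - 1))
    a_star = 1.0 / np.mean(1.0 / a(np.linspace(0, 1, 1000, endpoint=False)))
    u_hom = x[1:-1] * (1 - x[1:-1]) / (2 * a_star)
    return np.max(np.abs(u - u_hom))

print(modeling_error(0.1), modeling_error(0.02))  # error decreases with eps
```

Here the modeling error is measured a priori against the known homogenized solution; the point of the lecture's a posteriori estimates is to bound it using the simplified solution alone.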
Date and time info: Wednesday 09.15 - 10.45
Keywords: mathematical modeling, continuum mechanics, a posteriori modeling error estimates, periodic homogenization, dimension reduction, plate models, singular limits in fluid mechanics
Prerequisites: basic knowledge of PDEs and Sobolev spaces
Audience: MSc students, PhD students, Postdocs
Language: English

Reinforcement Learning is a sub-discipline of machine learning in which an agent learns from interactions with an environment that provides sparse feedback in the form of rewards. The reward encodes what the agent should do, but not how the task should be solved. An example is a dog, which cannot be told directly what it should do; instead, its behaviour needs to be reinforced through positive and negative feedback. From the perspective of mathematics, reinforcement learning is the problem of finding optimal policies in the context of Markov decision processes.
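The Markov-decision-process viewpoint can be made concrete in a few lines. The following is a minimal sketch (my own toy example, not from the lecture): value iteration on a 4-state chain, where repeating the Bellman optimality update converges to the optimal value function, and the greedy policy recovers "always move right".

```python
import numpy as np

# Toy MDP: 4 states in a chain; action 0 moves left, action 1 moves right
# (deterministic, clipped at the ends).  Entering the rightmost state
# yields reward 1, every other transition yields reward 0.
n_states, gamma = 4, 0.9
step = lambda s, a: max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
reward = lambda s2: 1.0 if s2 == n_states - 1 else 0.0

# Value iteration: repeat the Bellman optimality update until convergence.
V = np.zeros(n_states)
for _ in range(200):
    V = np.array([max(reward(step(s, a)) + gamma * V[step(s, a)]
                      for a in (0, 1)) for s in range(n_states)])

# Greedy policy with respect to the converged values.
policy = [max((0, 1), key=lambda a: reward(step(s, a)) + gamma * V[step(s, a)])
          for s in range(n_states)]
print(policy)  # [1, 1, 1, 1]: always move right
```

The update is a contraction with factor gamma, which is why the plain fixed-point loop converges.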
This lecture introduces the fundamental concepts of reinforcement learning. Programming examples are given, and provided online, whenever they are illustrative. The target audience is students and post-graduates with little or no knowledge of reinforcement learning.

References:
Sutton & Barto. Reinforcement Learning: An Introduction. 1998.
Puterman. Markov Decision Processes. 2005.
Wiering & van Otterlo (Eds.). Reinforcement Learning: State-of-the-Art. 2012.

Date and time info: Thursday 14.00 - 15.30
Keywords: MDPs, Dynamic Programming, Bellman Equation, Temporal Difference Learning, Monte Carlo Methods, Q-Learning, Bandit Problem, POMDPs
Audience: MSc students, PhD students, Postdocs

About this lecture

Due to the rather broad spectrum of topics within the IMPRS, the curriculum consists of a core curriculum to be attended by all students and a variety of more specialized lectures and courses. The heart of our teaching program is certainly the Ringvorlesung. Each semester, the Ringvorlesung focuses on one field and is usually delivered by scientific members of the IMPRS, who introduce different approaches and visions within this field.
Schedule
Lecturer: Stephan Luckhaus
Topic: Symmetric Hyperbolic Problems
Dates: 13.04.2015, 20.04.2015, 27.04.2015, 04.05.2015

Lecturer: László Székelyhidi
Topic: Euler and Navier-Stokes Equations
Dates: 11.05.2015, 18.05.2015, 08.06.2015, 15.06.2015

Lecturer: Stefan Hollands
Topic: Time evolution in the Einstein Equations
Dates: 22.06.2015, 29.06.2015, 06.07.2015, 13.07.2015
References:
Lectures by Luckhaus and Székelyhidi: Fritz John. Partial Differential Equations. Fourth edition. New York: Springer, 1982. DOI: 10.1007/978-1-4615-9979- (available at the MPI MiS library); the 2nd edition (1975) can be downloaded at Springer.
Lectures by Hollands: Hans Ringström. The Cauchy Problem in General Relativity. Zürich: European Mathematical Society, 2009. DOI: 10.4171/053 (available at the MPI MiS library); the e-book PDF can be downloaded from the EMS homepage.

Date and time info: Monday 13.15 - 14.45
Prerequisites: Advanced calculus
Audience: MSc students, PhD students, Postdocs

In this course, I shall discuss various aspects of the notion of curvature as a measure of the deviation of a metric structure from a flat, Euclidean one.
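One classical way to make "curvature measures the deviation from flat Euclidean space" precise is through the volume of small geodesic balls (a standard expansion, quoted here as context, not as part of the syllabus): in an n-dimensional Riemannian manifold,

$$\operatorname{vol} B_r(p) = \omega_n r^n \left( 1 - \frac{\operatorname{Scal}(p)}{6(n+2)}\, r^2 + O(r^4) \right),$$

where ω_n is the volume of the Euclidean unit ball and Scal(p) the scalar curvature at p. Positive curvature means small balls are slightly smaller than their Euclidean counterparts, negative curvature slightly larger.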
References:
J. Jost. Riemannian Geometry and Geometric Analysis. 6th ed., 2011.
Further references will be provided during the course.

Date and time info: Tuesday 13.30 - 15.00
Keywords: Riemannian geometry, metric spaces, curvature
Audience: MSc students, PhD students
Language: English

This mini-course provides an opportunity to gain experience with the dynamical systems approach to singularly perturbed ODEs, so-called slow-fast systems. Multiple timescales are ubiquitous in models of real-world phenomena. For instance, many important biological processes evolve on different time scales and therefore consist of slow and fast components; think of neural and cardiac rhythms.
Differential equations involving variables evolving on widely different time scales yield rich and notoriously hard mathematical questions. The mini-course will present geometric techniques to study singularly perturbed ODEs, i.e., the main concepts from so-called geometric singular perturbation theory (part I) and geometric desingularization based on the blow-up method (part II). Non-trivial applications arising in cell biology, biochemistry, and neuroscience will be discussed (part III).
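A minimal simulation sketch (my own, with all parameter choices assumed, not taken from the course): the van der Pol system in Liénard form, ε x' = y - (x³/3 - x), y' = -x, is the textbook slow-fast example. For small ε, trajectories settle into a relaxation oscillation: x drifts slowly along a branch of the cubic nullcline and then jumps rapidly to the other branch.

```python
import numpy as np

# van der Pol oscillator in Lienard form (eps = timescale separation):
#   eps * x' = y - (x**3/3 - x),   y' = -x
def simulate(eps=0.05, dt=1e-4, T=20.0):
    x, y = 0.5, 0.0
    xs = np.empty(int(T / dt))
    for k in range(xs.size):         # explicit Euler; dt must resolve eps
        x, y = x + dt * (y - (x**3 / 3 - x)) / eps, y - dt * x
        xs[k] = x
    return xs

xs = simulate()
# For small eps, x relaxes quickly onto a branch of y = x**3/3 - x and
# slowly drifts until it jumps to the other branch near x = +-2.
print(round(xs.min(), 1), round(xs.max(), 1))
```

The fold points of the cubic nullcline, where the slow drift ends and the fast jump begins, are exactly where the blow-up method of part II becomes necessary.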
Take-home message
Biologists dissect frogs; we will dissect singular perturbation problems.
Lots of things to discover - even in fairly simple problems!
Date and time info: Monday 10.15 - 11.45
Keywords: slow-fast dynamics, geometric singular perturbation theory, blow-up method, relaxation oscillations, canards, biological oscillatory phenomena, mathematical modelling
Prerequisites: basic ODEs
Audience: MSc students, PhD students
Language: English

The aim of these lectures is to prove the following fundamental theorem in Geometric Measure Theory, due to D. Preiss (Ann. of Math. (2) 125 (1987), 537-643).

Let μ be a locally finite measure on R^n and k ≥ 0 a real number. Assume that the limit
$$\lim_{r \to 0} \frac{\mu(B_r(x))}{r^k}$$
exists, is finite and nonzero for μ-a.e. x. Then either μ = 0 or k ≤ n is an integer. Moreover, in the latter case μ is a k-rectifiable measure, i.e. there exist a measurable function f and a countable family Γ_i of k-dimensional Lipschitz submanifolds such that
$$\mu(A) = \int_{A \cap \bigcup_i \Gamma_i} f \, d\mathcal{H}^k$$
for every Borel set A. Here H^k denotes the natural k-dimensional volume measure.

Keywords: Geometric Measure Theory, Rectifiable Sets, Tangent Measures, Preiss Theorem
Prerequisites: Basic measure theory, Lipschitz functions, Hausdorff measures
Audience: MSc students, PhD students, Postdocs
Language: English
Remarks and notes: The main tools and prerequisites will be quickly recalled, without proofs.

We cannot find n orthogonal vectors in Euclidean space unless the dimension is at least n. If we are willing to settle for n vectors that are almost orthogonal, however, we can even find those in a Euclidean space of dimension ∼ log(n). This drastic reduction in the dimension of the ambient space is due to a very general phenomenon which occurs in many high-dimensional spaces and implies that any function with small local variations is actually almost constant on a very large proportion of the space.

The so-called concentration of measure phenomenon is a powerful tool with applications in different fields of mathematics. Many asymptotic results in probability theory, information theory and dynamical systems, such as the law of large numbers, the asymptotic equipartition property, the central limit theorem and Birkhoff's theorem, are consequences of concentration of measure. We will discuss various methods for deriving concentration properties, along with examples and applications. We will go over isoperimetric inequalities, geometric and topological aspects, concentration in product spaces, logarithmic Sobolev inequalities, the relation to curvature, transportation costs and relative entropy (Kullback-Leibler divergence).

Date and time info: Thursday 11.00 - 12.30
Keywords: concentration of measure, asymptotic geometric analysis (high-dimensional Banach spaces), Ricci curvature
Prerequisites: Multivariate calculus; probability or measure theory
Audience: Diploma students, MSc students, PhD students, Postdocs
Language: English
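The opening claim about almost-orthogonal vectors is easy to probe numerically. This is my own sketch (dimensions and counts are arbitrary choices): pairwise inner products of random unit vectors in R^d concentrate at scale 1/√d, so far more than d "almost orthogonal" directions fit into d dimensions.

```python
import numpy as np

# n random unit vectors in R^d: pairwise inner products are ~ 1/sqrt(d).
rng = np.random.default_rng(0)
d, n = 1000, 200
V = rng.standard_normal((n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

G = V @ V.T                          # Gram matrix; diagonal is exactly 1
off = np.abs(G - np.eye(n))          # sizes of off-diagonal inner products
print(off.max())                     # largest of ~20000 pairs, order 1/sqrt(d)
```

This is concentration in action: the inner product with a fixed vector is a function with small local variations on the sphere, hence almost constant (namely, almost 0).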

We will introduce parabolic differential equations driven by white noise in time. We will be mostly interested in nonlinear parabolic equations with a nonlinearity π in the leading-order term and a noise ξ that is white not only in time but also in space. The latter limits the space dimension to one, leading to $$\partial_t u-\partial_x^2\pi(u)=\xi$$
We are interested in the path-wise regularity of solutions to such equations. In the case of our model problem, a scaling argument suggests that the solutions are Hölder continuous (with exponent almost ½ in space and almost ¼ in time). This is also the regularity in the linear case. We shall show that this is indeed true. The argument relies on the following ingredients:
On the stochastic side:
Arguments typical for stochastic differential equations (martingale arguments) that give second-moment regularity estimates.
Concentration of measure arguments on the level of the space-time white noise (Malliavin derivative) that upgrade the low-moment regularity results to Gaussian moments.
On the deterministic side:
The Ḣ^{-1}-contraction principle for nonlinear parabolic equations of the form $$\partial_t u-\partial_x^2\pi(u)=0$$
Campanato-type arguments for a Schauder theory for non-constant-coefficient parabolic equations of the form $$\partial_t u-\partial_x^2(au)=f$$
Hence, despite the specifics of the model problem, the arguments are fairly general, and the course is thus a good excuse for introducing the above-mentioned concepts.

Date and time info: Tuesday 09.15 - 10.45
Keywords: parabolic differential equations, stochastic differential equations, concentration of measure, Schauder theory
Audience: MSc students, PhD students, Postdocs
Language: English
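For the linear case π(u) = u, the model problem can be simulated directly. The following is a rough sketch of my own (all discretization parameters are assumptions): an explicit finite-difference scheme with cell-wise discretized space-time white noise, whose solution stays a function of order one, consistent with the claimed Hölder regularity.

```python
import numpy as np

# Explicit scheme for  du = (d^2 u / dx^2) dt + dW  on [0,1], u = 0 at the
# boundary, with space-time white noise discretized cell-wise.
rng = np.random.default_rng(1)
N = 64
dx = 1.0 / N
dt = dx**2 / 4                       # satisfies the CFL condition dt <= dx^2/2
u = np.zeros(N + 1)
for _ in range(2000):
    lap = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
    xi = rng.standard_normal(N - 1) * np.sqrt(dt / dx)   # Var = dt/dx per cell
    u[1:-1] += dt * lap + xi
print(float(np.abs(u).max()))        # stays of order one
```

The scaling of the noise by √(dt/dx) is what makes the discrete increments mimic space-time white noise; in dimension two or higher the same scheme would blow up as the grid is refined, matching the restriction to one space dimension above.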

Dynamical systems are concerned with evolutionary processes. Some examples of dynamical systems are celestial mechanics and population dynamics. In this course we study basic properties of smooth dynamical systems, mostly related to the long-time behaviour. The course relies only on basic notions and should be understandable for Bachelor students.
The following topics will be covered in the course:
Cascades and flows.
Fixed points, periodic points.
Symbolic dynamics, Smale's horseshoe.
Equivalence relations, conjugacy.
Hyperbolic points and sets.
Stable and unstable manifolds.
Structural Stability and shadowing.
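To illustrate the symbolic dynamics topic (my own example, not taken from the references): the doubling map x ↦ 2x mod 1 is conjugate to the shift map on binary expansions, so applying the map simply drops the leading binary digit. Exact rational arithmetic avoids the floating-point round-off that would otherwise destroy the orbit.

```python
from fractions import Fraction

def doubling(x):
    return (2 * x) % 1               # the doubling map on [0, 1)

def binary_digits(x, n):
    digits = []                      # first n digits of the binary expansion
    for _ in range(n):
        x *= 2
        digits.append(int(x >= 1))
        x %= 1
    return digits

x = Fraction(5, 37)                  # exact arithmetic: no float round-off
a = binary_digits(doubling(x), 10)
b = binary_digits(x, 11)[1:]         # drop the first digit, i.e. shift
print(a == b)                        # True: the map acts as the shift
```

This conjugacy is the simplest instance of the coding ideas behind Smale's horseshoe, where orbits are likewise labeled by symbol sequences.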
References:
Pilyugin, Sergei. Spaces of Dynamical Systems. Berlin: De Gruyter, 2012.
Katok, Anatole; Hasselblatt, Boris. Introduction to the Modern Theory of Dynamical Systems. Cambridge, 1995.
Brin, Michael; Stuck, Garrett. Introduction to Dynamical Systems. Cambridge, 2002.

Date and time info: Tuesday 11.15 - 12.45 (lectures), Wednesday 13.15 - 14.45 (exercises)
Keywords: dynamical system, fixed point, hyperbolicity, structural stability, invariant manifold
Audience: MSc students, PhD students, Postdocs
Language: English