Please find more information about the lectures on the detail pages.
For rooms at the MPI MiS please note: Use the entry doors Kreuzstr. 7a (rooms A3 01, A3 02) and Kreuzstr. 7c (room G3 10), both in the inner courtyard, and go to the 3rd floor. To reach the Leibniz-Saal (E1 05, 1st floor) and the Leon-Lichtenstein Room (E2 10, 2nd floor), use the main entrance at Inselstr. 22.
Please remember: the doors will be opened 15 minutes before the lecture starts and closed once the lecture has begun!
The course will be a first introduction to Riemann surfaces and algebraic curves. These are beautiful objects which sit at the intersection of algebra, geometry and analysis. Indeed, on the one hand they are complex manifolds of dimension one, and on the other they are varieties defined as the zero locus of polynomial equations. Furthermore, they are ubiquitous throughout mathematics, from Diophantine equations in number theory to water waves in mathematical physics and Teichmüller theory in dynamical systems.
We will aim to cover the theorems of Riemann-Hurwitz and Riemann-Roch, meromorphic functions and their zeroes and poles, plane curves and elliptic curves, abelian integrals, the theorem of Abel-Jacobi and the construction of Jacobian varieties. Time permitting, we might touch upon further topics such as canonical curves, moduli spaces, the Schottky problem and tropical curves.
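To give a flavour of the two central results (a brief sketch in standard notation; precise hypotheses will be given in the course and in the references listed below): for a nonconstant holomorphic map $f \colon X \to Y$ of degree $d$ between compact Riemann surfaces of genera $g_X$ and $g_Y$, the Riemann-Hurwitz formula reads
$$2 g_X - 2 = d\,(2 g_Y - 2) + \sum_{p \in X} (e_p - 1),$$
where $e_p$ is the ramification index of $f$ at $p$; and for a divisor $D$ on a compact Riemann surface of genus $g$ with canonical divisor $K$, the Riemann-Roch theorem states
$$\ell(D) - \ell(K - D) = \deg D + 1 - g.$$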
Description of the first lectures:
28 October 2020: Presentation of the course. Abelian integrals. Manifolds. Holomorphic functions.
4 November 2020: Riemann surfaces. Examples: the projective line, affine and projective plane curves. Holomorphic maps between Riemann surfaces. The degree of a map.
11 November 2020: The topology of a Riemann surface, the topological genus, the Riemann-Hurwitz formula. Genus of a smooth plane curve. Meromorphic functions, zeroes, poles.
25 November 2020: Meromorphic functions as maps to the projective line. Divisors, linear equivalence, ramification divisors, branch divisors, intersection divisors.
2 December 2020: Jacobi theta functions, quasiperiodicity, zeroes. Meromorphic functions on complex tori.
References
We will not follow any particular book exactly, but the main inspirations for the course will be:
R. Cavalieri and E. Miles, Riemann Surfaces and Algebraic Curves. Cambridge University Press.
W. Fulton, Algebraic Curves. Available online.
F. Kirwan, Complex Algebraic Curves. Cambridge University Press.
R. Miranda, Algebraic Curves and Riemann Surfaces. American Mathematical Society.
There are many other beautiful references for this topic. Some of them are:
E. Arbarello, M. Cornalba, P. Griffiths and J. Harris, Geometry of algebraic curves I. Springer.
O. Forster, Riemannsche Flächen. Springer.
P. Griffiths and J. Harris, Principles of algebraic geometry. Wiley.
J. Jost, Compact Riemann Surfaces. Springer.
Date and time info: Wednesday 15:15 - 16:45 (lectures), Wednesday 11:00 - 12:30 (exercise sessions)
Keywords: Riemann surfaces, algebraic curves
Prerequisites: abstract algebra and familiarity with differential or algebraic geometry
Statistical learning theory (SLT) is a powerful framework for studying the generalisation ability of learning machines (their performance on previously unseen data). The corresponding bounds depend on certain capacity measures that quantify the expressivity of the underlying architecture (how many different functions it can represent).
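One representative example of such a bound, in a simplified classical form (constants and logarithmic factors vary between references): if a class of binary classifiers has VC dimension $h$, then for an i.i.d. sample of size $n$ and any $\delta > 0$, with probability at least $1 - \delta$ every classifier $f$ in the class satisfies
$$R(f) \;\le\; \widehat{R}_n(f) + \sqrt{\frac{h\bigl(\log(2n/h) + 1\bigr) + \log(4/\delta)}{n}},$$
where $R$ denotes the risk (expected error) and $\widehat{R}_n$ the empirical risk on the sample. The explicit dependence on the capacity measure $h$ is what becomes problematic in the over-parametrised regimes discussed below.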
Recent case studies have shown that the high performance of state-of-the-art learning machines, particularly deep neural networks, cannot be explained within the framework of classical SLT. On the one hand, such systems work extremely well in over-parametrised regimes where the classical bounds, being strongly dependent on the number of parameters, become uninformative. On the other hand, some evidence suggests that convergence and generalisation should be controlled by bounds that depend on some norm of the optimal solution, rather than the capacity of the architecture. The control of this norm is believed to be strongly dependent on the optimisation algorithm at hand.
The seminar has been prepared in collaboration with Juan Pablo Vigneaux.
Part I. Introduction: Classical theory of learning and generalisation
A. Statistical learning theory
B. Capacity measures in SLT: VC-dimension, Rademacher complexity, etc.
For A and B, see Bousquet, O., Boucheron, S. and Lugosi, G., 2003. Introduction to statistical learning theory. In Summer School on Machine Learning (pp. 169-207). Springer, Berlin, Heidelberg.
C. Optimization: gradient descent and stochastic gradient descent (a minimal sketch follows after this list)
D. VC-dimension of neural networks
Bartlett, P.L. and Maass, W., 2003. Vapnik-Chervonenkis dimension of neural nets. The handbook of brain theory and neural networks, pp.1188-1192.
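For item C, the following is a minimal, self-contained sketch of stochastic gradient descent on a synthetic least-squares problem (purely illustrative; the data, step size and number of passes are placeholder choices, not part of the seminar material):

import numpy as np

# Synthetic least-squares data (all names and values are placeholders).
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)   # parameters to be learned
lr = 0.01         # step size
for epoch in range(50):
    for i in rng.permutation(n):         # one pass over the shuffled sample
        grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5 * (x_i . w - y_i)^2
        w -= lr * grad                   # stochastic gradient step

print("estimation error:", np.linalg.norm(w - w_true))

Full-batch gradient descent would replace the inner loop by a single step along the gradient averaged over all n samples.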
Part II. Puzzles and challenges posed by recent case studies
References:
Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O., 2016. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530.
Gunasekar, S., Woodworth, B.E., Bhojanapalli, S., Neyshabur, B. and Srebro, N., 2017. Implicit regularization in matrix factorization. In Advances in Neural Information Processing Systems (pp. 6151-6159).
Belkin, M., Hsu, D., Ma, S. and Mandal, S., 2019. Reconciling modern machine-learning practice and the classical bias-variance trade-off. Proceedings of the National Academy of Sciences, 116(32), pp.15849-15854.
Complementary references:
Zhang, C., Liao, Q., Rakhlin, A., Miranda, B., Golowich, N. and Poggio, T., 2018. Theory of deep learning IIb: Optimization properties of SGD. arXiv preprint arXiv:1801.02254.
Poggio, T., Kawaguchi, K., Liao, Q., Miranda, B., Rosasco, L., Boix, X., Hidary, J. and Mhaskar, H., 2017. Theory of deep learning III: explaining the non-overfitting puzzle. arXiv preprint arXiv:1801.00173.
And also the talks by:
Peter Bartlett: Accurate prediction from interpolation
Nati Srebro: Theoretical Perspectives on Deep Learning
Mikhail Belkin: Beyond Empirical Risk Minimization: the lessons of deep learning
Part III. Theoretical perspectives and developments
Bartlett, P.L., 1998. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2), pp.525-536.
Bartlett, P.L., Long, P.M., Lugosi, G. and Tsigler, A., 2019. Benign overfitting in linear regression. arXiv preprint arXiv:1906.11300.
Gunasekar, S., Lee, J.D., Soudry, D. and Srebro, N., 2018. Implicit bias of gradient descent on linear convolutional networks. In Advances in Neural Information Processing Systems (pp. 9461-9471).
Date and time info: Thursday 15:15 - 16:45
Prerequisites: Linear algebra, elementary probability theory, basic notions from functional analysis
This course will give an introduction to the theory of kernels and their associated Hilbert spaces (reproducing kernel Hilbert spaces, RKHS), which play an important role in mathematical learning theory. In statistical learning theory and the theory of support vector machines (SVM), they provide efficient ways to formalise and control the generalisation ability of learning systems, based on the structural risk minimisation principle. A closely related inductive principle comes from regularisation theory: here, learning is interpreted as an ill-posed inverse problem, and kernels define appropriate regularisers for the problem.
Date and time info: Thursday 11:15 - 12:45, first meeting on November 19
Prerequisites: Linear algebra, elementary probability theory and functional analysis (the relevant results for this course will be summarised)
Language: English
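To make the regularisation viewpoint of this course concrete, a standard sketch (notation only; precise statements are part of the course): given data $(x_1, y_1), \dots, (x_n, y_n)$, a kernel $k$ with associated RKHS $\mathcal{H}$ and a loss function $L$, one minimises the regularised empirical risk
$$\frac{1}{n} \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr) + \lambda \|f\|_{\mathcal{H}}^2 \quad \text{over } f \in \mathcal{H},$$
and the representer theorem guarantees a minimiser of the form $f(x) = \sum_{i=1}^{n} \alpha_i\, k(x_i, x)$, so the infinite-dimensional problem reduces to determining the coefficients $\alpha_1, \dots, \alpha_n$.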
About this lecture
Due to the rather broad spectrum of topics within the IMPRS, the curriculum consists of a core curriculum to be attended by all students and a variety of more specialized lectures and courses. The heart of our teaching program is certainly the Ringvorlesung. Each semester the Ringvorlesung focuses on one field and is usually delivered by scientific members of the IMPRS, who introduce different approaches and visions within this field.
Schedule
Noémie Combe: A journey in the world of geometry and algebra
Dates: 03.11.2020, 10.11.2020, 17.11.2020, 24.11.2020
Jürgen Jost:
01.12.2020: Riemannian geometry and theoretical physics
08.12.2020: Information geometry and statistics
15.12.2020: The concept of curvature and network analysis
22.12.2020: Metric geometry and machine learning
Jonas Hirsch: Classical minimal surfaces and their genus
Dates: 05.01.2021, 19.01.2021, 26.01.2021, 02.02.2021
Date and time info: 13.30 - 15.00
Audience: IMPRS students (mandatory in their first year), PhD students, postdocs
This is an intensive short course on enumerative geometry. The course gives an introduction to intersection theory and applies the acquired techniques to some classical problems. We will introduce the basics of intersection theory: the Chow ring, Chern classes, and Schubert calculus. The theoretical tools developed will be applied to the enumerative geometry of some Grassmannian problems and to the Thom-Porteous formula for computing the degree of determinantal varieties. If time permits, we will draw connections to the representation theory of the general linear group.
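A classical example of the kind of enumerative computation we have in mind (stated informally; the details will be worked out in the course): the lines in $\mathbb{P}^3$ meeting a fixed general line form a Schubert cycle of class $\sigma_1$ in the Chow ring of the Grassmannian $G(1,3)$, and Schubert calculus gives $\sigma_1^4 = 2$, so exactly two lines in $\mathbb{P}^3$ meet four general lines.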
Lecture 1, 2: Basics of intersection theory. Chow ring. Grassmannians.
Lecture 3, 4: Chern classes. Schubert calculus. Enumerative problems.
Lecture 5, 6: Thom-Porteous's formula. Representation theory.
References
D. Eisenbud, J. Harris 3264 and All That (Cambridge 2016) [main reference; lecture notes adapted from this reference will be provided (in non-final version)]
E. Arbarello, M. Cornalba, P. A. Griffiths, J. Harris Geometry of Algebraic Curves, Vol. I (Springer 1985)
L. Manivel Symmetric Functions, Schubert Polynomials, and Degeneracy Loci (SMF/AMS 1998)
W. Fulton, J. Harris Representation Theory: A First Course (Springer 1991)
Date and time info: January 11 / 13 / 15 / 18 / 20 / 22, 2021: 16:30 - 18:30 CET
Prerequisites: A first course in algebraic geometry is recommended but not strictly required. Familiarity with the notion of algebraic variety and the Zariski topology in affine and projective space is assumed. Some familiarity with commutative algebra or algebraic topology will be helpful but not necessary.
In this course, we consider equilibrium large deviations around invariant measures and the two-scale problem of fluctuations of stochastic systems for large times and small noise. The lecture will cover selected topics from large deviations theory for SDEs and SPDEs. We will start with a brief introduction to large deviations theory, recalling basic notions and results, so that no advanced previous knowledge is required.
Date and time info: Friday, 16:15 - 17:45
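For orientation, the basic notion referred to above, in sketch form (precise statements, and the versions adapted to SDEs and SPDEs, will be given in the lecture): a family of probability measures $(\mu_\varepsilon)_{\varepsilon > 0}$ satisfies a large deviation principle with rate function $I$ if, for every Borel set $A$,
$$-\inf_{x \in A^\circ} I(x) \;\le\; \liminf_{\varepsilon \to 0} \varepsilon \log \mu_\varepsilon(A) \;\le\; \limsup_{\varepsilon \to 0} \varepsilon \log \mu_\varepsilon(A) \;\le\; -\inf_{x \in \bar{A}} I(x),$$
where $A^\circ$ and $\bar{A}$ denote the interior and the closure of $A$.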