Conference on Mathematics of Machine Learning

Speakers' biographies

Yasaman Bahri


Google Research

Homepage
 Yasaman Bahri is a Research Scientist at Google Research. She has broad interests within machine learning, with a current focus on the foundations of deep learning. She has contributed to the theory of overparameterized neural networks and statistical mechanics approaches to deep learning. She received her Ph.D. in physics (2017), in the area of theoretical condensed matter, from the University of California, Berkeley. She is a recipient of the 2020 Rising Stars Award in EECS (UC Berkeley) and the NSF Graduate Fellowship.

Pierre Baldi


University of California, Irvine

Homepage
 Pierre Baldi earned MS degrees in Mathematics and Psychology from the University of Paris, and a PhD in Mathematics from the California Institute of Technology. He is currently Distinguished Professor in the Department of Computer Science, Director of the Institute for Genomics and Bioinformatics, and Associate Director of the Center for Machine Learning and Intelligent Systems at the University of California, Irvine. The long-term focus of his research is on understanding intelligence in brains and machines. He has made several contributions to the theory of deep learning, and has developed and applied deep learning methods to problems in the natural sciences, such as the detection of exotic particles in physics, the prediction of reactions in chemistry, and the prediction of protein secondary and tertiary structure in biology. He has written five books, including Deep Learning in Science (Cambridge University Press, 2021), and over 300 peer-reviewed articles. He is the recipient of the 1993 Lew Allen Award at JPL, the 2010 E. R. Caianiello Prize for research in machine learning, and a 2014 Google Faculty Research Award. He is an elected Fellow of the AAAS, AAAI, IEEE, ACM, and ISCB.

Peter Bartlett


University of California at Berkeley

Homepage
 Peter Bartlett is professor of Computer Science and Statistics at the University of California at Berkeley, Associate Director of the Simons Institute for the Theory of Computing, and Director of the Foundations of Data Science Institute. He has had appointments at the Queensland University of Technology, the Australian National University and the University of Queensland. His research interests include machine learning and statistical learning theory, and he is the co-author of the book Neural Network Learning: Theoretical Foundations. He has been Institute of Mathematical Statistics Medallion Lecturer, winner of the Malcolm McIntosh Prize for Physical Scientist of the Year, and Australian Laureate Fellow, and he is a Fellow of the IMS, Fellow of the ACM, and Fellow of the Australian Academy of Science.

Lénaïc Chizat


Université Paris-Saclay

Homepage
 Lénaïc Chizat is a CNRS researcher at Université Paris-Saclay. He obtained his PhD in applied mathematics at Université Paris-Dauphine in 2017. He is broadly interested in the mathematical analysis of data-driven algorithms, especially when infinite dimensional objects come up in the analysis. His current research interests include the theory of optimal transport and the theory of artificial neural networks.

Ha Quang Minh


RIKEN Center for Advanced Intelligence Project (RIKEN-AIP) in Tokyo, Japan

Homepage
 Ha Quang Minh is currently a Unit Leader at the RIKEN Center for Advanced Intelligence Project (RIKEN-AIP) in Tokyo, Japan. He received the Ph.D. degree in mathematics from Brown University, Providence, RI, USA, under the supervision of Steve Smale. Prior to joining RIKEN, he was a researcher in the Department of Pattern Analysis and Computer Vision (PAVIS) with the Istituto Italiano di Tecnologia (IIT), Genova, Italy. Prior to that, he held research positions at the University of Chicago, the University of Vienna, Austria, and Humboldt University of Berlin, Germany. His current research interests include machine learning and computational statistics, applied and computational functional analysis, applied and computational differential geometry, computer vision, and image and signal processing.

Stefanie Jegelka


Massachusetts Institute of Technology

Homepage
 Stefanie Jegelka is an X-Consortium Career Development Associate Professor in the Department of EECS at MIT. She is a member of the Computer Science and AI Lab (CSAIL), the Center for Statistics and an affiliate of IDSS and ORC. Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, Google research awards, a Two Sigma faculty research award, the German Pattern Recognition Award and a Best Paper Award at the International Conference for Machine Learning (ICML). Her research interests span the theory and practice of algorithmic machine learning.

Arnulf Jentzen


University of Münster

Homepage
 Since 2019, Arnulf Jentzen (*November 1983) has been a full professor of applied mathematics at the University of Münster. In 2004 he began his undergraduate studies in mathematics (minor: computer science) at Goethe University Frankfurt in Germany, where he received his diploma degree in 2007 and completed his PhD in mathematics in 2009. The core research topics of his group are machine learning approximation algorithms, computational stochastics, numerical analysis for high-dimensional partial differential equations (PDEs), stochastic analysis, and computational finance. He is particularly interested in deep learning based algorithms for high-dimensional approximation problems and various kinds of differential equations. His research group currently consists of one postdoctoral fellow and seven PhD students. He serves as an associate/division editor for the Annals of Applied Probability (AAP, since 2019), Communications in Computational Physics (CiCP, since 2021), Communications in Mathematical Sciences (CMS, since 2015), Discrete and Continuous Dynamical Systems Series B (DCDS-B, since 2018), the Journal of Complexity (JoC, since 2016), the Journal of Mathematical Analysis and Applications (JMAA, since 2014), the SIAM/ASA Journal on Uncertainty Quantification (JUQ, since 2020), the SIAM Journal on Scientific Computing (SISC, since 2020), the SIAM Journal on Numerical Analysis (SINUM, since 2016), Partial Differential Equations and Applications (SN PDE, since 2019), and the Journal of Applied Mathematics and Physics (ZAMP, since 2016). He has received a series of awards and prizes; in particular, in 2020 he was awarded the Felix Klein Prize of the European Mathematical Society (EMS). Further details on the activities of his research group can be found on its webpage.

Kathlén Kohn


KTH Royal Institute of Technology

Homepage
 Kathlén Kohn has been an assistant professor in Mathematics of Data and AI at KTH Royal Institute of Technology since September 2019. She obtained her PhD from the Technical University of Berlin in 2018. Afterwards she was a postdoctoral researcher at the Institute for Computational and Experimental Research in Mathematics (ICERM) at Brown University and at the University of Oslo. Kathlén's goal is to understand the intrinsic geometric structures behind machine learning and AI systems in general, and to provide a rigorous and well-understood theory explaining them. Her areas of expertise are algebraic, differential, and tropical geometry, as well as invariant theory.
Kathlén believes in the importance of the interaction between different scientific fields. She enjoys collaborating with scientists within and outside of mathematics to tackle applied problems and to discover interesting questions for pure mathematics motivated by applications. At the IEEE International Conference on Computer Vision (ICCV) 2019 her joint work with computer vision expert Tomas Pajdla (CTU Prague) and the numerical algebraic geometers Anton Leykin and Timothy Duff (both at Georgia Tech) received the Best Student Paper Award.

Věra Kůrková


Czech Academy of Sciences

Homepage
 Věra Kůrková is a senior scientist at the Department of Machine Learning, Institute of Computer Science of the Czech Academy of Sciences. She received her PhD in mathematics from Charles University, Prague, and her DrSc. (Research Professor) degree in theoretical computer science from the Czech Academy of Sciences. Her research interests are in the mathematical theory of neurocomputing and machine learning. In particular, her work includes analysis of the capabilities and limitations of shallow and deep networks, the dependence of network complexity on the increasing dimensionality of computational tasks, connections between the theory of inverse problems and generalization in learning from data, and the development of a new branch of nonlinear approximation theory. She has presented keynote plenary lectures and tutorials at major conferences in the field (e.g., IJCNN 2019, INNS BDDL 2019, ESCIM 2017, ICANN 2016). In 2010, she received the Bolzano Medal for her contribution to mathematical sciences from the Czech Academy of Sciences. Since 2008, she has been a member of the Board of the European Neural Network Society (ENNS), and in 2017–2019 she served as its president. She is a member of the editorial boards of the journals Neural Networks, Neural Processing Letters, and Applied and Computational Harmonic Analysis, and was a guest editor of special issues of Neural Networks and Neurocomputing. She was the general chair of the conferences ICANN 2018, ICANN 2008, and ICANNGA 2001, and co-chair or honorary chair of several ICANN and EANN conferences.

Jason Lee


Princeton University

Homepage
 Jason Lee is an assistant professor in Electrical Engineering and Computer Science (courtesy) at Princeton University. Prior to that, he was in the Data Science and Operations department at the University of Southern California and a postdoctoral researcher at UC Berkeley working with Michael I. Jordan. Jason received his PhD at Stanford University, advised by Trevor Hastie and Jonathan Taylor. His research interests are in the theory of machine learning, optimization, and statistics. Lately, he has worked on the foundations of deep learning, non-convex optimization algorithms, and reinforcement learning. He received a Sloan Research Fellowship in 2019 and the NIPS Best Student Paper Award, and was a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization.

Stanley Osher


University of California, Los Angeles

Homepage
 Stan Osher received his PhD in Mathematics from New York University in 1966. He has been at UCLA since 1976, where he is now a Professor of Mathematics, Computer Science, Electrical Engineering, and Chemical and Biomolecular Engineering. He has been elected to the US National Academy of Sciences, the US National Academy of Engineering, and the American Academy of Arts and Sciences. He was awarded the SIAM Pioneer Prize at the 2003 ICIAM conference and the Ralph E. Kleinman Prize in 2005, and received honorary doctoral degrees from ENS Cachan, France, in 2006 and from Hong Kong Baptist University in 2009. He is a SIAM and AMS Fellow. He gave a one-hour plenary address at the 2010 International Congress of Mathematicians, and the John von Neumann Lecture at the SIAM 2013 annual meeting. He is a Thomson Reuters/Clarivate highly cited researcher, among the top 1% from 2002 to the present in both Mathematics and Computer Science, with an h-index of 120. In 2014 he received the Carl Friedrich Gauss Prize from the International Mathematical Union, which is regarded as the highest prize in applied mathematics. In 2016 he received the William Benter Prize. His current interests involve data science, which includes optimization, image processing, compressed sensing, machine learning, neural nets, and applications of these techniques.

Jeffrey Pennington


Google Brain

Homepage
 Jeffrey Pennington is a Staff Research Scientist at Google Brain. Prior to this, he was a postdoctoral fellow at Stanford University, as a member of the Stanford Artificial Intelligence Laboratory in the Natural Language Processing (NLP) group. He received his Ph.D. in theoretical particle physics from Stanford University while working at the SLAC National Accelerator Laboratory. Jeffrey’s research interests are multidisciplinary, ranging from the development of calculational techniques in perturbative quantum field theory to the vector representation of words and phrases in NLP to the theoretical analysis of wide neural networks and related kernel methods. Recently, his work has focused on deep learning in the high-dimensional regime.

Stefano Soatto


University of California, Los Angeles

Homepage
 Professor Soatto received his Ph.D. in Control and Dynamical Systems from the California Institute of Technology in 1996; he joined UCLA in 2000 after serving as Assistant and then Associate Professor of Electrical and Biomedical Engineering at Washington University, and as Research Associate in Applied Sciences at Harvard University. Between 1995 and 1998 he was also Ricercatore in the Department of Mathematics and Computer Science at the University of Udine, Italy. He received his D.Ing. degree (highest honors) from the University of Padova, Italy, in 1992.
His general research interests are in Computer Vision and Nonlinear Estimation and Control Theory. In particular, he is interested in ways for computers to use sensory information (e.g. vision, sound, touch) to interact with humans and the environment.
Dr. Soatto is the recipient of the David Marr Prize (with Y. Ma, J. Kosecka and S. Sastry of U.C. Berkeley) for work on Euclidean reconstruction and reprojection up to subgroups. He also received the Siemens Prize with the Outstanding Paper Award from the IEEE Computer Society for his work on optimal structure from motion (with R. Brockett of Harvard). He received the National Science Foundation Career Award and the Okawa Foundation Grant. He is Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) and a Member of the Editorial Board of the International Journal of Computer Vision (IJCV) and Foundations and Trends in Computer Graphics and Vision.

Lenka Zdeborová


École Polytechnique Fédérale de Lausanne

Homepage
 Lenka Zdeborová is a Professor of Physics and of Computer Science at École Polytechnique Fédérale de Lausanne, where she leads the Statistical Physics of Computation Laboratory. She received a PhD in physics from the University of Paris-Sud and from Charles University in Prague in 2008. She spent two years at the Los Alamos National Laboratory as a Director's Postdoctoral Fellow. Between 2010 and 2020 she was a researcher at CNRS, working in the Institute of Theoretical Physics in CEA Saclay, France. She was awarded the CNRS bronze medal in 2014, the Philippe Meyer Prize in theoretical physics and an ERC Starting Grant in 2016, the Irène Joliot-Curie Prize in 2018, and the Gibbs Lectureship of the AMS in 2021. She is an editorial board member of Journal of Physics A, Physical Review E, Physical Review X, SIMODS, Machine Learning: Science and Technology, and Information and Inference. Lenka's expertise is in applications of concepts from statistical physics, such as advanced mean-field methods, the replica method, and related message-passing algorithms, to problems in machine learning, signal processing, inference, and optimization. She enjoys erasing the boundaries between theoretical physics, mathematics, and computer science.

Date and Location

August 04 - 07, 2021 (previously planned for February 22 - 25, 2021)
Center for Interdisciplinary Research (ZiF), Bielefeld University
Methoden 1
33615 Bielefeld

Scientific Organizers

Benjamin Gess, MPI for Mathematics in the Sciences & Universität Bielefeld

Guido Montúfar, MPI for Mathematics in the Sciences & UCLA

Nihat Ay
Hamburg University of Technology
Institute of Data Science Foundations

Administrative Contact

Antje Vandenberg
MPI for Mathematics in the Sciences
Contact by Email
