Problems in high spatial dimensions are typically subject to the "curse of dimensionality", which roughly means that the computational work needed to realize a desired target accuracy increases exponentially in the spatial dimension. As a consequence, on the one hand, standard methods developed for low-dimensional regimes become prohibitively expensive; on the other hand, classical regularity notions no longer provide meaningful complexity criteria. A possible remedy is to unveil a hidden low dimensionality of the objects of interest, typically expressed in terms of "sparsity". This talk addresses tensor sparsity of solutions to high-dimensional PDEs, aiming at approximating a solution within a given accuracy tolerance by possibly short expansions in solution-dependent tensors.
The two central themes are: (i) under which assumptions on the data the solutions are tensor sparse in a certain sense, which can be viewed as a "regularity theorem"; (ii) how to stably generate tensor approximations with provably near-minimal ranks. We present first results concerning (i) and (ii) for a simple model problem in an infinite-dimensional framework, drawing on and highlighting, in particular, several pivotal contributions by W. Hackbusch in this field.
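For orientation, a minimal instance of such an expansion (the notation here is assumed, not taken from the talk): for a target accuracy $\varepsilon$, one seeks a rank-$r$ approximation
\[ \Big\| u - \sum_{k=1}^{r} c_k \bigotimes_{i=1}^{d} u_k^{(i)} \Big\| \le \varepsilon, \]
where the factors $u_k^{(i)}$ are low-dimensional functions adapted to the solution $u$, and tensor sparsity means that $r = r(\varepsilon)$ grows only moderately as $\varepsilon \to 0$.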
The hierarchical Tucker tensor format, introduced by Hackbusch et al., and the related tensor train (TT) format (Tyrtyshnikov) have been introduced recently. These representations offer a stable and robust approximation of high-order tensors and multivariate functions at low cost. For many high-dimensional problems, including many-body quantum mechanics, uncertainty quantification, and the Fokker-Planck equation, this approach has a certain potential to circumvent the curse of dimensionality. In the case $\mathcal{V}=\bigotimes_{i=1}^d\mathcal{V}_i$, the complexity is proportional to $d$ and polynomial in some multilinear ranks. The approximation properties with respect to these ranks depend on bilinear approximation rates and corresponding trace class norms. Despite fundamental problems in multilinear approximation, optimal convergence rates can be shown under certain conditions. In the case $\mathcal{V}=\bigotimes_{i=1}^d\mathbb{C}^2$, these formats are equivalent to tree tensor network states and matrix product states (MPS) introduced for the treatment of quantum spin systems. (Under the assumption of moderate ranks, i.e. low entanglement, this approximation enables quantum computing without quantum computers.) For numerical computations, we consider the solution of quadratic optimization problems constrained by the restriction to discrete tensors of bounded multilinear ranks $\mathbf{r}$. The underlying admissible set is no longer convex, but it is an algebraic variety; outside its singular points, it is an (open) analytic Riemannian manifold of such tensors. We consider optimization on this manifold and corresponding gradient schemes. The loss of convexity causes essential difficulties in proving convergence; however, some convergence results can be established by applying the Łojasiewicz inequality and related techniques.
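As a concrete illustration of the TT/MPS representation mentioned above, the following minimal Python/NumPy sketch (all names are hypothetical) evaluates a tensor given by TT cores $G_k \in \mathbb{R}^{r_{k-1}\times n_k \times r_k}$ at a multi-index; the cost per entry is linear in $d$ and polynomial in the ranks:

    import numpy as np

    def tt_eval(cores, idx):
        """Evaluate a tensor given in TT/MPS format at a multi-index.
        cores[k] has shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1."""
        v = np.ones(1)
        for G, i in zip(cores, idx):
            v = v @ G[:, i, :]  # contract along the current rank index
        return v.item()

    # A rank-1 example: u(i1, i2, i3) = a[i1] * b[i2] * c[i3].
    a, b, c = np.arange(1.0, 4.0), np.arange(1.0, 5.0), np.arange(1.0, 3.0)
    cores = [a.reshape(1, 3, 1), b.reshape(1, 4, 1), c.reshape(1, 2, 1)]
    assert tt_eval(cores, (2, 3, 1)) == a[2] * b[3] * c[1]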
Various mathematical problems are challenged by the fact that they involve functions of a very large number of variables. Such problems arise naturally in learning theory, partial differential equations, and numerical models depending on parametric or stochastic variables. They typically result in numerical difficulties due to the so-called "curse of dimensionality". We shall explain how these difficulties may be handled in the context of stochastic-parametric PDEs, based on the concept of sparse polynomial approximation.
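Schematically, and in notation assumed here rather than taken from the talk, one approximates the parametric solution map $y \mapsto u(y)$ by
\[ u(y) \approx \sum_{\nu \in \Lambda} u_\nu \, y^\nu, \qquad y^\nu := \prod_{j\ge 1} y_j^{\nu_j}, \]
where $\Lambda$ is a sparse (typically downward closed) set of multi-indices whose cardinality, rather than the nominal number of variables, governs the computational cost.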
Direct numerical approximation of high-frequency wave propagation typically requires a very large number of unknowns $N$. We will consider fast algorithms for iterative methods applied to boundary integral formulations and to variable-coefficient differential equations. For integral formulations we present a multilevel fast multipole method based on directional decomposition, which can be proved to have near-optimal complexity $O(N\log N)$. A random sampling algorithm for matrix compression increases the efficiency. For the variable-coefficient frequency-domain differential equation we develop new preconditioners based on sweeping processes. Hierarchical matrix techniques for compression and moving perfectly matched layers play important roles in generating algorithms of close to optimal computational complexity.
This talk is devoted to multilevel techniques for the numerical solution of partial differential equations with discontinuous coefficients. Some algorithms and theory concerning discretizations and solvers for static as well as dynamic problems will be presented.
We discuss Petrov-Galerkin discretizations with optimal test spaces, or equivalently, least squares methods, for solving convection-dominated convection-diffusion problems. In view of the expected occurrence of layers, we aim at methods that yield near-best approximations of the primal field variable in $L_2$ by discontinuous piecewise polynomials. A necessary condition for obtaining a robust method is that it still applies in the limit of vanishing diffusion. We compare some existing and new Petrov-Galerkin formulations in view of both of these design principles, and illustrate our findings with some numerical comparisons.
The talk is based on joint work with Dirk Broersen (KdVI, Amsterdam).
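In a schematic formulation of the approach described above (notation assumed): if $b(\cdot,\cdot)$ denotes the bilinear form of the variational problem on $U \times V$, the optimal test space for a trial space $U_h \subset U$ is $T(U_h)$, where the trial-to-test map $T: U \to V$ is defined by
\[ \langle T w, v \rangle_V = b(w, v) \quad \text{for all } v \in V, \]
and the resulting Petrov-Galerkin solution coincides with the least squares minimizer of the residual in the dual norm, $u_h = \operatorname{argmin}_{w \in U_h} \| b(w,\cdot) - f \|_{V'}$.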
Fourth order elliptic variational inequalities appear in obstacle problems for Kirchhoff plates and in optimal control problems constrained by second order elliptic partial differential equations. The numerical analysis of these variational inequalities is more challenging than in the second order case because the complementarity forms of fourth order variational inequalities exist only in a weak sense. In this talk we will present a new approach to the analysis of finite element methods for fourth order elliptic variational inequalities that is applicable to $C^1$ finite element methods, classical nonconforming finite element methods, and discontinuous Galerkin methods.
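As a model case (one common formulation, not necessarily the one treated in the talk), the obstacle problem for a clamped Kirchhoff plate reads: find $u$ in the convex set $K = \{ v \in H^2_0(\Omega) : v \ge \psi \text{ in } \Omega \}$ such that
\[ \int_\Omega D^2 u : D^2 (v-u) \, dx \ \ge\ \int_\Omega f (v-u)\, dx \quad \text{for all } v \in K, \]
where $\psi$ is the obstacle; the fourth order character of the bilinear form is what weakens the complementarity conditions mentioned above.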
We will discuss our ongoing investigation of hp-adaptive finite elements. We will focus on a posteriori error estimates based on superconvergent derivative recovery. Besides providing both global error estimates and local error indicators, this family of error estimates also provides information that forms the basis of our hp-adaptive refinement and coarsening strategies. In particular, these a posteriori error estimates address, in a cost-efficient and natural way, the critical issue of deciding between h- and p-refinement/coarsening. Some numerical examples will be provided.
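The following Python sketch is purely illustrative of the kind of decision logic involved (all names and the threshold are hypothetical; the actual strategy is based on the recovered-derivative estimates discussed in the talk):

    def mark_elements(elements, eta, smoothness, tol):
        """Illustrative hp-marking: refine where the local error indicator
        eta[e] is large; a local smoothness indicator chooses between
        raising the polynomial degree (p) and splitting the element (h)."""
        actions = {}
        for e in elements:
            if eta[e] <= tol:
                continue                    # error already acceptable
            if smoothness[e] > 0.5:         # solution locally smooth
                actions[e] = "p-refine"     # increase polynomial degree
            else:
                actions[e] = "h-refine"     # subdivide the element
        return actions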
E. Cancès, L. Lagardère, F. Lipparini, Y. Maday, B. Mennucci, J.-P. Piquemal, B. Stamm
In this contribution, we shall present an efficient, parallel, linear scaling implementation of the conductor-like screening model (COSMO), with van der Waals molecular cavities and classical charge distributions. The electrostatic energy contribution to the solvation energy, usually computed by solving an integral equation on the whole surface of the molecular cavity, can actually be computed by using an integral equation formulation of Schwarz’s domain decomposition method for boundary value problems.
We shall present in detail the implementation and its linear scaling properties, both in computational cost and memory requirements. Several numerical examples will also be presented on large linear and globular systems, for which the calculation of the energy and of the forces is achieved with timings compatible with the use of polarizable continuum solvation for molecular dynamics simulations.
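Schematically, and in notation assumed here (see the references below for the precise formulation), the electrostatic contribution is obtained from the reaction potential $W$ solving
\[ -\Delta W = 0 \ \text{in } \Omega, \qquad W = -\Phi_\rho \ \text{on } \partial\Omega, \]
where $\Omega$ is the van der Waals cavity, a union of balls $\Omega = \bigcup_j B_j$, and $\Phi_\rho$ is the potential generated by the solute charge distribution $\rho$. Schwarz's method replaces this global problem by coupled local problems on the individual balls, each solvable cheaply in spherical harmonics, which is the source of the linear scaling.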
Cancès, E., Maday, Y., & Stamm, B. (2013). Domain decomposition for implicit solvation models. Journal of Chemical Physics, 139, 054111. Lipparini, F., Stamm, B., Cancès, E., Maday, Y., & Mennucci, B. (2013). Fast domain decomposition algorithm for continuum solvation models: Energy and first derivatives. Journal of Chemical Theory and Computation, 9(8), 3637-3648.
Groundbreaking ideas in numerics are connected with great names such as Newton, Euler, and Gauß. In the numerics of partial differential equations, the major ideas go back to the last two centuries; two of them are central. The first is the finite element method, which goes back to K. H. Schellbach, 1851; the second is the iterative solution of large sparse algebraic systems, going back essentially to Gauß, 1823.
Numerics was boosted by the introduction of computers and their exploding computational power. Without numerics, however, computers would be mere data shuffling machines, not problem solvers. With increasing computational power, complexity became more and more the dominating issue of numerical computations. After Gauß's invention of iterative solvers for linear systems of equations, a second idea was necessary to overcome the complexity problem. This second step was the multigrid idea, first described by Brakhage, 1960, and thoroughly understood and mathematically analyzed by Wolfgang Hackbusch in 1976 and the following years.
The multigrid idea can be viewed as a special variant of the multiscale character of the physical world, whose analytical investigation was initiated by Einstein, 1905, and continued by numerous scientists throughout the 20th century to date. The multigrid idea, however, was conceived and introduced completely independently of these physical techniques, merely by studying the properties of the corresponding methods and equations.
Nowadays, computing plays a central and increasingly decisive role in industrial development and design. Thus, multigrid solvers are crucial components of most codes used in industrial simulation. Simulation in industry has become a big business, which ultimately is possible only due to the groundbreaking idea of multigrid methods.
Fast and robust multigrid methods open a new dimension of numerics for the realistic approximation and computation of physics. This gives a new boost to applied research, in particular in the life sciences. In the talk, we first introduce the basic ideas, then present applications and show results from several application areas.
This is a mathematical talk on how Prof. Hackbusch has repeatedly influenced my research on time-dependent problems through his books and workshops centering on spatial problems. The talk will concentrate on the numerical mathematics of dynamical problems related to three distant encounters: Oberwolfach 1987 (multigrid methods), Kiel 1991 (boundary integral equations), and Leipzig 2008 (tensor approximation).
The numerical computation of the eigenvalues and eigenvectors of a self-adjoint operator on an infinite-dimensional separable Hilbert space is a standard problem of numerical analysis and scientific computing, with a wide range of applications in science and engineering. Such problems are encountered in particular in mechanics (vibrations of elastic structures), electromagnetism and acoustics (resonant modes of cavities), and quantum mechanics (bound states of quantum systems).
Galerkin methods provide an efficient way to compute the discrete eigenvalues of a bounded-from-below self-adjoint operator A lying below the bottom of the essential spectrum of A. On the other hand, Galerkin methods may fail to approximate discrete eigenvalues located in spectral gaps, that is, between two points of the essential spectrum. In some cases, the Galerkin method cannot find some of the eigenvalues of A located in spectral gaps (lack of approximation); in other cases, the limit set of the spectra of the Galerkin approximations of A contains points which do not belong to the spectrum of A (spectral pollution). Such problems arise in various applications, such as the numerical simulation of photonic crystals, of doped semiconductors, or of heavy atoms with relativistic models.
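In schematic terms (notation assumed): if $(V_n)$ is a sequence of Galerkin spaces and $A_n$ the corresponding Galerkin approximations of $A$, spectral pollution occurs when there exist $\lambda_n \in \sigma(A_n)$ with
\[ \lambda_n \to \lambda \notin \sigma(A), \]
while lack of approximation means that some $\lambda \in \sigma(A)$ is not a limit of points of $\sigma(A_n)$.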
Quantum physics and chemistry also give rise to a variety of nonlinear elliptic eigenvalue problems, such as the stationary Gross-Pitaevskii equation modeling Bose-Einstein condensates, and the Hartree-Fock and Kohn-Sham models commonly used in electronic structure calculation.
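For instance, the stationary Gross-Pitaevskii equation can be written (in one common normalization, assumed here) as the nonlinear eigenvalue problem
\[ -\tfrac12 \Delta u + V u + \beta |u|^2 u = \lambda u, \qquad \|u\|_{L^2} = 1, \]
where $V$ is the trapping potential and $\beta$ the interaction parameter; the eigenvalue $\lambda$ appears as the Lagrange multiplier of the normalization constraint.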
In this talk, I will review various results obtained in collaboration with Rachida Chakir, Virginie Ehrlacher and Yvon Maday, on the numerical analysis of linear and nonlinear elliptic eigenvalue problems encountered in quantum physics and chemistry.
The energy of the Francfort-Marigo model of brittle fracture can be approximated, in the sense of $\Gamma$-convergence, by the Ambrosio-Tortorelli functional. We formulate and analyse an adaptive finite element algorithm for the computation of its (local) minimisers. In the algorithm, we combine a Newton-type method with a residual-driven adaptive mesh refinement. We present two theoretical results which demonstrate convergence of the algorithm to local minimisers of the Ambrosio-Tortorelli functional. The talk is based on joint work with Siobhan Burke (University of Oxford) and Christoph Ortner (University of Warwick).
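For reference, one common form of the Ambrosio-Tortorelli regularization (conventions vary; the form below is assumed, not quoted from the talk) is
\[ AT_\varepsilon(u, v) = \frac12 \int_\Omega (v^2 + \eta_\varepsilon)\, |\nabla u|^2 \, dx + \int_\Omega \Big( \varepsilon |\nabla v|^2 + \frac{(1-v)^2}{4\varepsilon} \Big)\, dx, \]
where $v$ is a phase field approximating the crack set, $0 < \eta_\varepsilon \ll \varepsilon$ is a small regularization parameter, and the $\Gamma$-limit as $\varepsilon \to 0$ recovers the Francfort-Marigo energy.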
Participants
Markus Bachmayr
TU Berlin
Ramy Badr
University of Leipzig
Randolph E. Bank
University of California
Andrew Barker
Max Planck Institute for Dynamics of Complex Technical Systems
Peter Benner
Max Planck Institute for Dynamics of Complex Technical Systems
Steffen Börm
Universität Kiel
Dietrich Braess
Universität Bochum
Susanne C. Brenner
Louisiana State University
Jens Burmeister
Universität Kiel
Eric Cancès
École des Ponts ParisTech
Sambasiva Rao Chinnamsetty
Max Planck Institute for Mathematics in the Sciences
Gennady Chuev
Max Planck Institute for Mathematics in the Sciences
Albert Cohen
Université Pierre et Marie Curie
Wolfgang Dahmen
RWTH Aachen
Sergey Dolgov
Max Planck Institute for Mathematics in the Sciences
Björn Engquist
University of Texas at Austin
Mike Espig
TU Berlin
Antonio Falco
Universidad CEU Cardenal Herrera
Ivan Gavrilyuk
University of Cooperative Education Eisenach
Isabelle Greff
Université de Pau
Roland Griesmaier
Universität Leipzig
Stefan Handschuh
Max Planck Institute for Mathematics in the Sciences
Götz Hofmann
FH Flensburg
Maryna Kachanovska
Max Planck Institute for Mathematics in the Sciences
Venera Khoromskaya
Max Planck Institute for Mathematics in the Sciences
Boris Khoromskij
Max Planck Institute for Mathematics in the Sciences
Stefan Kuehn
Max Planck Institute for Mathematics in the Sciences
Peter Kunkel
Universität Leipzig
Jörg Lehnert
Max Planck Institute for Mathematics in the Sciences
Christian Lubich
Universität Tübingen
Hongjun Luo
Max Planck Institute for Mathematics in the Sciences
Yvon Maday
Université Pierre et Marie Curie
Mahboubeh Nadaf
Max Planck Institute for Mathematics in the Sciences
Asieh Parsania
Max Planck Institute for Mathematics in the Sciences
Rajaramesh Rajendiram
Universität Duisburg-Essen
Christian Schindler
Max Planck Institute for Mathematics in the Sciences
Reinhold Schneider
Technische Universität Berlin
Matti Schneider
TU Chemnitz
Corina Simian
Max Planck Institute for Mathematics in the Sciences
Rob Stevenson
University of Amsterdam
Endre Süli
University of Oxford
Alexander Veit
Max Planck Institute for Mathematics in the Sciences
Ivan Vyalov
Max Planck Institute for Mathematics in the Sciences
Gabriel Wittum
Universität Frankfurt
Jinchao Xu
Pennsylvania State University
Harry Yserentant
TU Berlin
Organizers
Jürgen Jost
Max-Planck-Institut für Mathematik in den Naturwissenschaften
Felix Otto
Max-Planck-Institut für Mathematik in den Naturwissenschaften