Randolph E. Bank
Some Algorithmic Aspects of hp-Adaptive Finite Elements
We will discuss our on-going investigation of
hp-adaptive finite elements. We will focus on a posteriori
error estimates based on superconvergent derivative recovery.
Besides providing both global error estimates and local
error indicators, this family of error estimates also provides
information that forms the basis of our hp-adaptive refinement
and coarsening strategies. In particular, these a posteriori error
estimates address, in a cost-efficient and natural way, the critical
issue of deciding between h- and p-refinement/coarsening. Some
numerical examples will be provided.
Susanne C. Brenner
Finite Element Methods for Fourth Order Elliptic Variational Inequalities
Fourth order elliptic variational inequalities appear in obstacle
problems for Kirchhoff plates and optimal control problems constrained
by second order elliptic partial differential equations. The numerical
analysis of these variational inequalities is more challenging than the
analysis in the second order case because the complementarity forms of
fourth order variational inequalities only exist in a weak sense. In
this talk we will present a new approach to the analysis of finite element
methods for fourth order elliptic variational inequalities that are
applicable to C1 finite element methods, classical nonconforming finite
element methods, and discontinuous Galerkin methods.
On linear and nonlinear elliptic eigenvalue problems
The numerical computation of the eigenvalues and eigenvectors of a self-adjoint operator on an infinite-dimensional separable Hilbert space is a standard problem of numerical analysis and scientific computing, with a wide range of applications in science and engineering. Such problems are encountered in particular in mechanics (vibrations of elastic structures), electromagnetism and acoustics (resonant modes of cavities), and quantum mechanics (bound states of quantum systems).
Galerkin methods provide an efficient way to compute the discrete eigenvalues of a bounded-from-below self-adjoint operator A lying below the bottom of the essential spectrum of A. On the other hand, Galerkin methods may fail to approximate discrete eigenvalues located in spectral
gaps, that is, between two points of the essential spectrum. In some cases, the Galerkin method cannot find some of the eigenvalues of A located in spectral gaps (lack of approximation); in other cases, the limit set of the spectrum of the Galerkin approximations of A contains
points which do not belong to the spectrum of A (spectral pollution). Such problems arise in various applications, such as the numerical simulation of photonic crystals, of doped semiconductors, or of heavy atoms with relativistic models.
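As a concrete illustration of the well-behaved case, the following Python sketch (a generic textbook computation, assuming NumPy and SciPy are available, and not taken from the talk) builds the Galerkin (P1 finite element) approximation of the eigenvalues of u ↦ -u'' on (0, π) with Dirichlet boundary conditions; here the spectrum is purely discrete, the exact eigenvalues are k², and no spectral pollution occurs.

```python
import numpy as np
from scipy.linalg import eigh

# Galerkin (P1 finite element) approximation of the eigenvalues of the
# self-adjoint operator u -> -u'' on (0, pi) with Dirichlet boundary
# conditions; its exact eigenvalues are k^2, k = 1, 2, ...
n = 200                       # number of interior mesh nodes
h = np.pi / (n + 1)

# Stiffness matrix K and mass matrix M for piecewise-linear hat functions.
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
M = h / 6.0 * (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1))

# Discrete eigenvalues: generalized symmetric problem K x = lambda M x.
lam = eigh(K, M, eigvals_only=True)
print(lam[:3])                # close to (and above) 1, 4, 9
```

By the min-max principle, a conforming Galerkin method approximates such eigenvalues from above, so the computed values decrease toward k² under mesh refinement.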
Quantum physics and chemistry also give rise to a variety of nonlinear elliptic eigenvalue problems, such as the stationary Gross-Pitaevskii equation modeling Bose-Einstein condensates, and the Hartree-Fock and Kohn-Sham models commonly used in electronic structure calculation.
In this talk, I will review various results obtained in collaboration with Rachida Chakir, Virginie Ehrlacher and Yvon Maday, on the numerical analysis of linear and nonlinear elliptic eigenvalue problems encountered in quantum physics and chemistry.
High dimensional sparse polynomial approximation of parametric PDE's
Various mathematical problems are challenged by the fact that they involve functions of a very large number of variables. Such problems arise naturally in learning theory, partial differential equations, or numerical models depending on parametric or stochastic variables. They typically result in numerical difficulties due to the so-called "curse of dimensionality". We shall explain how these difficulties may be handled in the context of stochastic-parametric PDEs based on the concept of sparse polynomial approximation.
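A simple dimension count (generic, not specific to the talk) conveys the gain from sparse polynomial spaces: in d variables, the full tensor-product space of degree k per variable grows exponentially in d, while the total-degree space grows only polynomially.

```python
import math

# Dimension counts for polynomial spaces in d variables, degree k:
# the full tensor-product space has (k + 1)^d basis functions, while the
# total-degree ("sparse") space has binom(k + d, d) -- polynomial in d.
def total_degree_dim(d, k):
    return math.comb(k + d, d)

d, k = 10, 4
full = (k + 1) ** d
sparse = total_degree_dim(d, k)
print(full, sparse)           # 9765625 vs 1001
```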
Tensors, Sparsity, and High-dimensional PDEs
Problems in high spatial dimensions are typically subject to the "curse of dimensionality", which roughly means that the computational work needed to realize a desired target accuracy increases exponentially in the spatial dimension. As a consequence, on the one hand, standard methods developed for low-dimensional regimes become prohibitively expensive. On the other hand, classical regularity notions no longer provide meaningful complexity criteria. A possible remedy is to unveil a hidden low dimensionality of the objects of interest, typically expressed in terms of "sparsity". This talk addresses tensor sparsity of solutions to high-dimensional PDEs, aiming at approximating a solution within a given accuracy tolerance by possibly short expansions in solution-dependent tensors.
The two central themes are: (i) under which assumptions on the data the solutions are, in a certain sense, tensor sparse, which can be viewed as a "regularity theorem"; (ii) how to stably generate tensor approximations with provably near-minimal ranks. We present first results concerning (i) and (ii)
for a simple model problem in an infinite-dimensional framework, drawing on and highlighting, in particular, several pivotal contributions by W. Hackbusch in this field.
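The following generic Python sketch (a two-dimensional toy, not the model problem of the talk) hints at what such hidden low dimensionality looks like: samples of a smooth function of two variables form a matrix whose singular values decay rapidly, so a short sum of tensor-product (rank-one) terms already captures it.

```python
import numpy as np

# Sample a smooth bivariate function on a 400 x 400 grid; the singular
# values of the sample matrix decay rapidly, so a short expansion in
# rank-one (tensor-product) terms captures the function to high accuracy.
x = np.linspace(0.0, 1.0, 400)
F = 1.0 / (1.0 + np.add.outer(x, x))       # f(x, y) = 1 / (1 + x + y)
s = np.linalg.svd(F, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))       # numerical rank at tolerance 1e-10
print(rank)                                # far smaller than 400
```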
Fast algorithms for high frequency wave propagation
Direct numerical approximation of high frequency wave propagation typically requires a very large number of unknowns (N). We will consider fast algorithms for iterative methods applied to boundary integral formulations and to variable coefficient differential equations. For integral formulations we present a multi-level fast multipole method based on directional decomposition, which can be proved to have near optimal order of complexity: O(N log N). A random sampling algorithm for matrix compression increases the efficiency. In the variable coefficient frequency domain differential equation case we develop new preconditioners based on sweeping processes. Hierarchical matrix techniques for compression or moving perfectly matched layers play important roles in generating algorithms of close to optimal computational complexity.
Distant encounters with Prof. Hackbusch
This is a mathematical talk on how Prof. Hackbusch has repeatedly influenced my research on time-dependent problems through his books and workshops centering on spatial problems.
The talk will concentrate on the numerical mathematics of dynamical problems related to three distant encounters:
1987 Oberwolfach --- multigrid methods
1991 Kiel --- boundary integral equations
2008 Leipzig --- tensor approximation.
A new fast algorithm based on domain decomposition method for the computation of molecular dynamics in a polarizable, continuum solvent
E. Cancès, L. Lagardère, F. Lipparini, Y. Maday, B. Mennucci, J.-P. Piquemal, B. Stamm
In this contribution, we shall present an efficient, parallel, linear scaling implementation of the conductor-like screening model (COSMO), with van der Waals molecular cavities and classical charge distributions. The
electrostatic energy contribution to the solvation energy, usually computed by solving an integral equation on the whole surface of the molecular cavity, can actually be computed by using an integral equation formulation of Schwarz’s domain decomposition method for boundary value problems.
We shall present in detail the implementation and its linear scaling properties, both in computational cost and memory requirements. Several numerical examples will also be presented on linear and globular large-sized systems,
for which the calculation of the energy and of the forces is
achieved with timings compatible with the use of polarizable
continuum solvation for molecular dynamics simulations.
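To convey the flavor of Schwarz's method in the simplest possible setting, here is a toy Python sketch: an alternating Schwarz iteration on two overlapping intervals for a one-dimensional Poisson problem. This is a generic textbook iteration chosen for illustration, not the integral-equation variant on the cavity surface used in the implementation described above.

```python
import numpy as np

n = 101                                    # grid nodes on [0, 1]
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)                             # right-hand side of -u'' = f
u = np.zeros(n)                            # initial guess, u = 0 on boundary

def solve_subdomain(u, lo, hi):
    # Exact Dirichlet solve of -u'' = f on nodes lo+1..hi-1, with boundary
    # values taken from the current global iterate u.
    m = hi - lo - 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f[lo + 1:hi].copy()
    rhs[0] += u[lo] / h**2
    rhs[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, rhs)

for _ in range(30):                        # alternating Schwarz sweeps
    solve_subdomain(u, 0, 60)              # subdomain 1: [0, 0.6]
    solve_subdomain(u, 40, n - 1)          # subdomain 2: [0.4, 1], overlap 0.2

exact = 0.5 * x * (1.0 - x)                # exact solution of -u'' = 1
print(np.max(np.abs(u - exact)))           # tiny after enough sweeps
```

Each sweep contracts the error by a factor determined by the overlap; with overlap 0.2 the iteration converges geometrically to the global solution.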
Cancès, E., Maday, Y., & Stamm, B. (2013). Domain decomposition for implicit solvation models. The Journal of Chemical Physics, 139, 054111.
Lipparini, F., Stamm, B., Cancès, E., Maday, Y., & Mennucci, B. (2013). Fast domain decomposition algorithm for continuum solvation models: Energy and first derivatives. Journal of Chemical Theory and Computation, 9(8), 3637-3648.
Recent tensor formats for solving high-dimensional PDEs
The hierarchical Tucker tensor format of Hackbusch et al. and the tensor train (TT) format of Tyrtyshnikov have been introduced recently. These representations offer stable and robust approximation of high-order tensors and multivariate functions at low computational cost. For many high-dimensional problems, including many-body quantum mechanics, uncertainty quantification and the Fokker-Planck equation, this approach has a certain potential to circumvent the curse of dimensionality.
In the case V = V_1 ⊗ ⋯ ⊗ V_d, the complexity is proportional to d and polynomial in the multilinear ranks. The approximation properties with respect to these ranks depend on bilinear approximation rates and corresponding trace-class norms. Despite fundamental problems in multilinear approximation, optimal convergence rates could be shown under certain conditions. In the case V = ℂ² ⊗ ⋯ ⊗ ℂ², these formats are equivalent to tree tensor network states and matrix product states (MPS) introduced for the treatment of quantum spin systems. (Under the assumption of moderate ranks, i.e. low entanglement, this approximation enables quantum computing without quantum computers.)
For numerical computations, we consider the solution of quadratic optimization problems constrained to discrete tensors of bounded multilinear rank r. The underlying admissible set is no longer convex, but it is an algebraic variety. Outside its singular points, it is an analytic (open) Riemannian manifold, and we consider optimization on this manifold and corresponding gradient schemes. The loss of convexity causes essential difficulties in proving convergence; however, some convergence results could be established by applying the Łojasiewicz inequality and related techniques.
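As a minimal, self-contained sketch of how such formats are computed in practice, the following Python code implements the basic TT-SVD (successive truncated SVDs of tensor unfoldings). This is a generic textbook procedure, not the specific optimization algorithms of the talk.

```python
import numpy as np

def tt_svd(T, tol=1e-12):
    """Decompose T into TT cores G[k] of shape (r_{k-1}, n_k, r_k)."""
    dims = T.shape
    d = len(dims)
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > tol * s[0])))      # truncation rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        r = rk
        M = (s[:rk, None] * Vt[:rk]).reshape(r * dims[k + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

# Reconstruct the full tensor by contracting the train; with a tolerance
# below the smallest singular value, the decomposition is exact up to
# round-off error.
rng = np.random.default_rng(0)
T = rng.random((3, 4, 5))
cores = tt_svd(T)
R = cores[0]
for G in cores[1:]:
    R = np.tensordot(R, G, axes=(R.ndim - 1, 0))
R = R.reshape(T.shape)
print(np.linalg.norm(R - T))               # ~ machine precision
```

For tensors with genuinely low TT ranks, the truncation step discards small singular values and yields a compressed representation whose storage grows only linearly in d.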
Discontinuous Petrov-Galerkin (DPG) methods for convection-diffusion problems
We discuss Petrov-Galerkin discretizations with optimal test spaces, or equivalently, least squares methods, for solving convection-dominated convection-diffusion problems. In view of the expected occurrence of layers, we aim at methods that yield near-best approximations for the primal field variable in L2 by discontinuous piecewise polynomials. A necessary condition for obtaining a robust method is that it still applies in the limit of vanishing diffusion. We compare some existing and new PG formulations in view of both these design principles, and illustrate our findings with some numerical comparisons.
The talk is based on joint work with Dirk Broersen (KdVI, Amsterdam).
Adaptive Finite Element Approximation of a Variational Model of Fracture
The energy of the Francfort-Marigo model of brittle fracture can
be approximated, in the sense of Γ-convergence, by the
Ambrosio-Tortorelli functional. We formulate and analyse an adaptive
finite element algorithm for the computation of its (local) minimisers.
In the algorithm, we combine a Newton-type method with a residual-driven
adaptive mesh refinement. We present two theoretical results which
demonstrate convergence of the algorithm to local minimisers of the
Ambrosio-Tortorelli functional. The talk is based on joint work with
Siobhan Burke (University of Oxford) and Christoph Ortner (University of Warwick).
Groundbreaking Ideas in Numerics and their Effect
Groundbreaking ideas in numerics are connected with great names such as Newton, Euler, and Gauß. In the numerics of partial differential equations, the major ideas go back to the last two centuries; two of them are central. The first is the finite element method, which goes back to K. H. Schellbach, 1851; the second is the iterative solution of large sparse algebraic systems, going back essentially to Gauß, 1823.
Numerics was boosted by the introduction of computers and their exploding computational power. Without numerics, however, computers would be mere data-shuffling machines, not problem solvers. With increasing computational power, complexity became more and more the dominating issue of numerical computations. After Gauß's invention of iterative solvers for linear systems of equations, a second idea was necessary to overcome the complexity problem. This second step was the multigrid idea, first described by Brakhage, 1960, and thoroughly understood and mathematically analyzed by Wolfgang Hackbusch in 1976 and the following years.
This multigrid idea can be viewed as a special variant of the multiscale character of the physical world. Its analytical investigation was initiated by Einstein, 1905, and continued by numerous scientists throughout the 20th century to date. The multigrid idea, however, was conceived and introduced completely independently of these physical techniques, merely by studying the properties of the corresponding methods and equations.
Nowadays, computing plays a central and increasingly decisive role in industrial development and design. Thus, multigrid solvers are crucial components of most codes used in industrial simulation. Simulation in industry has become a big business, which ultimately is possible only due to the groundbreaking idea of multigrid methods.
Fast and robust multigrid methods allow a new dimension of numerics for the realistic approximation and computation of physics. This gives a new boost to applied research, in particular in the life sciences. In the talk, we first introduce the basic ideas, then present applications and show results from several application areas.
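To make the multigrid idea concrete, here is a toy two-grid cycle for the one-dimensional Poisson equation (a generic textbook sketch with illustrative parameters, not drawn from the talk): damped Jacobi smoothing removes the oscillatory part of the error, and a coarse-grid solve corrects the smooth remainder.

```python
import numpy as np

def poisson_matrix(n):
    # 1D Poisson: -u'' on n interior nodes of (0, 1), Dirichlet boundary.
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid(A, Ac, b, u, nu=3):
    # One two-grid cycle: smooth, coarse-grid correction, smooth again.
    D = np.diag(A)
    for _ in range(nu):                      # damped Jacobi pre-smoothing
        u = u + (2.0 / 3.0) * (b - A @ u) / D
    r = b - A @ u                            # fine-grid residual
    rc = 0.25 * (r[0:-1:2] + 2.0 * r[1::2] + r[2::2])   # full weighting
    ec = np.linalg.solve(Ac, rc)             # exact coarse solve
    ecp = np.concatenate(([0.0], ec, [0.0]))
    e = np.zeros_like(u)                     # linear interpolation back
    e[1::2] = ec
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
    u = u + e
    for _ in range(nu):                      # damped Jacobi post-smoothing
        u = u + (2.0 / 3.0) * (b - A @ u) / D
    return u

nc = 31
n = 2 * nc + 1                               # fine grid nests the coarse one
A, Ac = poisson_matrix(n), poisson_matrix(nc)
b = np.ones(n)
u = np.zeros(n)
for _ in range(15):
    u = two_grid(A, Ac, b, u)
res = np.linalg.norm(b - A @ u)
print(res)                                   # residual after 15 cycles
```

The residual shrinks by a mesh-independent factor per cycle; recursing on the coarse solve instead of solving it exactly yields the full multigrid method with optimal O(n) complexity.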
Some multilevel methods for interface problems
This talk is devoted to multilevel techniques for the numerical solution of partial differential equations with discontinuous coefficients. Some algorithms and theories on discretizations and solvers for static as well as dynamic problems will be presented.
October 28 - 30, 2013
University of Leipzig
Scientific Organizers: Jürgen Jost
Max-Planck-Institut für Mathematik in den Naturwissenschaften
Technische Universität Berlin
Administrative Contact: Katja Heid