After discretization of stochastic PDEs (PDEs with uncertain coefficients) one obtains a high-dimensional problem. Standard methods suffer from a cost that grows exponentially with the dimension and quickly become infeasible. We therefore apply low-rank tensor methods to solve the problem and to perform the post-processing (computing various statistics) efficiently and at low computational cost. Some open problems will be highlighted at the end of the talk.
Integral-type inequalities of functional analysis (e.g., the Friedrichs, Poincaré, trace, and LBB inequalities) play an important role in the quantitative analysis of PDEs. The corresponding constants arise in interpolation-type estimates, a posteriori estimates, stability conditions, etc. In the talk we discuss a new method, recently suggested, for deriving guaranteed bounds of the Friedrichs and Poincaré constants in arbitrary polygonal domains. The method is based on ideas of domain decomposition with overlapping subdomains. Numerical tests confirm the theoretical results.
Another part of the talk is devoted to estimates of constants in new Poincaré-type estimates for functions with zero-mean boundary traces on a part of the boundary (or on the whole boundary). Estimates of the corresponding constants are obtained analytically and, by using affine transformations, are extended to a wide collection of basic polygonal domains. Possible applications to mixed-type approximations and a posteriori error estimation methods are discussed.
The problem of determining the manner in which an incoming acoustic wave is scattered by an elastic body immersed in a fluid is one of central importance in detecting and identifying submerged objects. The problem is generally referred to as a fluid-structure interaction and is mathematically formulated as a time-dependent transmission problem. In this paper, we consider a typical fluid-structure interaction problem by using a coupling procedure which reduces the problem to a nonlocal initial-boundary problem in the elastic body, with integral equations on the interface between the domains occupied by the elastic body and the fluid. We analyze this nonlocal problem by the Lubich approach via the Laplace transform, an essential feature of which is that it works directly on data in the time domain instead of in the transformed domain. Our results may serve as a mathematical foundation for treating time-dependent fluid-structure interaction problems by convolution quadrature coupling of FEM and BEM.
We consider cross interpolation of high-dimensional arrays in the tensor train format. We prove that the maximum-volume choice of the interpolation sets provides quasioptimal interpolation accuracy, which differs from the best possible accuracy by a factor that does not grow exponentially with the dimension. For nested interpolation sets we prove the interpolation property and propose greedy cross interpolation algorithms. We justify the theoretical results and test the speed and accuracy of the proposed algorithm in convincing numerical experiments. These results generalize classical results on the interpolation of matrices to the tensor case.
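For orientation, the matrix case underlying these results can be sketched as follows (the tensor-train statement of the talk generalizes it): given $A\in\mathbb{R}^{m\times n}$, a choice of $r$ rows $I$ and $r$ columns $J$ with nonsingular $A(I,J)$ defines the cross (skeleton) interpolation $$ A \;\approx\; A(:,J)\,\bigl[A(I,J)\bigr]^{-1} A(I,:), $$ which is exact if $\mathrm{rank}(A)=r$; choosing $A(I,J)$ of (quasi-)maximal volume, i.e. of maximal $|\det A(I,J)|$, is what yields the quasioptimality mentioned above.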
I will explain a geometer's perspective on tensor network states, and their relation to questions in theoretical computer science. No prior knowledge of tensor network states will be needed.
One of the main themes of inverse scattering theory for time-harmonic acoustic or electromagnetic waves is to determine information on unknown objects from measurements of scattered waves away from these objects. These inverse problems are non-linear and severely ill-posed.
Besides standard regularization methods, which are typically iterative, a completely different methodology - so-called qualitative reconstruction methods - has attracted a lot of interest recently. These algorithms recover specific qualitative properties of scattering objects in a reliable and fast way. They avoid the simulation of forward models and need no a priori information on physical or topological properties of the unknown objects to be reconstructed. One of the drawbacks of currently available qualitative reconstruction methods is the large amount of data required by most of these algorithms. It is usually assumed that measurement data of waves scattered by the unknown objects corresponding to infinitely many primary waves are given - at least theoretically.
We consider the inverse source problem for the Helmholtz equation as a means to provide a qualitative inversion algorithm for inverse scattering problems for acoustic or electromagnetic waves with a single excitation only. Probing an ensemble of obstacles by just one primary wave at a fixed frequency and measuring the far field of the corresponding scattered wave, the inverse scattering problem that we are interested in consists in reconstructing the support of the scatterers. To this end we rewrite the scattering problem as a source problem and apply two recently developed algorithms - the inverse Radon approximation and the convex scattering support - to recover information on the support of the corresponding source. The first method builds upon a windowed Fourier transform of the far field data followed by a filtered backprojection, and although this procedure yields a rather blurry reconstruction, it can be applied to identify the number and the positions of well separated source components. This information is then utilized to split the far field into individual far field patterns radiated by each of the well separated source components using a Galerkin scheme, and to compute the associated convex scattering supports as a reconstruction of the individual scatterers. We discuss this algorithm and present numerical results.
Multivariate integration is one of the prime examples for complexity studies of high dimensional problems. In this talk we review more or less well-known results for deterministic and randomized algorithms, present some new approaches and look at related open problems. We discuss in some detail positive results on the power of randomized algorithms, in particular importance sampling. An abstract but nonconstructive approach gives a rather general tractability theorem for integration of functions from reproducing kernel Hilbert spaces. We exhibit cases for which the sampling density for the algorithm can be given explicitly based on certain Sobolev type inequalities. We also discuss negative results in the deterministic setting such as the curse of dimensionality even for some small classes of smooth functions.
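As a reminder of the basic mechanism, importance sampling with a density $\rho>0$ uses the unbiased estimator $$ \int_D f(x)\,dx \;=\; \int_D \frac{f(x)}{\rho(x)}\,\rho(x)\,dx \;\approx\; \frac{1}{n}\sum_{i=1}^{n}\frac{f(X_i)}{\rho(X_i)},\qquad X_i \overset{\text{i.i.d.}}{\sim}\rho, $$ and the tractability results discussed in the talk hinge on choosing $\rho$ so that the variance of $f/\rho$ is controlled uniformly over the function class.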
Compressive sensing is a recent field of mathematical signal processing which predicts that accurate signal and image recovery from seemingly incomplete measured data is possible. The key ingredient is the empirical observation that many types of signals can be well-represented by a sparse expansion in a suitable basis. Efficient recovery algorithms such as l1-minimization and greedy algorithms are available.
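In its most common form (basis pursuit), the reconstruction from linear measurements $y=Ax\in\mathbb{R}^m$ with $m\ll N$ is obtained by solving $$ \min_{z\in\mathbb{R}^N}\ \|z\|_1 \quad\text{subject to}\quad Az=y, $$ which provably recovers $s$-sparse vectors $x$ when the measurement matrix $A$ satisfies suitable conditions (e.g., a restricted isometry property), with $m$ of the order of $s\log(N/s)$.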
Remarkably, all provably (near-)optimal measurement matrices modelling the linear data acquisition process known so far are based on randomness. While Gaussian and Bernoulli random matrices indeed provide optimal guarantees on the minimal number of measurements required for exact and approximate sparse recovery, they are of limited use for practical purposes due to their lack of structure. Therefore, several structured random matrices have been studied in the context of compressive sensing, including random partial Fourier matrices, modelling sparse recovery from randomly selected Fourier coefficients, and partial random circulant matrices, connected to subsampled random convolutions.
Compressive sensing is motivated by many applications including magnetic resonance imaging, radar, wireless communications, coded aperture imaging, analog to digital conversion, astronomical signal processing, sparse regression problems, numerical solution of (parametric) differential equations, function recovery in high dimensions and more.
The talk will give a basic introduction to compressive sensing with an emphasis on the work of the speaker.
We consider linear systems and eigenvalue problems depending on one or several parameters, as they arise from the discretization of parameter-dependent partial differential equations or in eigenvalue optimization problems. There exist a number of established techniques in numerical linear algebra for dealing with parameter-dependent problems, such as subspace recycling. However, these techniques do not fully benefit from the fact that the solution to such problems usually depends smoothly on the parameter(s). We show how low-rank matrix and tensor techniques can be used to implicitly benefit from smoothness and reduce the computational effort, sometimes drastically. This is based on joint work with Christine Tobler and Bart Vandereycken.
Tensor methods are efficient for high-dimensional problems, but usually they are not well-suited to the low-dimensional case. For example, there are explicit formulae for the inverse of the Laplace operator, but in the two-dimensional case they outperform the simplest FFT scheme only for very fine grids, which may not be required. In this talk, I will present a new approach for the approximate inversion of the Laplace operator in the low-rank format. An application to the Stokes problem will be considered.
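One standard example of such an explicit low-rank formula (a sketch, not necessarily the one used in the talk): if the discrete (negative) Laplacian is the Kronecker sum $L = A\otimes I + I\otimes A$ with $A$ symmetric positive definite, then an exponential-sum approximation $1/\lambda \approx \sum_{k=1}^{R} c_k e^{-t_k\lambda}$ on the spectral interval of $L$ yields $$ L^{-1} \;\approx\; \sum_{k=1}^{R} c_k\, e^{-t_k A}\otimes e^{-t_k A}, $$ and it is the practical efficiency of this type of representation on moderate grids that is compared with the FFT-based solver above.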
We will present two numerical techniques to deal with high-dimensional problems in stochastic modelling. The first one is motivated by micro-macro problems, where one typically has to compute averages by Monte Carlo methods many times, these averages being parameterized by a parameter. We will propose a reduced basis technique to perform these computations efficiently. The second problem is related to uncertainty quantification. In this context, one wants to understand how randomness in the parameters of a deterministic problem propagates to the output. A greedy algorithm may be used to obtain an approximation of the so-called response function (typically a high-dimensional function), which associates to the values of the parameters the output quantities of interest.
In our talk, we will present a generalized convolution quadrature for solving linear parabolic and hyperbolic evolution equations. The original convolution quadrature method by Lubich works very nicely for equidistant time steps, while the generalization of the method and of its analysis to non-uniform time stepping is by no means obvious. We will introduce the generalized convolution quadrature allowing for variable time steps and develop a theory for its error analysis. This method opens the door for further developments towards adaptive time stepping for evolution equations. As the main application of our new theory we will consider the wave equation in exterior domains, formulated as retarded boundary integral equations.
We propose an effective and flexible way to assemble finite element stiffness and mass matrices in MATLAB. The major loops in the code have been vectorized using the so-called array operations in MATLAB, and no low-level languages like C or Fortran have been used for this purpose. The implementation is based on having the vectorization part separated, in other words hidden, from the original code, thereby preserving its original structure and its flexibility as a finite element code. The code is fast and scalable. This is joint work with T. Rahman (Bergen). Report available: www.mis.mpg.de/publications/other-series/tr/report-1111.html
The classical Johnson-Lindenstrauss lemma may be formulated as follows. Let $\varepsilon\in(0,\frac 12)$ and let $x_1,\dots,x_n\in \mathbb{R}^d$ be arbitrary points. Let $k=O(\varepsilon^{-2}\log n)$ be a natural number. Then there exists a (linear) mapping $f:\mathbb{R}^d\to \mathbb{R}^k$ such that $$ (1-\varepsilon)\|x_i-x_j\|_2^2\le \|f(x_i)-f(x_j)\|_2^2\le (1+\varepsilon)\|x_i-x_j\|_2^2 $$ for all $i,j\in\{1,\dots,n\}.$ Here $\|\cdot\|_2$ stands for the Euclidean norm in $\mathbb{R}^d$ or $\mathbb{R}^k$, respectively. If $k
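A minimal numerical illustration of the statement above (all sizes and the constant in $k$ are chosen for illustration only), using a Gaussian random matrix as the mapping $f$:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, eps = 500, 10_000, 0.25
    k = int(np.ceil(8 * np.log(n) / eps**2))   # k = O(eps^{-2} log n); the constant 8 is illustrative

    X = rng.normal(size=(n, d))                # arbitrary points x_1, ..., x_n in R^d
    A = rng.normal(size=(k, d)) / np.sqrt(k)   # Gaussian map f(x) = Ax with entries N(0, 1/k)
    Y = X @ A.T                                # images f(x_i)

    def sq_dists(Z):                           # matrix of squared Euclidean distances
        g = (Z * Z).sum(axis=1)
        return g[:, None] + g[None, :] - 2 * (Z @ Z.T)

    mask = ~np.eye(n, dtype=bool)
    ratio = sq_dists(Y)[mask] / sq_dists(X)[mask]
    print(ratio.min(), ratio.max())            # empirically close to [1 - eps, 1 + eps]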
Many applications of quantum information rely on the possibility of entangling several systems. Thus, the qualification and quantification of entanglement is one of the central topics within quantum information. Due to the exponential growth of the dimension of the state space (as a function of the number of considered systems), however, many very fundamental questions in this context are still unanswered.
In this talk, I will discuss two different approaches to tackle this important problem. In the first approach certain classes of multipartite states are considered, and the entanglement properties and the applications of those states are studied. The second approach to gaining insight into the entanglement properties of multipartite states is to investigate which states can be transformed into each other by local operations. One particularly interesting case, which I will focus on in this talk, is the local unitary equivalence of multipartite states. I will present necessary and sufficient conditions for the local unitary equivalence of arbitrary pure multipartite states. Since two states which are local unitary equivalent are equally useful for any kind of application and possess precisely the same amount of entanglement, these investigations lead to new insight into the general problem of characterizing the different types of entangled quantum states.
Joint work with Michael Holst and Ryan Szypowski (both from the University of California, San Diego). We propose and offer an effectivity analysis of an a posteriori error estimate for tetrahedral linear Lagrange finite elements. The error estimate is based on the (provably inexpensive) computation of an approximate error function in an auxiliary space. We will provide an equivalence theorem (up to oscillation terms) between the true and approximate $H^1$-errors, which applies for piecewise smooth coefficients. In particular, our analysis applies to cases in which convection is present (non-coercive problems are fine), which distinguishes it from previous attempts in this direction. Numerical experiments demonstrate the robustness of the estimator with respect to: low regularity of the solution, discontinuities or anisotropies in the coefficients of the differential operator, and dominant convection. The nature of the analysis suggests a broader applicability of the general error estimation approach---for example, to higher-order Lagrange elements or other types of elements. Finally, we will discuss a variety of applications of the approximate error function and of the general methodology, such as: functional error estimation, error estimation in other norms, error estimation and convergence acceleration for eigenvalue problems, and selection of auxiliary spaces in which to compute approximate error functions for different finite elements. Some aspects have been proven, and others are work in progress. The numerical experiments presented in this part of the talk will be for problems in $\mathbb{R}^2$.
Large dynamical systems are often obtained as discretizations of parabolic PDEs with nonlinear elliptic parts, either single equations or systems, of order 2 or 2m, m>1. Space and time discretization methods, so-called full discretizations, are necessary to determine the dynamics on center manifolds. We prove for the first time that, allowing for stable and center manifolds, for the standard space discretization methods, e.g. the standard methods used for nonlinear elliptic PDEs, the space-discrete center manifolds converge to the original center manifolds: the coefficients of the Taylor expansion of a discrete center manifold and of its normal form converge to those of the original center manifold. Then standard (e.g. Runge-Kutta) or geometric time discretization methods can be applied to the discrete center manifold system, a system of ordinary differential equations of small dimension.
These results are applied to near-onset convection patterns for the spherical Bénard problem in the earth's mantle. This problem is 5-determined, so we need the center manifold instead of a Liapunov-Schmidt technique. The numerical method has to inherit the equivariance, here that of the spherical harmonics. We use a Chebyshev collocation spectral method, and instead of the exact normal form we obtain an approximate discrete normal form.
To approximate convolutions which occur in evolution equations with memory terms, a variable-step-size algorithm is presented for which advancing $N$ steps requires only $O(N\log N)$ operations and $O(\log N)$ active memory, in place of $O(N^2)$ operations and $O(N)$ memory for a direct implementation. A basic feature of the fast algorithm is the reduction, via contour integral representations, to differential equations which are solved numerically with adaptive step sizes. Rather than the kernel itself, its Laplace transform is used in the algorithm. The algorithm is illustrated on three examples: a blowup example originating from a Schrödinger equation with concentrated nonlinearity, chemical reactions with inhibited diffusion, and viscoelasticity with a fractional order constitutive law.
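The reduction mentioned above can be sketched as follows: writing the kernel through its Laplace transform $\hat K$ along a suitable contour $\Gamma$, the convolution becomes $$ \int_0^t K(t-s)\,g(s)\,ds \;=\; \frac{1}{2\pi i}\int_\Gamma \hat K(\lambda)\, y(t,\lambda)\,d\lambda, \qquad \partial_t y(t,\lambda)=\lambda\, y(t,\lambda)+g(t),\quad y(0,\lambda)=0, $$ so that the memory term is obtained from a quadrature in $\lambda$ of solutions of scalar linear ODEs, which can be advanced with adaptive step sizes.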
A method to represent and analyze protein sequence data as unit count tensors will be presented. The method aims both to analyze sequence-position- and species-cluster-specific mean property sequences and to analyze species correlations. Position-specific evolutionary models, including constraints from the funneling model of protein folding, will be discussed. The method will be applied to distinguish the cold-adapted representatives of the enzyme family elastase.
In the first part, a brief introduction to wavelet theory as well as the representation of functions in wavelet and scaling bases will be presented. Then I will explain my ideas about decreasing the order for the polynomial basis algorithm, namely its goal and construction, and I will finish with a few numerical results.
After presenting some brief history of domain decomposition and substructuring, we formulate the BDDC method and theoretical condition number bounds. The adaptive method adds new coarse degrees of freedom on substructure interfaces as needed to recover good condition number bounds, while the multilevel method applies BDDC recursively. The combination is capable of solving large, difficult industrial problems in elasticity. Two implementations of BDDC are discussed, one by global matrices and one on top of the frontal solver.
We develop the non-differentiable embedding theory of differential operators and Lagrangian systems using a new operator on non-differentiable functions. We then construct the corresponding calculus of variations, derive the associated non-differentiable Euler-Lagrange equation, and apply this formalism to the study of PDEs. We extend the method of characteristics to the non-differentiable case. We prove that non-differentiable characteristics for the Navier-Stokes equation correspond to extremals of an explicit non-differentiable Lagrangian system.
In the talk a discontinuous Galerkin (DG) approximation of elliptic problems with discontinuous coefficients will be discussed. The problem is considered in a polygonal region $\Omega$ which is a union of disjoint polygonal subregions $\Omega_i$. The discontinuities of the coefficients occur across $\partial\Omega_i$. The problem is approximated by a conforming finite element method (FEM) on a matching triangulation in each $\Omega_i$ and a nonmatching one across $\partial\Omega_i$. This kind of triangulation and composite discretization is motivated first of all by the regularity of the solution of the problem being discussed. The discrete problem is formulated using the DG method with interior penalty terms on $\partial\Omega_i$.
Dr. Jonathan Hargreaves is an acoustics researcher from the University of Salford Acoustics Research Centre (ARC) in the UK. His research interests include time-domain Boundary Element Method (BEM), digital signal processing, and auralisation. In July 2007 he completed his PhD entitled "Time Domain Boundary Element Methods for Room Acoustics", and since then has continued his work at Salford as a research and teaching fellow.
Jonathan's contributions to the study of time-domain BEM have primarily comprised the development of novel boundary condition implementations; in particular frequency dependent examples characterised by a surface reflection coefficient. This lecture will focus on detailing these approaches, plus discussion of the merits of utilising surface reflection versus impedance in the time domain.
The current state of UK acoustics research, its key themes, and the attitude and strategic direction of the ARC will then be discussed, with special focus on applications driving numerical modelling. An outline of some key facilities and projects at Salford will be included where relevant. Leading on from these applications, Jonathan will suggest some avenues for future time-domain BEM research which would interest the academic acoustics community and have real-world impact.
The task of solving sparse linear systems of algebraic equations using a fast, computer-resource-efficient and robust computational method arises within various application settings and remains central to many numerical simulations in Scientific Computing. The ever increasing scale of these linear systems entails the necessity to use iterative solution methods and, thus, the need for efficient preconditioners.
Constructing good preconditioners has been a topic of intensive research and implementation over the last thirty years and has still not lost its importance. Despite the significant success in the development of general algebraic preconditioning techniques, it still remains true that the most efficient preconditioners are those in whose construction we incorporate more information than is contained in the system matrix alone, i.e. preconditioners that are more 'problem-aware'.
In the talk, we discuss possibilities to construct efficient two-by-two block factorized preconditioners for general matrices, arising from finite element discretizations of scalar or vector partial differential equations. The finite element method offers the possibility to work on a local, element level, where the corresponding matrices and vectors have small dimensions and even exact matrix computations are feasible to perform. Utilizing the latter we are able to obtain approximations of matrix blocks by assembly of matrices of small dimension, obtained by manipulation of local finite element matrices. In particular, within the construction phase we compute simultaneously sparse approximations of a matrix inverse, a Schur complement matrix and a matrix product.
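Schematically (a sketch of the general idea, not the specific construction of the talk), such preconditioners build on the exact two-by-two block factorization $$ A=\begin{pmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{pmatrix} =\begin{pmatrix}A_{11}&0\\ A_{21}&S\end{pmatrix} \begin{pmatrix}I&A_{11}^{-1}A_{12}\\ 0&I\end{pmatrix}, \qquad S=A_{22}-A_{21}A_{11}^{-1}A_{12}, $$ where in the preconditioner the pivot block $A_{11}$, its inverse and the Schur complement $S$ are replaced by sparse approximations assembled from the local, element-level matrices described above.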
We show some theoretical estimates for the quality of these approximations. We also illustrate the efficiency of the preconditioner in standard and adaptive finite element settings on matrices arising from various applications, among them problems with discontinuous coefficients, convection-diffusion problems, parabolic problems, and phase-field models such as binary alloy solidification and wetting effects.
We consider approximate computation of a bilinear tensor function $D \approx C = F(A,B)$ of tensors $A$ and $B$. It is assumed that $A, B$ are given in the Tucker format with mode ranks $r$. In this case the same format for $C$ is easily available with mode ranks $r^2$. However, $D$ is to be obtained in the Tucker format with mode ranks as small as possible within a prescribed approximation accuracy. Generally we assume that these ranks are of about the same value $r$. Thus, the input and output data contain an array with $r^3$ elements. The main difficulty of existing approaches is the need for squared memory of $r^6$ elements, required only in between input and output. We propose how to avoid the need for squared memory and improve the performance of tensor recompression algorithms. The new findings include a variable-rank Tucker-ALS procedure without a priori knowledge of the mode ranks and a cheap initialization procedure using the intrinsic tensor structure in $C=F(A,B)$. We also investigate the merits and drawbacks of the Tucker approximation via Krylov subspaces and the pre-filtering of individual mode factors. Numerical examples include tensor operations in the Hartree-Fock and Kohn-Sham models.
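A simple instance showing why the mode ranks of $C$ square, stated here for the Hadamard (entrywise) product as an example: if $A=G_A\times_1 U_1\times_2 U_2\times_3 U_3$ and $B=G_B\times_1 V_1\times_2 V_2\times_3 V_3$ have mode ranks $r$, then $$ A\circ B \;=\; (G_A\otimes G_B)\times_1 (U_1\odot^{\mathrm T} V_1)\times_2 (U_2\odot^{\mathrm T} V_2)\times_3 (U_3\odot^{\mathrm T} V_3), $$ where $\otimes$ is the mode-wise Kronecker product of the cores and $\odot^{\mathrm T}$ the row-wise (transposed) Khatri-Rao product of the factors, so the factors acquire $r^2$ columns and the intermediate core has $r^6$ entries, precisely the squared-memory object the proposed algorithms avoid forming.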
Electronic structure calculations, especially those using density-functional theory, have provided many insights into various materials properties in the recent decade. However, the computational complexity associated with electronic structure calculations has restricted these investigations to periodic geometries with cell sizes consisting of a few hundred atoms (≈200 atoms). But material properties are influenced by defects present in small concentrations (parts per million). A complete description of such defects must include both the electronic structure of the core at the fine (sub-nanometer) scale and the elastic and electrostatic interactions at the coarse (micrometer and beyond) scale. This in turn requires electronic structure calculations at macroscopic scales, involving millions of atoms, well beyond the current capability.
This talk presents the development of a seamless multi-scale scheme, quasi-continuum orbital-free density-functional theory (QC-OFDFT) to perform electronic structure calculations at macroscopic scales. This multi-scale scheme has enabled for the first time a calculation of the electronic structure of multi-million atom systems using orbital-free density-functional theory, thus, paving the way to an accurate electronic structure study of defects in materials. The key ideas in the development of QC-OFDFT are (i) a real-space variational formulation of orbital-free density-functional-theory, (ii) a nested finite-element discretization of the formulation, and (iii) a systematic means of adaptive coarse-graining retaining full resolution where necessary, and coarsening elsewhere with no patches, assumptions or structure. The accuracy of QC-OFDFT scheme and the physical insights it offers into the behavior of defects in materials are highlighted by the study of vacancies in aluminum.
We present and analyze a new scheme for the approximation of functions of three and four variables by sums of products of univariate functions. The method is based on the Adaptive Cross Approximation (ACA) initially designed for the approximation of bivariate functions. The stability analysis required in the case of four variables will give insight on how to extend the method to functions in high dimensions. Numerical examples validate our estimates.
Solutions of certain PDEs and variational problems may be characterized by "a few significant degrees of freedom", and one may want to take advantage of this feature in order to design efficient numerical solutions. Examples of such situations are ubiquitous: adaptive solution of PDEs, degenerate PDEs for image processing, crack modelling and free-discontinuity problems, viscosity solutions of Hamilton-Jacobi equations, digital signal coding/decoding, and compressed sensing. In the first part of the talk we review the role of variational principles, in particular L1-minimization, as a method for sparsifying solutions in several contexts. Then we address particular applications and numerical methods. We present the analysis of a superlinearly-convergent algorithm for L1 minimization based on an iteratively re-weighted least-squares method.
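A compact sketch of the iteratively re-weighted least-squares idea for equality-constrained L1 minimization (the smoothing schedule and all problem sizes below are simplified and illustrative; the analysis presented in the talk uses a more careful rule):

    import numpy as np

    def irls_l1(A, y, n_iter=50, eps=1.0):
        # Sketch of iteratively re-weighted least squares for  min ||x||_1  s.t.  A x = y.
        x = A.T @ np.linalg.solve(A @ A.T, y)          # minimum-norm least-squares start
        for _ in range(n_iter):
            d = np.sqrt(x**2 + eps**2)                 # smoothed |x_i| (reciprocal of the weights)
            DAt = A.T * d[:, None]                     # D A^T with D = diag(d)
            x = DAt @ np.linalg.solve(A @ DAt, y)      # weighted least squares under the constraint
            eps = max(0.5 * eps, 1e-12)                # simple decreasing smoothing schedule
        return x

    # Illustrative synthetic sparse-recovery instance (all dimensions are hypothetical)
    rng = np.random.default_rng(1)
    m, N, s = 80, 200, 10
    A = rng.normal(size=(m, N)) / np.sqrt(m)
    x0 = np.zeros(N)
    x0[rng.choice(N, size=s, replace=False)] = rng.normal(size=s)
    print(np.linalg.norm(irls_l1(A, A @ x0) - x0))     # recovery error, should be small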
An analogous algorithm is then applied for the efficient solution of a system of degenerate PDEs for image recolorization in a relevant real-life problem of art restoration. We give a short description of a few advances in the use of this sparsity-promoting algorithm for certain learning theory problems. This introduces us to other algorithms for performing L1 minimization efficiently, based on projected gradient methods and subspace-correction/domain-decomposition methods, and to their modifications for free-discontinuity problems. The second part of the talk addresses the issue of embedding compressibility in numerical simulation, and in particular the use of adaptive strategies for the solution of elliptic partial differential equations discretized by means of redundant frames. We discuss the construction of wavelet frames on bounded domains and the optimal performance of adaptive solvers in this context. We conclude the talk with a vision of the prospects in this field and several new open questions.
Problems of scattered data interpolation are investigated as problems in Bayesian statistics. When data are sparse and only available on length scales greater than the correlation length, a statistical approach is preferable to one of numerical analysis. However, when data are sparse, but available on length scales below the correlation length it should be possible to recover techniques motivated by more numerical considerations. A statistical framework, using functional integration methods from statistical physics, is constructed for the problem of scattered data interpolation. The theory is applicable to (i) the problem of scattered data interpolation (ii) the regularisation of inverse problems and (iii) the simulation of natural textures. The approaches of Kriging, Radial Basis Functions and least curvature interpolation are related to a method of 'maximum probability interpolation'. The method of radial basis functions is known to be adjoint to the Universal Kriging method. The correlation functions corresponding to various forms of Tikhonov regularisation are derived and methods for computing some samples from the corresponding probability density functionals are discussed.
The computation of two-electron integrals, i.e., Coulomb and exchange integrals, is a major bottleneck in Hartree-Fock, density functional theory and post-Hartree-Fock methods. For large systems, one has to compute a huge number of two-electron integrals, which leads to very high computational costs. To break this complexity, we need efficient algorithms to handle these integrals. The representation of products of orbitals in wavelet bases provides an efficient treatment of two-electron integrals. We discuss the efficiency of the algorithm for the product of orbitals in Daubechies wavelet bases and then the computation of the two-electron integrals. For this, we use the novel approach of Beylkin, which is based on uncoupling the interaction between resolution levels. We provide a detailed procedure and analysis which lead to further improvements of the algorithm to compute two-electron integrals more efficiently.
Much progress has been made over the last years in the development of nonpolynomial finite element methods for wave problems. The idea of these methods is to use basis functions in each element that are better adapted to the underlying PDE than polynomials. Most implementations of nonpolynomial finite element methods use local plane wave basis sets. But other PDE adapted basis sets are possible, e.g. fundamental solutions or Fourier-Bessel functions. In this talk we compare different basis sets, analyse their stability and convergence properties and show how these results can be used to design an efficient nonpolynomial finite element method for scattering on polygons.
A chemical growth model for binary alloys is presented. At each step, using a simple energetic criterion, an A or a B atom is aggregated to the growing cluster. Two one-dimensional models are considered. The first one is a lattice-gas model with short-range decreasing and convex chemical interactions. It is proved that the growing structures are the so-called uniform structures, where the atoms are arranged as regularly as possible and which are known to be the ground-state equilibrium structures of the same model. The second model uses an electronic tight-binding Hamiltonian, for which the growth of uniform structures is also observed. In spite of its original simplicity, this toy model for growth shows unexpected results.
I will present a simple, robust and highly efficient method for optimizing the parameters of many-body wave functions by energy minimization in quantum Monte Carlo calculations. Using a strong zero-variance principle, the optimal parameters are determined by diagonalizing the Hamiltonian matrix in the space spanned by the wave function and its derivatives [1-2]. We apply this method to obtain accurate multideterminant Jastrow-Slater wave functions for atomic and molecular systems, where the Jastrow parameters, the configuration state function coefficients, the orbital coefficients and the basis function exponents are simultaneously optimized. This allows one to reach near chemical accuracy on the dissociation energies of the first-row diatomic homonuclear molecules [3]. If time permits, I will also illustrate the use of these optimized wave functions, together with the construction of improved statistical estimators, to compute observables such as dipole moments and pair densities [4].
[1] J. Toulouse and C. J. Umrigar, J. Chem. Phys. 126, 084102 (2007). [2] C. J. Umrigar, J. Toulouse, C. Filippi, S. Sorella, and R. G. Hennig, Phys. Rev. Lett. 98, 110201 (2007). [3] J. Toulouse and C. J. Umrigar, J. Chem. Phys. 128, 174101 (2008). [4] J. Toulouse, R. Assaraf, and C. J. Umrigar, J. Chem. Phys. 126, 244112 (2007).
We describe a novel tensor approximation method for discretised multi-dimensional functions and operators in $\mathbb{R}^d$, based on the idea of multigrid acceleration. The approach rests on successive reiterations of the orthogonal Tucker tensor approximation on a sequence of nested refined grids. On the one hand, this provides a good initial guess for the nonlinear iterations used to find the approximating subspaces on finer grids; on the other hand, it allows one to transfer from coarse to fine grids the important structural information on the location of the so-called most important fibers in the directional unfolding matrices. The method exhibits linear complexity with respect to the size of the data representing the input tensor. In particular, if the target tensor is given in the rank-$R$ canonical tensor product format, then our approximation method is proved to have linear scaling in the dimension parameter $d$, in the univariate grid size $n$, and in the input rank $R$. The method is tested on 3D electronic structure calculations. For the multigrid accelerated low Tucker-rank approximation of all-electron densities with strong nuclear cusps, we obtain high resolution of their 3D convolution product with the Newton potential. An accuracy of order $10^{-6}$ in the max-norm is achieved on large $n\times n\times n$ grids up to $n=1.6\cdot 10^4$. The total computational time on such grids amounts to several minutes in a Matlab implementation.
joint work with Panayot Vassilevski, Lawrence Livermore National Laboratory
We describe a domain decomposition algorithm for use in several variants of the parallel adaptive meshing paradigm of Bank and Holst. This algorithm has low communication, makes extensive use of existing sequential solvers, and exploits in several important ways data generated as part of the adaptive meshing paradigm. We show that for an idealized version of the algorithm, the rate of convergence is independent of both the global problem size N and the number of subdomains p used in the domain decomposition partition. Numerical examples illustrate the effectiveness of the procedure.
A new theory of solutions, the method of energy representation, is introduced by adopting the solute-solvent interaction energy as the coordinate of the distribution functions. The density-functional theory is formulated over the energy coordinate, and an approximate functional for the solvation free energy is given in terms of energy distribution functions in the solution and reference solvent systems. The method of energy representation greatly expands the scope of solution theory and is amenable to supercritical fluids, flexible molecules with intramolecular degrees of freedom, inhomogeneous systems, and quantum-mechanical/molecular-mechanical (QM/MM) systems. Through the combination with molecular simulation, the functional for the solvation free energy is demonstrated to perform well for nonpolar, polar, and ionic solutes in water over a wide range of thermodynamic conditions. As an application to inhomogeneous systems involving flexible species, the molecular binding into micelles and membranes is analyzed by treating the micelle and membrane as a mixed solvent system consisting of water and an amphiphilic molecule.
In this talk, an overview of the basics of image reconstruction in diffuse optical tomography (DOT), a typical computational framework to solve the inverse problem, and some results involving an iteratively regularized Gauss-Newton method will be presented.
Ionic liquids, which are molten salts with melting points below 100 °C, down to –80 °C, are a hot research topic at present. Many applications in chemical engineering and preparative chemistry are envisaged for this fascinating new group of compounds. The interplay of Coulomb interactions with van der Waals interactions provides a challenge for the theoretical understanding of the special properties of ionic liquids and of their solutions. Some ionic liquids are soluble in non-polar solvents such as hydrocarbons, others in polar solvents like water; vice versa, some are insoluble in non-polar and others in polar solvents.
Liquid-liquid phase transitions are observable at ambient temperatures, enabling investigations of the critical properties (coexistence, critical fluctuations, critical dynamics) with mK accuracy. Such research is of fundamental interest: while in nonionic systems the liquid-gas as well as liquid-liquid phase transitions are driven by short-range van der Waals interactions with an $r^{-6}$ range dependence, the phase transitions in ionic systems are driven by long-range $r^{-1}$ Coulomb interactions. The universality hypothesis that liquid-gas as well as liquid-liquid phase transitions all belong to the Ising universality class has been theoretically proven for $r^{-n}$ interactions with $n>4.97$, while the nature of the critical point in Coulomb systems was unknown.
Some experiments reported mean-field behavior for such systems. Meanwhile, experiments as well as simulations support the conclusion that Coulomb systems also belong to the Ising universality class. The simulations concern the so-called restricted primitive model (RPM), which considers equal-sized charged hard spheres in a dielectric continuum. The critical points of the liquid-liquid phase transitions in ionic solutions in non-polar solvents are in agreement with the prediction of the RPM. Corresponding-states analysis based on the reduced variables of the RPM reveals different behavior when comparing phase separation in aprotic solvents (hydrocarbons) with that in protic solvents (alcohols, water). In terms of the RPM variables, the phase separations in aprotic solvents, which are driven by Coulomb interactions, have an upper critical solution point, while the coexistence curves in protic solvents have a lower critical solution point, as is typical for phase separation caused by hydrophobic interactions.
Critical phenomena can lead to promising technologies in many fields, such as separation, chemical reactions, material processing, waste treatment, plastic recycling and others. They can provide environmentally friendly and advanced industrial processes with less energy consumption and high efficiency. Supercritical carbon dioxide, water and alcohols have been intensively studied because of their unique and suitable solvent properties, safety, small impact on the earth’s environment, and economic advantages. By employing mixtures of co-solvent/supercritical fluid, one can also use the fluid composition as an additional parameter to fine-tune the properties of processing fluids for a specific application, such as those involved in polymer processing.
Modelling of critical phenomena is an unsolved task of modern physical chemistry. The problems of the Kadanoff-Wilson theory, the Ornstein-Zernike equation and computer simulations will be discussed in the report. Special attention will be paid to many-body correlations of molecules in fluids near the critical point.
1. Introduction to Solvation Phenomena. What is solvation? Why is it so difficult to treat solvation? The multiscale origin of solvation.
2. Key concepts of Integral Equations Theory (IET) for Liquids. Correlation function formalism and interaction potentials. Bogolyubov-Born-Green-Kirkwood-Yvon hierarchy. The Born-Green-Yvon and Ornstein-Zernike (OZ) equations. Closures and bridge functional.
3. IE for simple fluids. Simple (HNC and PY) closures. Analytical methods and numerical schemes for solutions of OZ equation. Bridge reconstruction from simulations. Current status of the theory.
4. IE for molecular liquids. Molecular OZ equation. Problem of closures. Numerical schemes for solutions of molecular OZ equation. Expansion in spherical harmonics and 3D Fast Fourier Transform (FFT). Problems of polyatomic solutes. Interaction site formalism. 1D and 3D Reference Interaction Site Model (RISM). Numerical algorithms for solution of 1D and 3D RISM equations. Current status of the theory.
5. Beyond standard schemes. Nonuniform grids and grid-free methods. Wavelets for simple and molecular liquids. A problem of the bridge construction.
6. Conclusions and perspectives
The fluid density distribution close to a complexly shaped object and corresponding thermodynamic quantities, such as the solvation free energy of that object, are computationally expensive to obtain due to the lack of spatial symmetry. The morphometric approach presented in this talk is based on the Hadwiger theorem from integral geometry. It proposes a complete separation of the grand potential of the system into geometrical measures, characterizing the complex geometry, and corresponding thermodynamic coefficients, describing the fluid and the wall-fluid interaction. We employ a combination of density functional theory of classical fluids and exact statistical mechanical sum rules to substantiate the approach.
The complete separation of the grand potential into geometrical measures and thermodynamic coefficients is a generalization of the concept of extensivity and allows one to transfer experience from simple to complex geometries. We introduce the morphometric approach and discuss its application to the calculation of the solvation free energy of proteins.
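For concreteness, the separation takes the form commonly used in the morphometric literature (quoted here as a sketch): the grand potential, i.e. the solvation free energy, of a body $K$ is written as $$ \Omega[K] \;=\; p\,V[K] + \sigma\,A[K] + \kappa\,C[K] + \bar\kappa\,X[K], $$ with the four geometrical measures volume $V$, surface area $A$, integrated mean curvature $C$ and Euler characteristic $X$ of $K$, and four thermodynamic coefficients $p,\sigma,\kappa,\bar\kappa$ that depend only on the fluid and the wall-fluid interaction, not on the shape of $K$.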
Meshfree methods have enjoyed a significant research effort in the past 15 years; substantial improvements have been accomplished and many different numerical schemes have been proposed, for instance Smoothed Particle Hydrodynamics (SPH), the Generalized Finite Difference Method (GFDM), the Reproducing Kernel Particle Method (RKPM), the Element Free Galerkin Method (EFGM), Generalized/eXtended Finite Element Methods (G/XFEM), or Partition of Unity Methods (PUM). The underlying construction principles of many of these schemes, however, are very similar and stem from scattered data approximation. In this talk we review these construction principles and discuss the approximation properties of the resulting meshfree function spaces.
In particular, we present the Particle-Partition of Unity Method (PPUM) which is a meshfree generalization of the classical finite element method. The PPUM can be employed in an h-version, a p-version and an hp-version. Furthermore, the PPUM supports the use of problem-dependent approximation spaces (i.e. there is a PPUM q-version) and it can be interpreted as a variational multi-scale method. We focus on hp-adaptive refinement of a PPUM discretization, the automatic hierarchical enrichment, and the efficient multilevel solution of the arising linear system. We present numerical results of our multilevel PPUM for the treatment of linear elastic fracture mechanics problems which demonstrate the approximation properties as well as the computational efficiency of the proposed scheme.
The European Centre for Nuclear Research, CERN, develops and maintains the world's biggest particle accelerator complex. Particles are guided along a circular path by a magnet system, and once per revolution they are accelerated in so-called cavities. Two counter-rotating particle beams are intersected at maximum momentum. In the collisions, elementary particles are produced which had existed in our universe only fractions of a second after the Big Bang. The detection of these particles, and thus the validation of theoretical models of the fundamental laws of physics, is the actual goal of the CERN laboratory.
Superconducting magnets for accelerators weigh several dozen tons and must be manufactured to tolerances below 20 micrometers. Prototypes cost several hundred thousand Swiss francs. Comprehensive simulation and mathematical optimization are therefore indispensable for an economic design process. The required relative accuracy of $10^{-6}$ in magnetic-field calculations cannot be achieved with commercially available finite-element software. We show that the use of BEM-FEM coupling is a natural choice for the simulation of superconducting magnets.
The Hartree-Fock equations are of basic interest in ab initio quantum chemistry. They constitute a nonlinear eigenvalue problem, which has to be solved numerically. This means one has to select an efficient basis for discretizing the problem. In the chemical community, the approximation error committed thereby has hardly been analysed. In the last decade, a posteriori error estimation has grown into an important tool for the determination of the approximation space for partial differential equations. Often, one is not interested in the error of the approximate solution with respect to some global norm, but rather in the error in the value of some target functional computed from it. This is the aim of the so-called Dual Weighted Residual Method, which we want to apply to the Hartree-Fock equations.
Authors: Víctor Domínguez and Ivan G. Graham.
In this work we study the practical implementation of a Galerkin scheme for solving a boundary integral equation appearing in the simulation of high-frequency acoustic plane wave scattering by a 2D convex object. The method, proposed by V. Dominguez, I.G. Graham and V.P. Smyshliaev in 2006, was designed by using all the asymptotic information on the form of the solution and incorporating it in the discrete space.
The method is not fully robust in $k$, the wave-number, but suffers only a very weak deterioration for increasing values of $k$. Hence, typically an increase of the degrees of freedom proportional to $k^{1/9}$ is enough to preserve the accuracy of the solution as $k$ grows to infinity.
Regarding the numerical implementation, the number of degrees of freedom is usually very small, but every entry of the corresponding matrix requires the evaluation of a double integral of a highly oscillatory function. Hence standard quadrature formulas are prohibitively expensive, since the cost of such rules is proportional to $k^2$. In this talk we show how these integrals can be efficiently approximated. Roughly speaking, we proceed as follows. By using suitable changes of variables, we rewrite the oscillating term in a simpler form, namely as a complex exponential which depends only on the variable of the outer integral. Therefore standard quadrature rules can be applied for evaluating the inner integral. The initial problem is then reduced to approximating a single integral of a fast oscillatory function, which is computed using Filon-type quadrature rules. These strategies make the Galerkin method with quadrature competitive and efficient in the high-frequency regime.
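In the final step, the single remaining oscillatory integral is of the form $\int_a^b f(s)\,e^{iks}\,ds$ with smooth $f$, and a Filon-type rule, sketched here in its simplest form, replaces $f$ by an interpolating polynomial whose oscillatory moments are known in closed form, $$ \int_a^b f(s)\,e^{iks}\,ds \;\approx\; \sum_{j=0}^{n} f(s_j) \int_a^b \ell_j(s)\,e^{iks}\,ds, $$ where the $\ell_j$ are the Lagrange basis polynomials at the nodes $s_j$; the cost is independent of $k$ and the accuracy does not degrade as $k$ grows.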
Mathematical models involving uncertain coefficients can often be modelled as stochastic partial differential equations. Typically, such equations are solved by Monte Carlo simulation. This is a robust and easily implementable technique. Unfortunately, its computational complexity rapidly becomes prohibitively expensive when large numbers of simulations are to be performed. As an alternative, a stochastic finite element method has been proposed. It extends the classical finite element method in a very natural way towards the stochastic dimension. In this method the stochastic PDE is transformed into a coupled system of deterministic PDEs. Classical space-time discretization methods can then be applied to convert this into a large set of algebraic equations.
In this work, a multigrid method is presented to solve the algebraic systems that result from stochastic finite element discretizations. The algorithm has very favorable convergence properties, such as a mesh-independent convergence rate. These properties are demonstrated through a local Fourier analysis, applied to the geometric multigrid variant. Also, the algebraic variant is discussed, and numerical results illustrate its convergence behavior. For time-dependent stochastic PDEs, the multigrid method is integrated with a high-order implicit Runge-Kutta time-integration scheme.
Motivated by constrained optimal control problems for PDEs, we present a generalized differentiability concept in function space which allows one to consider generalized (semismooth) versions of Newton's method for solving nonsmooth operator equations. The local rate of convergence turns out to be q-superlinear. The implementation of the method in terms of a primal-dual active set strategy is discussed, and a mesh independence result is established. The talk ends by highlighting several application areas, ranging from constrained optimal control of PDEs to problems in image restoration.
Research over the past 25 years has led to the view that the rich tapestry of present-day cosmic structure arose during the first instants of creation, where weak ripples were imposed on the otherwise uniform and rapidly expanding primordial soup. Over 14 billion years of evolution, these ripples have been amplified to enormous proportions by gravitational forces, producing ever-growing concentrations of dark matter in which ordinary gases cool, condense and fragment to make galaxies. This process can be faithfully mimicked in large computer simulations, and tested by observations that probe the history of the Universe starting from just 400,000 years after the Big Bang. In my talk, I will review results from the 'Millennium Simulation', the largest cosmological N-body calculation carried out to date. I will also discuss mathematical and algorithmic aspects relevant for large-scale cosmological simulations.
The simplest construction of an antisymmetric function of many variables is as a Slater determinant. We present a method for approximating the wavefunction with an unconstrained sum of such Slater determinants. Removal of constraints may allow significantly more efficient representations and computations than current methods provide. The resulting representations also present a different perspective from which to understand the wavefunction.
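Written out, the ansatz is a linear combination of determinants built from freely chosen single-particle functions (a schematic form of the representation discussed in the talk): $$ \psi(r_1,\dots,r_N) \;\approx\; \sum_{l=1}^{M} c_l \,\det\bigl[\phi^{(l)}_i(r_j)\bigr]_{i,j=1}^{N}, $$ where, in contrast with standard configuration-interaction expansions, the orbitals $\phi^{(l)}_i$ of different terms are not constrained to come from a common orthonormal set.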
Given a nonsingular signature $J_{2}$ and a causal left-invertible operator $G$ in terms of a minimal, u.e.s. realization, we state necessary and sufficient conditions in state-space terms under which the indefinite spectral factorization problem $GJ_{2}G^{\ast}=G_{o}J_{x}G_{o}^{\ast}$ has a solution for a left-outer $G_{o}$ and another invertible signature $J_{x}$. We also state a single-pass, numerically stable algorithm that finds $J_{x}$ and a minimal, u.e.s. realization for $G_{o}$.
We consider inversion of a doubly infinite, boundedly invertible matrix represented by a sequence of state-space equations. We show that if the original representation is given in an adequate normal form, then the computation of the representation of the inverse can be done in a single downward or upward pass involving only small, local computations. The results are given in closed form.
The flow through a porous medium is considered in a simple but prototypical setting. Knowledge about the conductivity of the soil, the magnitude of source-terms, or about the in- and out-flow boundary conditions is often very uncertain. These uncertainties inherent in the model result in uncertainties in the results of numerical simulations.
Stochastic methods are one way to model these uncertainties, and in our case we are concerned with spatially varying randomness, and model this by random fields. If the physical system is described by a partial differential equation (PDE), then the combination with the stochastic model results in a stochastic partial differential equation (SPDE). The solution of the SPDE is again a random field, describing both the expected response and quantifying its uncertainty.
SPDEs can be interpreted mathematically in several ways. At the moment we concentrate on randomness in space. If evolution with stochastic input has to be considered, one may combine the techniques described here with the already well established methods in that field.
One may distinguish---as in the case of stochastic ordinary differential equations (SDEs)---between additive and multiplicative noise. As is well known from SDEs, in the case of multiplicative noise one has to be more careful. A similar problem occurs here. Additive noise---particularly for linear problems---is well known and much simpler to deal with, even if the random fields are generalised to stochastic distributions. With multiplicative noise on the other hand the product of a random coefficient field and the solution may have no meaning. As with SDEs, it is a modelling decision how this is resolved.
Additive noise corresponds to the case where the right hand side---the loading or the solution independent source term---is random, whereas when the operator is random, we have multiplicative noise. In the first case it is the external influences which are uncertain, in the latter it is the system under consideration itself. In general, both uncertainties are present.
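In the prototypical groundwater-flow setting described above this reads, schematically, $$ -\nabla\cdot\bigl(\kappa(x,\omega)\,\nabla u(x,\omega)\bigr) = f(x,\omega) \quad\text{in } D, $$ with suitable boundary conditions: randomness in the source term $f$ (or in the boundary data) is additive noise, whereas randomness in the conductivity field $\kappa$ enters the operator and constitutes multiplicative noise.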
In an engineering setting, these models have been considered in different fields. Many different kinds of solution procedures have been tried, but mostly Monte Carlo methods have been used. Alternatives to Monte Carlo methods, which first compute the solution and then the required statistic, have been developed in the field of stochastic mechanics, for example perturbation methods, methods based on Neumann series, or the spectral stochastic finite element method. The latter expands the input random fields in eigenfunctions of their covariance kernels, and obtains the solution by a Galerkin method in a space of stochastic ansatz functions.
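This expansion of an input field in the eigenfunctions of its covariance kernel is the Karhunen-Loève expansion; for, e.g., the conductivity with covariance kernel $C_\kappa$ it reads $$ \kappa(x,\omega) = \bar\kappa(x) + \sum_{i\ge 1} \sqrt{\lambda_i}\,\varphi_i(x)\,\xi_i(\omega), \qquad \int_D C_\kappa(x,y)\,\varphi_i(y)\,dy = \lambda_i\,\varphi_i(x), $$ where the $\xi_i$ are uncorrelated random variables with zero mean and unit variance; truncating the sum reduces the stochastic input to finitely many random variables.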
One direction of numerical investigation focuses on computing the moments of the solution. These are very common, but specific response descriptors. Often other functionals of the solution may be desired. Monte Carlo (MC) methods can be used directly for this, but they require a high computational effort. Variance reduction techniques are employed to reduce this somewhat. Quasi Monte Carlo (QMC) methods may reduce the computational effort considerably without requiring much regularity. But often we have high regularity in the stochastic variables, and this is not exploited by QMC methods. The problem of computing such high-dimensional integrals comes up as a subtask also in the stochastic Galerkin method which we pursue. The integrands are often very smooth, and MC and QMC methods do not take much advantage out of this.
For this subtask, we propose sparse grid (Smolyak) quadrature methods as an efficient alternative. These have found increasing attention in recent years.
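In one common form, the Smolyak quadrature combines tensor products of univariate rules $Q^{(1)}_{i}$ of increasing level $i$ as $$ Q^{\mathrm{S}}_{q,d} \;=\; \sum_{q-d+1\le |\mathbf i|\le q} (-1)^{\,q-|\mathbf i|}\binom{d-1}{q-|\mathbf i|}\; Q^{(1)}_{i_1}\otimes\cdots\otimes Q^{(1)}_{i_d}, \qquad |\mathbf i| = i_1+\cdots+i_d, $$ which exploits the smoothness of the integrand while keeping the number of nodes far below that of the full tensor grid.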
The stochastic Galerkin methods started from N. Wiener's polynomial chaos. This has been used extensively in theoretical white noise analysis in stochastics. This device may of course also be used in the simulation of random fields.
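In the polynomial chaos setting, the solution is sought as an expansion in multivariate Hermite polynomials $H_\alpha$ of underlying Gaussian variables $\theta=(\theta_1,\theta_2,\dots)$, schematically $$ u(x,\omega) \;=\; \sum_{\alpha\in\mathcal J} u_\alpha(x)\, H_\alpha\bigl(\theta(\omega)\bigr), $$ and the Galerkin method determines the deterministic coefficient functions $u_\alpha$.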
In general, the stochastic Galerkin methods allow a direct representation of the solution. Stochastic Galerkin methods have been applied to various linear problems, using a variety of numerical techniques to accelerate the solution. Recently, nonlinear problems with stochastic loads have been tackled, and some first results for nonlinear stochastic operators have been reached. A convergence theory has been started, but we are very much at the beginning.
These Galerkin methods allow us to have an explicit functional relationship between the independent random variables and the solution---and this is in contrast with usual Monte Carlo approaches---so that subsequent evaluations of functionals---statistics like the mean, covariance, or probabilities of exceedance---are very cheap. This may be seen as a way to systematically calculate more and more accurate "response surfaces".
We consider issues arising when designing a posteriori eigenvalue/eigenvector bounds for multi-scale problems in mechanics. Issues of cluster stability of eigenvalue estimates will receive a special treatment. This is a prerequisite for any comprehensive analysis of the subspace approximation problem. For the sake of definiteness, assume that we have constructed a vector $\psi$ which approximates the eigenvector $v$, $\|v\|=1$, $Hv=\lambda v$. Then, typically, one is satisfied with $\psi$ such that $\sin\angle(\psi,v)$ is "tiny". One expects that in such a situation $\psi$ can be used to extract spectral information on $H$. Unfortunately, a small $\sin\angle(\psi,v)$ does not always imply that quality information on $\lambda$ can be extracted from $\psi$. Surely an approximate eigenvector which yields insufficient information on the accompanying eigenvalue cannot be considered a satisfactory solution to a spectral approximation problem. The so-called "standard" or "relative" $\sin\Theta$ theory (Davis--Kahan, Ipsen, Li, Mathias--Veselić...) gives only a partial answer to this problem. Indeed, in a "singular" situation the relative residual is typically large (as it should be for a poor approximation $\psi$), but the estimated quantity $\sin\Theta(\psi,v)$ can be (and in the presented example it will be) very tiny. So we are in a situation in which we have to rely on a possibly very un-sharp inequality to obtain a correct hint on the "accuracy" of $\psi$. We reconsider the approximation problem in the geometry of the "energy" norm $\|\cdot\|_H$. The obtained estimates are an energy norm variant of the Mathias--Veselić eigenvector inequalities. We also present a new class of quadratic cluster robust estimates for eigenvalues. These estimates generalise the work of Drmač and Hari on a relative version of quadratic residual estimates. Furthermore, we show that all of the results hold for positive definite operators of Mathematical Physics both in bounded and unbounded domains. To evaluate the norm of the scaled residual we work in the dual of the "energy" space. Several techniques which are based on the Green function (e.g. the Feynman--Kac formula, $\mathcal{H}$-inversion ...) will be explored in relation to this a posteriori estimation problem. As an illustration of the theory we will numerically tackle a couple of laboratory examples and compare the results with other recent a posteriori eigenvalue estimates from the literature. This is joint work with Zlatko Drmač and Krešimir Veselić.
A Chebyshev pseudospectral method for simulating second order in time and fourth order in space semilinear initial boundary value problems is introduced. The method uses deferred corrections to obtain fourth order accuracy in time and a spectral integration method which gives sparse matrices and overcomes the numerical instability associated with large Chebyshev differentiation matrices. The method is used to obtain results for some one and two dimensional problems.
We consider the moment problem for an elliptic pde with stochastic data posed in a bounded physical domain. We develop perturbation algorithms of nearly optimal complexity for the moment computation of the stochastic solution, assuming spatial regularity of the random fluctuation in the data. Techniques used include sparse grids, wavelet preconditioning and best N-term approximation for the efficient representation of higher order moments.
This talk is devoted to the fast solution of boundary integral equations on unstructured meshes by the Galerkin scheme. To avoid the quadratic costs of traditional discretizations with their densely populated system matrices it is necessary to use fast techniques such as hierarchical matrices, the multipole method or wavelet matrix compression, which will be the topic of the talk. On the given, possibly unstructured, mesh we construct a wavelet basis providing vanishing moments with respect to the traces of polynomials in the space. With this basis at hand, the system matrix in wavelet coordinates can be compressed to $\mathcal{O}(N\log N)$ relevant matrix coefficients, where $N$ denotes the number of unknowns. For the computation of the compressed system matrix with suboptimal complexity we will present a new method based on the strong similarities of substructures of the $\mathcal{H}^2$ matrices and the used wavelet basis. In the end numerical 3D-results will be presented, which will substantiate the theoretical results.
One of the main difficulties in high-frequency electromagnetic and acoustic scattering simulations is that any numerical scheme based on the full-wave model entails the resolution of the wavelength. It is due to this challenge that simulations involving even very simple geometries are beyond the reach of classical numerical schemes. In this talk, we shall present an analysis of a recently proposed integral equation method that bypasses the need for the resolution of the wavelength, and thereby delivers solutions in frequency-independent computational times. Within single scattering configurations, the method is based on the use of an appropriate ansatz for the unknown surface densities and on suitable extensions of the method of stationary phase. The extension to multiple-scattering configurations, in turn, is attained through consideration of an iterative (Neumann) series that successively accounts for multiple reflections. We show that the convergence properties of this series in the high-frequency regime depend solely on geometrical characteristics. Moreover, for periodic orbits, we determine the convergence rate explicitly. Finally, we show that this insight suggests the use of alternative summation mechanisms that can greatly accelerate the convergence of the series.
After a short introduction to machine learning and data mining we will describe how the solution of a number of machine learning methods is represented by means of kernel functions. Typically, these functions have global support and the computation of the exact solution involves a densely populated matrix requiring ${\cal O}(N^2)$ units of storage for $N$ data points. We will present first results on how ${\cal H}$-matrices can be applied to find data-sparse approximations of this matrix, making large scale machine learning tractable.
For parabolic differential equations, space and time discretization methods, so-called full discretizations, are necessary to determine the dynamics on center manifolds. Until now, only very little seems to be known about the convergence of these full discretizations. We show that, allowing for stable, center, and unstable manifolds, the space-discrete center manifolds obtained with standard space discretization methods converge to the original center manifolds in the following sense: the coefficients of the Taylor expansion of a discrete center manifold and its normal form converge to those of the original center manifold. Then standard or geometric time discretization methods can be applied to the discrete center manifold system of ordinary differential equations. We prove convergence for these full discretizations and give a short outline for the Hopf bifurcation as an example.
For the first time, we present, for the general case of fully nonlinear elliptic differential equations and systems of order $2$ and $2m$ in $\mathbb{R}^n$, a stability and convergence proof for nonstandard finite element, spectral and wavelet methods. We include the necessary quadrature and cubature approximations. An essential tool is a new regularity result for solutions of finite element equations. For this lecture we restrict the presentation to equations of order two and to FEMs. An important example is the Monge-Ampère equation.
Assume a tensor $\{a_{i_1,\dots,i_N}\}_{i_1,\dots,i_N=1}^{m_1, \dots, m_N} \in \mathbb{C}^{m_1 \times \dots \times m_N}$ is given ($N>2$). We are searching for $r \in \mathbb{N}$, weights $\alpha_1, \dots, \alpha_r > 0$, and factors $B^{(p)} = \{ b^{(p)}_{i_p l} \}_{i_p=1, l=1}^{m_p,r}$ with $\sum_{i_p=1}^{m_p} |b^{(p)}_{i_p l}|^2 = 1$ for all $p=1,\dots,N$, such that $$a_{i_1,\dots,i_N} \simeq \sum_{l=1}^r \alpha_l\, b_{i_1 l}^{(1)} \cdots b_{i_N l}^{(N)} \quad \text{for all } i_1,\dots,i_N.$$ This approximation is called the multilinear decomposition and is the generalization of the singular value decomposition ($N=2$) to higher order arrays. The main goal of this talk is to give a short introduction to the theory of the multilinear decomposition and to emphasize the difference between $N=2$ and $N>2$. Nowadays many practical applications exploit multilinear approximation: we can find it in the compression of matrices, data approximation/compression in signal processing, and fast algorithms for matrix-by-matrix multiplication. During the talk several numerical algorithms developed for the multilinear decomposition (http://www.ilghiz.com/) will be demonstrated; we will see how these algorithms work in real time and present their parallel implementations. These algorithms are stable and robust, which helped us to produce the latest multilinear world record of a four-dimensional ($N=4$) data approximation with 350 skeletons ($r=350$), obtained on industrial data: Tugarinov V., Kay L., Ibraghimov I., Orekhov V., High-Resolution Four-Dimensional 1H-13C NOE Spectroscopy Using Methyl-TROSY, Sparse Data Acquisition, and Multilinear Decomposition. J. Am. Chem. Soc. 2005; 127: 2767--2775.
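A minimal numerical sketch of the format itself (here $N=3$, with random placeholder factors rather than the output of any particular decomposition algorithm) illustrates how few parameters such a representation needs:

import numpy as np

rng = np.random.default_rng(0)
m1, m2, m3, r = 20, 30, 40, 5
alpha = rng.uniform(0.5, 2.0, r)
B1, B2, B3 = (rng.standard_normal((m, r)) for m in (m1, m2, m3))
B1, B2, B3 = (B / np.linalg.norm(B, axis=0) for B in (B1, B2, B3))    # unit-norm columns
# a_{i1,i2,i3} = sum_l alpha_l * B1[i1,l] * B2[i2,l] * B3[i3,l]
A = np.einsum("l,il,jl,kl->ijk", alpha, B1, B2, B3)
# Storage of the decomposition versus storage of the full array:
print(A.shape, r * (1 + m1 + m2 + m3), m1 * m2 * m3)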
To achieve a deeper understanding of the brain, scientists and clinicians use Electroencephalography (EEG) and Magnetoencephalography (MEG) inverse methods to reconstruct sources in the cortex sheet of the human brain. There exists a persistent uncertainty regarding the influence of volume conduction effects such as the anisotropy of tissue conductivity of the skull and the white matter compartments on EEG and MEG source reconstruction.
In my talk, I will study the sensitivity to anisotropy of the EEG/MEG forward problem for deep and eccentric sources with differing orientation components in a high resolution finite element volume conductor. The influence of anisotropy will be presented through high resolution visualization of the field distribution, isopotential surfaces and return current flow, as well as through topography and magnitude error measures. The combination of simulation and visualization provides deep insight into the effect of head tissue conductivity anisotropy.
We found that for EEG, the presence of tissue anisotropy both for the skull and the white matter compartment substantially compromises the forward potential computation and therefore the inverse source reconstruction. Skull anisotropy has a smearing effect on the forward potential computation. In contrast, for MEG, only the anisotropy of the white matter compartment has an effect. The deeper the source and the more it is surrounded by anisotropic fiber bundles, the larger the effect is. Furthermore, the degree of error resulting from the uncompensated presence of tissue anisotropy depends strongly on the proximity of the anisotropy to the source: remote anisotropy has a much weaker influence than anisotropic tissue that includes the source.
Numerical simulation of high frequency acoustic, elastic or electromagnetic wave propagation is important in many applications. When the wavelength is short compared to the overall size of the computational domain direct simulation using standard wave equations is very expensive. Fortunately, there are computationally much less costly models that are good approximations precisely for very high frequencies. In this talk we focus on the geometrical optics approximation which describes the infinite frequency limit of wave equations. We discuss numerical methods for geometrical optics, from the traditional techniques of ray tracing to more recent numerical procedures based on partial differential equations.
Computational Fluid Dynamics (CFD) is a necessary tool in modern mechanical engineering, especially in the aerospace and automotive industry. Solving problems of industrial scale creates large data sets, but without post-processing, many relevant questions cannot be answered. Typical questions concern vortex formation, vortex shape and position, and attachment or separation of flow on immersed surfaces. These features can be studied by using results from qualitative dynamical systems theory such as critical points, separatrices, and limit cycles. The talk begins with examples from industry demonstrating the complexity of such features. Topological analysis and visualization based on dynamical systems theory is introduced as a partial answer. The goal is an automatic analysis of CFD data that finds relevant features, calculates their properties and visualizes their position and shape. Towards the end, a notion of local flow is suggested that might open the door for further mathematical treatment of the mentioned flow phenomena.
The task of computing the eigenvalues or invariant subspaces of a matrix usually arises from applications in science or engineering only after a long process of simplifications, discretizations and linearizations. In most cases, the matrices obtained from such a process are highly structured. For example, matrix representations may contain redundancy, often in the form of sparsity, or inherit some of the physical properties of the original problem. This talk surveys some recent results on theory, algorithms, and software for such structured eigenvalue problems. In particular, we will describe a new approach to the structured perturbation analysis of invariant subspaces. This approach provides insight into the invariant subspace sensitivity for various structures including Hamiltonian, skew-Hamiltonian, product, Toeplitz, palindromic, and quaternion eigenvalue problems. Algorithms capable of preserving structures have the potential to compute eigenvalues and invariant subspaces in a more efficient and accurate manner than general-purpose algorithms. We will critically review the question for which structures the development of such algorithms is worthwhile and present new variants of algorithms for Hamiltonian and product eigenvalue problems. The talk will conclude with HAPACK, a LAPACK-like software package for solving (skew-)Hamiltonian eigenvalue problems, and its application to pseudospectra computations. The work presented in this talk is joint work with Peter Benner, Ralph Byers, Heike Fassbender, Volker Mehrmann, and Emre Mengi.
Radial basis function (RBF) approximation methods are meshfree and can provide spectral accuracy. Therefore, they are attractive to use for solving partial differential equations with smooth solutions. However, one difficulty when using RBF methods is how to select the best method parameters for the problem at hand. The choices to make include the number of node and center points, the placement of the points, the basis function and, if applicable, the shape parameter of the basis functions. In this study, we first look at a simple one-dimensional model problem. We investigate how the optimal method parameters vary with the problem parameters. Experiments show that the behavior is very regular and can, in some cases, be described by simple relations. We show how the optimal fixed shape parameter can be determined. Then we give some theoretical results for the behavior of the solutions in the case of nearly flat basis functions. A comparison with a second order finite difference method is also performed. We find that, even for this simple problem where the finite difference method is very efficient, the RBF method is less computationally expensive if an accurate solution is desired. Finally, we present experiments and results for more complicated Helmholtz problems in two dimensions.
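As a minimal sketch of the kind of experiment described above (the Gaussian basis, the node count, the shape parameter values and the test function are arbitrary choices, not those of the study), one can observe how the interpolation error reacts to the shape parameter:

import numpy as np

def rbf_interpolate(x_nodes, f_nodes, x_eval, eps):
    # Global RBF interpolation with the Gaussian basis phi(r) = exp(-(eps*r)^2).
    Phi = np.exp(-(eps * np.abs(x_nodes[:, None] - x_nodes[None, :])) ** 2)
    # Least-squares solve to cope with the ill-conditioning of nearly flat bases.
    w = np.linalg.lstsq(Phi, f_nodes, rcond=None)[0]
    Phi_eval = np.exp(-(eps * np.abs(x_eval[:, None] - x_nodes[None, :])) ** 2)
    return Phi_eval @ w

x_nodes = np.linspace(-1, 1, 25)
x_eval = np.linspace(-1, 1, 400)
f = lambda x: np.tanh(5 * x)
for eps in (2.0, 5.0, 10.0):                              # shape parameter to be tuned
    err = np.max(np.abs(rbf_interpolate(x_nodes, f(x_nodes), x_eval, eps) - f(x_eval)))
    print(f"eps = {eps:5.1f}   max error = {err:.2e}")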
Adaptivity and multilevel approaches are usually in conflict with locality of memory access, causing bad performance on modern computer architectures. We show that the concept of Peano space filling curves based on arbitrarily refined space trees allows an implementation using essentially only stacks as data structures. The number of stacks does not depend on the number of unknowns or on the structure of refinement. Note that accesses to a stack cannot "jump" in memory; they can move only to neighbouring memory locations, so the number of cache misses is usually very small. Moreover, the memory overhead for the definition of the domain and the refinement structure of the problem is negligible (a few bits for every degree of freedom). This allows the efficient solution of very large problems on relatively small parallel computers.
Over the last 20 years, various approaches have been proposed for post-processing the Galerkin solution of second kind Fredholm integral equations. These methods include the iterated Galerkin method proposed by Sloan, the Kantorovich method and the iterated Kantorovich method. For an integral operator with a smooth kernel, using the orthogonal projection onto a space of discontinuous piecewise polynomials of degree $\leq r-1$, previous authors have established an order $r$ convergence for the Galerkin solution and $2r$ for the iterated Galerkin solution. Equivalent results have also been established for the interpolatory projection at Gauss points. In this talk, a recently proposed method by the author will be shown to have convergence of order $4r$. The size of the system of equations that must be solved in implementing this method remains the same as for the Galerkin method. Similar results hold for the approximate solution of eigenvalue problems.
I will describe the method of self-consistent bounds for dissipative PDEs. Using the example of the Kuramoto-Sivashinsky PDE, I will explain how to rigorously integrate a dissipative PDE forward in time and how, using topological tools (for example the Brouwer fixed point theorem), one can obtain a proof (computer assisted, but rigorous) of the existence of periodic orbits.
In the numerical solution of stiff ordinary differential equation systems there are conflicting aims of accuracy, stability and low implementation costs. One approach to finding efficient methods is the use of diagonally implicit Runge-Kutta methods, where the cost is comparable to sequential use of BDF-like methods. However, these have disadvantages which can be partly overcome by the use of singly implicit methods. These in turn lead to some new and promising general linear methods whose stability properties are the same as for Runge-Kutta methods.
Within the solution of integral equations of scattering problems, in order to deal with the well-known high frequency problem, we suggest a coupling of two kinds of methods. A microlocal discretisation method enables one to efficiently reduce the size of the system by considering an approximation of the phase function of the unknown. However, the method needs an expensive precalculation. We therefore use the fast multipole method in order to speed up this precalculation. The coupling has been very satisfying in an application to Després's integral equations for acoustic and electromagnetic problems.
Crystal structures are frequently studied at the level of coordination polyhedra, which describe atomic complexes with strong bonds. Neighbouring coordination polyhedra can be connected to each other via vertices, edges or faces. We show how infinite polyhedral networks formed in this way can be represented finitely by periodic graphs. This allows several important properties of crystal structures to be analysed systematically and completely. As an example, the talk discusses various definitions of rings and presents algorithms for their determination. These algorithms also make use of methods from computer algebra.
By exploiting the available symmetries, a redundancy-free graph representation of the polyhedral network of a crystal can be given, in which nodes correspond to those polyhedra that are assigned to an asymmetric unit. Edges are labelled with symmetry operations. We show how this graph representation can be used, for some interesting crystal classes, to systematically enumerate all polyhedral topologies by suitably labelling nodes with symmetry positions, and to check them for embeddability in three-dimensional Euclidean space under prescribed conditions.
Finally, we present a system that, based on the introduced graph representation of polyhedral networks and using a suitable indexing method, allows the search for isomorphic substructures of crystals in large data sets.
QCD (Quantum Chromodynamics) is the fundamental physical theory of the quarks as the constituents of matter. Lattice QCD is a discretization of QCD, making QCD computationally tractable where it cannot be handled analytically. Recent advances in lattice QCD have put forward a discretization scheme which for the first time respects the fundamental chiral symmetry of QCD. This overlap fermion model results in operators using the sign function of the (symmetrized) Wilson fermion matrix $Q$, a 4d cartesian nearest neighbour coupling matrix with stochastic coefficients. In this talk, we focus on the main computational problem associated with overlap fermions: how to efficiently solve the basic equation \[ (I + \rho\, \Gamma_5 \cdot \mathrm{sign}(Q))\, x = b. \] Here, $\Gamma_5$ is a simple permutation matrix and $0
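As a toy illustration of the structure of this equation (not a lattice-scale solver: the matrix size, the choice of $\Gamma_5$ as an anti-diagonal permutation and the value of $\rho$ are assumptions), one can form $\mathrm{sign}(Q)$ explicitly for a small Hermitian stand-in $Q$ and solve the shifted system directly; in production codes $\mathrm{sign}(Q)$ is never formed, and Krylov subspace approximations of the matrix function applied to vectors are used instead.

import numpy as np

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = (A + A.conj().T) / 2                                  # Hermitian stand-in for Q
evals, V = np.linalg.eigh(Q)
signQ = (V * np.sign(evals)) @ V.conj().T                 # matrix sign function of Q
Gamma5 = np.eye(n)[::-1]                                  # a simple permutation matrix (assumption)
rho = 0.5                                                 # illustrative value
b = rng.standard_normal(n)
M = np.eye(n) + rho * Gamma5 @ signQ
x = np.linalg.solve(M, b)
print(np.linalg.norm(M @ x - b))                          # residual of the solve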
An algorithm is presented that rapidly solves systems of linear algebraic equations associated with the discretization of boundary integral equations in two dimensions. The algorithm is "fast" in the sense that its asymptotic complexity is $O(N\log N)$, where $N$ is the number of nodes in the discretization. Unlike previous fast techniques based on iterative solvers, the present algorithm constructs a sparse factorization of the inverse of the matrix; thus it is suitable for problems involving relatively ill-conditioned matrices, and is particularly efficient in situations involving multiple right hand sides. The performance of the scheme is illustrated with several numerical examples.
We consider elliptic and hyperbolic PDEs on networked domains such as graphs and 2d networks in three-space. We formulate corresponding optimal control problems and then consider iterative domain decomposition methods (DDM) based on so-called virtual controls. We further discuss homogenization procedures for such problems and possible combinations of DDM and homogenization in the study of complex technological and infrastructural problems.
We present two different methods for achieving high order accurate, unconditionally stable time stepping schemes for the treatment of partial differential equations. Unconditionally stable schemes are necessary for so-called stiff problems, where stability restrictions lead to an extremely small upper bound for the admissible time step. When considering higher order methods in time, the stability domain is often considerably decreased. The methods we consider are deferred correction methods and time compact methods. They lead to unconditionally stable schemes of arbitrarily high order accuracy.
Coupled-cluster methods for the accurate calculation of excitation energies in the framework of linear response theory (or alternatively the equation-of-motion ansatz) are reviewed. The basic aspects of coupled-cluster response theory will be described, and the high accuracy obtained in the corresponding treatment of excited states, in particular when triple excitations are included, is demonstrated. Furthermore, recent developments such as analytic derivative techniques for the exploration of excited-state potential energy surfaces, treatment of excitation spectra of radicals via a spin-restricted coupled-cluster ansatz, or the computation of transition moments such as, for example, spin-orbit coupling constants, are discussed.
Chalcopyrite disease within sphalerite is a problem of high interest for geologists and mineralogists. After explaining the physical background, two approaches are presented to describe the process. The first is based on Landau theory; the second attempts to compute the actual free energies of the physical phenomenon and to make simulations closer to reality. For both models, numerical simulations are carried out and the results are compared.
We consider the numerical analysis of mixed covolume methods, i.e. finite volume element methods with mixed elements. We present existence, convergence and superconvergence analysis of the mixed covolume method for elliptic problems with a full diffusion tensor. We also give an analysis of the mixed covolume method for parabolic problems and of an upwind mixed covolume method for convection-diffusion problems.
In this talk, a number of recent results will be reported on multiscale and adaptive finite element methods for the numerical solution of partial differential equations, including a sharp convergence theory for general subspace correction iterative methods, two-grid methods for symmetrization, linearization and parallelization, partition of unity methods for nonmatching grids, a posteriori error estimators, and grid adaptation involving refinement, coarsening and moving. Several numerical results will also be given, including applications to black-hole and multi-phase flow simulations.
We consider numerical methods for a reduced 2D-model of micromagnetism by A. DeSimone, R.V. Kohn, S. Mueller and F. Otto that describes the response of a soft ferromagnetic thin film to an applied magnetic field. The micromagnetic energy is given as a non-local functional of magnetizations m which are defined as 2D-fields of unit length on the cross section of the film.
Our discretization is based on a conforming Galerkin ansatz using Raviart-Thomas elements. A two-step algorithm is presented that solves the minimization problem for the energy numerically. Step 1 consists of an interior point method for a convex variational problem, and in step 2 a minimizer satisfying the original non-convex constraint |m|=1 is generated as the solution of a Hamilton-Jacobi equation. H-matrices are used in step 1 to deal with non-sparse Hessians due to the non-locality of the energy.
Problems of sound propagation and radiation can be described by the Helmholtz equation with Robin boundary conditions and can usually be solved quite well in the low-frequency regime with methods such as FEM or BEM. For large wave numbers, which are often relevant for industrial applications, methods based on the discretization of integral equations frequently fail because of the computational cost.
In this talk, the basic approach of the collocation method with discontinuous boundary elements is presented first. This is followed by a look at fast boundary element methods suitable for the solution of the Helmholtz equation, in particular the fast multipole method. After a discussion of iterative solution methods, a preconditioner for the iterative solution is presented. The talk closes with a detailed part on academic as well as industrial applications. It is largely based on the doctoral thesis of Stefan Schneider.
The development of efficient algorithms for frequently recurring basic tasks is a central concern of numerical mathematics. The fast Fourier transform (FFT) is among the best-known fast algorithms. The d-variate FFT reduces the arithmetic complexity of the discrete Fourier transform from $\mathcal{O}(N^{2d})$ to $\mathcal{O}(N^d \log N)$. Many methods become practical only through the efficiency of the FFT, e.g. polynomial, matrix and matrix-vector multiplications, the inversion of large structured matrices, or trigonometric interpolation on fine grids. In a large number of further applications of the FFT, the restriction to equispaced grids is cited as a drawback. In this talk we present fast algorithms for computing the Fourier transform for nonequispaced data (NFFT). In contrast to the FFT, the NFFT is an approximate algorithm, i.e. the user can prescribe the finite accuracy to which the result is to be computed. The NFFT proposed here has an arithmetic complexity of $\mathcal{O}(N^d \log N + m^d N)$, where m depends only on the required accuracy of the result. On this basis we develop fast summation algorithms of the form $$f(y_j) := \sum_{k=1}^{N} a_k K(y_j - x_k), \quad j=1,\dots,M,$$ at nodes $y_j, x_k \in \mathbb{R}^d$, where $K$ is a radially symmetric kernel of the form $K(x) = \mathcal{K}(\|x\|_2)$. These fast algorithms can be used for interpolation with radial basis functions or for the numerical solution of integral equations. Finally, we discuss applications of the NFFT in computed tomography and to the discrete Fourier transform on the sphere.
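For orientation, the following sketch evaluates the nonequispaced discrete Fourier transform directly, at quadratic cost; this is exactly the matrix-vector product that the NFFT replaces by oversampled FFTs plus a window-function correction (node naming and normalization follow common NFFT conventions and are assumptions here):

import numpy as np

rng = np.random.default_rng(1)
N, M = 64, 100
fhat = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # Fourier coefficients
x = rng.uniform(-0.5, 0.5, M)                                  # nonequispaced nodes in [-1/2, 1/2)
k = np.arange(-N // 2, N // 2)
f_direct = np.exp(-2j * np.pi * np.outer(x, k)) @ fhat         # O(N*M) reference evaluation
# An NFFT approximates f_direct to a prescribed accuracy in O(N log N + m*M) operations (1D).
print(f_direct[:3])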
Graph theory was - to a large extent - developed alongside attempts to solve the four-colour problem. Already in the early days of radio communication, it was noticed that colouring theory can be employed not only to paint maps but also to make good use of radio channels.
In modern GSM mobile telecommunication systems a large number of communication links have to be established with a limited number of available frequencies (or channels). How should the channels be assigned to the transceivers so that the customers receive high quality service?
A detailed analysis of this problem shows that a proper way to formulate the channel assignment problem is the following: Obeying several technical and legal restrictions, channels have to be assigned to transceivers so that interference is as small as possible. It turns out that this problem can be considered as a list coloring problem with additional side constraints.
I will present several formulations of this problem as integer linear programs, and relate these to standard colouring theory. I will outline a relaxation that leads to a semidefinite program, and I will discuss heuristic and exact algorithms for its solution. Computational results for channel assignment problems of a German cellular phone network operator will be presented.
I will also say a few words about the radio interface of the new UMTS technology which is not too well understood at present.
This lecture is based on efforts of the ZIB telecommunications research group, in particular the work of Andreas Eisenblätter and Arie Koster.
We consider a general framework for analysing the convergence of multilevel solvers applied to finite element discretisations of the Stokes problem, both of conforming and nonconforming type. As a basic new feature, our approach allows the use of different finite element discretisations on each level of the multigrid hierarchy. Thus, in our multiple discretisation multilevel approach, higher order finite element discretisations can be combined with fast multi-level solvers based on lower order (nonconforming) finite elements. This leads to the design of efficient multi-level solvers for higher order finite element discretisations.
We consider the Cauchy problem for first and second order differential equations in Banach and Hilbert spaces with an operator coefficient A(t) depending on the parameter t. We develop discretization methods with a high level of parallelism and without accuracy saturation, i.e. the accuracy adapts automatically to the smoothness of the solution. For analytical solutions the rate of convergence is exponential. These results can be viewed as a development of parallel approximations of the operator exponential $e^{-tA}$ and of the operator cosine family $\cos(\sqrt{A}\,t)$ with a constant operator A from earlier papers, which possess exponential accuracy and are based on Sinc-quadrature approximations of the corresponding Dunford-Cauchy integral representations of the solutions or the solution operators.
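As a worked illustration of the kind of representation referred to above (the contour parametrization and quadrature details given here are generic assumptions, stated only to make the source of the parallelism visible), the operator exponential admits the Dunford-Cauchy representation and its Sinc (trapezoidal) quadrature
$$e^{-tA} = \frac{1}{2\pi i}\int_{\Gamma} e^{-tz}\,(zI-A)^{-1}\,dz \;\approx\; \frac{h}{2\pi i}\sum_{k=-n}^{n} e^{-t z(kh)}\, z'(kh)\,\bigl(z(kh)I-A\bigr)^{-1},$$
where $\Gamma=\{z(\xi):\xi\in\mathbb{R}\}$ is a suitable parametrized contour enclosing the spectrum of $A$. The resolvent evaluations at the individual quadrature nodes are mutually independent, which is the source of the high parallelism, and the exponential accuracy reflects the exponential convergence of Sinc quadrature for analytic integrands.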
We consider Helmholtz-type equations with inhomogeneous exterior domains, e.g.~exterior domains containing wave guides, which arise in the simulation of photonic devices. Usually, Sommerfeld's radiation condition is not valid for such problems. We present an alternative radiation condition called pole condition, which has been formulated by Frank Schmidt. Roughly speaking, it says that the Laplace transform of a radiating solution along a family of rays tending to infinity has a holomorphic extension to the lower half of the complex plane. To justify the validity of this condition, we show that it is equivalent to Sommerfeld's radiation condition for bounded obstacle scattering problems. Moreover, we show that for scattering problems by rough surfaces and for wave guide problems, the pole condition is also equivalent to the standard radiation conditions used in these fields.
For the numerical solution of scattering problems by finite element methods, a mesh termination problem has to be solved: What kind of boundary condition can be imposed on the artificial boundary of the computational domain such that the computed solution is not contaminated by spurious reflections at the artificial boundary? Such boundary conditions are called transparent. We describe a construction of exact transparent boundary conditions based on the pole condition which does not rely on the explicit knowledge of a fundamental solution or a series representation of the solution. There exists an explicit formula expressing the exterior solution in a stable way in terms of quantities defined in the Laplace domain. This provides an efficient numerical method for the evaluation of the exterior solution and distinguishes the proposed method from other methods, e.g. the Perfectly Matched Layer (PML) method. The total computational cost is typically dominated by the finite element solution of the interior problem.
Nowadays integration problems with very large numbers of variables arise in many applications, including mathematical finance. While Monte Carlo methods are available for problems with any number of integration variables, increasingly attention has turned to quasi-Monte Carlo methods. These have the appearance of Monte Carlo methods, but with the integration points chosen deterministically instead of randomly. This talk will review the history of quasi-Monte Carlo methods, finishing with recent step-by-step constructions of rules with very large numbers of points and hundreds of variables.
Today, the production of particles with defined properties, such as a prescribed particle size and a narrow particle size distribution, requires time- and cost-intensive experiments in process development. The development of new processes as well as of powerful models and simulation tools is intended to reduce this effort in the future. A process variant frequently encountered in the chemical industry is precipitation in emulsion droplets. This mode of operation allows size control of the droplets and thus also of the particle size. Depending on droplet size, emulsions are classified as macro-, mini- or microemulsions, the latter being suitable for nanoparticle production. A further way of characterizing emulsions is the segregation state of the system, which allows the models established for emulsions to be transferred to other fluid systems and thus leads to a more general view of particle formation. Modelling frequently employs population balances, which account for the particles with respect to one or more properties (droplet size, composition, etc.). In the one-dimensional case with a single property coordinate, the solution of the population balance, which contains integral terms for droplet breakage and coalescence within the differential equation, is relatively unproblematic. For the extension to a multidimensional balance, which is necessary for many emulsions, no suitable discretization schemes are available to date that lead to a stable solution within an acceptable time.
High quality 3D models are quickly emerging as a new multimedia data type with applications in such diverse areas as e-commerce, online encyclopaedias, or virtual museums, to name just a few.
This talk presents new algorithms and techniques for the acquisition of and real-time interaction with complex textured 3D objects, and shows how these results can be seamlessly integrated with previous work into a single framework for the acquisition, processing, and interactive display of high quality 3D models.
In addition to pure geometry, such algorithms also have to take into account the texture of an object (which is crucial for a realistic appearance) and its reflectance behavior.
The measurement of accurate material properties is an important step towards photorealistic rendering, where both the general surface properties as well as the spatially varying effects of the object are needed. Recent work on the image-based reconstruction of spatially varying BRDFs enables the generation of high quality models of real objects from a sparse set of input data.
Efficient use of the capabilities of advanced PC graphics hardware allows for interactive rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object.
We generalize a method by Grigoriadis et al. to compute an approximate solution of the fractional covering (and max-min resource sharing) problem with $M$ nonnegative linear (or concave) constraints $f_m$ on a convex set $B$ to the case with general approximate block solvers (i.e. with only constant, logarithmic, or even worse approximation ratios). The algorithm is based on a Lagrangian decomposition which uses a modified logarithmic potential function and on several other ideas (a scaling phase strategy, two stopping rules in a phase, eliminating functions $f_m$ larger than a threshold value $T$, reducing the step length and taking a convex combination among different iterates in a phase). We show that the algorithm runs in $O(M\varepsilon^{-2}\ln(M\varepsilon^{-1}))$ iterations (or block optimization steps), a bound independent of the data and of the approximation ratio, which is optimal up to poly-logarithmic factors for any fixed relative accuracy $\varepsilon \in (0,1)$. Furthermore, we show how to apply this method to several applications (preemptive resource constrained scheduling, scheduling malleable tasks, and fractional weighted graph coloring).
Soft wavelet shrinkage and total variation (TV) denoising are two frequently used nonlinear techniques for denoising signals and images, while preserving their discontinuities. In this talk we analyse conditions under which both methods are equivalent.
First we show that 1-D Haar wavelet shrinkage on a single scale is equivalent to a single step of space-discrete TV diffusion or regularisation of two-pixel pairs. This result is used to prove that a translationally invariant version of this single scale shrinkage can be regarded as a numerical scheme for TV diffusion. Finally, we show that wavelet shrinkage on multiple scales comes down to a single step of TV diffusion filtering or regularisation of the so-called Laplacian pyramid of the signal.
This is joint work with Gabriele Steidl (University of Mannheim).
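As a minimal sketch of the shrinkage operation discussed above (the pairing into non-overlapping two-pixel blocks and the threshold value are arbitrary choices; the equivalence with a space-discrete TV diffusion/regularisation step is the result of the talk, not something this code proves):

import numpy as np

def haar_soft_shrink_pairs(u, theta):
    # Single-scale 1-D Haar soft shrinkage on non-overlapping two-pixel pairs:
    # transform each pair, soft-threshold the detail coefficient, transform back.
    u = np.asarray(u, dtype=float)
    a, b = u[0::2], u[1::2]
    s = (a + b) / np.sqrt(2.0)                                   # approximation coefficients
    d = (a - b) / np.sqrt(2.0)                                   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - theta, 0.0)          # soft shrinkage
    return np.stack([s + d, s - d], axis=1).ravel() / np.sqrt(2.0)

signal = np.array([4.0, 4.2, 1.0, 0.9, 3.0, 3.5])
print(haar_soft_shrink_pairs(signal, theta=0.2))

Averaging this operation with its one-pixel-shifted variant yields the translationally invariant version of the single scale shrinkage mentioned above.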
Coupled systems of differential equations arise in many applications. The talk singles out two problem classes: flexible multibody systems in vehicle and machine dynamics, and inelastic deformations of continua in materials research. In both cases one obtains a coupled system of partial differential equations and differential-algebraic equations. For the numerical solution, techniques from the spatial discretization of saddle point problems and from time integration with Runge-Kutta methods are employed, among others. Various simulation examples illustrate the numerical methods.
For a large number of questions in materials science and in the design of devices, one faces the challenge of describing processes that extend over up to ten orders of magnitude in space and/or time simultaneously. In the past one was therefore often forced to consider simplified models. The talk shows how a parameter-free hierarchical description of the different scales becomes possible by combining density functional theory (for the description of the electronic many-body system on the atomic scale) with classical concepts of thermodynamics, continuum mechanics and statistical physics. The capabilities, but also the (current) limits, of this approach are discussed for the following questions:
· epitaxial growth · self-organization of nanostructures · calculation of complex phase diagrams · simulation/interpretation of experiments (STM, photoemission, ...) · structure of biomolecules
Induction heating and hardening of steels is a frequently used industrial process. An adequate model for simulating the electromagnetic processes is the so-called eddy current model in the frequency domain. Its solution requires the evaluation of several nontrivial BEM operators. For the complicated geometries at hand, compression of these operators is indispensable and is achieved here by means of H² matrix approximation by interpolation. An overview of the numerical method is given, with particular attention to the compression.
The efficient and reliable simulation of frictional contact problems in linear elasticity is still a challenge. The essential difficulty is the non-differentiable nonlinearity caused by the non-penetration condition at the contact boundary. Widely used approaches employ dual techniques (augmented Lagrangian), which lead to a saddle point problem, or a penalty formulation based on regularization techniques. Here, monotone multigrid methods are used as solvers; they treat the nonlinearity in a natural way via energy minimization, without regularization, and degenerate to a linear multigrid method once the contact boundary is known. In the linear elastic case it is necessary to construct special basis functions that allow the body to "slide" along the contact boundary. The essential elements of the method for the frictionless case are presented. The frictional case can then be reduced to the frictionless case with prescribed contact pressures. For elastic contact, a monotone method for multibody contact has been developed in cooperation with B. Wohlmuth (Stuttgart). The information transfer at the interface is realized on the basis of mortar methods. In this way, displacements and boundary stresses can be computed with high accuracy even for nonmatching grids. As in the one-body case, the monotone method is globally convergent and requires no regularization whatsoever. Numerical examples in two and three space dimensions illustrate the efficiency of the methods.
For the solution of mixed boundary value problems of linear elastostatics, boundary integral equations in the symmetric formulation are considered. For the double layer potential and the hypersingular integral operator, equivalent representations obtained by integration by parts are used. To describe the integral operators of elastostatics, only the single layer potential with the Kelvin tensor as fundamental solution and the double layer potential of the Laplace operator are then needed. Starting from the Stokes problem, the robustness of the single layer potential for almost incompressible materials is discussed. For the solution of the discrete systems, parallel preconditioned iterative methods as well as domain decomposition methods for coupled structures are treated. Finally, numerical examples for complex application problems are presented.
In this talk we address some mathematical issues arising from the modelling of blood circulation.
The interest in the use of mathematical modelling and numerical simulation in the study of the cardiovascular system (and its inherent pathologies) has greatly increased in the past few years. Blood flow interacts mechanically and chemically with vessel walls producing a complex fluid-structure interaction problem, which is practically impossible to simulate in its entirety.
Several reduced models have been developed which may give a reasonable approximation of averaged quantities, such as mean flow rate and pressure, in different sections of the cardiovascular system. They are, however, unable to provide the details often needed for understanding local behavior, such as, e.g., the effect on the shear stress distribution due to a modification in the blood flow consequent to a partial vessel stenosis.
We will consider models with different complexity. The most complex model is based on the coupling of the Navier-Stokes equations with structural models for vessel walls. An intermediate model is derived from integrating these equations on a section of a vessel geometry, and it is formed by a system of hyperbolic equations for the evolution of mean pressure and flow rate. An even simpler model that will be considered is based on the solution of a system of ordinary differential equations which describe electrical networks.
The derivation of these models will be presented together with schemes for their numerical solution. Furthermore, we will specifically address the coupling problem.
This heterogeneous approach looks like a viable way to obtain detailed numerical simulations of sections of the cardiovascular apparatus, while properly accounting for the presence of the global system. It is expected that this could open new possibilities for the use of numerical modelling in medical research.
This talk deals with linear and nonlinear ill-posed operator equations and their stabilization by finite-dimensional approximation. After discussing the regularizing effect of projection onto finite dimensional spaces and addressing the question of a posteriori discretization level choice, we consider multigrid methods for such types of problems. Here, due to the adverse eigensystem structure of the forward operator -- small eigenvalues correspond to high frequency eigenfunctions -- appropriate smoothers have to be found. We give theoretical results on level independent contraction factors and V-cycle convergence, which enable us to use the defined multigrid operators in a nested iteration, approaching in a stable way the solution to the original infinite dimensional operator equation. Finally, as an application, the identification of nonlinear B-H curves in magnetics is shown.
It is well known that pseudodifferential equations of negative order, considered in a Sobolev space with a small smoothness index, are ill-posed. On the other hand, it is known that effective discretization schemes with a properly chosen discretization parameter allow one to obtain a regularization effect for such equations. The main accomplishment of the present talk is a principle for the adaptive choice of the discretization parameter directly from noisy discrete data. We argue that the combination of this principle with wavelet-based matrix compression techniques leads to algorithms which are order-optimal in the sense of complexity.
We describe a numerical approach to solve (systems of) weakly singular and/or singular boundary integral equations defined on a piecewise smooth curve $\Gamma = \bigcup \Gamma_i$, i.e. with corners. It is well known that these equations have solutions with a singular behaviour at the corners. In particular, we consider mixed boundary value problems (including Dirichlet and Neumann) for the 2D Laplace equation, which we reformulate as a system of boundary integral equations by using the single layer representation of the potential. Knowing a (smooth) parametrization of each arc $\Gamma_i = \{ a_i (t), t \in I_i \}$, our equations take the form $$c_i z_i(t) + \sum_{j=1}^{M} \int_{I_j} K_{i,j}(s,t)\, z_j(s)\, ds = f_i(t), \quad t \in I_i, \quad i=1,\dots,M,$$ where the constant $c_i$ may also be zero. The functions $z_j(s)$ are smooth in $I_j$, except possibly at the endpoints. To smooth the endpoint singularities of the solutions $z_j$ we introduce a (nonlinear) smoothing transformation $\gamma$. Introducing this change of variable $\gamma$ and the Chebyshev weight function $\omega$ of the first kind (this last device is crucial to take advantage of some mapping properties of the corresponding weighted single layer operator), we proceed to solve numerically a system of the type $$c_i v_i(x) + \sum_{j=1}^{M} \int_{-1}^{1} \overline{K}_{i,j}(x,y)\, v_j(y)\, \omega(y)\, dy = g_i(x), \quad x \in [-1,1], \quad i=1,\dots,M.$$ Now all the new unknowns $v_i$ are smooth functions, and we can approximate each of them by a finite expansion of Chebyshev polynomials of the first kind and apply a collocation method based on the zeros of the Chebyshev polynomials. This procedure is applied to several test problems. Our numerical approach has also been extended to the corresponding 3D problems. In this last case, very competitive cubature formulae, likewise based on smoothing transformations, have been proposed and used to evaluate the surface integrals arising when discretizing boundary integral equations via the collocation method. References: Monegato G., Scuderi L., High order methods for weakly singular integral equations with non smooth input functions, Math. Comp. 67 (224), 1493-1515, 1998; Monegato G., Scuderi L., Global polynomial approximation for Symm's equation on polygons, Numer. Math. 86 (2000), 655-683.
A new approach to algebraic multilevel methods will be presented. The approach is based on the observation that sparse approximate inverses typically approximate quite well on a large subspace, but they do not perform that well on the whole space. A natural consequence is to augment the sparse approximate inverse with a correction term of smaller rank. This approach leads to an algebraic multilevel method. We will present techniques for the construction of the correction term. An essential part will be an approximate QR decomposition which is used to detect certain columns of the residual matrix. These columns will be used for the correction term.
A new approach to the numerical solution of optimal control problems including control and state constraints is presented. Like hybrid methods, the approach aims at combining the advantages of direct and indirect methods. Unlike hybrid methods, however, our method is directly based on interior-point concepts in function space -- realized via an adaptive multilevel scheme applied to the complementarity formulation and to numerical continuation along the central path. Existence of the central path and its continuation towards the solution point is analyzed in some theoretical detail. An adaptive stepsize control with respect to the duality gap parameter is worked out in the framework of affine invariant inexact Newton methods. Finally, the performance of a prototype of our algorithm is documented by the successful treatment of the well-known intricate windshear problem treated by Bulirsch et al.
New applications of scientific computing in architecture and medicine require a switch from engineering CAD to Virtual Reality (VR) data structures. In the latter, much emphasis is on Constructive Solid Geometry (CSG), and realistic rendering is more important than precision. We are currently investigating a new family of methods for solving partial differential equations which are potentially capable of using CSG and VR. These are based on the fictitious domain embedding method, Chimera and domain decompositions. Convergence analysis and numerical tests will be presented.
Variational boundary integral equations for Maxwell's equations on Lipschitz surfaces in R³ are derived and their well-posedness in the appropriate trace spaces is established. An equivalent, stable mixed reformulation of the system of integral equations is obtained which admits discretization by Galerkin boundary elements based on standard spaces. On polyhedral surfaces, quasioptimal asymptotic convergence of these Galerkin boundary element methods is proved. A sharp regularity result for the surface multipliers on polyhedral boundaries with plane faces is established.
Imaging techniques and simulations increasingly produce very large data sets of scalar 3D volume data. Examples are magnetic resonance tomography and the results of meteorological weather forecast models. Visualizing such volume data requires efficient methods for handling large image data and for extracting the information selected for visualization. We discuss three such methods that we have recently developed in Leipzig. 1. Fast local compression schemes for random access: when visualizing volume slices, isosurfaces or subvolumes of very large volume data, data compression is required while fast access to arbitrary parts of the volume must be guaranteed. Our block-based wavelet coder yields results that lie 6-12 dB above the current state of the art in PSNR. 2. Optimal extraction of isosurfaces: isosurfaces are generated either by naive enumeration of the volume or with elaborate auxiliary data structures. The first method is slow, the second often requires more memory than is available. Our hybrid method works with exactly the prescribed memory budget and guarantees that it is used in a mathematically optimal way to accelerate the extraction. 3. Efficient coding of polygonal isosurfaces: the coding of polygonal 3D models is an active research area in computer graphics. The known methods, however, are not efficient in the case of isosurfaces. We show that the special structure of isosurfaces can be exploited to obtain ten to twenty times the compression performance of the state-of-the-art (MPEG-4) IBM coder.
Time-harmonic Maxwell equations, although being a very simple first-order system, lack some of the standard ellipticity properties that are used in finite element analysis. As a consequence, straightforward finite element approximations on nonsmooth domains may converge to a limit different from the exact solution of the problem. In the talk, I shall discuss the reasons for this, give numerical examples, and describe recent progress in overcoming this problem by using a modified variational formulation called "weighted regularization".
Spectral methods are a powerful tool for the high-accuracy numerical solution of ODEs and PDEs in simple domains. Sometimes this technology seems complicated, but it is amazing what can be accomplished in a few inches of Matlab based on the simple idea of a Chebyshev collocation derivative. This talk will be an online tour of the forty short programs from my book recently published by SIAM, "Spectral methods in Matlab". Along the way we will solve, always to many digits of precision, problems including the Laplace, Poisson, biharmonic, wave, Mathieu, Orr-Sommerfeld, Airy, Schrodinger, Helmholtz, KdV, and Allen-Cahn equations. The codes are available on the Web, and the longest is 35 lines long.
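In the same spirit, and purely as a sketch (the book's forty programs are written in Matlab; the following is an unofficial Python transcription of the standard Chebyshev differentiation matrix construction), a collocation derivative fits in a few lines:

import numpy as np

def cheb(N):
    # Chebyshev points and the corresponding collocation differentiation matrix.
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))    # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                             # diagonal via negative row sums
    return D, x

D, x = cheb(20)
u = np.exp(x) * np.sin(5 * x)
err = D @ u - np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print(f"max derivative error = {np.max(np.abs(err)):.2e}")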
We consider the problem of solving the discrete Maxwell eigenvalue problem $\mathrm{curl}\, \frac{1}{\mu}\, \mathrm{curl}\, u = \omega^2 u$, $\omega > 0$, in a closed simply connected cavity $\Omega \subset \mathbb{R}^3$. For related eigenvalue problems for symmetric second order elliptic operators, efficient iterative schemes for the computation of a couple of the smallest eigenvalues/eigenvectors have been proposed [1]. They are based on a preconditioned inverse iteration, and a comprehensive analysis has been presented in [3]. In the case of the Maxwell eigenvalue problem the large kernel of the $\mathrm{curl}$-operator thwarts the straightforward application of these algorithms. However, when the discretization is based on edge elements, we have an explicit representation of $\ker(\mathrm{curl})$ through gradients of linear finite element functions. This paves the way for a fast approximate projection onto $\ker(\mathrm{curl})^\bot$, which can be coupled with the edge element multigrid scheme developed by the author [2]. Numerical experiments confirm the good performance of this approach for large scale problems. References: [1] J. Bramble, A. Knyazev, and J. Pasciak, A subspace preconditioning algorithm for eigenvector/eigenvalue computation, Advances Comp. Math., 6 (1996), pp. 159-189. [2] R. Hiptmair, Multigrid method for Maxwell's equations, SIAM J. Numer. Anal., 36 (1999), pp. 204-225. [3] K. Neymeyr, A geometric theory for preconditioned inverse iteration applied to a subspace, Tech. Rep. 130, SFB 382, Universität Tübingen, Tübingen, Germany, November 1999. Submitted to Math. Comp.
The long-range electronic correlations in a uniform electron gas may be deduced from the random phase approximation (RPA) of Bohm and Pines. We generalize the RPA to nonuniform systems and use it to derive many-electron Slater-Jastrow wave functions. The RPA theory fixes the long-range behavior of the inhomogeneous two-body term in the Jastrow factor and provides an accurate analytic expression for the one-body term. We test the formalism for a strongly inhomogeneous electron gas using variational Monte Carlo simulations.
Because of its closed inner electron shells, the weakly bound mercury dimer is commonly ascribed rare-gas behaviour. Recently, however, elaborate ab-initio calculations showed that, in addition to the dispersion interaction, a relatively large covalent contribution has to be taken into account. The theoretical description of the mercury dimer proves difficult, since the molecule is very weakly bound, yet contains 160 electrons whose description must moreover include relativistic corrections ...
For theorists to be able to use the mercury dimer as a quality test for their programs, they need precise experimental reference data.
In this talk, high-resolution mercury vapour measurements are presented, and the resulting potential as well as the anisotropy of the polarizability are presented and discussed.
Broadly speaking, nonlinear approximation theory in linear spaces deals with approximation from nonlinear subsets or by approximation methods not represented by linear operators. While approximation by rational functions, exponential sums, and splines with free knots are by now classical themes, the more recent concept of nonlinear n-term approximation from dictionaries is general enough to directly relate to modern adaptive computational methods and optimal design methods in various application fields and, at the same time, specific enough to allow for some mathematical treatment.
The talk will introduce the basic definitions, give a series of examples (dimension reduction in computational models, coding, wavelet compression), and, time permitting, report some recent results on the worst-case behavior of Greedy algorithms associated with expansions in Banach spaces.
Models for phase transition problems (e.g., the freezing of liquids) can lead to strongly nonlinear degenerate parabolic equations (such as the Stefan problem) or to coupled systems with a time-dependent obstacle problem (phase-field equations).
A posteriori error estimators for such problems and the essential techniques for treating the nonlinearities will be presented, together with numerical results from adaptive finite element simulations.
The talk reports on joint work with Z. Chen (Beijing), R.H. Nochetto (College Park) and C. Verdi (Milano).
Structure in a matrix means, in the first place, that all entries are expressed through a few parameters. Examples are matrices with certain sparsity patterns, low-rank matrices, Toeplitz matrices, and many others. However, if only the entries are given, an expected structure might not be evident and may need to be recovered.
In many situations, one may expect that a matrix A can be represented in the form A=T+R+E with E small in norm, R small in rank (or mosaic rank), and T belonging to some class of structured matrices (for example, Toeplitz matrices, circulants, an algebra of simultaneously diagonalizable matrices). However, since only A is given, there arises a problem of recovering T and R with a prescribed bound on the norm of E. We propose a sufficiently robust algorithm for this kind of problem.
More generally, T and R may belong to some classes of differently parametrized matrices assumed to approximate any matrix with a given accuracy, and now the problem is one of selecting those T and R that minimize the total number of parameters within the same approximation accuracy. Structures related to block mosaic and hierarchical partitioning of matrices are certainly of particular interest for BEM and FEM applications.
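A crude way to visualize the splitting A = T + R + E, not the algorithm of the talk, is the following alternating sketch for the Toeplitz case: the Toeplitz part is obtained by averaging along diagonals (the nearest Toeplitz matrix in the Frobenius norm), the low-rank part by a truncated SVD of the remainder; all names are illustrative.

    import numpy as np

    def toeplitz_projection(A):
        """Nearest Toeplitz matrix in the Frobenius norm: average each diagonal."""
        n, m = A.shape
        i, j = np.indices(A.shape)
        T = np.zeros_like(A, dtype=float)
        for d in range(-(m - 1), n):           # d = i - j labels the diagonals
            mask = (i - j) == d
            T[mask] = A[mask].mean()
        return T

    def truncated_svd(A, rank):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U[:, :rank] * s[:rank]) @ Vt[:rank]

    def split_toeplitz_plus_lowrank(A, rank, sweeps=10):
        """Alternate between the two projections: A ~ T + R, remainder E = A - T - R."""
        R = np.zeros_like(A, dtype=float)
        for _ in range(sweeps):
            T = toeplitz_projection(A - R)
            R = truncated_svd(A - T, rank)
        return T, R, A - T - R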
In process engineering, disperse systems are multiphase systems in which at least one phase is present in dispersed form, as individual entities within a continuous phase. These entities are described by characteristic properties. Between the phases, as well as within the disperse phase, various phenomena occur, such as growth, nucleation, agglomeration, and dispersion.
Disperse systems constitute an important class of processes in many technically relevant unit operations such as polymerization, crystallization, extraction, and granulation.
In recent years, population balance modelling has become an established method for describing disperse systems. It allows a wide variety of disperse processes in process engineering to be described, with the resulting mathematical model based on a unified equation structure. This structure is presented in the talk using a model of crystallization processes as an example. Besides the derivation of the model equations, an overview is given of the practical use of such models in the context of control or process analysis.
In this talk we describe and analyse a class of spectral methods, based on spherical harmonic approximation, for second-kind weakly singular boundary integral equations arising from the Helmholtz equation on smooth closed 3D surfaces which are diffeomorphic to the sphere. Our methods are fully discrete Galerkin methods, based on the application of special quadrature rules for computing both the outer and inner integrals arising in the Galerkin matrix elements. For the inner integrals, a variant of the classical product integration procedure is employed to remove the singularity arising in the kernel. With the aid of recent results of Sloan and Womersley on the norm of discrete orthogonal projection operators on the sphere, we prove that our methods are stable and superalgebraically convergent for smooth data.
Our method, although suitable only for a restricted range of 3D surfaces, has applications in the iterative solution of certain inverse problems and when artificial boundaries are employed in the handling of inhomogeneous exterior problems. Our theory includes as a special case a method closely related to one of those proposed by Wienert (1990), which is one of the key fast spectral methods for direct and inverse acoustic scattering in 3D.
The following topics are treated: mortar elements as a mixed method, inside and outside Brezzi's theory; grid transfer in the cascadic method, with little loss even in the nonconforming case; in CG on subspaces, Lagrange multipliers are iterated differently from the direct variables.
Exponential integrators have proven effective for solving certain classes of stiff or oscillatory differential equations. They have the property that linear problems are integrated exactly.
In a recent paper by Munthe-Kaas [2], Lie group methods based on explicit Runge-Kutta schemes are presented which also lead to exponential integrators. The construction proposed there can be applied without the Lie group background. The idea is to perform a suitable change of coordinates and to integrate the transformed equation numerically with an explicit scheme, for instance a Runge-Kutta method. This is, of course, only sensible if the transformed differential equation is no longer stiff or oscillatory. In the talk we show how such transformations can be obtained. The approach is attractive because a well-established explicit Runge-Kutta method (such as the scheme of Dormand and Prince) can be turned, in a simple way, into an exponential integrator of at least the same order.
In the talk, further properties, implementation aspects, and the construction of higher-order methods for differential-algebraic equations are discussed. In addition, comparisons with the class of exponential integrators from [1] are carried out.
References: M. HOCHBRUCK, CH. LUBICH, AND H. SELHOFER, Exponential integrators for large systems of differential equations, SIAM J. Sci. Comp. 19 (1998), 1552-1574. H. MUNTHE-KAAS, High order Runge-Kutta methods on manifolds, Appl. Numer. Math. 29 (1999), 115-127.
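To make the transformation idea concrete, consider the semilinear problem y' = Ay + g(y): the substitution y(t_n + s) = e^{sA} w(s) removes the stiff linear part, and the equation for w can be advanced by any explicit Runge-Kutta method. The sketch below uses the classical fourth-order scheme rather than Dormand-Prince, purely for brevity, and illustrates only the general integrating-factor construction, not the specific transformations of the talk.

    import numpy as np
    from scipy.linalg import expm

    def integrating_factor_rk4_step(A, g, y, h):
        """One step for y' = A y + g(y).  With y(t_n + s) = e^{s A} w(s) the
        transformed equation w' = e^{-s A} g(e^{s A} w) has no stiff linear part
        and is integrated by classical RK4; afterwards we transform back."""
        f = lambda s, w: expm(-s * A) @ g(expm(s * A) @ w)
        k1 = f(0.0, y)                         # w(0) = y_n
        k2 = f(0.5 * h, y + 0.5 * h * k1)
        k3 = f(0.5 * h, y + 0.5 * h * k2)
        k4 = f(h, y + h * k3)
        w_new = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return expm(h * A) @ w_new             # y_{n+1} = e^{h A} w(h)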
The mesoscopic modelling of particle flows by means of the Boltzmann equation is an important tool in a number of application problems. By contrast, the discretization and numerical solution of the Boltzmann equation remains, even today, an ambitious and largely unattained goal. The talk presents the latest results and discusses their application to stationary boundary value problems.
The system matrices arising in the boundary element method are large and fully populated. Under the assumption that these matrices are generated by an asymptotically smooth function, one can show that many blocks have a small ε-rank. To this end, however, the matrices must first be suitably partitioned into such blocks.
Low-rank matrices can be stored in compressed form and can, moreover, be multiplied efficiently with a vector. The best rank-k approximation to a matrix is the sum over the k largest singular triples. The partial singular value decomposition required for this needs every entry of the matrix, which would ultimately lead to a cost of O(N²).
In the talk, an easy-to-implement algorithm is presented that requires only a few entries of the matrix to be approximated. Error estimates are given for the low-rank approximation generated to accuracy ε, leading to a total cost of O(N^{1+α}ε^{-α}) with α > 0 arbitrarily small. Since the method is iterative, a stopping criterion can be formulated.
At the end of the talk, some numerical examples and comparisons with other methods are shown.
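The algorithm of the talk is not reproduced here, but a partially pivoted cross approximation conveys the basic mechanism of building a low-rank approximation from a few rows and columns only; entry(i, j) is a placeholder for whatever routine generates individual matrix entries.

    import numpy as np

    def cross_approximation(entry, n, m, tol=1e-6, max_rank=50):
        """Build U, V with A ~ U @ V from a few rows/columns of the implicitly
        given matrix A[i, j] = entry(i, j) (ACA-type partial pivoting)."""
        U, V = [], []
        i = 0                                          # first row pivot
        for _ in range(max_rank):
            row = np.array([entry(i, j) for j in range(m)], dtype=float)
            row -= sum(u[i] * v for u, v in zip(U, V)) # residual of row i
            j = int(np.argmax(np.abs(row)))            # column pivot
            if abs(row[j]) < tol:                      # stopping criterion
                break
            col = np.array([entry(k, j) for k in range(n)], dtype=float)
            col -= sum(u * v[j] for u, v in zip(U, V)) # residual of column j
            U.append(col / row[j])                     # rank-one update of the approximation
            V.append(row)
            u_abs = np.abs(U[-1]); u_abs[i] = -1.0
            i = int(np.argmax(u_abs))                  # next row pivot
        if not U:
            return np.zeros((n, 1)), np.zeros((1, m))
        return np.column_stack(U), np.vstack(V)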
Several models from computational mechanics lead to parameter dependent problems of the form $$\left( A+\frac{1}{\epsilon} B \right) u=f$$ with an elliptic operator $A$, an operator $B$ with non-trivial kernel, and a small, positive parameter $\epsilon$. We are interested in the construction and analysis of robust multigrid methods for the solution of the arising symmetric and positive definite linear systems. Specific examples considered in this talk are nearly incompressible materials and the Reissner-Mindlin plate model. We briefly present robust non-conforming discretization schemes equivalent to corresponding mixed finite element methods. We explain the necessary multigrid components, namely a block smoother covering the basis functions of the kernel of $B$, and prolongation operators mapping coarse-grid kernel functions to fine-grid kernel functions. We present the analysis of the two-level algorithm based on the Additive Schwarz Technique. Optimal and robust convergence rates of the multigrid W-cycle and variable V-cycle algorithms are proved by means of the smoothing property and the approximation property; the norms used must depend in a proper way on the small parameter ε. Numerical experiments indicate optimal convergence rates also for the V-cycle algorithm for both problems.
In this talk, we discuss a general framework for the formulation and implementation of parallel iterative solvers. The parallelism is based on distributing the initial data, e.g., the elements of a mesh, which induces a distribution of derived objects, e.g., matrices. In this way, most sequential iterative algorithms can be formulated and implemented in parallel very naturally, with minimal communication overhead and hence high parallel efficiency. Moreover, operator overloading in C++ allows identical code sequences, e.g., for the CG solver, to be used for both the sequential and the parallel version. Numerical results obtained on an SGI Origin 2000 with 60 processors are presented.
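The reuse of identical code for the sequential and parallel versions can be indicated by a sketch (given here in Python for brevity rather than C++): the conjugate gradient loop below touches the vectors only through +, -, scalar * and a supplied inner product, so a hypothetical distributed vector class providing the same operations, with the global reduction hidden inside dot, would run through the very same loop.

    def cg(A, b, x, dot, tol=1e-10, maxit=1000):
        """Generic conjugate gradient loop: A provides a matrix-vector product via
        '@', vectors provide '+', '-' and scalar '*', and 'dot' is the inner
        product (a global reduction in the distributed case)."""
        r = b - A @ x
        p = r
        rho = dot(r, r)
        for _ in range(maxit):
            if rho ** 0.5 < tol:
                break
            q = A @ p
            alpha = rho / dot(p, q)
            x = x + alpha * p
            r = r - alpha * q
            rho_new = dot(r, r)
            p = r + (rho_new / rho) * p
            rho = rho_new
        return x

With numpy arrays one simply passes dot=numpy.dot; a distributed vector type would supply its own reduction instead.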
In the first part we give a short introduction to hierarchical matrices (H-matrices), covering their construction and arithmetic. The second part is devoted to the approximation of operators in the H-matrix format. The third part discusses possible adaptive arithmetic, and finally we present numerical results for some elliptic operators.
We present a unifying framework with strongly positive, strongly P-positive operators and the Cayley transform in Banach spaces in order to construct explicit representations of solutions of operator (in particular, differential operator) equations. Such representations are the basis for various new approximations with high (spectral) accuracy. The new concept of P-positive operators appears to be crucial in the effective treatment of the second order differential operator equations and their numerical approximations.
We show how classical evolutionary partial differential equations (e.g. the wave and the Poisson-Darboux equations as well as equations describing linear viscous flows), evolutionary equations with singular integral operator coefficients (describing linear sloshing), as well as Lyapunov and Sylvester equations can be treated as special cases of our general theory.
Model-based control and optimization, which has hitherto suffered from a severe lack of available models, progresses dramatically with Structured Hybrid Models (SHMs). Combining rigorous and black-box models with a knowledge-based flowsheet, SHMs are extrapolable if the flowsheet depicts the real process structure, if the dimension of the data base can be reduced, and if the model can be identified using overall process data.
Mathematical considerations show that, even for complex flowsheets, SHM extrapolability depends only on the complexity of the constituent black boxes. Improving the flowsheet structure enhances extrapolability and reduces the data demand.
SHMs exhibit significant practical advantages over traditional techniques and are open to new fields of application.
The Gaussians belong to a class of functions which are smooth, decay rapidly, and have the property that the application of the integral operator to them can be determined analytically. But no linear combination of these functions reproduces polynomials, so that the approximations do not converge, in general. However, one can control the basis functions such that for any prescribed accuracy there exist simple approximants providing high-order approximation rates up to this accuracy. This concept of approximate approximations will be discussed for approximations on uniform and nonuniform grids. Applications to the derivation of semi-analytic cubature formulas for multivariate integral operators and to the solution of partial differential problems are given.
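A small numerical illustration of this behaviour (a sketch under the usual one-dimensional model, not the speaker's formulas) is the Gaussian quasi-interpolant M_h f(x) = (πD)^{-1/2} Σ_m f(mh) exp(-(x - mh)²/(D h²)): for fixed D the error does not tend to zero as h → 0 but saturates, and the saturation level can be pushed below any prescribed tolerance by enlarging D.

    import numpy as np

    def gaussian_quasi_interpolant(f, h, D, x):
        """Approximate approximation in 1D with Gaussians on the uniform grid h*Z;
        the sum is truncated to nodes whose contribution is negligible."""
        x = np.atleast_1d(x).astype(float)
        width = h * np.sqrt(40.0 * D)                  # cutoff radius of the kernel
        m_lo = int(np.floor((x.min() - width) / h))
        m_hi = int(np.ceil((x.max() + width) / h))
        result = np.zeros_like(x)
        for xm in h * np.arange(m_lo, m_hi + 1):
            result += f(xm) * np.exp(-(x - xm) ** 2 / (D * h ** 2))
        return result / np.sqrt(np.pi * D)

    # for small D the error stops decreasing as h is refined (saturation);
    # a larger D lowers the saturation level at the price of a wider kernel
    f = lambda t: np.sin(np.pi * t)
    x = np.linspace(0.3, 0.7, 101)
    for D in (0.5, 2.0):
        errs = [np.max(np.abs(f(x) - gaussian_quasi_interpolant(f, h, D, x)))
                for h in (0.2, 0.1, 0.05, 0.025)]
        print(D, errs)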
The software system ALLTYPES provides an environment that is particularly designed for developing computer algebra software in the realm of differential equations, e.g. for solving linear and non-linear ordinary differential equations in closed form. The most important features of ALLTYPES may be described as follows. A set of about fifty parametrized algebraic types is defined. Data objects represented by these types may be manipulated by methods that are attached to more than one hundred polymorphic functions. Reusability of code is achieved by genericity and inheritance. The user may extend the system by defining new types and polymorphic functions. A language comprising seven basic language constructs is defined for implementing mathematical algorithms. The easy manipulation of types is particularly supported by ALLTYPES; to this end a special portion of the language is dedicated to manipulating typed objects, i.e. user-defined or automatic type coercions. Type inquiries are also included in the language. A small amount of parallelism is supported in terms of two language constructs pand and por, where the letter p indicates a parallel version of the respective logical function. Currently ALLTYPES is implemented in Reduce; a Macsyma version is to be completed soon. It is the explicit goal that software implemented on top of ALLTYPES should work independently of the underlying computer algebra language.
Many electrical devices like resonators or cables exhibit rotational symmetry. When simulating these devices, it is desirable to exploit the symmetry in order to reduce the computational effort. The standard approach is the transformation to suitable coordinates and the subsequent reduction to a two-dimensional problem.
The change of coordinates causes the coefficients of the two-dimensional problem to become singular, so standard multigrid techniques do not provide level-independent rates of convergence. To overcome this, block smoothing procedures and semicoarsening techniques can be used.
For rectangular regions, we can prove convergence estimates that do not depend on the level of the finest grid or on the behaviour of the coefficients in one direction. In the proof, only estimates for one-dimensional Helmholtz problems are needed.
Numerical examples demonstrate that the algorithms work for non-rectangular regions, too.
We consider the radial return algorithm which is commonly used in computational plasticity. We show that this method solves the primal and the dual problem in plasticity. Applying standard results in convex analysis, we show that criteria for the existence, the uniqueness, and for the convergence of discrete approximations can be easily derived.
This is applied to static and quasi-static plasticity. Using the transformation method of internal variables introduced by Alber, we show that the method can be analysed in the same way in case of nonlinear hardening.
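For readers unfamiliar with the scheme, the following sketch shows the textbook radial return update for small-strain von Mises plasticity with linear isotropic hardening; the material parameters and function names are generic, and the sketch does not reproduce the transformation of internal variables used in the analysis.

    import numpy as np

    def radial_return(eps, eps_p_old, alpha_old, mu, kappa, sigma_y, H):
        """One return-mapping update: total strain eps (3x3), previous plastic
        strain eps_p_old, previous hardening variable alpha_old; mu, kappa are
        the shear and bulk moduli, sigma_y the yield stress, H the hardening
        modulus.  Returns (stress, new plastic strain, new hardening variable)."""
        I = np.eye(3)
        eps_e = eps - eps_p_old                             # trial elastic strain
        vol = np.trace(eps_e)
        s_trial = 2.0 * mu * (eps_e - vol / 3.0 * I)        # trial deviatoric stress
        norm_s = np.linalg.norm(s_trial)
        f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + H * alpha_old)
        if f_trial <= 0.0:                                  # elastic step: trial state admissible
            return kappa * vol * I + s_trial, eps_p_old, alpha_old
        dgamma = f_trial / (2.0 * mu + 2.0 / 3.0 * H)       # plastic multiplier
        n = s_trial / norm_s                                # radial direction
        s = s_trial - 2.0 * mu * dgamma * n                 # return to the yield surface
        return (kappa * vol * I + s,
                eps_p_old + dgamma * n,
                alpha_old + np.sqrt(2.0 / 3.0) * dgamma)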
The hp-version of the finite element method (hp-FEM) is applied to a singularly perturbed reaction-diffusion model problem of the form -ε²Δu + u = f in curvilinear polygonal domains Ω. For piecewise analytic input data it is shown that the hp-FEM can lead to robust exponential convergence on appropriately designed meshes, i.e., the rate of convergence is exponential and does not deteriorate as the perturbation parameter ε tends to zero. These meshes consist of one layer of needle elements of width O(pε) at the boundary (here, p is the polynomial degree employed) and geometric refinement toward the vertices of the domain.
The key ingredients of the convergence analysis will be presented. It is based on new, sharp analytic regularity results for the solution u in which the critical parameters (ε, distance to the boundary ∂Ω, distance to the vertices of Ω) enter explicitly.
Numerical examples will illustrate the robust exponential convergence of the hp-FEM. Additionally, the effect of numerical quadrature is considered, particularly for quadrature rules with a 'minimal' number of points as used in spectral methods.
Recently, biorthogonal wavelet bases have been constructed to handle a wide variety of surfaces, domains and boundary conditions. I want to talk about two applications of these basis functions.
In the first part I want to give a survey of matrix compression for numerical treatment of boundary integral operators.
In the second part I want to present wavelet least squares formulations based on norm equivalences in negative Sobolev norms.
Roughly speaking, one can distinguish between two different approaches for the approximate solution of the inverse obstacle scattering problem for time-harmonic waves. In a first group of methods, the inverse obstacle problem is separated into a linear ill-posed part for the reconstruction of the scattered wave from its far field pattern and a nonlinear well-posed part for finding the location of the boundary of the scatterer from the boundary condition for the total field. In a second group of methods, the inverse obstacle problem is either considered as an ill-posed nonlinear operator equation or reformulated as a nonlinear optimization problem.
We will present and review some mathematical foundations for methods of the second group such as regularized Newton iterations. This will be done both for the basic problem to recover the scatterer from the far field pattern for one incident direction and all observation directions and for modifications with reduced data, i.e., reconstructions from limited-aperture observations, reconstructions from the amplitude of the far field, or reconstructions from backscattering data.
In this talk, we analyze multilevel algorithms without smoothing on the whole finite element space. Instead, a subspace correction on a complementary space is used; here, the fine-grid space is the direct sum of the coarse-grid space and the complementary space. The convergence rate of such a multilevel algorithm depends on the constant in the strengthened Cauchy-Schwarz inequality between the coarse-grid space and the complementary space. Small constants imply a fast multilevel algorithm. Such constants can be obtained if the complementary space is spanned by prewavelets or generalized prewavelets. Then, semi-coarsening and line relaxation lead to a fast and robust multilevel algorithm for a certain class of anisotropic elliptic equations and convection-diffusion equations. To obtain robustness with respect to discontinuities in the x- and y-direction, one has to apply problem-dependent generalized prewavelets. We explain how to construct such functions and present some numerical results.
Parameter Identification and Design Optimization require the differentiation of residual functionals with respect to very large numbers of variables. These objective functions are often evaluated by extensive simulation codes involving numerical iterations or integrations over many (pseudo-) time steps. The main difficulty in computing the objective gradient by the otherwise highly efficient adjoint method is the reversal of the forward simulation within a reasonable amount of internal and external memory.
The execution of a general program generates a tree of run-time subroutine calls. Each subroutine may be reversed in one of two modes, split or joint. The spatial and temporal complexity of the resulting reversal schedules is determined by the height, width, and other topological properties of the calling tree. We present reversal strategies that are provably optimal on single and multiple processor machines. This result applies only to certain calling tree structures, but all other trees can be rewritten into a suitable form. The size of reversible problems grows exponentially in the available resources: memory, run time, and number of processors.
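The optimal schedules themselves are not reproduced here, but the following recursive sketch shows the basic memory/recomputation trade-off of program reversal: a forward recursion of n steps is reversed with only O(log n) stored states by re-running the forward sweep from a midpoint checkpoint; step and adjoint_step stand for the primal and adjoint time steps of a hypothetical simulation.

    def reverse_sweep(step, adjoint_step, state, i, j, bar):
        """Reverse steps i, ..., j-1 of the forward recursion x_{k+1} = step(x_k, k),
        given the state x_i and the adjoint value 'bar' arriving at x_j.
        Memory grows like log(j - i), at the price of repeated forward sweeps."""
        if j - i == 1:
            return adjoint_step(state, i, bar)   # one step: reverse it directly
        m = (i + j) // 2
        mid = state
        for k in range(i, m):                    # recompute the midpoint checkpoint
            mid = step(mid, k)
        bar = reverse_sweep(step, adjoint_step, mid, m, j, bar)   # second half first
        return reverse_sweep(step, adjoint_step, state, i, m, bar)

    # usage: bar_x0 = reverse_sweep(step, adjoint_step, x0, 0, nsteps, bar_xN)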
Domain decomposition methods are a class of iterative methods for the numerical solution of boundary value and initial-boundary value problems for partial differential equations and for systems of such equations. The underlying idea is a decomposition of the computational domain into subdomains and a discretization of the subdomain problems using finite elements or other discretization techniques, with particular attention paid to the coupling of the subdomain problems across the subdomain boundaries. From an algorithmic point of view, these methods are of particular interest in that they lend themselves to implementation on computers with parallel architecture.
We consider domain decomposition methods for nonoverlapping, geometrically conforming decompositions of the computational domain, using individual finite element discretizations of the subdomain problems regardless of the situation on the subdomain boundaries, so that typically a globally nonconforming approximation results. To ensure consistency, weak continuity conditions must be realized on the subdomain boundaries by means of suitably chosen Lagrange multipliers.
The focus of the presentation lies on the construction of such multipliers for elliptic boundary value problems in two and three space dimensions, on the iterative solution of the resulting saddle point problems using multilevel structures, and on the realization of adaptive mesh refinement based on efficient and reliable a posteriori error estimators. We consider the numerical solution of boundary value problems for second-order elliptic differential operators discretized by Lagrangian finite elements, and of boundary value problems arising in the computation of quasi-stationary electromagnetic fields using curl-conforming edge elements.
Numerical results illustrate both the asymptotic optimality and parallel efficiency of the solution methods and the effectiveness of the error estimators employed.
The boundary integral method is a common component in solution strategies for elliptic (and sometimes parabolic) PDEs, especially when classical PDEs have to be solved on domains with complicated and/or infinite boundaries. For more general nonlinear problems the boundary integral method can often be coupled with standard finite elements as part of an efficient composite solution strategy. The main advantage of the boundary integral equation approach is that one has only to perform discretisation over the 2D boundary of a 3D domain, whereas the main disadvantage is that the resulting stiffness matrix is full and contains (singular) surface integrals. The latter fact poses problems both for matrix setup and system solution. In recent years much progress has been made on the derivation of fast iterative methods for solving the system, but less has been done on the computation of the matrix itself. In many modern codes the formation of the stiffness matrix can form a dominant part of the computation time. For N degrees of freedom the cost of assembling the complete matrix is CN², with C moderately large and depending on the number of kernel evaluations used by the integration routine.
In this work we describe a new fast quadrature method for approximating the stiffness matrix which reduces this cost. For example, in the case of continuous piecewise linear basis functions on a mesh with N nodes, the new method computes the stiffness matrix using N² + O(N) kernel evaluations. The N² term comes from the fact that most of the matrix is computed using novel quadrature rules which use kernel evaluations only at pairs of nodes, with the lower-order term coming from conventional rules applied to the small remaining part of the matrix. The computed (approximate) stiffness matrix is guaranteed to be sufficiently accurate so as to preserve the known theoretical convergence properties of the numerical method.
In this talk we will describe the essential theoretical ideas behind this method allowing us to prove a number of theorems governing its stability, convergence and complexity. The analytical tools required include novel inverse estimates for finite element functions on general mesh sequences and the theory of polynomial interpolation at arbitrary points in the plane.
We also describe the practical implementation of the method and illustrate this with examples involving several boundary integral equations on a selection of smooth and non-smooth surfaces. The numerical experiments show that the theoretical predictions are realised even in the case of the strongly anisotropic meshes commonly used for handling edge singularities in 3D.
If large differences in scales occur in a problem modelled by differential equations, this generally leads to difficulties in numerical algorithms approximating its solutions. These difficulties are generally referred to as the occurrence of stiffness.
Systems of hyperbolic conservation laws may involve widely varying characteristic speeds. They may also be coupled to other physical mechanisms that introduce further scales. A typical case are reactive flows involving stiff source terms. The talk will focus on issues such as wrong shock speeds, unphysical states, large time steps, and solution adaptivity.
The efficient numerical modelling of laminated composites has received increasing attention in recent years - examples are sandwich plates and shells, fiber-reinforced composites and the like.
While the global response to external loadings can be reliably assessed on the basis of averaged, or homogenized, models, interlamina stresses which are critical for specimen failure can only be obtained by resolution of the small scales of the problem.
This requirement of scale resolution and the low solution regularity at the matrix-composite interface seem to conflict with the use of high-order FE approaches, which are commonly based on large elements with polynomial shape functions of high order.
We present a new approach which is able to exploit the microstructure of the material. We prove that this approach realizes exponential convergence independently of ε. The approach generalizes the classical asymptotic approach and is based on a spectral homogenization technique.
Numerical results confirm the theoretical analysis.
The work is joint with A.M. Matache and I. Babuska.
The lecture will present new results on the computation of lower and upper bounds for the error in the computed data of interest. The bounds are guaranteed, are not asymptotic, and can be made arbitrarily close by cheap iterations. The talk will also mention the Sleipner accident, where insufficiently accurate finite element computations led to the collapse of a drilling platform in the North Sea.