MiS Preprint
4/2018
Numerical Tensor Techniques for Multidimensional Convolution Products
Wolfgang Hackbusch
Abstract
In order to treat high-dimensional problems, one has to find data-sparse representations. Starting from a six-dimensional problem, we first introduce the low-rank approximation of matrices. One purpose is the reduction of the memory requirement; another advantage is that vector operations can now be applied instead of matrix operations. In the problem under consideration, the vectors correspond to grid functions defined on a three-dimensional grid. This leads to the next separation: these grid functions are tensors in $\mathbb{R}^{n}\otimes\mathbb{R}^{n}\otimes\mathbb{R}^{n}$ and can be represented in the hierarchical tensor format. Typical operations such as the Hadamard product and the convolution are then reduced to operations between vectors in $\mathbb{R}^{n}$.
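The separation idea can be illustrated with a minimal NumPy sketch (not taken from the paper; variable names and the rank-1 setting are illustrative assumptions). It shows a low-rank matrix approximation via truncated SVD, and how the Hadamard product of two elementary tensors $u_1\otimes u_2\otimes u_3$ and $v_1\otimes v_2\otimes v_3$ reduces to three elementwise products of vectors in $\mathbb{R}^{n}$:

```python
import numpy as np

def low_rank(A, r):
    """Best rank-r approximation of A in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # stored as r*(n+m) numbers instead of the full n*m entries
    return U[:, :r] * s[:r] @ Vt[:r, :]

n = 64
rng = np.random.default_rng(0)
u = [rng.random(n) for _ in range(3)]
v = [rng.random(n) for _ in range(3)]

# Hadamard product of elementary tensors:
# (u1 (x) u2 (x) u3) o (v1 (x) v2 (x) v3) = (u1 o v1) (x) (u2 o v2) (x) (u3 o v3),
# i.e. only three vector operations of length n instead of an n^3 operation.
w = [ui * vi for ui, vi in zip(u, v)]

# sanity check against the full n^3 tensors
U3 = np.einsum('i,j,k->ijk', *u)
V3 = np.einsum('i,j,k->ijk', *v)
W3 = np.einsum('i,j,k->ijk', *w)
assert np.allclose(U3 * V3, W3)
```

For general (non-elementary) tensors in the hierarchical format, the same principle applies recursively to the factor vectors; the sketch above only demonstrates the rank-1 building block.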
Standard algorithms for operations with vectors from $\mathbb{R}^{n}$ have a cost of order $\mathcal{O}(n)$ or larger. The tensorisation method is a representation technique that introduces additional data sparsity. In many cases the data size can be reduced from $\mathcal{O}(n)$ to $\mathcal{O}(\log n)$. Even more importantly, operations such as the convolution can be performed at a cost corresponding to these reduced data sizes.
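A minimal sketch of the tensorisation idea (an assumed toy setup, not the paper's algorithm): a vector of length $n = 2^d$ is reshaped into a $d$-fold $2\times 2\times\cdots\times 2$ tensor. For the exponential sample $x_i = q^i$, writing $i = \sum_j b_j 2^j$ in binary gives $q^i = \prod_j (q^{2^j})^{b_j}$, so the reshaped tensor is elementary (rank 1) and is stored with $2d = \mathcal{O}(\log n)$ numbers instead of $n$:

```python
from functools import reduce
import numpy as np

d, q = 10, 0.9
n = 2 ** d
x = q ** np.arange(n)          # dense representation: n numbers

# tensorised representation: d factor vectors of length 2, i.e. 2d numbers
factors = [np.array([1.0, q ** (2 ** j)]) for j in range(d)]

# reconstruct the 2x2x...x2 tensor as an outer product; with C-order
# reshaping the first axis carries the most significant binary digit,
# so the factors are combined from j = d-1 down to j = 0
T = reduce(np.multiply.outer, [factors[j] for j in reversed(range(d))])
assert np.allclose(T, x.reshape([2] * d))
```

In this rank-1 case exactness holds; for general vectors the reshaped tensor is compressed approximately in a hierarchical format, which is what makes the $\mathcal{O}(\log n)$ costs for operations such as the convolution attainable.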