

Preprint 4/2018
Numerical Tensor Techniques for Multidimensional Convolution Products
Wolfgang Hackbusch
Submission date: 12 January 2018
Pages: 21
Published in: Vietnam Journal of Mathematics 47 (2019) 1, pp. 69-92
DOI number (of the published article): 10.1007/s10013-018-0300-4
MSC-Numbers: 15A69, 15A99, 44A35, 65T99
Keywords and phrases: tensorisation, convolution, tensor representation, hierarchical representation
Download full preprint: PDF (539 kB)
Abstract:
In order to treat high-dimensional problems, one has to find data-sparse representations. Starting with a six-dimensional problem, we first introduce the low-rank approximation of matrices. One purpose is the reduction of the memory requirements; another advantage is that vector operations can now be applied instead of matrix operations. In the problem considered here, the vectors correspond to grid functions defined on a three-dimensional grid. This leads to the next separation: these grid functions are tensors in ℝⁿ ⊗ ℝⁿ ⊗ ℝⁿ and can be represented by the hierarchical tensor format. Typical operations such as the Hadamard product and the convolution are thereby reduced to operations between vectors in ℝⁿ.
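As an illustration of the last point, here is a minimal NumPy sketch (not taken from the preprint): for elementary (rank-one) tensors, the Hadamard product and the convolution act factor by factor, so both indeed reduce to operations between vectors in ℝⁿ.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
u = [rng.standard_normal(n) for _ in range(3)]      # factors of u1 ⊗ u2 ⊗ u3
v = [rng.standard_normal(n) for _ in range(3)]      # factors of v1 ⊗ v2 ⊗ v3

def outer3(a, b, c):
    # full n x n x n tensor a ⊗ b ⊗ c (formed here only for checking)
    return np.einsum('i,j,k->ijk', a, b, c)

def conv3_full(A, B):
    # full three-dimensional convolution via zero-padded FFTs (reference computation)
    shape = tuple(sa + sb - 1 for sa, sb in zip(A.shape, B.shape))
    return np.real(np.fft.ifftn(np.fft.fftn(A, shape) * np.fft.fftn(B, shape)))

U, V = outer3(*u), outer3(*v)

# Hadamard product: (u1 ∘ v1) ⊗ (u2 ∘ v2) ⊗ (u3 ∘ v3)
print(np.allclose(U * V, outer3(*(ui * vi for ui, vi in zip(u, v)))))

# Convolution: (u1 * v1) ⊗ (u2 * v2) ⊗ (u3 * v3), each factor a 1D convolution
print(np.allclose(conv3_full(U, V),
                  outer3(*(np.convolve(ui, vi) for ui, vi in zip(u, v)))))
```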
Standard algorithms for operations with vectors in ℝⁿ have a cost of order 𝒪(n) or larger. Tensorisation is a further representation technique that introduces additional data-sparsity. In many cases it reduces the data size from 𝒪(n) to 𝒪(log n). Even more importantly, operations such as the convolution can be performed with a cost corresponding to these reduced data sizes.
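The tensorisation idea can be sketched as follows (a minimal NumPy illustration, not the preprint's algorithm): a vector of length n = 2^L is reshaped into an order-L tensor with mode sizes 2 and compressed by a hierarchical (here: tensor-train) decomposition; for grid functions such as the exponential below, the ranks stay bounded and the data size drops from 𝒪(n) to 𝒪(log n).

```python
import numpy as np

# A minimal sketch of tensorisation (assumed NumPy code): reshape a vector of
# length n = 2^L into an order-L tensor with mode sizes 2 and compute a
# truncated tensor-train (TT) decomposition by successive SVDs.
def tt_decompose(v, eps=1e-12):
    L = int(np.log2(v.size))
    cores, rank = [], 1
    rest = v.reshape(rank * 2, -1)
    for _ in range(L - 1):
        U, s, Vt = np.linalg.svd(rest, full_matrices=False)
        r_new = max(1, int(np.sum(s > eps * s[0])))       # truncated rank
        cores.append(U[:, :r_new].reshape(rank, 2, r_new))
        rest = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * 2, -1)
        rank = r_new
    cores.append(rest.reshape(rank, 2, 1))
    return cores

n = 2 ** 12                                   # n = 4096
v = np.exp(-0.01 * np.arange(n))              # exponential grid function: TT ranks equal 1
cores = tt_decompose(v)
print("data size: full =", n, " tensorised =", sum(c.size for c in cores))

# contract the cores again to check the decomposition
w = cores[0]
for c in cores[1:]:
    w = np.tensordot(w, c, axes=(w.ndim - 1, 0))
print("reconstruction error:", np.linalg.norm(w.reshape(-1) - v))
```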