Plenary talk by Pierre Comon (Université de Nice)

Tensor decomposition can reduce to rank-one approximations
A decomposable tensor is defined as the outer product of vectors. Any tensor can be decomposed into a sum of such decomposable tensors, and the minimal number of terms in this sum defines the tensor rank. This decomposition has received several names in the literature, including Canonical decomposition, Polyadic decomposition, and Parafac; we shall refer to it as the CP decomposition. Computing the CP decomposition is generally difficult and requires iterative algorithms whose convergence is not always proved. In the matrix case, it is known that singular triplets can be computed one by one. The reason is that the dominant singular triplet, which coincides with the best rank-one approximation, can be subtracted from the original matrix, yielding another matrix whose rank is decreased by one. This deflation procedure can be repeated until all singular triplets are computed, and essential uniqueness is achieved thanks to the orthogonality among singular vectors. Such a deflation procedure does not apply to tensors of order larger than two, for at least two reasons. First, the tensor rank can exceed the tensor dimensions, and no orthogonality may be assumed. Second, perhaps surprisingly, subtracting a best rank-one approximation does not in general decrease the tensor rank. This makes sense, because a best low-rank approximation does not always exist, for topological reasons. However, it is also shown in this talk that when tensors enjoy an underlying structure, such as banded, Toeplitz or Hankel, the decomposition can be reduced to a finite sequence of best rank-one approximations.
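
As a point of reference (not part of the talk itself), the following NumPy sketch illustrates the matrix deflation described above: the dominant singular triplet is the best rank-one approximation, and subtracting it lowers the rank by exactly one, so repeating the step recovers all singular triplets. Function and variable names are illustrative.

import numpy as np

def deflate_matrix(M, rel_tol=1e-10):
    """Recover the singular triplets of M one by one by repeatedly
    subtracting the best rank-one approximation (deflation).
    Illustrative sketch only; names are not from the talk."""
    residual = M.astype(float).copy()
    norm0 = np.linalg.norm(M)
    triplets = []
    while np.linalg.norm(residual) > rel_tol * norm0:
        # The dominant singular triplet is the best rank-one
        # approximation of the residual (Eckart-Young theorem).
        U, s, Vt = np.linalg.svd(residual, full_matrices=False)
        sigma, u, v = s[0], U[:, 0], Vt[0, :]
        triplets.append((sigma, u, v))
        # Subtracting it decreases the matrix rank by exactly one;
        # this is the step that fails for tensors of order > 2.
        residual -= sigma * np.outer(u, v)
    return triplets

# A rank-3 matrix is fully recovered after three deflation steps.
A = np.random.randn(6, 3) @ np.random.randn(3, 5)
print(len(deflate_matrix(A)))  # prints 3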

Organisers

Lars Grasedyck (MPI Leipzig, Germany)
Wolfgang Hackbusch (MPI Leipzig, Germany)
Boris Khoromskij (MPI Leipzig, Germany)