Computing the condition number of tensor decompositions through Tucker compression

  • Nick Dewaele (KU Leuven)
Live Stream


Several computational problems in data science and signal processing can be understood as finding a decomposition of a tensor. In contrast to matrix decompositions, tensor decompositions are often unique under minimal constraints. This property makes it possible to interpret the decomposition uniquely, as is the case for decompositions such as the (symmetric) tensor rank decomposition and the block term decomposition. One caveat, however, is that the decomposition may be highly sensitive to perturbations of the tensor. This sensitivity is measured by the condition number, which determines the precision to which the decomposition can be known. We will discuss an algorithm to compute the condition number of several tensor decompositions. In practical computations, one often represents a tensor in a minimal subspace before computing its decomposition. This dimensionality reduction technique, known as Tucker compression, can dramatically speed up the computation of the decomposition. It is therefore natural to ask whether this technique changes the condition number. In recent work, we showed that the answer is negative. We illustrate how this result can be used to speed up the computation of the condition number by several orders of magnitude.
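As a rough illustration of the compression step mentioned above, the following is a minimal NumPy sketch of Tucker compression via the higher-order SVD (HOSVD): each mode of the tensor is projected onto the leading left singular vectors of the corresponding unfolding, producing a small core tensor on which the subsequent decomposition could be computed. The function names and the choice of HOSVD as the compression method are illustrative, not taken from the talk.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: the given mode becomes the rows of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multi_mode_product(G, matrices):
    # Multiply the tensor G by a matrix along each mode: G x_0 U_0 x_1 U_1 ...
    for mode, U in enumerate(matrices):
        G = np.moveaxis(np.tensordot(U, np.moveaxis(G, mode, 0), axes=1), 0, mode)
    return G

def tucker_compress(T, ranks):
    # HOSVD: one orthonormal factor per mode, from a truncated SVD
    # of that mode's unfolding.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Project T onto the factor subspaces to obtain the small core tensor.
    core = multi_mode_product(T, [U.T for U in factors])
    return core, factors

# Example (hypothetical sizes): a 5 x 6 x 7 tensor of multilinear rank (2, 2, 2)
# compresses exactly to a 2 x 2 x 2 core.
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
A = [rng.standard_normal((n, 2)) for n in (5, 6, 7)]
T = multi_mode_product(G, A)
core, factors = tucker_compress(T, (2, 2, 2))
T_hat = multi_mode_product(core, factors)  # lossless here: T_hat equals T
```

Any further decomposition (e.g. a tensor rank decomposition) can then be computed on the small core instead of the full tensor, which is the source of the speedup discussed in the abstract.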