A decomposable tensor is defined as the outer product of vectors. Any
tensor can be written as a sum of such decomposable tensors, and the minimal
number of terms in this sum defines the tensor rank. This decomposition has
received several names in the literature, including Canonical decomposition,
Polyadic decomposition, and Parafac. We shall refer to it as the CP
decomposition.
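As a minimal numerical illustration of these definitions (the vectors below are arbitrary choices, not from the talk), a decomposable third-order tensor is the outer product of three vectors, and summing two such terms gives a tensor of rank at most two:

```python
import numpy as np

# A decomposable (rank-one) third-order tensor: the outer product of three vectors.
a = np.array([1.0, 2.0])
b = np.array([3.0, 1.0, 0.5])
c = np.array([2.0, -1.0])
rank_one = np.einsum('i,j,k->ijk', a, b, c)   # entry (i,j,k) equals a[i]*b[j]*c[k]

# A sum of two decomposable terms: a CP decomposition with R = 2 terms.
a2, b2, c2 = np.array([0.0, 1.0]), np.array([1.0, 1.0, 1.0]), np.array([1.0, 2.0])
T = rank_one + np.einsum('i,j,k->ijk', a2, b2, c2)
```

Here `T` has rank at most two by construction; determining the exact rank of a given tensor is, in general, the hard problem the abstract alludes to.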
The calculation of the CP decomposition is generally difficult and
requires iterative algorithms whose convergence is not always guaranteed.
In the matrix case, it is known that singular triplets can be computed one
by one. The reason for this is that the dominant singular triplet, which
coincides with the best rank-one approximation, can be subtracted from the
original matrix, yielding another matrix whose rank is decreased by one. This
deflation procedure can be repeated until all singular triplets are
computed. Essential uniqueness is achieved thanks to the orthogonality among
singular vectors.
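The matrix deflation step can be checked numerically (the sizes and the random factors below are illustrative assumptions): subtracting the dominant singular triplet of a rank-3 matrix leaves a matrix of rank 2.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 5x4 matrix of rank 3 as a product of thin random factors.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))

U, s, Vt = np.linalg.svd(A)
# The dominant singular triplet (s[0], U[:,0], Vt[0]) gives the
# best rank-one approximation of A (Eckart-Young theorem).
best_rank_one = s[0] * np.outer(U[:, 0], Vt[0, :])

# Deflation: subtracting it decreases the rank by exactly one.
A_deflated = A - best_rank_one
```

Repeating this step three times would reduce `A` to (numerically) the zero matrix, recovering all singular triplets one by one.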
Such a deflation procedure does not apply to tensors of order larger than
two, for at least two reasons. First, the tensor rank can exceed the tensor
dimensions, so that no orthogonality may be assumed. Second, perhaps
surprisingly, subtracting a best rank-one approximation does not necessarily
decrease tensor rank, and can even increase it. This makes sense, because a
best low-rank approximation does not always exist, for topological reasons.
However, it is also shown in this talk that when tensors enjoy an
underlying structure, such as banded, Toeplitz or Hankel, the
decomposition can be reduced to a finite sequence of best rank-one
approximations.
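A best rank-one approximation, the building block of such a sequence, is typically computed by alternating (higher-order) power iterations. The sketch below is a generic illustration of that scheme, not the specific algorithm of the talk; it converges to a stationary point, which need not be the global optimum.

```python
import numpy as np

def rank_one_approx(T, iters=100):
    """Alternating power iterations toward a best rank-one approximation
    lam * (a o b o c) of a third-order tensor T (a stationary point only;
    global optimality is not guaranteed in general)."""
    n1, n2, n3 = T.shape
    # Initialize from the leading singular vectors of the mode-1 unfolding.
    U, _, Vt = np.linalg.svd(T.reshape(n1, n2 * n3), full_matrices=False)
    a = U[:, 0]
    Ub, _, Vbt = np.linalg.svd(Vt[0].reshape(n2, n3))
    b, c = Ub[:, 0], Vbt[0]
    for _ in range(iters):
        # Each update maximizes the correlation over one factor, others fixed.
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c
```

Applied to a tensor that is already rank one, this recovers the tensor exactly (up to signs that cancel in the product).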
