Complex tensors almost always have best low-rank approximations
Yang Qi, Mateusz Michałek, and Lek-Heng Lim
Submission date: 6 Dec 2017
Published in: Applied and Computational Harmonic Analysis, 49 (2020) 1, pp. 180-207
DOI (of the published article): 10.1016/j.acha.2018.12.003
Published under the title: Complex best r-term approximations almost always exist in finite dimensions
MSC-Numbers: 15A69, 41A50, 41A52, 41A65, 97N50, 51M35
Low-rank tensor approximations are plagued by a well-known problem: a tensor may fail to have a best rank-r approximation. Over ℝ, such failures are known to occur with positive probability, and sometimes with certainty: in ℝ^{2×2×2}, every tensor of rank 3 fails to have a best rank-2 approximation. We show that while such failures still occur over ℂ, they happen with probability zero.

In fact we establish a more general result with useful implications for recent scientific and engineering applications that rely on sparse and/or low-rank approximations: let V be a complex vector space with a Hermitian inner product, and let X be a closed irreducible complex analytic variety in V. Given any complex analytic subvariety Z ⊆ X with dim Z < dim X, we prove that a general p ∈ V has a unique best X-approximation π_X(p) that does not lie in Z. In particular, this implies that over ℂ, a tensor almost always has a unique best rank-r approximation whenever r is less than the generic rank.

Our result covers many other notions of tensor rank: symmetric rank, alternating rank, Chow rank, Segre–Veronese rank, Segre–Grassmann rank, Segre–Chow rank, Veronese–Grassmann rank, Veronese–Chow rank, Segre–Veronese–Grassmann rank, Segre–Veronese–Chow rank, and more; in all cases, a unique best rank-r approximation almost always exists. It also applies to block-term approximations of tensors: for any r, a general tensor has a unique best r-block-term approximation. Applied to sparse-plus-low-rank approximations, it yields that for any given r and k, a general matrix has a unique best approximation by a sum of a rank-r matrix and a k-sparse matrix with a fixed sparsity pattern; this arises, for example, in estimating the covariance matrix of a Gaussian hidden variable model with k observed variables that are conditionally independent given r hidden variables.
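The ℝ^{2×2×2} failure mentioned in the abstract can be seen numerically with the standard example: the rank-3 tensor W = e1⊗e1⊗e2 + e1⊗e2⊗e1 + e2⊗e1⊗e1 is a limit of rank-2 tensors whose factors diverge, so the infimum of ‖W − A‖ over rank-2 tensors A is 0 yet is never attained. The following NumPy sketch (not code from the paper; the approximating sequence is the textbook one) illustrates this:

```python
import numpy as np

def outer3(a, b, c):
    # rank-one 2x2x2 tensor a (x) b (x) c
    return np.einsum('i,j,k->ijk', a, b, c)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# W tensor: rank 3 over the reals, but border rank 2
W = outer3(e1, e1, e2) + outer3(e1, e2, e1) + outer3(e2, e1, e1)

errors = []
for n in [1, 10, 100, 1000]:
    u = e1 + e2 / n
    # A_n is a sum of two rank-one terms, hence rank <= 2,
    # but its factor norms blow up as n grows
    A = n * outer3(u, u, u) - n * outer3(e1, e1, e1)
    errors.append(np.linalg.norm(A - W))

# errors shrink like sqrt(3)/n: the infimum over rank-2 tensors is 0,
# yet no rank-2 tensor equals W, so no best rank-2 approximation exists
print(errors)
```

The diverging factor norms are the hallmark of this phenomenon; the paper's result says that over ℂ the set of tensors exhibiting it has measure zero.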