Workshop

Tensor Decomposition using Variational Bayesian Inference

  • Morten Mørup (Technical University of Denmark, Lyngby, Denmark)
E1 05 (Leibniz-Saal)

Abstract

Variational Bayesian inference has become a prominent framework within machine learning for accounting for parameter uncertainty rather than relying on maximum likelihood point estimates. This talk will demonstrate that modeling uncertainty in tensor decomposition can have a profound impact in terms of 1) the tensor model's ability to predict missing data, particularly when dealing with sparsely observed data, and 2) its robustness to model misspecification when the number of components is incorrectly specified. This will be highlighted in the context of three prominent tensor decomposition approaches: the (non-negative) PARAFAC/Canonical Polyadic Decomposition (CPD), the PARAFAC2 model, and the Tensor Train Decomposition (TTD). The talk will further outline ongoing efforts to develop a general probabilistic n-way toolbox. The approaches are examined with applications in neuroimaging and chemometrics.
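
For concreteness, a minimal sketch of a probabilistic CPD of the kind referred to above, written for a third-order tensor with observed entries indexed by $\Omega$; the Gaussian likelihood, shared ARD-type factor priors, and Gamma hyperpriors are assumptions for illustration, not necessarily the exact formulation used in the talk:

% Sketch only: likelihood, priors, and mean-field factorization are assumed for illustration.
x_{ijk} \mid A, B, C, \tau \;\sim\; \mathcal{N}\!\Big(\textstyle\sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr},\; \tau^{-1}\Big), \qquad (i,j,k) \in \Omega,
a_{ir} \sim \mathcal{N}(0, \lambda_r^{-1}), \quad b_{jr} \sim \mathcal{N}(0, \lambda_r^{-1}), \quad c_{kr} \sim \mathcal{N}(0, \lambda_r^{-1}), \qquad \lambda_r \sim \mathrm{Gamma}(a_0, b_0), \quad \tau \sim \mathrm{Gamma}(c_0, d_0).
% Variational Bayes posits a factorized posterior approximation and maximizes the evidence lower bound (ELBO):
q(A, B, C, \lambda, \tau) = q(A)\, q(B)\, q(C)\, q(\lambda)\, q(\tau),
\mathcal{L}(q) = \mathbb{E}_q\big[\log p(\mathcal{X}_\Omega, A, B, C, \lambda, \tau)\big] - \mathbb{E}_q\big[\log q(A, B, C, \lambda, \tau)\big] \;\le\; \log p(\mathcal{X}_\Omega).

Because only the entries in $\Omega$ enter the likelihood, missing entries are predicted from the posterior predictive distribution with accompanying uncertainty, and shared precisions such as $\lambda_r$ act as automatic relevance determination priors, one common mechanism by which superfluous components are suppressed when the number of components $R$ is over-specified.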

Administrative Contact

Saskia Gutzschebauch

Max-Planck-Institut für Mathematik in den Naturwissenschaften

Scientific Organizers

Evrim Acar

Simula Metropolitan Center for Digital Engineering

André Uschmajew

Max Planck Institute for Mathematics in the Sciences

Nick Vannieuwenhoven

KU Leuven