Workshop

Deep Neural Networks in Representation Learning and Tensor Factorization

  • Volker Tresp (LMU Munich, Germany)
E1 05 (Leibniz-Saal)

Abstract

Representation learning has been hugely successful in natural language processing and knowledge graph modelling. The basic concept is that entities or words are mapped to latent vectors of a given dimension, and a shallow or deep neural network then maps these representations to one or more probabilistic statements. We discuss relationships between representation learning and tensor factorization. We show how deep neural networks can be used to learn representations for new entities and new events in the industrial TIA selection tool and in visual perception. Finally, we discuss how tensor trains can be combined with recurrent neural networks for video classification.
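
To make the representation-learning/tensor-factorization connection concrete, here is a minimal sketch (illustrative, not taken from the talk) of a DistMult-style bilinear model for knowledge graph triples. In this model, the array of scores over all (subject, relation, object) triples is exactly a low-rank factorization of the knowledge-graph tensor, built from per-entity and per-relation latent vectors. All names and dimensions below are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 5, 2, 4

# Latent representations: one vector per entity and per relation.
E = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))

def score(s, r, o):
    """DistMult score of the triple (subject s, relation r, object o)."""
    return float(np.sum(E[s] * R[r] * E[o]))

def prob(s, r, o):
    """Map the score to the probability that the triple holds (logistic link)."""
    return 1.0 / (1.0 + np.exp(-score(s, r, o)))

# The full score array is a rank-constrained factorization of the
# n_entities x n_relations x n_entities tensor:
#   X[s, r, o] = sum_k E[s, k] * R[r, k] * E[o, k]
X = np.einsum("sk,rk,ok->sro", E, R, E)
assert np.isclose(X[1, 0, 3], score(1, 0, 3))
```

In this view, training the embeddings against observed triples simultaneously fits a low-rank factorization of the (binary) knowledge-graph tensor, which is the relationship between the two perspectives that the abstract refers to.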

Saskia Gutzschebauch

Max-Planck-Institut für Mathematik in den Naturwissenschaften

Evrim Acar

Simula Metropolitan Center for Digital Engineering

André Uschmajew

Max Planck Institute for Mathematics in the Sciences

Nick Vannieuwenhoven

KU Leuven