Abstract of Jochen Garcke

Sums of separable functions for classification
We present an algorithm for learning a classification function of many variables from scattered data. The function is approximated by a sum of separable functions, following the paradigm of separated representations, also known as the canonical decomposition. It is well known that the least squares error is not the natural loss function for classification problems. We therefore also consider a huberized version of the hinge loss and log-likelihood estimation in this setting. We present experimental results with these different loss functions and with different strategies for the resulting minimization problems.
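As a rough sketch of the two ingredients (the notation, the rank r, and the smoothing parameter h are assumptions of this sketch, not taken from the abstract): in a separated representation the classifier over d variables is a sum of r products of univariate factors, and a common huberized hinge loss replaces the kink of the hinge by a quadratic piece of width h; the exact smoothed variant used in the talk may differ.

% separated representation / canonical decomposition (assumed notation)
f(x_1,\dots,x_d) \;=\; \sum_{l=1}^{r} \prod_{j=1}^{d} f_l^{j}(x_j)

% one standard huberized hinge loss with smoothing parameter h > 0
\ell_h\bigl(y\,f(x)\bigr) \;=\;
\begin{cases}
  0 & \text{if } y\,f(x) > 1 + h,\\
  \dfrac{\bigl(1 + h - y\,f(x)\bigr)^2}{4h} & \text{if } \lvert 1 - y\,f(x)\rvert \le h,\\
  1 - y\,f(x) & \text{if } y\,f(x) < 1 - h.
\end{cases}

The quadratic middle piece makes the loss continuously differentiable, which is what allows smooth (e.g. gradient-based or alternating least-squares style) minimization strategies to be applied to the resulting problem.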

Organisers

Lars Grasedyck (MPI Leipzig, Germany)
Wolfgang Hackbusch (MPI Leipzig, Germany)
Boris Khoromskij (MPI Leipzig, Germany)