Abstract for the talk on 03.07.2020 (17:00 h), Math Machine Learning seminar MPI MIS + UCLA
Kai Fong Ernest Chong (Singapore University of Technology and Design)
The approximation capabilities of neural networks
The universal approximation theorem says that a standard feedforward neural network with one hidden layer can uniformly approximate, on compact sets, any continuous multivariate function f to any given approximation threshold ε, provided that the activation function is continuous and non-polynomial. In this talk, we shall give a quantitative refinement of the theorem via a direct algebraic approach. In particular, when f is polynomial, we give an explicit finite upper bound (independent of ε) for the number of hidden units required. We shall discuss how ideas from algebraic geometry, algebraic combinatorics and approximation theory are combined in our algebraic proof.
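The statement can be illustrated numerically. The sketch below (my own illustration, not the speaker's construction) approximates the polynomial f(x) = x² on [-1, 1] with a single hidden layer of ReLU units, a continuous non-polynomial activation; the inner weights are fixed at random and only the output weights are fitted, by least squares, as a stand-in for training:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 100  # number of hidden units (illustrative choice)

# Random inner weights: signs +-1 and biases chosen so that each
# ReLU kink -b_i / w_i lies inside the approximation interval [-1, 1].
w = rng.choice([-1.0, 1.0], size=n_hidden)
b = rng.uniform(-1.0, 1.0, size=n_hidden)

x = np.linspace(-1.0, 1.0, 2001)
f = x**2  # target polynomial

# Hidden layer: ReLU units relu(w_i * x + b_i), plus one constant unit.
H = np.maximum(w[None, :] * x[:, None] + b[None, :], 0.0)
H = np.column_stack([H, np.ones_like(x)])

# Fit the output-layer weights by least squares.
c, *_ = np.linalg.lstsq(H, f, rcond=None)
err = np.max(np.abs(H @ c - f))
print(f"max |network - f| on [-1, 1]: {err:.2e}")
```

With 100 hidden units the maximum error is already far below, say, ε = 0.05; shrinking ε further eventually forces more units for generic continuous targets, and the talk's result bounds how many are needed when the target is a polynomial.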