Solving iterative roots and other functional equations with neural networks

  • Lars Kindermann (RIKEN Brain Science Institute, Lab for Mathematical Neuroscience)
A3 02 (Seminar room)


Given some function f(x), a solution g of the functional equation g(g(x)) = f(x) is called an iterative root of f. This functional equation can be mapped onto the topology of a neural network, which is consequently able to find approximate solutions for g. Algebraic methods struggle with this problem even for simple functions; try g(g(x)) = x² + 1, for example. Applications range from embedding discrete-time data into continuous-time models to the modelling of certain industrial processes.
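The core idea above — parameterize g, compose it with itself, and minimize the residual against f — can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: instead of a neural network, g here is a hypothetical quadratic polynomial ansatz, fitted by plain gradient descent with numerical gradients; all names and constants are illustrative.

```python
import numpy as np

# Approximate an iterative root g of f(x) = x^2 + 1, i.e. g(g(x)) ~= f(x).
# Illustrative sketch: g is a quadratic polynomial g(x) = c0 + c1*x + c2*x^2
# (standing in for a neural network), fitted by gradient descent on the
# squared residual of the composition g(g(x)) against the target f(x).

def f(x):
    return x**2 + 1.0

def g(x, c):
    return c[0] + c[1] * x + c[2] * x**2

def loss(c, xs):
    # Mean squared error of the self-composition against the target.
    return np.mean((g(g(xs, c), c) - f(xs))**2)

xs = np.linspace(0.0, 1.0, 50)   # sample points on [0, 1]
c = np.array([0.5, 0.5, 0.5])    # arbitrary initial coefficients
initial_loss = loss(c, xs)

eps, lr = 1e-6, 0.02
for _ in range(10000):
    # Central-difference gradient with respect to each coefficient.
    grad = np.array([
        (loss(c + eps * e, xs) - loss(c - eps * e, xs)) / (2 * eps)
        for e in np.eye(3)
    ])
    c -= lr * grad

final_loss = loss(c, xs)
print("coefficients:", c)
print("loss: %.2e -> %.2e" % (initial_loss, final_loss))
```

The same objective works unchanged if g(x, c) is replaced by a small multilayer perceptron and the numerical gradient by backpropagation; the polynomial ansatz just keeps the sketch short.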

Since Emmy Noether's work we have known that many fundamental laws of nature can take the form of functional equations. Many of these can likewise be translated into the structure of a neural network. Such networks then embody the corresponding theory and may have the inherent capability to model the respective systems from a very limited set of training examples. Their generalization capability within the appropriate domain should exceed that of traditional artificial neural networks, because they are no longer "universal approximators" but take the specified laws of nature into account in their predictions.