Talk

System Identification, Forecasting and Control with Neural Networks

  • Hans Georg Zimmermann (Siemens AG, Corporate Technology, Munich)
A3 01 (Sophus-Lie room)

Abstract

The talk is organized along a correspondence principle between equations, architectures and local algorithms. After a short remark on feedforward neural networks and learning, we will focus on recurrent neural networks for state-space modeling.
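To fix notation for what follows: a recurrent network in state-space form can be read as a pair of equations, a state transition s_t = tanh(A s_{t-1} + B u_t) and an output equation y_t = C s_t. The sketch below is a minimal NumPy illustration of this equations-to-architecture correspondence; the matrices A, B, C and the tanh nonlinearity are generic assumptions, not a reproduction of the speaker's models.

    import numpy as np

    def rnn_state_space_step(s_prev, u_t, A, B, C):
        # State transition driven by the external input u_t (open system).
        s_t = np.tanh(A @ s_prev + B @ u_t)
        # Linear output (observation) equation.
        y_t = C @ s_t
        return s_t, y_t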

First, we will discuss the modeling of open dynamical systems, error correction neural networks, dynamical systems on manifolds, and the feedback control of observed systems.
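A common reading of an error correction neural network is that the previous step's output error is fed back into the state transition as an additional input. The following is a minimal sketch under that assumption; the error-coupling matrix D and the exact form of the feedback are illustrative choices, not necessarily those used in the talk.

    import numpy as np

    def ecnn_step(s_prev, u_t, y_obs_prev, A, B, C, D):
        # Previous-step model error: model output minus the observed target.
        err = C @ s_prev - y_obs_prev
        # The error is fed back into the state transition alongside the external input.
        s_t = np.tanh(A @ s_prev + B @ u_t + D @ err)
        y_t = C @ s_t
        return s_t, y_t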

Second, we will continue with closed dynamical systems. Here we understand the dynamics of interest as a very large system that is only partially observable. The related neural network models have very large state spaces (dim(state) > 300), which creates learning and stability problems. In solving these problems we have to abandon several standards of regression theory and develop a new view of uncertainty in forecasting.
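To make "closed" concrete: the external drivers drop out, and the observables are treated as a small visible slice of a much larger, mostly hidden state. The sketch below assumes an identity read-out of the first n_obs state components, which is an illustrative choice rather than the specific architecture discussed in the talk.

    import numpy as np

    def closed_system_step(s_prev, A, n_obs):
        # Autonomous state transition: no external inputs; dim(state) may be several hundred.
        s_t = np.tanh(A @ s_prev)
        # Observables are read off as the first n_obs components of the large state.
        y_t = s_t[:n_obs]
        return s_t, y_t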

Finally, we focus on dynamical systems which are generated by human interaction, e.g. markets. Such systems are not governed by causal mechanics alone, since they are at least partly generated by utility maximization, even if we do not know the utility function explicitly. The question then arises of how we can identify and exploit the causal and the utility-driven parts of the dynamics.

At Siemens Corporate Technology we have 24 years of experience in neural network research, related software development, and real-world applications.