Neurodynamic Optimization: Models and Applications

  • Jun Wang (Dept. of Mechanical & Automation Engineering, The Chinese University of Hong Kong, China)
A3 01 (Sophus-Lie room)


Optimization problems arise in a wide variety of scientific and engineering applications. They become computationally challenging when optimization procedures must be performed in real time to optimize the performance of dynamical systems. For such applications, classical optimization techniques may not suffice because of the problem dimensionality and stringent requirements on computational time. One very promising approach to dynamic optimization is to apply artificial neural networks. Because of the inherently parallel and distributed information processing in neural networks, the convergence rate of the solution process does not decrease as the problem size increases. Neural networks can be implemented physically in designated hardware such as ASICs, where optimization is carried out in a truly parallel and distributed manner. This feature is particularly desirable for dynamic optimization in decentralized decision-making situations.

In this talk, we will present a historical review and the state of the art of neurodynamic optimization models and their applications in winner-take-all selection, support vector machine learning, and robot kinematic control and joint torque optimization. Specifically, starting from the motivation for neurodynamic optimization, we will review various recurrent neural network models for optimization, including quite a few developed by the presenter and his associates. Theoretical results on the stability and optimality of the recurrent neurodynamic optimization models will be given, along with many illustrative examples and simulation results. It will be shown that many computational problems, such as sorting, routing, winner-take-all selection, and support vector machine learning, can be readily solved using neurodynamic optimization models.
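To give a flavor of the approach, the following is a minimal sketch (not from the talk) of one classic recurrent neurodynamic model, a projection neural network, solving a small box-constrained quadratic program. The matrix Q, vector c, bounds, and step sizes are illustrative assumptions; the continuous-time dynamics dx/dt = -x + P(x - a(Qx + c)), where P projects onto the feasible box, are simulated by forward Euler integration.

```python
import numpy as np

# Illustrative problem (assumed values, not from the talk):
#   minimize 0.5*x'Qx + c'x  subject to 0 <= x <= 1
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-4.0, -0.5])
lo, hi = 0.0, 1.0          # box constraints
a, dt = 0.1, 0.05          # gain and Euler step size

x = np.zeros(2)            # initial network state
for _ in range(2000):      # simulate the network dynamics
    # projection of the gradient step back onto the feasible box
    proj = np.clip(x - a * (Q @ x + c), lo, hi)
    # dx/dt = -x + P(x - a*(Qx + c)), integrated by forward Euler
    x = x + dt * (-x + proj)

print(np.round(x, 3))      # state settles at the constrained optimum [1.0, 0.25]
```

The point of the example is the one highlighted in the abstract: the solution is obtained by letting a dynamical system evolve to an equilibrium, which maps naturally onto parallel analog hardware rather than an iterative digital algorithm.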