We present ways of defining neuromanifolds – models of stochastic matrices – that are compatible with the maximization of an objective function (reward in reinforcement learning, predictive information in robotics, information flow in neural networks). Our approach is based on information geometry and aims at reducing the number of model parameters, with the hope of improving gradient learning processes. We discuss the advantages and shortcomings of this approach.
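To make the setting concrete, the following is a minimal sketch, not the paper's construction: a low-dimensional exponential family of stochastic matrices $\pi_\theta(a\mid s)\propto\exp(\theta\cdot\varphi(s,a))$, trained by natural-gradient ascent (the gradient preconditioned by the Fisher information metric) on an expected reward. The feature map, reward, state distribution, and step size are all hypothetical choices made for this illustration.

```python
# Illustrative sketch only: a few-parameter exponential family of
# stochastic matrices, optimized by natural-gradient ascent on E[r].
import numpy as np

rng = np.random.default_rng(0)
S, A, D = 4, 3, 2                      # states, actions, parameters (D << S*A)
phi = rng.normal(size=(S, A, D))       # hypothetical feature map phi(s, a)
r = rng.normal(size=(S, A))            # hypothetical reward r(s, a)
mu = np.full(S, 1.0 / S)               # fixed uniform state distribution

def policy(theta):
    """Stochastic matrix pi_theta(a|s) from the exponential family."""
    logits = phi @ theta                          # shape (S, A)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def grad_and_fisher(theta):
    """Gradient of expected reward and Fisher information matrix."""
    p = policy(theta)
    # score function: grad log pi(a|s) = phi(s,a) - E_pi[phi(s,.)]
    mean_phi = np.einsum('sa,sad->sd', p, phi)
    score = phi - mean_phi[:, None, :]                # (S, A, D)
    w = mu[:, None] * p                               # joint weight mu(s) pi(a|s)
    g = np.einsum('sa,sa,sad->d', w, r, score)        # gradient of E[r]
    F = np.einsum('sa,sad,sae->de', w, score, score)  # Fisher metric
    return g, F

theta = np.zeros(D)
for _ in range(200):
    g, F = grad_and_fisher(theta)
    # natural-gradient step: precondition by the (regularized) Fisher metric
    theta += 0.5 * np.linalg.solve(F + 1e-6 * np.eye(D), g)

p = policy(theta)
print("expected reward:", np.einsum('s,sa,sa->', mu, p, r))
```

The point of the parameterization is visible in the dimensions: the full simplex of stochastic matrices has $S(A-1)$ degrees of freedom, while the model restricts learning to a $D$-dimensional submanifold, and the Fisher metric makes the gradient step independent of how that submanifold happens to be parameterized.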