MiS Preprint Repository

We have decided to discontinue the publication of preprints on our preprint server as of 1 March 2024. The publication culture within mathematics has changed so much due to the rise of repositories such as arXiv (www.arxiv.org) that we are encouraging all institute members to make their preprints available there. The institute's repository in its previous form is therefore unnecessary. The preprints published to date will remain available here, but no new preprints will be added.

MiS Preprint
13/2013

Reinforcement Learning in Complementarity Game and Population Dynamics

Jürgen Jost and Wei Li

Abstract

We let different reinforcement learning schemes compete in a complementarity game played between members of two populations.

This leads us to a new version of Roth-Erev (NRE) reinforcement learning. In this NRE scheme, the probability that a player $n$ chooses action $k$ at time $t$ is proportional to the accumulated rescaled reward obtained by playing $k$ during the time steps prior to $t$. The rescaled reward is a power law of the payoff, with the optimal value of the power exponent being 1.5. NRE reinforcement learning outperforms the original Roth-Erev, Bush-Mosteller, and SoftMax reinforcement learning schemes when all of them use optimal parameters. NRE reinforcement learning also performs better than most evolutionary strategies, except for the simplest ones, which have the advantage of converging most quickly to some favorable fixed point.
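The NRE update rule described above can be sketched in a few lines. This is a minimal illustration based only on the abstract, not the paper's actual implementation: it assumes non-negative payoffs, a small positive initial propensity so that every action can be sampled at the start, and the exponent 1.5 reported as optimal; the class and parameter names are hypothetical.

```python
import random


class NREPlayer:
    """Sketch of the New Roth-Erev (NRE) rule: the probability of choosing
    action k is proportional to the accumulated rescaled reward earned by
    playing k so far, where the rescaled reward is payoff ** gamma."""

    def __init__(self, actions, gamma=1.5, init=1.0):
        self.gamma = gamma
        # init > 0 (an assumption) keeps every action choosable initially
        self.accum = {a: init for a in actions}

    def probs(self):
        # choice probabilities proportional to accumulated rescaled rewards
        total = sum(self.accum.values())
        return {a: r / total for a, r in self.accum.items()}

    def choose(self, rng=random):
        r = rng.random()
        cum = 0.0
        for a, p in self.probs().items():
            cum += p
            if r <= cum:
                return a
        return a  # guard against floating-point rounding

    def update(self, action, payoff):
        # rescaled reward is a power law of the payoff
        self.accum[action] += payoff ** self.gamma
```

After a rewarded play of an action, its choice probability rises relative to the others, which is the reinforcement effect the abstract compares across learning schemes.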

Received:
Jan 29, 2013
Published:
Jan 31, 2013
PACS:
02.50.Le, 89.75.Fb, 89.90.+n
Keywords:
reinforcement learning, population dynamics

Related publications

In Journal
2014 Repository Open Access
Jürgen Jost and Wei Li

Reinforcement learning in complementarity game and population dynamics

In: Physical Review E, 89 (2014) 2, p. 022113