It can be smart to be stupid

  • David H. Wolpert (NASA Ames Research Center, USA)
A3 01 (Sophus-Lie room)


An important problem in game theory is how to explain bounded rationality in general, and non-kin altruism in particular. Previous explanations have involved computational limitations on the players, repeated plays of the same game among the players, signaling among the players, networks determining which players play with one another, etc. As an alternative, I show how a simple extension to any conventional non-repeated game can make bounded rationality and/or non-kin altruism utility-maximizing in that game, even for a computationally unlimited player.

Say we have a game gamma with utility functions {u_j}. Before playing gamma, the players play a "persona" game Gamma. Intuitively, in Gamma the move of each player i is a choice of a utility function u'_i to replace u_i when she plays gamma. The objective of player i in Gamma is to pick a "persona" u'_i such that, when gamma is played with every player's original utility function replaced by the persona that player chose in Gamma, the resultant Nash equilibrium maximizes expected u_i.
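As a concrete toy version of this construction, the sketch below takes gamma to be a prisoner's dilemma and lets each player choose between two personas. The payoff numbers and the two-persona choice set are illustrative assumptions, not taken from the talk; the point is only the machinery: a persona profile induces an equilibrium of gamma, which is then scored with the original utilities u_i.

```python
# Sketch of the persona-game construction on the prisoner's dilemma.
# Payoffs and the two-persona choice set are toy assumptions for
# illustration; they are not taken from the talk.
from itertools import product

C, D = 0, 1
# True payoffs u_i: (row, col) for each joint move in gamma.
PD = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

# Candidate personas u'_i, each mapping (own payoff, other's payoff)
# to the utility the persona assigns.
PERSONAS = {
    "selfish":    lambda own, other: own,          # the true utility
    "altruistic": lambda own, other: own + other,  # maximizes joint payoff
}

def pure_nash(persona_row, persona_col):
    """Pure Nash equilibria of gamma played under the chosen personas."""
    eqs = []
    for r, c in product((C, D), repeat=2):
        u_row = persona_row(*PD[(r, c)])
        u_col = persona_col(PD[(r, c)][1], PD[(r, c)][0])
        if all(persona_row(*PD[(r2, c)]) <= u_row for r2 in (C, D)) and \
           all(persona_col(PD[(r, c2)][1], PD[(r, c2)][0]) <= u_col
               for c2 in (C, D)):
            eqs.append((r, c))
    return eqs

# The persona game Gamma: each persona profile induces an equilibrium
# of gamma, which is scored with the ORIGINAL utilities u_i.
for pr, pc in product(PERSONAS, repeat=2):
    (r, c), = pure_nash(PERSONAS[pr], PERSONAS[pc])  # unique here
    print(f"({pr}, {pc}) -> play {('C','D')[r]},{('C','D')[c]}, "
          f"true payoffs {PD[(r, c)]}")
```

With this deliberately minimal persona set, mutual altruism yields true payoffs (3, 3) versus (1, 1) for mutual selfishness, but the induced persona game is itself a prisoner's dilemma (the selfish persona dominates), so richer persona sets are presumably needed to make cooperation a persona-game equilibrium.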

In certain cases, such an optimal u'_i differs from u_i. In these cases, player i's adopting the "bounded rationality" and/or "altruism" of persona u'_i actually maximizes expected u_i. As particular illustrations, we show how such persona games can explain some experimental observations concerning the prisoner's dilemma, the ultimatum game, and the traveler's dilemma game. We also show how phase transitions arise in some persona games, and discuss the possible implications of persona games for evolutionary biology, for the concept of social intelligence, and for distributed control of systems of systems.