Reinforcement-based vs. Belief-based learning models in experimental asymmetric-information games

Research output: Contribution to journal › Article › Research › peer-review

77 Citations (Scopus)


This paper examines the abilities of learning models to describe subject behavior in experiments. A new experiment involving multistage asymmetric-information games is conducted, and the experimental data are compared with the predictions of Nash equilibrium and two types of learning model: a reinforcement-based model similar to that used by Roth and Erev (1995), and belief-based models similar to the "cautious fictitious play" of Fudenberg and Levine (1995, 1998). These models make predictions that are qualitatively similar: cycling around the Nash equilibrium is much more apparent than movement toward it. While subject behavior is not adequately described by Nash equilibrium, it is consistent with the qualitative predictions of the learning models. We examine several criteria for quantitatively comparing the predictions of alternative models. According to almost all of these criteria, both types of learning model outperform Nash equilibrium. According to some criteria, the reinforcement-based model performs better than any version of the belief-based model; according to others, there exist versions of the belief-based model that outperform the reinforcement-based model. The abilities of these models are further tested against the results of other published experiments. The relative performance of the two learning models depends on the experiment, and varies according to which criterion of success is used. Again, both models perform better than equilibrium in most cases.
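The two classes of learning rule compared in the abstract can be sketched in simplified form. The sketch below is illustrative only: it uses matching pennies (a simple zero-sum game, per the article's keywords) rather than the paper's multistage asymmetric-information games, and the parameter choices (initial propensities, logit temperature, payoff shift) are hypothetical, not taken from the paper.

```python
import math
import random

# Illustrative sketch only: matching pennies, not the paper's games.
PAYOFF = [[1, -1],
          [-1, 1]]  # row player's payoff; column player receives the negative

class RothErev:
    """Reinforcement-based learner: actions that earned more get played more."""
    def __init__(self, n_actions, initial=1.0):
        self.q = [initial] * n_actions  # action propensities

    def choose(self):
        return random.choices(range(len(self.q)), weights=self.q)[0]

    def update(self, action, payoff):
        # Shift payoffs (here in {-1, 1}) so reinforcements stay non-negative.
        self.q[action] += payoff + 1.0

class CautiousFictitiousPlay:
    """Belief-based learner: logit ('cautious') best response to the
    empirical frequency of the opponent's past actions."""
    def __init__(self, payoff_rows, temperature=0.5):
        self.payoff = payoff_rows  # own payoff[own action][opponent action]
        self.counts = [1.0] * len(payoff_rows[0])  # uniform prior over opponent
        self.temp = temperature

    def choose(self):
        total = sum(self.counts)
        beliefs = [c / total for c in self.counts]
        # Expected payoff of each own action against the believed opponent mix.
        ev = [sum(b * row[o] for o, b in enumerate(beliefs))
              for row in self.payoff]
        weights = [math.exp(v / self.temp) for v in ev]
        return random.choices(range(len(ev)), weights=weights)[0]

    def update(self, opp_action):
        self.counts[opp_action] += 1.0

random.seed(0)
# Column player's payoff matrix, indexed by (own action, opponent action).
col_payoff = [[-PAYOFF[o][a] for o in range(2)] for a in range(2)]
row, col = RothErev(2), CautiousFictitiousPlay(col_payoff)
for _ in range(1000):
    a, b = row.choose(), col.choose()
    row.update(a, PAYOFF[a][b])
    col.update(a)  # belief-based player tracks the row player's choices
```

Tracking joint play over time in such a simulation would show the pattern both models predict: oscillation around the mixed-strategy equilibrium rather than convergence to it.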

Original language: English
Pages (from-to): 605-641
Number of pages: 37
Issue number: 3
Publication status: Published - 1 Jan 2000


Keywords

  • Asymmetric information
  • Calibration
  • Equilibrium
  • Learning
  • Model comparison
  • Zero-sum game
