While there is, in fact, at least some consistency to the way in which people deviate from traditional economic models, I think exactly this potential variety in ways to deviate (even ways to deviate just slightly) may well be important to understanding aggregate behavior. Since at least Kreps et al. (1982)[1], it has been reasonably well known that a population of rational agents with a very small number of irrational[2] agents embedded in it may behave very differently from a population whose members are all commonly known to be rational. In that paper the rational agents knew exactly how many people were irrational and in what way; they had, in fact, complete behavioral models of everyone, with the small caveat that they didn't know which handful of their compatriots followed the "irrational" model and which followed the "rational" model. In the real world, we have a certain sense that people generally like some things and dislike others, and behave in some ways but not others, but we have far less information than the agents in the Kreps paper had.
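To give a sense of how little irrationality is needed, here is a back-of-envelope calculation in the spirit of (but much cruder than) the Kreps et al. construction; the payoff numbers, the tit-for-tat "commitment type", and the two candidate strategies are illustrative assumptions of mine, not anything taken from the paper:

```python
# A crude illustration of the Kreps et al. (1982) intuition: a rational
# player faces an opponent who is, with small probability p, a tit-for-tat
# "commitment type" and otherwise an unconditional defector. (In the paper
# the other rational player also mimics the commitment type in equilibrium;
# this sketch ignores that.)
R, T_PAY, S, P = 3, 5, 0, 1  # reward, temptation, sucker, punishment (assumed)

def expected_payoffs(p, rounds):
    """Expected total payoffs of two candidate strategies over a finitely
    repeated prisoners' dilemma of length `rounds`."""
    # Strategy A: defect every round.
    #   vs tit-for-tat: temptation once, then mutual defection.
    #   vs defector:    mutual defection throughout.
    defect_always = p * (T_PAY + (rounds - 1) * P) + (1 - p) * rounds * P
    # Strategy B: mimic tit-for-tat, but defect in the final round.
    #   vs tit-for-tat: mutual cooperation, then temptation at the end.
    #   vs defector:    sucker payoff once, then mutual defection.
    mimic = p * ((rounds - 1) * R + T_PAY) + (1 - p) * (S + (rounds - 1) * P)
    return defect_always, mimic

for rounds in (5, 10, 50):
    # With these payoffs, mimicry wins whenever p > 1 / (2 * rounds - 1).
    p_star = 1 / (2 * rounds - 1)
    print(rounds, round(p_star, 3), expected_payoffs(1.1 * p_star, rounds))
```

With these particular numbers the crossover is at p = 1/(2T - 1), so in a ten-round game a rational player who puts even a 6% probability on facing a tit-for-tat type prefers to mimic cooperation; the required share of "irrational" opponents shrinks as the horizon grows.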
It is interesting, then, that in a lot of lab experiments, the deviations from the predictions of game theory take place "off-path"; that is, a lot of the deviations involve subjects responding rationally to threats that don't make sense. Perhaps the simplest example is the "ultimatum game": two subjects are offered (say) $20, with the first subject proposing a division ("I get $15, you get $5") and the second subject accepting or refusing the split; if the split is refused, both subjects walk away empty-handed. This is done anonymously, as essentially a single, isolated interaction; gaining a reputation, in particular, is not a potential reason to refuse the offer. Different experiments in different contexts find somewhat different results, but typically the first subject proposes to keep 3/5 to 2/3 of the pot, and the responder accepts the offer at least 80% of the time. Respondents certainly do refuse offers of positive amounts of money, especially offers much less than one third of the pot, but the most commonly observed deviation from the game-theoretic equilibrium is that the offeror offers "too much", in response to the (irrational) threat that the respondent will walk away from the money on the table. This does not require that offerors be generous, have strong feelings about doing the "right" thing, or hold a universally applicable theory of spite; it only requires (if they are themselves "rational") that they believe some appreciable fraction of the population is much likelier to reject a smaller (but positive) offer than a larger one.
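To see that the "too much" offer can be entirely self-interested, here is a minimal sketch; the rejection curve below is purely hypothetical (nothing in it is estimated from data), it just encodes the belief that small offers get rejected much more often than larger ones:

```python
import math

POT = 20.0  # size of the pot to be divided

def reject_prob(offer):
    """Hypothetical probability that the responder rejects an offer of
    `offer` dollars: high below a third of the pot, near zero toward half."""
    return 1.0 / (1.0 + math.exp(0.8 * (offer - POT / 3)))

def expected_keep(offer):
    """Proposer's expected payoff: POT - offer if the offer is accepted,
    nothing otherwise."""
    return (1.0 - reject_prob(offer)) * (POT - offer)

# A selfish, risk-neutral proposer who merely believes in this rejection
# curve offers far more than the game-theoretic minimum:
offers = [o / 10 for o in range(0, 201)]  # $0.00 to $20.00 in dimes
best = max(offers, key=expected_keep)
print(f"optimal offer: ${best:.2f}; expected keep: ${expected_keep(best):.2f}")
```

With this particular curve the proposer's expected payoff peaks at an offer of roughly $9, so the proposer proposes to keep roughly $11 of the $20, broadly in line with the splits observed in the lab, despite caring only about money.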
Game-theoretic agents typically have fairly concrete beliefs about other agents' goals, and from those typically formulate fairly concrete beliefs about other agents' actions. There may be very stylized situations in which people do that, but I think people typically use heuristics to make decisions; in somewhat more reflective moments they make decisions after using heuristics to guess what other agents are likely to do, and only very occasionally do they circumscribe those heuristics based on somewhat well-formulated ideas of what the other agents actually know or think. The reason people don't generally formulate hierarchies of beliefs is that they aren't useful: a detailed model of what somebody else thinks yet another person believes a third party wants is going to be wrong, and is not terribly robust. The heuristics are less fragile, even if they don't always give the right answer either, and they provide a generally simpler and more successful set of tools with which to live your life.
[1] Kreps, D. M., P. Milgrom, J. Roberts, and R. Wilson (1982): "Rational Cooperation in the Finitely Repeated Prisoners' Dilemma," Journal of Economic Theory, 27: 245–252.
[2] I feel the need to add here that even for their "irrational" agents, it is possible (easy, in fact) to write down a utility function that those agents are maximizing; that is, they are "rational" in the broadest sense in which economists use the term. Economists often do suppose, sometimes in spite of protestations to the contrary, not only that agents are expected utility maximizers but also that they dislike "work", that they don't intrinsically value their self-image or their image in others' eyes (though they will value the latter for instrumental reasons), and so on. Often, without these further stipulations, mainstream economic models don't give "precise" predictions in the sense I asserted at the beginning of this post; in the broad sense of the term "rational", there may be a smaller set of ways to be rational than to be irrational, but there are a lot of ways to be rational as well, and restricting this set can be important if you want your model to be useful. For this post I mostly mean "rational" in the narrower sense, but it should be noted that challenges to this sense of "rational" are much less of a threat to large swaths of economic epistemology than the systematic violations of expected utility maximization are.