Monday, January 6, 2014

ASSA conference

The big annual convention of the Allied Social Science Associations (I think) took place this past weekend; I attended sessions on Friday and Sunday (spending Saturday with my family instead), and I want to record some somewhat random thoughts I had before I lose them.
  • One of the sessions I attended yesterday was on "robustness", in which people attempted to say as much as they could about a particular theoretical model while leaving some important part of the model unspecified; often this took the form of bounds, i.e. "without knowing anything about [X], we can still determine that [Y] will be at least [Y0]". In one paper, the author supposed that two agents in the model have the same "information structure", but that the observer (the "we" in the quotation above) doesn't know what it is. The "worst" possible information structure looks a lot like the "worst" possible correlated equilibrium in a related game, a point he never made, and one that I still haven't made precise enough to be worth raising with him. I'm not particularly enamored of his paper, but I do like correlated equilibrium, so I might come back and try to figure out what I mean (the correlated-equilibrium sketch after this list is the sort of object I have in mind).
  • The godfather of "robustness" is Stephen Morris, who has spent a lot of the last two decades thinking about higher-order beliefs (what do I think you think Alice thinks Bob thinks?) and the past decade thinking about which properties of strategic situations are, to varying degrees, insensitive to them, especially in the context of mechanism design. A lot of 1980s-style mechanism design and auction theory says things like "if buyers have valuations that follow probability distribution F, here's the auction that will generate the most expected revenue". If you don't know F, though, you're kind of lost. So to some extent much of the project is about making things independent of F (and of other assumptions, some of them tacit). On a seemingly unrelated note, my brother and I have frequently discussed the idea that some "behavioral" phenomena — people behaving in ways that are internally inconsistent, or that seem clearly, in a certain decision problem or strategic situation, to be "mistakes" — may result from people's having developed heuristics for doing well in certain common, realistic situations and carrying them over naively to other scenarios (especially artificial ones in experiments) where they don't work nearly as well. During the conference it occurred to me that this is similar to using a mechanism that has been designed for a particular F. It is also, to some extent, related to the machine-learning concept of "overfitting": people adapt so well to some set of experiences that they become overly specialized and do poorly in more general situations. In that analogy, "robustness" plays the role of "regularization", which amounts to restricting yourself to a set of models that is less sensitive to which subset of your data you use, and is therefore, one hopes, more applicable to data you haven't seen yet (the regularization sketch after this list is a toy version of this).
  • The last set of ideas, and the one most closely related to my current main project, concerns a "backward induction" problem I'm having. Traditional game-theory solution concepts involve "common knowledge of rationality" — defined recursively, this means that all players are "rational" and that all players know that there is common knowledge of rationality. In particular, if everyone knows that a certain action on your part can't be profitable to you, then everyone can analyze the strategic situation with the assurance that you won't take that action. If some action on my part would only be good for me if you do take that action, then everyone can rule out the possibility that I would do that — I won't do it, because I know that you won't make it a good idea. Where this becomes "backward induction" is usually in a multi-stage game; if Alice has to decide on an action, and then I respond to her action, and then you respond to my action, she figures out what you're going to do — in particular, ruling out anything that could never be good for you — and, supposing I will do the same analysis, figures out what I am going to do. This is normally the way people should behave in dynamic strategic situations. It turns out that people are terrible at it (the backward-induction sketch after this list works through the computation in the simplest textbook case).
    In my current project, the behavior I'm hoping/expecting to elicit in the lab is ruled out in more or less this way; it's a multi-period situation in which everyone is given enough information to rule out the possibility that [A] happens in any of the next 20 periods: if everyone knows (for sure) that [A] won't happen in the next period, then it's fairly easy to figure out that they shouldn't act in a way that causes [A] to happen in this period. Perfectly rational agents should be able to work their way back, and [A] should never happen. I think it will happen anyway. I want to be able to formalize that, though. So I'm trying to think about higher-order beliefs, and about how one might describe a situation in which [A] happens.
    • One threat to the idea of backward induction is that it requires "common knowledge of rationality" even where it has already been refuted by observed evidence. Suppose you and I are engaged in a repeated interaction with a single "rational" pattern of behavior — you know I will always choose B instead of A because we are both presumed to be "rational" and B is the only choice consistent with backward induction. Typically this last clause means that, if I were to choose A, I know that you would respond in a particular way because you're rational, and because you know how I would respond to that since I'm rational, and that whole chain of events would be worse for me than if I choose B. Having completed the analysis, we decide that if everyone is rational (and knows that everyone else is, etc.), I should choose B. But then, if I do choose A, you should presumably infer that I'm not rational — or that I'm not sure you're rational, or not sure you're sure I'm rational, or something. But this seems to blow a hole in the entire foregoing strategic analysis. Now, if this is really the only problem with backward induction, then if nobody acts more than once, we could still get backward induction; if you do the wrong thing, well, that's weird, but maybe it's still reasonable for me to expect everyone whose actions I still have to worry about to be rational. Or maybe it isn't; in any case, I doubt human subjects in such a situation would reliably perform backward induction. It might be interesting to check some day, though, if it happens to fit conveniently into something else.
    • While I'm thinking about which systems of beliefs I consider "reasonable", I should probably look at self-confirming equilibrium; this is the idea that "beliefs" must be consistent with what players actually observe along the path of play (though they may be wrong off the path), which would at least constrain how an initial set of beliefs could affect behavior.
    • That might be more compelling if I try to think about normal form. This is kind of an old idea of mine that I've not pursued much, in part because it's not very interesting with rational agents using Bayesian updating, but there was a remark at one of the sessions yesterday that the difference between a "dynamic game" and a "normal-form game" is that in the former one can learn along the way. If you "normalize" the dynamic game and "learn" by Bayesian updating, it turns out that, well, no, there really isn't a difference; if you start with a given set of beliefs in either the dynamic game or its normal form and work out the normal form proper* equilibria or the sequential equilibria of the dynamic game, they're the same. If learning isn't Bayesian, though, then, depending on what it is, different "dynamic" games with the same normal form might result in different outcomes. How this looks in normal form might be interesting, and might be expressible in terms of restrictions on higher order beliefs.
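
Since I keep invoking correlated equilibrium (first bullet above), here is a minimal sketch, for my own reference, of what the object is computationally: a linear program over distributions on action profiles, with one "obedience" constraint per player, recommended action, and deviation. The game (a standard 2x2 Chicken) and the payoff numbers are arbitrary choices of mine for illustration, nothing from the paper or the session; the code assumes numpy and scipy are available.

```python
import numpy as np
from scipy.optimize import linprog

# u[i, a0, a1] = payoff to player i when player 0 plays a0 and player 1 plays a1.
# Actions: 0 = Dare, 1 = Swerve.  Textbook Chicken payoffs, chosen arbitrarily.
u = np.array([
    [[0, 7], [2, 6]],   # player 0
    [[0, 2], [7, 6]],   # player 1
])

def idx(a0, a1):
    """Index of p(a0, a1) in the flattened probability vector."""
    return 2 * a0 + a1

# Obedience constraints: for each player, recommended action a, and deviation
# a_alt, following the recommendation must be weakly better than deviating:
#   sum_b p(a, b) * [u_i(a, b) - u_i(a_alt, b)] >= 0   (and symmetrically for player 1).
# linprog wants A_ub @ x <= b_ub, so each constraint is negated.
A_ub, b_ub = [], []
for a in range(2):
    for a_alt in range(2):
        if a_alt == a:
            continue
        row0 = np.zeros(4)
        row1 = np.zeros(4)
        for b in range(2):
            row0[idx(a, b)] = u[0, a, b] - u[0, a_alt, b]   # player 0 is told a
            row1[idx(b, a)] = u[1, b, a] - u[1, b, a_alt]   # player 1 is told a
        A_ub += [-row0, -row1]
        b_ub += [0.0, 0.0]

# Probabilities sum to one; maximize total expected payoff (linprog minimizes).
total = np.array([u[0, a0, a1] + u[1, a0, a1] for a0 in range(2) for a1 in range(2)])
res = linprog(-total, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=[np.ones(4)], b_eq=[1.0], bounds=[(0, 1)] * 4)
print(res.x.reshape(2, 2))   # the correlated-equilibrium distribution over (a0, a1)
```

Maximizing total payoff is just one convenient objective; any linear objective over the same feasible set picks out some correlated equilibrium, since the constraints themselves define the set of correlated equilibria.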
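
And a toy version of the overfitting/regularization analogy from the second bullet: fit a high-degree polynomial to a small sample with and without an L2 ("ridge") penalty, then compare how each fit does on data it hasn't seen. The "true" function, the noise level, the polynomial degree, and the penalty weight are all arbitrary choices for illustration; only numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Draw n observations from an arbitrary 'environment' plus noise."""
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.2, n)
    return x, y

def design(x, degree=9):
    return np.vander(x, degree + 1)   # polynomial features

def fit(x, y, ridge=0.0, degree=9):
    """Least squares with an optional L2 penalty (ridge=0 is the unregularized fit)."""
    X = design(x, degree)
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def mse(coef, x, y, degree=9):
    return np.mean((design(x, degree) @ coef - y) ** 2)

x_train, y_train = sample(15)    # the "experiences" the agent adapts to
x_test, y_test = sample(500)     # the more general situation

for ridge in (0.0, 1e-2):
    coef = fit(x_train, y_train, ridge)
    print(f"ridge={ridge:g}: train MSE {mse(coef, x_train, y_train):.3f}, "
          f"test MSE {mse(coef, x_test, y_test):.3f}")
```

The point is only qualitative: the penalty makes the fitted model less sensitive to which particular sample it saw, which is the sense of "robustness" I have in mind above.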
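
Finally, the backward-induction sketch referred to in the last bullet: a few lines that solve a small centipede-style game by working backward from the last decision node. The payoff numbers are generic textbook ones, not the design of my experiment, and the tie-breaking rule (take when indifferent) is an arbitrary choice.

```python
def backward_induction(num_moves=6):
    """Solve a small centipede game by backward induction.

    At node t (0-indexed), player t % 2 moves.  Taking ends the game with the
    mover getting t + 2 and the other player getting t; if every node is
    passed, both players get num_moves.
    """
    def payoffs_if_take(t):
        mover_payoff, other_payoff = t + 2, t
        return (mover_payoff, other_payoff) if t % 2 == 0 else (other_payoff, mover_payoff)

    continuation = (num_moves, num_moves)   # payoffs if everyone passes to the end
    plan = []
    for t in reversed(range(num_moves)):    # work backward from the last node
        mover = t % 2
        take = payoffs_if_take(t)
        action = "take" if take[mover] >= continuation[mover] else "pass"
        if action == "take":
            continuation = take             # value of reaching node t
        plan.append((t, action, continuation))
    return list(reversed(plan))


for t, action, value in backward_induction():
    print(f"node {t}: player {t % 2} chooses {action}; "
          f"payoffs if play reaches this node: {value}")
# Prints "take" at every node: the first mover already takes, even though
# mutual passing would leave both players better off.
```

The computation is trivial for a machine; the question in the lab is whether human subjects actually carry out anything like it.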



* I think. To get the backward induction right, you need some refinement of Nash equilibrium that at least eliminates equilibria with (weakly) dominated strategies, and I think Myerson's "proper" equilibrium is at least close to the right concept. Actually, there's a paper by Mertens from 1995 that I should probably think about harder. In any case, I think there's a straightforward normal-form equilibrium concept that will encapsulate any "learning" that goes on, and that's really my point.
