Friday, May 18, 2012

Game Theory

This post will probably contain no original ideas, though I am rephrasing some ideas I'm coming across.

Van Damme (1989)

There is a continuum of Nash equilibria of this game in which player 1 plays U, in all of which player 2 plays C with positive probability; there are no equilibria in which player 1 plays D. With common knowledge of (near) rationality, then, it would seem that player 1 should never be expected to play D; and once the possibility of D is excluded, C is weakly dominated. Van Damme thus argues that (U,L) is the only "reasonable" equilibrium.

It occurs to me that the other example I wanted to present requires extensive form, and I don't feel like drawing game trees.  I'll note that Mertens (1995), in Games and Economic Behavior 8:378–388, constructed two game trees with unique subgame-perfect equilibria and the same reduced normal form such that the equilibria of the two games are different.  Elmes and Reny had a 1994 JET paper, improving on a classic result of Thompson's, that proves that any two extensive-form games with the same reduced normal form can be converted into each other by means of three "inessential" transformations that should leave the strategic considerations unaffected.  At some point I'd like to reconcile these results for myself.

While I'm here, I will write down one more normal form, not entirely unrelated to the one above:
When deciding between L and C, player 2 can decide as though player 1 were never playing D; if player 1 does play D, the choice between L and C doesn't matter. Similarly, player 1 can choose between U and M as though player 2 were definitely playing L or C. In the game in which choices R and D are disallowed, there's obviously a single Nash equilibrium; thus we should reasonably expect that player 1 will play only U and M, with equal probability, and that player 2 will play only L and C, with equal probability. The game then reduces to
which has a single equilibrium in undominated strategies.

Thursday, May 10, 2012

Lindahl pricing and collective action problems

A "public good" in economics is, as the definition has it, non-rival and non-excludable.  Ignore the "excludable" bit; the idea of non-rivalry is that certain economic products -- a park, perhaps, or an uncongested road -- can be useful to me even if you're using them at the same time.  Information is the only example of a very cleanly non-rival good that I can think of; as the qualifier "uncongested" suggests, a lot of goods are non-rival in certain situations but take on a more rival character in others.  Still, a twenty-acre park might be worth more to me than a ten-acre park even if the ten-acre park isn't at all crowded, and a lot of goods are essentially non-rival a lot of the time.

The question, then, becomes how to pay for them, and how much of them to produce.  If you want to eat an apple, that prevents anyone else from eating the apple, so it's straightforward to say that whoever eats the apple pays for it, and that the number of apples produced is the number for which people are willing to pay the costs.  For non-rival goods, though, charging everyone the same amount is likely to be inefficient if different people place different values on the good.  What the Swedish economist Erik Lindahl pointed out is that if you know, for example, that one person values parkland three times as much as another person, and you make the first person pay three times as large a share of the cost of the park, they will agree on how large to make the park.  If the relevant parkland is going to cost $5,000 per acre, of which Alice pays $15 and Bob pays $5, then if 15 acres is the point at which an extra acre is worth an extra $5 to Bob, it is also the point at which an extra acre is worth an extra $15 to Alice, and that is the point where they both say, "make it this large, and no larger."  Further, if the other $4,980 per acre is being raised in the same manner, everyone else agrees as well; 15 acres is in fact Pareto efficient.
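A minimal numeric sketch of this agreement, using hypothetical linear marginal-benefit curves chosen to reproduce the numbers above (at 15 acres, an extra acre is worth $15 to Alice and $5 to Bob):

```python
def lindahl_demand(intercept, slope, cost_share):
    """Acres demanded when marginal benefit is (intercept - slope * q)
    dollars per acre and the resident's per-acre cost share is
    cost_share: solve intercept - slope * q = cost_share for q."""
    return (intercept - cost_share) / slope

# Hypothetical curves: Alice's marginal benefit is 30 - q dollars/acre,
# Bob's is 10 - q/3; their $20/acre joint share is split 3:1.
alice_acres = lindahl_demand(30.0, 1.0, 15.0)       # 15.0 acres
bob_acres = lindahl_demand(10.0, 1.0 / 3.0, 5.0)    # 15.0 acres
```

Because the shares are proportional to marginal values, both demands coincide at 15 acres; split the same $20 evenly instead ($10 each) and Alice would ask for 20 acres while Bob would ask for none.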

On some level, though, we've simply moved the free-rider problem: if you directly ask your subjects how much they value extra parkland, they have an incentive to low-ball the estimate, so that they can enjoy the benefit of the new park without paying as much for it themselves.  If the municipal authority knew how much Alice and Bob valued parkland in the first place, it wouldn't have to ask how much park to build; perhaps it's easier to estimate a ratio, but an information problem is still central to this situation.  Some clever theorists have constructed mechanisms that, essentially, ask Bob how much Alice values parkland and ask Alice how much Bob values it; in some situations the citizens may know more about each other's values than the city manager does, but these mechanisms tend to be incomplete at best and usually fatally impractical.
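The low-ball incentive is easy to see numerically. A toy sketch, reusing the hypothetical curves from before (Alice's true marginal benefit is 30 - q dollars per acre, Bob's is 10 - q/3, and together they cover $20 of the per-acre cost): suppose a naive planner builds the park to the size where reported marginal benefits just cover the cost, and charges each resident their reported marginal value per acre.

```python
def alice_surplus(understatement):
    """Alice reports a marginal-benefit intercept lowered by
    `understatement`; Bob reports truthfully. The planner picks q where
    reported marginal benefits sum to the $20/acre cost, then charges
    Alice her *reported* marginal value per acre at that size."""
    d = understatement
    # (30 - d - q) + (10 - q/3) = 20  =>  q = (20 - d) * 3/4
    q = (20.0 - d) * 0.75
    payment = (30.0 - d - q) * q            # Alice's total bill
    true_benefit = 30.0 * q - q * q / 2.0   # integral of (30 - x) dx
    return true_benefit - payment           # Alice's net surplus

honest = alice_surplus(0.0)  # 15-acre park, surplus 112.5
shaded = alice_surplus(4.0)  # 12-acre park, surplus 120.0
```

Shading her report shrinks the park from 15 acres to 12, but it cuts Alice's bill by more than it cuts her benefit; this is precisely the incentive that the demand-revelation mechanisms alluded to above try to engineer away.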

Now, a Lindahl mechanism does at least create some incentive for stating a higher value for the park than simply funding it through voluntary donations would: if I state a higher value, it becomes more likely that other people will go along with voting to increase the size of the park.  In particular, suppose that 40% of the neighborhood has a particular fondness for parkland and has solved its own collective action problems.  If the 40% announce, "we're willing to pay twice as much per person (collectively) as everyone else" before there's a vote on the size of the park, then the other 60% are more likely to support a larger park, and everyone wins.  The magic step is that the 40% has to solve its own problem first.  I'm trying to figure out whether there's a compelling way to apply the same sort of solution to that problem: 20% of the population says to another 20%, "we'll publicly agree to pay 2.5 times our pro-rata share if you'll agree to pay 1.5 times yours," cascading down to the level of families or tight social groups that might have internal dynamics for solving collective action problems.  It feels like there might be something useful here.
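The arithmetic of the cascade can be checked directly. A minimal sketch, with pledges normalized so that an equal pro-rata split is 1.0 per person (the group sizes and multipliers are the hypothetical ones from the example above):

```python
# Fractions of the population and their pledged multiples of the
# equal pro-rata share.
groups = {
    "core 20%":  (0.20, 2.5),   # pledges 2.5x its pro-rata share
    "next 20%":  (0.20, 1.5),   # pledges 1.5x its pro-rata share
    "other 60%": (0.60, 1.0),   # pays a plain pro-rata share
}

# Per-person average inside the 40% coalition: the two subgroup
# pledges should average out to the "twice as much per person as
# everyone else" the coalition promised collectively.
coalition_avg = (0.20 * 2.5 + 0.20 * 1.5) / 0.40

# Revenue relative to a flat per-person levy: the pledges raise 1.4x
# the cost of a flat split, so either the park gets bigger or every
# rate can be scaled down by 1/1.4, with the unaffiliated 60% paying
# less than they would have under an equal split.
relative_revenue = sum(f * m for f, m in groups.values())
```

The check is just that each layer of the cascade honors the promise made by the layer above it on average, which is what lets a small group that has solved its own collective action problem bootstrap a larger one.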