Friday, December 4, 2015

market safety

It is moderately well-known (Arrow's impossibility theorem is better known, but the Gibbard-Satterthwaite theorem is probably more apposite) that there is no ideal way to aggregate preferences into a jointly optimal outcome, so we are left trading off different features when we design systems for coordinating group decisions, such as voting systems and market systems.  One real-world criterion that isn't even part of the impossibility results is "simplicity", partly because it is hard to define formally; still, people process information in ways that work better when they find a system simple and intuitive than when they don't.  One practical consequence is that the revelation principle, useful as it is for understanding theoretical constraints, cannot really be put into practice.  The revelation principle says that the best possible aggregation system is equivalent to some "strategy-proof" direct mechanism, in which each agent reports all of its private information and the mechanism makes it incentive-compatible to do so; in practice, though, even developing the information to report is too complex for realistic agents, and the resulting direct mechanism is often unintuitive to laymen in precisely the ways needed to satisfy the incentive constraints.

A good example, perhaps, is the Myerson-Satterthwaite (same Satterthwaite) result for two agents trying to trade an object.  One of them owns it and values it at between $10 and $20, and the other also values it at between $10 and $20.  As far as I (the mechanism designer) and the buyer know, the seller's value is uniformly distributed in that range; as far as the seller and I know, the buyer's value is uniformly distributed in that range; but the buyer and seller each know their own valuations.  How do I design an "efficient" mechanism, determining, as a function of the private values, whether and at what price the buyer buys the object?  "Efficiency" here is measured simply as the private value of the agent who ends up owning the object, and I would like to give the object to whichever agent values it more; but because the price at which it trades would have to be a generally increasing function of the reported values, the buyer will tend to understate the value (and the seller to overstate it) unless doing so substantially reduces the probability of a profitable trade.  Myerson and Satterthwaite derive a fairly generally applicable rule, even when the distributions aren't uniform, and it is a bit complicated, though also illuminating; what is relevant for my purposes now, though, is that with uniform distributions it turns out to be equivalent to the nontrivial Bayes-Nash equilibrium of the mechanism "each side states a price and, if the buyer's price is higher, trade at the midpoint" (the Chatterjee-Samuelson double auction).  It is not the case, in this latter mechanism, that each agent's stated price equals the private value (the buyer will certainly shade low and the seller will shade high), but strategically sophisticated traders will trade in exactly the same circumstances as in the direct mechanism, and at the same prices.
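This equilibrium is easy to exhibit concretely.  The sketch below uses the standard linear equilibrium strategies for the split-the-difference double auction, rescaled from the unit interval to the $10–$20 range; the particular numeric values fed in at the end are just illustrations.  In this equilibrium trade happens exactly when the buyer's value exceeds the seller's by at least $2.50 (a quarter of the range), so some mutually profitable trades are missed, which is the unavoidable inefficiency the theorem describes.

```python
# Linear Bayes-Nash equilibrium of the "each side states a price; if the
# buyer's is higher, trade at the midpoint" mechanism, with both values
# uniform on [10, 20].  These are the standard Chatterjee-Samuelson
# linear strategies, rescaled from the unit interval.

def buyer_bid(v):
    # Buyer shades low: bid is 2/3 of value plus a constant.
    return (2.0 / 3.0) * v + 25.0 / 6.0

def seller_ask(c):
    # Seller shades high: ask is 2/3 of value plus a larger constant.
    return (2.0 / 3.0) * c + 35.0 / 6.0

def outcome(v, c):
    """Return (traded, price); trade at the midpoint when bid >= ask."""
    b, s = buyer_bid(v), seller_ask(c)
    if b >= s:
        return True, (b + s) / 2.0
    return False, None

# Trade occurs iff v >= c + 2.5: gains from trade smaller than a quarter
# of the range are left on the table.
for v in [12.0, 15.0, 18.0, 20.0]:
    for c in [10.0, 13.0, 16.0]:
        traded, _ = outcome(v, c)
        assert traded == (v >= c + 2.5)
```

Note, for instance, that a buyer with the highest possible value ($20) still bids only $17.50, and a seller with the lowest possible value ($10) still asks $12.50.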

Realistic agents may not be strategically sophisticated, but it is hard to tell which direction that cuts; there are human-subject experiments (Kagel, Harstad, and Levin (1987): "Information Impact and Allocation Rules in Auctions with Affiliated Private Values: A Laboratory Study," Econometrica, 55(6): 1275–1304) in which subjects seem to find it harder simply to report their own value, even when it has been handed to them, than to do the shading they are used to doing in small bilateral trade situations.  And that is when they have been given their value; in the real world, agents asking themselves "how much is this worth to me?" are surely even less likely to come up with the right number.  They aren't used to this task; they are used to deciding (at a supermarket) whether they are willing to trade at a given price, or (at a bazaar, e.g. an arts or crafts fair) to making a conservative bid.  In many of these situations one side or the other may gain an advantage from being better informed or more strategically sophisticated, but the gains tend to be small and do not too badly impair the interests of those toward the low end in information or sophistication.

Some simple mechanisms, though, do not have this property.  I have noted that my biggest problem with the Borda count is not that the best strategy is something other than listing candidates in order of preference (just as I don't think you're "lying" if you offer $10 for an item for which you would willingly pay $20); it is that, near equilibrium, even if all of the agents are unrealistically well-informed about strategic information, a candidate whose voters are somewhat better informed than the others will generally win, essentially without regard to that candidate's popularity.  Systems like approval voting may require some strategic awareness, but once most agents are somewhat aware of other agents' preferences, being much more knowledgeable than the others helps only in exceptional circumstances.  Often it is, in fact, reasonable to expect the agents to be somewhat more aware of each others' preferences than the mechanism designer is, or can reasonably take into account; for example, if there are three candidates, one of whom is the last choice of 90% of voters, the Condorcet winner is likely to win a first-past-the-post vote, while an informed mechanism designer might find it awkward to publicly and formally declare the irrelevant candidate irrelevant.  This is a situation in which the mechanism works, and does so in part by letting voters use strategic information that the designer cannot use in a more direct fashion.
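The Borda pathology is easy to exhibit in a few lines.  The electorate below is a made-up illustration, not data: with sincere rankings the majority favorite wins, but if the minority bloc alone is strategically informed and "buries" the front-runner below an irrelevant third candidate, the minority candidate takes the election while the majority's sincere voters are punished for their honesty.

```python
# Borda count: with n candidates, a ballot gives (n - 1) points to its
# first choice, (n - 2) to its second, ..., 0 to its last.
from collections import defaultdict

def borda_winner(ballots):
    """ballots: list of (voter_count, ranking) pairs, ranking best-to-worst.
    Returns (winner, score_dict)."""
    scores = defaultdict(int)
    for count, ranking in ballots:
        n = len(ranking)
        for place, candidate in enumerate(ranking):
            scores[candidate] += count * (n - 1 - place)
    return max(scores, key=scores.get), dict(scores)

# Sincere electorate: a 6-voter majority prefers A, a 5-voter minority
# prefers B, and everyone ranks C last.  (Hypothetical numbers.)
sincere = [(6, ["A", "B", "C"]), (5, ["B", "A", "C"])]
winner, scores = borda_winner(sincere)      # A: 17, B: 16, C: 0 -> A wins

# Only B's bloc is strategically informed, and buries A below C.
strategic = [(6, ["A", "B", "C"]), (5, ["B", "C", "A"])]
winner2, scores2 = borda_winner(strategic)  # A: 12, B: 16, C: 5 -> B wins
```

The minority candidate wins despite being the first choice of fewer voters, which is the sense in which the better-informed bloc prevails essentially without regard to popularity.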

What triggered this post, though, was the concept of "spoofing" in the financial markets, and whether or not spoofing is bad.  My first visceral response is that, if some agents are making inferences from the public bids and offers of other agents, it is on them if the information content of those bids and offers is other than what they think it is, even if that is so by the design of the people placing them.  Let the market seek its strategic equilibrium.  With markets, perhaps the best analysis is to ask whether the practice impedes the functions of the market (moving risky assets to their highest-value owners, with information discovery as part of that process), and that end may well be better served by a rule against spoofing that is nebulous around the edges but in practice usually not that hard to apply.  One other criterion to consider, though, is whether the strategic equilibrium the market would reach in the absence of such a rule is one in which agents would find it profitable to devote substantial resources to gaining strategic (as opposed to fundamental) information; in the voting context, I consider that one of the most important considerations in evaluating a system.
