Tuesday, November 5, 2013

personality types and household income

First, this post is an inexcusable waste of my time and probably a waste of yours, even more than is usual for this blog.  That said, I found this post on the household incomes of Myers-Briggs "personality types" interesting, but wanted to present the results differently.

Here's a linear regression that uses 5 parameters (6 including an intercept) to capture more than 5/6 of the variance while using up less than half of the degrees of freedom. This should be viewed as a descriptive dimension reduction, i.e. an attempt to describe the data well with reasonably few parameters; when I'm doing something like inference or forecasting I have a stronger prejudice toward using fewer parameters and preserving more degrees of freedom (or at least regularizing the fit somehow), but this is sufficiently pseudoscientific that I feel I might as well include the interaction terms that seem to pull their weight:

term       coef.      std. err.  t-stat
intercept  63437.5     1489.285   42.60
E           1375.0     2183.031    0.63
F          -1875.0     2183.031   -0.86
J           6100.0     1815.196    3.36
EJ          7450.0     2944.699    2.53
FJ         -5450.0     2944.699   -1.85

To try to get useful data reduction, I have refused to use any cubic (or higher) interaction term; including the first- and second-order terms that I've left out only explains 20% of the variance that is left in this fit, so I feel the clarity of parsimony here is more valuable than a slightly better fit. The first thing to note is that I've dropped the S/N axis entirely; it doesn't do much. Also, the P types have very little variance (a standard deviation of $2700 versus $6100 for the J types); they're largely in the low $60,000s.
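The dummy-variable design behind this fit can be sketched as follows. The per-type household incomes come from the linked post and aren't reproduced here, so as a stand-in this generates them exactly from the fitted coefficients above and checks that ordinary least squares on the E/F/J design recovers those coefficients; all names are mine.

```python
import numpy as np
from itertools import product

names = ["intercept", "E", "F", "J", "EJ", "FJ"]
beta_fit = np.array([63437.5, 1375.0, -1875.0, 6100.0, 7450.0, -5450.0])

types = ["".join(t) for t in product("EI", "NS", "TF", "JP")]  # the 16 MBTI types

def design_row(t):
    # Dummies: E vs I, F vs T, J vs P, plus the EJ and FJ interactions.
    E, F, J = float(t[0] == "E"), float(t[2] == "F"), float(t[3] == "J")
    return [1.0, E, F, J, E * J, F * J]

X = np.array([design_row(t) for t in types])  # 16 x 6 design matrix
y = X @ beta_fit                              # stand-in per-type incomes
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS recovers the coefficients
print(dict(zip(names, np.round(beta, 1))))
```

Since the stand-in incomes are generated from the model itself, the recovery is exact; with the real per-type data the residual would be the 1/6 of the variance the fit leaves unexplained.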

The J types are a bit more interesting. Their mean is very close to $70,000, but EJs make about $9000 a year more than IJs, and TJs make about $7000 more than FJs. The IFJs are right in the middle of the P's; it's the other J types that do well.

The linked article suggests some problems with this; some of the things it raises as problems don't really bother me, but it doesn't mention that these are household incomes, which means that you're conflating income effects, family size effects, and effects from the affinities of people of one personality type for spouses of particular other types.

Monday, October 21, 2013

market models

In freshman-level economics, we tend to talk about "markets" as though all buyers and sellers costlessly come together and become aware of the price at which they can buy and sell, often with that price simply magically obeying the rule that the amount buyers wish to buy at that price will coincide with the amount that sellers wish to sell.  Even when we start talking about models of oligopoly, even at the upper-level undergraduate level, we tend to think about markets where there is a large set of buyers who observe the same prices[1] and more or less simply choose the cheapest product available.  The two dominant models in much of economics seem to be something like a Walrasian auction or a continuous-double-auction with competition on both sides, and something in which sellers post prices publicly and buyers take them as given and respond.

In some countries there are large "bazaar" economies with a lot of individual haggling over prices, but they often seem foreign to Americans; there isn't a lot of "bilateral monopoly" haggling in the US aside from the markets for houses and cars.  These are large purchases of goods in which there is a certain amount of differentiation.  Also, that is really only applicable to consumer purchases of goods. A lot of the market for intermediate goods involves procurement divisions of large companies putting out solicitations for bids, or at least trying to negotiate down the per-unit cost of large purchases from vendors.

I think perhaps this hasn't received much attention from economists, at least not from the ones I've read. My simple go-to mental model of firms involves their trying to generate enough gross profit (i.e. revenues minus the direct cost of selling an item) to cover their fixed costs (basically any cost that isn't "direct"). In a context of two firms negotiating over what amounts to gains-from-trade, the firms are ultimately bargaining over which of them gets to apply what portion of the gains to its own overhead costs and/or final profit.

This post, more even than most of the things I post here, is mostly a reminder for me to come back to if I find myself casting about for a research idea someday.  I think there's something in here, but I'm not sure what.

[1] There is some discussion of "price discrimination", but even that is generally done on a large, statistical basis, with buyers still acting as "price-takers".

Friday, September 20, 2013

voting with markets

I noted before that one of the problems with some voting systems is their sensitivity to beliefs; in particular, if Borda count is used to elect a single winner, most equilibria have almost all candidates receiving the same number of votes, and if some block of voters believes it knows which candidate is likely to have slightly more votes than others, the vote results will swing dramatically.

Plurality voting, in fact, will tend to have multiple equilibria — if you're not used to game theory, it's worth remembering that "equilibrium" in this context essentially means "self-fulfilling beliefs". In this context, if everyone believes a candidate has no chance of winning, nobody will "waste their vote" on that candidate, and so the candidate has no chance of winning, just as was predicted. In the case of approval voting, giving that candidate a vote, in addition to whichever other candidate(s) might have received your vote, costs you nothing, and, if the candidate is in fact fairly popular, the "equilibrium" in which that candidate was not viable is destroyed; it is no longer self-fulfilling.

The fragility problem, then, is whether almost-equilibrium beliefs result in almost-equilibrium behavior. Approval voting (generally) has equilibria that are fairly stable, to the point where I think it's frequently reasonable to imagine that voters would know what the final vote totals will look like with enough precision to behave in a way that produces the nearby (and typically in some sense optimal) equilibrium.  I have previously suggested that, in other contexts, that information can be made public through a continuous voting mechanism, provided it is feasible to allow people to vote and then later change those votes.

In some contexts it may not be feasible, and I propose the following mechanism, making explicit (as I often don't, though it should always be implicit) that this blog is a brain-stormy sort of place, and I'm not going to apologize for a refusal to defend this proposal in some other venue in the future.[1] So long as some reasonably sized set of agents is able to participate on a continuous basis ahead of time, it only requires that all of the voters participate once, at the end:
  1. Create a "prediction market" in which participants trade contracts, one for each candidate, that provide a fixed payout after the election if that candidate finishes first or second in vote totals.
  2. Some reasonably manipulation-proof means is used to construct a final indicative price p_i for each candidate very shortly before the vote is taken.
  3. Each voter casts a ballot consisting of one real number u_i for each candidate.
  4. w_i = p_i(u_i − ∑_j (p_j/2) u_j) and then v_i = w_i/√(∑_j w_j²) are calculated for each voter.
  5. For each candidate, add up the votes v_i across all voters.
  6. Pay out the contracts, coronate the winner.
One probably wants to put a floor on p_i — indeed, you might simply have a market rule that doesn't allow orders that would sell below a certain floor — but the general idea here is that the market figures out who the viable candidates are, and the voters vote, but the voters may cast a lot of votes for (or against) the relatively non-viable candidates without losing many votes to cast for (or against) the viable ones.[2]
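The ballot-to-vote computation in steps 3 through 5 can be sketched as follows (a minimal sketch: the function names are mine, and the example prices are invented; prices of the two viable candidates sit near 1 because each contract pays off if its candidate finishes in the top two).

```python
import numpy as np

def normalized_votes(p, u):
    """Turn one voter's ballot u (one real number per candidate) into the
    unit vote vector v, given the indicative prices p (steps 3 and 4)."""
    p, u = np.asarray(p, float), np.asarray(u, float)
    w = p * (u - np.dot(p / 2.0, u))    # w_i = p_i (u_i - sum_j (p_j/2) u_j)
    return w / np.sqrt(np.sum(w ** 2))  # v_i = w_i / sqrt(sum_j w_j^2)

def tally(p, ballots):
    """Step 5: add up each candidate's votes across all voters."""
    return sum(normalized_votes(p, u) for u in ballots)

# Two viable candidates and one long shot:
p = [0.95, 0.95, 0.10]
plain = normalized_votes(p, [1.0, -1.0, 0.0])
loud  = normalized_votes(p, [1.0, -1.0, 1.0])  # also backs the long shot
```

Because w_i scales with p_i, the `loud` ballot's support for the cheap long shot consumes only a small part of the voter's unit budget in the viable race.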

It's possible to recast this somewhat by effectively asking the voters for a_i with a constraint on ∑_i (a_i p_i)² and then adding up a_i/p_i; this is essentially v_i if voters are optimizing in the way that seems right to me and believe that the p_i are their best bet as to the probabilities of "viability". This presentation is perhaps intuitively clearer, in that you can see directly that voting for or against a candidate with a small value of p_i is "cheaper" than for a candidate with a large p_i. I prefer it the way I've given it in the numbered list, though, because I would prefer that the p_i encapsulate the necessary strategic information without voters ever having to take account of it. As I've presented it, provided that a voter's best assessment of the viabilities is the same as that reflected in the market prices, and provided there are enough voters that one voter (or perhaps a small conspiracy of voters) won't appreciably affect the "correct" values of p_i, the voter's best strategy doesn't depend on knowing the values of p_i; it only depends on the voter's preferences.

[1] As is often the case on this blog, one of the advantages of this proposal is that its drawbacks are fairly obvious; it fixes drawbacks that are important but hidden in other mechanisms. One point perhaps worth noting is that naive fixes can destroy important features of the solution as presented; if you find a context in which the relevant interests will allow data from a prediction market to be an input into the voting system, and you're worried about manipulation, I would particularly note that simply saying that an election will be re-run if a "nonviable" candidate wins is as likely to introduce bad behavior (by people who want to redo the election) as it is to eliminate it.
[2] Putting too high a floor on p_i would make it more costly to vote for or against such a candidate, and could vitiate this feature. The result that I would like to avoid is that one of the "viable" candidates gets a lot of negative votes, most of your "nonviable" candidates get nearly zero votes, and thus your "nonviable" candidates beat the "viable" ones; in equilibrium, presumably the market participants would take account of this possibility, and would start to bid down all but perhaps one candidate — the presumptive winner — and bid up everyone else, in which case we're back in the situation where voters suffer from a great deal of strategic uncertainty.

Wednesday, September 11, 2013

more on voting

There's not, I think, a big new idea here, but a somewhat different presentation of thoughts I've had on voting systems, culminating in a point that I perhaps haven't made in this form before.

Game theory in particular has a long history of focusing on "equilibria" — that is, if everyone has correct beliefs about what everyone else is going to do, everyone will act so as to make the equilibrium come about. "Equilibrium" is essentially a self-fulfilling prophecy. In the context of plurality voting, if groups of voters have preferences over 3 candidates of A>B>C, C>B>A, B>A>C, and B>C>A, with the latter two groups somewhat smaller than the former two groups, then B is the "Condorcet" winner, but if B is expected to lose, B will receive no votes, as a result of which B will lose.  This is the classic failure of plurality voting.
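The pairwise-majority logic of that example can be checked mechanically. Here is a sketch; the group sizes are my own illustrative choice, since the text only says that the two B-first groups are somewhat smaller.

```python
def condorcet_winner(groups):
    """Return the candidate who beats every other candidate in pairwise
    majority votes, or None if no such candidate exists."""
    candidates = {c for ranking in groups for c in ranking}
    def beats(x, y):
        # Net margin of voters ranking x above y; positive means x wins.
        margin = sum(n if r.index(x) < r.index(y) else -n
                     for r, n in groups.items())
        return margin > 0
    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None

# The four preference groups from the example, B-first groups smaller:
groups = {("A", "B", "C"): 35, ("C", "B", "A"): 35,
          ("B", "A", "C"): 15, ("B", "C", "A"): 15}
print(condorcet_winner(groups))  # → B
```

B beats A 65 to 35 and beats C 65 to 35, yet under plurality voting with these beliefs B receives no votes at all.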

Now, a lot of discussion of voting systems does not involve equilibrium, and the most common game theoretic solution concept — Nash equilibrium — supposes that every voter knows literally the final vote count before it happens, which seems a bit excessive. Much discussion of voting systems supposes almost the opposite — that voters aren't strategic at all, or if they do respond to beliefs about others' likely behavior, do so rather crudely. In particular, some voting systems (plurality among them) are criticized for being subject to "manipulation", which means that, rather than vote for their favorite candidate, voters have an incentive to vote for someone else — perhaps someone with a better chance of winning. A related property of these systems, and a different form for what on some level is the same complaint, is that if voters don't vote strategically, then the outcome doesn't reflect the voters' will.

The inclination of the game theorist is not to care about this. If we've created a system in which you may vote for one of any five candidates, and the winner will be the one with the most votes, a voter whose favorite candidate is Williams may decide that his interests are best served by voting for Johnson. To call the vote for Johnson "strategic" is ultimately to put a layer of interpretation on the action that is not implied by the rules of the voting system itself; a vote for Johnson does not, as we've just established, mean that Johnson is the voter's preferred candidate; it implies that the voter's preferences and beliefs about others' votes are in some set that may have less intuitive appeal. It may well be that this information is more valuable than simply who the voter's favorite candidate is, however, in terms of deciding who should be the winner; the complaint seems more an indictment of the interpretation of the vote than of the vote or the voting system itself.  It is worth noting, in fact, that in many contexts it is likely that, in each election, most voters will have much more information about other voters' preferences than whatever committee initially establishes the voting system has at the time the system is being established; a good system might in fact try to leverage voters' information about each other's preferences to help suss out the most preferred candidate.

Now, strategic voting can become a bigger concern if the outcome of the election will turn out to hinge sensitively on how much strategic information each voter has, and how hard that information is to get.  In particular, if voters find themselves doing more research on which candidates are likely to get votes than on which candidates would make better choices of winner, a lot of effort is being directed in socially useless directions.  Especially in some of the contexts I've suggested — namely where repeated, perhaps almost continuous voting (with totals announced) is practical — the relevant kinds of information may not be hard to gather, in which case the strategic element is fairly costless.  In this context Nash equilibrium and its close relatives become more plausible equilibrium concepts.

And here is ultimately where I'm going with this: the real attraction of approval voting over plurality voting is that it tends to have fewer "bad" equilibria.  Generally the equilibrium that is likely to obtain from approval voting will also be an equilibrium of plurality voting; the real benefit of the former over the latter is that approval voting is less likely to have other equilibria, and when plurality voting has other equilibria, they will tend to be "bad" in a fairly objective sense.  Some proponents of approval voting will note that, in the preferred equilibrium, voters get to better "express" their preferences, voting for candidates who aren't really in the running; that doesn't really interest me except as it's related to the forces that tend to destabilize the bad equilibria.

This property as given is actually fairly robust; in certain contexts, just about any voting system in which the number of different kinds of vote a voter can cast grows "quickly" with the number of candidates will have the same, small set of equilibria, with no bad equilibria, at least in a generic setting.  With plurality voting, when there are n candidates, each voter has only n+1 choices of ballot (i.e. vote for any of the candidates or none of them).  Both approval voting and Borda count allow voters to be much more expressive, and therefore eliminate the bad equilibria.
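The growth rates here are easy to make concrete: plurality allows n+1 distinct ballots, approval allows any subset of candidates (2^n ballots), and Borda count, taken as a full strict ranking, allows n! ballots. A trivial sketch:

```python
from math import factorial

def ballot_counts(n):
    """Number of distinct ballots a voter can cast with n candidates."""
    return {"plurality": n + 1,        # one candidate, or abstain
            "approval": 2 ** n,        # any subset of the candidates
            "borda": factorial(n)}     # any strict ranking of all of them

print(ballot_counts(5))
```

With five candidates that's 6 plurality ballots against 32 approval ballots and 120 Borda ballots, which is the sense in which the latter two are "much more expressive".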

The problem, finally, that I want to note with Borda count is that it tends to produce equilibria that are too fragile; if voters' beliefs are almost but not quite correct, the outcome of the election can depend very sensitively on what those beliefs are.  If there are 5 candidates, it should be reasonably clear that if three of the candidates are essentially out of the running, voters' incentives are to vote their preferred (of the other two) first and the other one last, with the three irrelevant candidates in the middle.  This itself will tend not to be equilibrium behavior; either one of the two "major" candidates is now finishing well behind one of the "irrelevant" candidates, or they are all very close to each other, and thus the irrelevant candidates become relevant.  The actual equilibrium will result in a very close race among many candidates, with the actual winner decided by few enough votes that voters who can best figure out the final outcome will garner outsized influence.  I don't care that, in equilibrium, people are casting a last place vote for a candidate they might deem more qualified than other candidates; that's simply part of the mechanism.  It does concern me, though, that the strategizing is so sensitive to information other than who is the best candidate.

Thursday, May 2, 2013

non-monetary transactions

My wife is currently looking for a job; inter alia, this involves having "references", viz. people willing to discuss her with potential employers. This takes time (if nothing else) from the people serving as references, and does not directly benefit them; this is the sort of situation in which some sort of compensation is typically paid from the beneficiary to the benefactor, but if that is done in the case of job references, the payment is non-pecuniary; it follows more of a reciprocal-favors paradigm.

One possible reason for this that just occurred to me this afternoon is that perhaps it is easier to agree on terms-of-trade in a reciprocal-favors paradigm than in the usual unit of account. In bilateral monopoly situations, the possibility of costly bargaining creates an incentive to find default terms on which to focus; for many favors, it might be difficult to figure out a default dollar-price, and therefore costly to bargain to an agreed-upon dollar price for the favor, whereas different favors may seem roughly comparable to each other; even if it's not clear, in dollars, how much each should be worth, it seems that the amounts should be about the same. The expense of bargaining is therefore considerably reduced by (implicitly) agreeing on an exchange of favors rather than trying to figure out how much money (or, for that matter, how many goats) a favor is worth.

Monday, March 25, 2013

fixed exchange rate regimes

I've been taking a history class that has involved reading about the classical gold standard and the various gold exchange standards that prevailed in the western world for about a century.  They all involve the idea that a country should peg the value of its currency to something else by promising to buy or sell that something else for that currency, possibly subject to additional stipulations.  These days, there are many currencies that are still pegged to other currencies, but I'm not aware of any currency that is actually pegged to something else; the relative prices of various currencies are regulated, but there's nothing external to the system of international currencies to which they are tied either directly or indirectly (for an appropriate narrow conception of "indirectly tied" here).

Even the most comprehensive guarantee of convertibility always included some stipulations; at the very least, to get the US Treasury to buy or sell gold for dollars, you had to get the dollars or the gold to the right location.  This meant that at most times and places there could be some deviation from the official rate.  At second order, while, under the gold standard, the exchange rate between dollars and British pounds was fixed (since you could use your dollars to buy gold which you could sell for pounds, or vice versa), there were always short-term deviations that were too small or too transient to pay for the cost of shipping gold across the Atlantic Ocean.  While the rate couldn't move too far from "mint parity", dollars and pounds weren't perfect substitutes; they were, however, much closer than most pairs of commodities studied in economics.

This relied, of course, on the credibility of the gold standard; when countries' gold stocks started to run low, the currency tended to depreciate a bit, in fear that the country, even if it wanted to, would no longer be able to sell gold at the official price, as it would run out of gold to sell.

A few years back there was a penny shortage — businesses were having trouble making sure that they always had enough pennies to make change for customers.  The classical economic solution to a shortage is an increase in price; in principle, one might have expected at least some businesses to offer 20 cents' credit for 19 pennies, for example.  I'm not aware of that having happened; they seem to have limited themselves to persuasion.  Even more than in the case of other goods, people have a strong sense of a "just price" for a penny, I think, and resist its being floated.  In a perfect market with continuously diminishing marginal utility, the relative marginal value different people attribute to different goods should be the same, so long as each has some that they could trade; if they didn't, the relative market price of the goods would allow at least one of the two people to trade the relatively expensive object for more of the relatively cheap object to get more of what they value than they give up.  It very much seems in this case as though the businesses' relative values for pennies exceeded their market prices, but, with fixed prices, they ran into quantity constraints.  If the businesses could have purchased 50 pennies for 50 cents from banks, of course, they would have done so in such circumstances; banks, however, including Federal Reserve banks, were running into quantity constraints as well.  The Federal Reserve System was, for a time, unable to defend the mint parity of the penny; I imagine that the expectation that it would ultimately be able to do so is the only reason the price stayed approximately at mint parity, and custom took over from there to keep it exact.

This idea of different denominations of physical currency representing different media of exchange that are held in a fixed relative exchange rate presumes that each penny (for example) is itself a perfect substitute for any other penny.  There have been historical cases in which different coins of nominally the same denomination were not treated as such, and indeed it was my very recent experience with this phenomenon that prompted this post.  Yesterday, in the Knoxville airport, I was given change that included a $5 bill that was stapled together; today I used that to pay for lunch.  The guy who received it expressed aloud his displeasure with it; I myself wasn't particularly pleased to receive it, but didn't try to do anything about it beyond passing it along in my next cash purchase.  The US Treasury regards it as identical to any other $5 bill, but I imagine it will change hands rather more rapidly than the average $5 bill does, at least until it goes back to the US Treasury's shredders.

Wednesday, February 20, 2013

non-Smith winner of approval voting equilibrium

Suppose I have 3 voters voting for 3 candidate outcomes.  Their preferences are A>C>B, B>C>A, and, since C stands for Condorcet, voter 3's first choice is C.  If preferences are strict and common knowledge, a voting scheme in which voters vote first between C and not C and then, in the event of not C, vote between A and B, will result in C's winning the first round, regardless of how you fill out 3's preferences.  If voter 3 is exactly indifferent between the bottom two choices, and 1 and 2 are approximately (in a von Neumann-Morgenstern sense) indifferent between their bottom two choices, then "not C" wins the first round; each of the first two voters is willing to take the chance that 3's coin comes up their way.  (They are hoping for opposite outcomes, obviously).

Now, in a post about half a year ago, I conjectured that, when there is a Condorcet winner, that candidate will (almost always) win in any approval voting equilibrium in the environment in which I generally think about voting systems: a lot of voters, maximizing expected utility, who know enough about everyone else to predict the vote outcomes with high relative precision but low absolute precision (i.e. if a candidate is getting about 1,000,000 votes, voters' best guesses will be off by more than 10 votes but less than 10,000 votes). The first paragraph scales up to a counter-example to that conjecture; if you have 1,000,000 voters, with about one third of them of each of the preferences given, and each voter believes that A and B have equal chances of winning but that C has a negligible chance of winning, then A and B each get (about) 500,000 votes and C gets 333,000.  There's another equilibrium in which the three practically tie, and each wins with a 1/3 probability.[1]  There's also an equilibrium that lies "in between" them, where C has just enough chance of winning to induce most C voters not to vote for their second choice, but a low enough chance that a few of them will vote for A and B, which thereby get slightly more votes than C; this equilibrium is unstable in the sense that a slightly different belief would lead to one of the other equilibrium outcomes instead. Both of the other equilibria are robust in this sense.
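The vote totals in the two robust equilibria can be tallied directly. This is a sketch under my own reading of the strategy profiles: I use 900,000 voters in three equal groups for round numbers, and in the first profile the C-first voters, exactly indifferent between A and B, split their second approval evenly between the two perceived front-runners.

```python
def approval_tally(ballots_with_counts):
    """Sum approval votes per candidate over blocs of identical ballots."""
    totals = {"A": 0, "B": 0, "C": 0}
    for approved, count in ballots_with_counts:
        for c in approved:
            totals[c] += count
    return totals

n = 300_000  # voters per preference group

# Equilibrium 1: C is believed nonviable, so C-first voters also approve
# one of the two leaders, splitting evenly between them.
profile_1 = [({"A"}, n), ({"B"}, n),
             ({"C", "A"}, n // 2), ({"C", "B"}, n // 2)]

# Equilibrium 2: the three-way near-tie, in which each voter (being nearly
# indifferent between their bottom two choices) approves only their favorite.
profile_2 = [({"A"}, n), ({"B"}, n), ({"C"}, n)]
```

The first profile gives A and B 450,000 votes each against C's 300,000, confirming the beliefs that C is out of the running; the second gives each candidate 300,000, so each wins with probability 1/3.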

From a normative standpoint, I have to say, I don't think these are bad equilibria, especially ex ante.  I probably want to think more about this in light of some of the nice properties of Condorcet winners, many of which properties are described in terms that suggest that preference order is the only thing about preference that matters[2] — which suggests, perhaps, that the Condorcet solution concept is best ex post, but that something like a continuous approval vote might give better results ex ante in some contexts.

[1] In both cases the marginal probability of winning is practically equal to the probability conditional on there being a close race, which is what really matters, since they're both basically tied. I use the term "probability" here for conciseness.
[2] Note that the von Neumann-Morgenstern utilities in the example given are essential; if the voters prefer their second choices to a coin toss between the other two — say, for definiteness, that half of the C voters prefer A to B and the other half prefer B to A — the construction in the first paragraph fails, and I'm pretty sure that now the only equilibrium is in fact the one in which every voter votes for two choices, giving C 1,000,000 votes and A and B 500,000 each.

Friday, February 1, 2013

random allocation mechanisms

I've been thinking again about a classic allocation problem: we have n agents, each of which has preferences over n items, and we wish to assign one of the items to each agent.  An allocation is ex post efficient if and only if it can result from random serial dictatorship, which is probably the way you've solved any such problem you've encountered in real life: the people draw straws (at some level of metaphorical remove), and one person gets his first choice, the next gets his first choice among those remaining, etc.   It is fairly obvious that this will result in an allocation that is Pareto efficient (as long as agents have strict preferences), i.e. that there is no trade available after the fact that would make both parties to the trade better off (whichever agent went before the other would not want to trade); it is only slightly less involved to show that any Pareto efficient outcome can be generated by giving the agents the opportunity to pick in some order.  (Lemma: at least one agent will get its first choice, or else a mutually beneficial trade would in fact exist.  Make that person the first one to choose, then consider the remaining assignment problem with n-1 agents and n-1 items.  The result follows from induction.)

What's left to say?  Well, it turns out that there are situations in which random serial dictatorship is not "ex ante" efficient: once the mechanism has run, there is always one person who would object if you said, "hey, let's do this assignment differently," but if the people are mathematically adept and know each others' preferences, there are situations in which all agents would prefer some other mechanism from the get-go.  From Bogomolnaia and Moulin's 2001 paper in JET, with items a, b, c, and d, suppose two agents prefer a>b>c>d and the other two agents prefer b>a>d>c.  If the first two agents get a and c (they toss a coin to decide who gets which), and the other two get b and d, then each agent has a 1/2 chance of getting its top choice and will always get at least its third choice, while random serial dictatorship gives each agent only a 5/12 chance of getting its top choice and 11/12 chance of getting at least its third choice (i.e. a 1/12 chance of getting its last choice).  An AER paper from 2011 (I think) offers an even simpler example if one is willing to use von Neumann-Morgenstern utilities instead of the (stronger) concept of first order stochastic dominance: given 3 items, if three agents assign u(A)=3 and u(B)=0 but two of them assign u(C)=1 and one assigns u(C)=2, random serial dictatorship gives each agent each item with a probability of 1/3, but all agents get a higher ex ante expected utility from giving item C to the third agent and letting the other two toss a coin for A and B.
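The 5/12 and 1/12 probabilities in the Bogomolnaia-Moulin example can be verified by brute force, since random serial dictatorship just averages over the 4! = 24 picking orders:

```python
from itertools import permutations
from fractions import Fraction

# Two agents prefer a>b>c>d, two prefer b>a>d>c.
prefs = {1: "abcd", 2: "abcd", 3: "badc", 4: "badc"}

def serial_dictatorship(order):
    """Agents pick their favorite remaining item in the given order."""
    remaining, assignment = set("abcd"), {}
    for agent in order:
        pick = next(item for item in prefs[agent] if item in remaining)
        assignment[agent] = pick
        remaining.remove(pick)
    return assignment

orders = list(permutations(prefs))  # all 24 picking orders, equally likely
p_top  = Fraction(sum(serial_dictatorship(o)[1] == "a" for o in orders), len(orders))
p_last = Fraction(sum(serial_dictatorship(o)[1] == "d" for o in orders), len(orders))
print(p_top, p_last)  # → 5/12 1/12
```

By the symmetry of the example, agent 1's probabilities here are every agent's probabilities of getting its top and last choice.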

Generically, suppose I have expected utility maximizing agents, and I want to investigate what kinds of mechanisms I could use to assign them (randomly) objects in this fashion.  If I know the agents' utility functions, and I'm a benevolent mechanism designer, I can calculate Pareto optimal mixtures of assignments.  It turns out that (I'm reasonably sure) the resulting assignment can always be expressed in the following fashion: there is some positive affine transformation applied to the utility of each agent (possibly different transformations for different agents) such that the ex post utility obtained by each agent will be at least as high as the utility any other agent would have obtained from the object that agent is assigned.  For example, if there is a positive probability that you will get item A, then your utility (after the necessary affine transformation) for item A is at least as high as anyone else's utility (after the necessary affine transformation) for item A.  In the example of the previous paragraph, in fact, I don't have to do any affine transformations from the form in which I gave the utility functions; the agent who ends up with A assigns it utility 3, the agent who ends up with B assigns it 0, and the agent who ends up with C assigns it 2, regardless of how the coin toss turns out.
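As a quick sanity check of this characterization on the three-agent example (a sketch; with these utilities no affine transformation is needed), one can enumerate the six possible assignments and keep those in which each agent's utility for its own item is at least any other agent's utility for that item:

```python
from itertools import permutations

u = [{"A": 3, "B": 0, "C": 1},   # agent 0
     {"A": 3, "B": 0, "C": 1},   # agent 1
     {"A": 3, "B": 0, "C": 2}]   # agent 2

def satisfies_characterization(assignment):
    """assignment[i] is the item given to agent i; require that agent i's
    utility for it weakly dominates every agent's utility for it."""
    return all(u[i][assignment[i]] >= max(u[j][assignment[i]] for j in range(3))
               for i in range(3))

good = [a for a in permutations("ABC") if satisfies_characterization(a)]
print(good)
```

Exactly the two assignments in which agent 2 gets C survive, which are precisely the two outcomes of the coin toss in the ex ante efficient mechanism described above.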

Usually, however, I want to work in an environment in which I, as the mechanism designer, don't know their utilities, so my mechanism will have to elicit their preferences in some fashion, and assign random outcomes with a probability that depends on what they indicate.  Whatever mechanism I use, however weird, it will ultimately result in some mapping from sets of utility functions to some probability distribution over assignments.  These probability distributions then generate expected utilities for each agent.  In considering all possible mechanisms, it becomes convenient to ignore the details of the mechanisms (at least for a while), and simply consider the properties of the induced map that takes a utility function from each agent and produces an expected utility for each agent.  If the agents know how the mechanism works, though, they may lie; I can only use mechanisms where no agent would prefer the outcome he could obtain by claiming to have a different utility than what he really does.  If I consider the mechanisms that are "incentive compatible", and just consider the problem from the standpoint of a single agent, given whatever that agent knows about other agents' actions, the expected utility as a function of what that agent does must be "incentive compatible".  It turns out that this imposes relatively few constraints, and they include a constraint we might prefer to have anyway: the expected utility must transform under positive affine transformations in the same way as the utility function does.  That is to say, if reporting "u" gets you an expected utility of 5, then reporting "2u+3" must get you 13.  Both functions represent the same preferences, and it is necessary that they give the same actual outcome, however it's parameterized.  The only other constraint that incentive compatibility imposes is convexity.  
Since the function is necessarily homogeneous of degree 1 and convex, it is the support function of some set; that is to say, whatever the mechanism is, given other agents' actions (to the extent this agent knows them), there is some set of lotteries over assignments such that this agent's expected utility from the mechanism will be the expected utility that the agent would derive from its favorite element of the set.  [Update: this is actually obvious for a simpler reason.  Consider the set of lotteries consisting of every lottery that would be assigned to some utility function; if the agent, for any utility function, is assigned a lottery to which he prefers some other element of the set, that agent will lie, claiming to have the utility of an agent who is assigned the element he prefers.]
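
The support-function view is easy to sketch (the menu of lotteries and the utilities here are made up for illustration): from one agent's perspective the mechanism is as if the agent picks its favorite lottery from a fixed menu, and both the chosen lottery and the affine-transformation property fall out — since each lottery's probabilities sum to 1, reporting 2u+3 picks the same lottery and yields expected utility 2v+3.

```python
def menu_value(utility, menu):
    """Expected utility the agent derives from its favorite lottery in `menu`.
    Each lottery is a tuple of probabilities over the objects, summing to 1."""
    expected = lambda lot: sum(p * u for p, u in zip(lot, utility))
    best = max(menu, key=expected)
    return best, expected(best)

# Hypothetical menu of lotteries over three objects.
menu = [(1.0, 0.0, 0.0), (0.0, 0.5, 0.5), (0.2, 0.4, 0.4)]
u = [1.0, 4.0, 2.0]

lot, v = menu_value(u, menu)
# Reporting 2u+3 must select the same lottery and yield 2v+3, because the +3
# shifts every lottery's expected utility by exactly 3.
u2 = [2.0 * x + 3.0 for x in u]
lot2, v2 = menu_value(u2, menu)
assert lot2 == lot and abs(v2 - (2.0 * v + 3.0)) < 1e-9
```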

This is appealing in some ways, though unappealing in others.  In a large-n environment with symmetry among the agents, I might imagine that I can announce the effective set of lotteries, and that each agent is choosing among the same such set.  In fact, though, the set of options an agent has will generally depend on the choices other agents make; the announcement would therefore have to be made after everyone's action is taken, so it would not take the form "here's your choice of lotteries, which do you prefer?", since I need your action in order to find out other agents' sets of options.  It could be a useful good-faith auditing mechanism, though, to say "this is the set of options that was generated, this is the lottery you chose by way of the action that implied it was your preference, and the result of the lottery is that you get object H".  However, while the effective set of choices will generally depend on other agents' actions, incentive compatibility requires that the set not depend on this agent's own action; in a large-n environment with a normal amount of information, it might be that we can produce a set that depends only slightly on each agent's action, and thus is practically incentive compatible.  It is not, however, strictly incentive compatible.

Suppose we're looking for "Nash equilibrium" type settings, in which everyone knows what everyone else will do but still chooses to do what they were going to do anyway, and we want to spell out a full mechanism.  The actions of every set of n-1 agents determine the choices available to the final agent, who is free to choose among them; on the other hand, if the agent gets a real choice, the other n-1 agents will have to, by their actions, be selecting "whatever's left".  "Nash equilibrium" type settings, though, may not make practical sense; the mechanism design literature includes implementation in Nash equilibrium by taking advantage of the idea that everyone knows everyone else's preferences, so that you can ask agents for each other's information.  (I've assumed this away, implicitly, by suggesting that one's action is only a function of one's own preferences.)

It seems likely to me that, in situations with large numbers of individuals, I can generate shadow prices for the options such that each agent, in a pretty good Pareto-efficient equilibrium, has (at least approximately) its preferred lottery among those with expected shadow price of 0 or less.  I'm not quite sure of that, though.  In a particular small case, suppose three individuals have preferences B>C>A, A>C>B, and A>B>C, with each indifferent between its second choice and a coin toss between its first and third choices; a coin toss between allocations BCA and CAB is, I believe, Pareto efficient, and seems in some ways like the logical lottery to hold, but it can't fit the preceding description, since a 50/50 lottery between each relevant pair must be part of the choice set (and thus have non-positive expected shadow price) but no single item may be part of the choice set (and thus presumably each of them has a positive shadow price).  Perhaps the next questions for me to answer would be 1) is there a different Pareto efficient outcome here that can be generated in the proposed fashion, 2) are these indeed part of an incentive compatible mechanism, and 3) can I characterize under what conditions this becomes a good scheme?
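
The claimed efficiency can be checked numerically.  Cardinalize the stated preferences as 1 for a first choice, 0.5 for a second, and 0 for a third (which makes each agent exactly indifferent between its second choice and the first/third coin toss); a lottery that maximizes some strictly positively weighted sum of expected utilities is Pareto efficient, and the weights (1, 2, 1) — found by hand, an assumption of this sketch — certify the BCA/CAB coin toss, since both supporting assignments attain the maximum.

```python
from itertools import permutations

# utilities[agent][item] with items indexed A=0, B=1, C=2, cardinalized 1/0.5/0.
# Agent 0: B>C>A, agent 1: A>C>B, agent 2: A>B>C.
u = [[0.0, 1.0, 0.5],
     [1.0, 0.0, 0.5],
     [1.0, 0.5, 0.0]]
weights = [1.0, 2.0, 1.0]  # hand-picked strictly positive weights

def weighted_value(assignment):
    return sum(w * u[i][assignment[i]] for i, w in enumerate(weights))

values = {a: weighted_value(a) for a in permutations(range(3))}
best = max(values.values())
# BCA is the assignment (1, 2, 0); CAB is (2, 0, 1).  Both attain the maximum,
# so their 50/50 mixture also maximizes this positively weighted sum, and is
# therefore Pareto efficient.
assert values[(1, 2, 0)] == best and values[(2, 0, 1)] == best
```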

Update: Oh, well, I suppose I've well established that the choice sets depend on what the other agents are doing, so all that's really necessary here is that agent 1 have a 50/50 B/C option, agent 2 have a 50/50 A/C option, and agent 3 have a 50/50 A/B option, all as the respective best options.  B, A, and A respectively must not be available in higher probabilities than in these mixtures, so if agent 1 sees shadow prices of B>0>C, agent 2 sees A>0>C, and agent 3 sees A>0>B, with equal absolute values, we could get choice sets of these sorts.  In particular, B has a positive shadow price for agent 1 and a negative shadow price for agent 3; agents 1 and 2 together in some sense overdemand B, while agents 2 and 3 in some sense underdemand it.  It seems reasonable to think that any option that is the first choice of one agent would have a positive shadow price for other agents in a reasonable symmetric mechanism.  More generally, if there are m other agents whose first m choices are the same, then they and I can't all be allowed to choose a lottery that guarantees us an element of that set; so it seems likely that in a symmetric mechanism, any time m agents' top m choices are the same, all other agents face a positive shadow price for each of the m items.  Note, though, that the "Boston mechanism" violates this principle; if two agents have A as their first choice and B as their second, the Boston mechanism would allow other agents to simply take choice B.
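
The first of these choice sets can be checked with a toy search (assumptions: the first/second/third cardinalization 1/0.5/0; agent 1's shadow prices taken as +1 for B, −1 for C, and 0 for A, the item the prices above leave unspecified; lotteries restricted to a coarse grid): among lotteries with non-positive expected shadow price, agent 1's best is indeed the 50/50 B/C coin toss, since the budget constraint amounts to requiring that the probability of B not exceed that of C.

```python
# Agent 1's cardinal utilities over A, B, C (prefers B>C>A), scaled 1/0.5/0.
util = {"A": 0.0, "B": 1.0, "C": 0.5}
# Assumed shadow prices seen by agent 1: B > 0 > C, with A priced at zero.
price = {"A": 0.0, "B": 1.0, "C": -1.0}

best, best_u = None, -1.0
steps = 100
for qa in range(steps + 1):
    for qb in range(steps + 1 - qa):
        qc = steps - qa - qb
        lot = {"A": qa / steps, "B": qb / steps, "C": qc / steps}
        cost = sum(price[x] * lot[x] for x in lot)
        if cost <= 1e-9:  # affordable: expected shadow price <= 0
            eu = sum(util[x] * lot[x] for x in lot)
            if eu > best_u:
                best_u, best = eu, lot
print(best, best_u)  # best lottery: 50/50 between B and C, expected utility 0.75
```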

Update: Suppose agents are restricted to a set U of possible utility vectors such that 1) if two elements of U are distinct, they indicate different preferences -- i.e. if u is in U, then 2u+3 is not, and 2) U is convex.  Now consider the following program: on the space U^n×R^n, find the graph of the correspondence from utility-function profiles to feasible expected-utility profiles, find a way to define and construct the closure of the convex hull of the upper contour set, and hope that its boundary represents the graph of a function from U^n to R^n. That thing should be convex, and "optimal" in some sense. Can it be constructed intelligibly, and, if so, can it be done in a computationally efficient manner?
I suppose another angle is in fact to start from serial random dictatorship as a benchmark; what mechanisms might Pareto-dominate it, or (in some sense) nearly do so? In the special case where the other agents all have the same preferences (preferring 1>2>3>...>n), I can choose a lottery that gives me each item with probability 1/n, or I can shift some probability from more preferred to less preferred (by other agents) items, but the constraint I face can't be fully "linearized"; for n=3, I can go from (1/3,1/3,1/3) to (0,2/3,1/3), but I can't go to (0,0.67,0.33), which I might imagine if I'm thinking in terms of trading 100/300 of the most popular item and 1/300 of the least popular in exchange for 101/300 of the second-most-popular; allowing this sort of exchange is precisely where I might expect to be able to offer gains from trade between agents with the same ordering but different vN-M preferences.
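
The serial-random-dictatorship benchmark in that special case can be computed exactly (a minimal sketch for n=3, items indexed 0, 1, 2 in the order the other agents prefer them):

```python
from fractions import Fraction
from itertools import permutations

def rsd_distribution(prefs, me):
    """Exact distribution of items received by agent `me` under random serial
    dictatorship: average over all orderings of the agents, with each dictator
    in turn taking its favorite remaining item."""
    n = len(prefs)
    orders = list(permutations(range(n)))
    dist = [Fraction(0)] * n
    for order in orders:
        remaining = set(range(n))
        for agent in order:
            pick = next(i for i in prefs[agent] if i in remaining)
            remaining.remove(pick)
            if agent == me:
                dist[pick] += Fraction(1, len(orders))
    return dist

# Everyone prefers 0 > 1 > 2: agent 0 gets each item with probability 1/3.
print(rsd_distribution([[0, 1, 2]] * 3, me=0))
# Agent 0 instead prefers 1 > 0 > 2: the distribution shifts to (0, 2/3, 1/3).
print(rsd_distribution([[1, 0, 2], [0, 1, 2], [0, 1, 2]], me=0))
```

Only the coarse shifts of the second kind are on offer; no preference report produces an intermediate lottery like (0, 0.67, 0.33), which is the non-linearity described above.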