Thursday, December 13, 2012

bank runs and time consistency

Diamond and Dybvig, in their famous model of bank runs, note that a bank regulator who commits to withdrawal freezes can forestall purely self-fulfilling runs; I no longer have to worry that my fellow depositors are going to withdraw first.  A few years ago, then-Fed economists Ennis and Keister noted that a regulator may not be able to commit to such a policy, since in the midst of a bank run it will want to give agents with the highest demonstrated need privileged access to withdrawals; if agents anticipate this behavior, the purely self-fulfilling run can still take place.  A somewhat fanciful solution is to note that, if somehow there were a liquid market for claims on bank deposits, people with high liquidity needs could sell their deposits to people with low liquidity needs.  In fact, as long as the situation was expected to sort itself out within a week or two, the claims would probably trade very close to par, and if this sort of solution seemed in the offing, there would again be very little incentive to trigger a self-fulfilling bank run.

How might one hope for such a liquid market to come into being?  The best idea I can think of is to make the bank into a market maker.  The bank could first, perhaps, be allowed to pay out to depositors at a rate conservatively estimated to be feasible even if all depositors were paid; thus if depositors as a class are believed, with high probability, to be able to recover at least 25% of their deposits in the event of failure, you allow depositors to withdraw at 25 cents on the dollar.  That puts a floor on the market.  I'm hoping, though, that the floor would be irrelevant if the bank is also allowed to accept new deposits at a premium, and pay out on old deposits at a similar ratio.  For example, suppose a 10% rule: if you come to the bank right now and give it $10, your balance at the bank goes up by $11.  If the bank collects $180 this way, it is allowed to use that $180 to pay back old depositors at a 10% discount; i.e., if 30 people each line up to withdraw $9, each reducing their account balance by $10, you pay out $9 to each of the first 20 people in line.  Once the segregated new funds are gone, the other 10 people can choose to take $2.50 at the floor rate or to wait until someone comes by to deposit more money.

On as rapid a basis as is practical, you could in fact change the 10%.  Again, if this is purely self-fulfilling, a 10% offer should bring in a lot of new money (which, among other things, would be somewhat costly to the bank).  On the other hand, perhaps in some situations in which there is real concern for the bank's solvency, you would have people standing around waiting to pounce on any new money that came in.  If the bank has a lot of new money, it could start cutting the 10% to 9%, and then 8%; on the other hand, if there are a lot of people waiting to redeem, you might start upping it to 11%, then 12%.
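To make the mechanics concrete, here is a toy sketch of the scheme described above. The 10% premium, the 25% floor, and the dollar amounts are the numbers from the example in the text; the class and method names are my own invention, and this ignores the dynamic adjustment of the premium.

```python
class MarketMakingBank:
    """Toy model of the market-making bank sketched above."""

    def __init__(self, premium=0.10, floor=0.25):
        self.premium = premium    # 10% rule: deposit $10 now, balance rises by $11
        self.floor = floor        # conservative recovery estimate: 25 cents/dollar
        self.segregated = 0.0     # new money earmarked for discounted payouts

    def deposit(self, cash):
        # New money is credited at a premium and segregated for withdrawals.
        self.segregated += cash
        return cash * (1 + self.premium)   # balance credited to the depositor

    def withdraw(self, balance_reduction):
        # Old depositors are paid from segregated funds at a matching discount;
        # once those funds run out, only the conservative floor is available.
        cash = balance_reduction * (1 - self.premium)
        if self.segregated >= cash:
            self.segregated -= cash
            return cash
        return balance_reduction * self.floor

bank = MarketMakingBank()
for _ in range(18):
    bank.deposit(10)              # 18 new depositors of $10 each: $180 segregated

# 30 people line up to give up $10 of balance each: the first 20 get $9,
# and once the $180 is exhausted the rest get the $2.50 floor.
paid = [bank.withdraw(10) for _ in range(30)]
assert paid[:20] == [9.0] * 20
assert paid[20:] == [2.5] * 10
```

The point of the `segregated` bookkeeping is that the discounted payouts are funded entirely by the premium-bearing new deposits, so the floor payout remains available to everyone else.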

Again, if the bank is well-known to be solvent, this is all "off the equilibrium path", which is, of course, not to say that it's unimportant; the fact that it is credible is what keeps it from being needed.  I wonder whether it or something like it would be credible under some set of institutions that exists in the real world.

Wednesday, December 12, 2012

neural networks and prediction markets

I should start by noting that this idea is highly speculative, even by the standards of this blog.

I've been a bit obsessed with neural networks in the past two months. "Neural networks" constitute a class of models in which one seeks to generate an output from inputs by way of several intermediate stages (which might be loosely thought of as analogous to "neurons"); each internal node takes some of the inputs and/or outputs of other internal nodes and produces an output between 0 and 1, which then may be fed on as input to other nodes or may become the output of the network. With an appropriately designed network, one can model very complicated functions, and if one has a lot of data that are believed to have some complex relationship, trying to tune a neural network to the data is a reasonable approach to modeling the data.
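As a minimal illustration of that structure, here is a one-hidden-layer network in which every node squashes a weighted sum into (0, 1); the sizes and weights are arbitrary, not drawn from any real model.

```python
import math

def sigmoid(x):
    # squashes any real number into (0, 1), the required range for node outputs
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """Each hidden node emits a squashed weighted sum of the inputs; the
    output node emits a squashed weighted sum of the hidden outputs."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# two inputs, two hidden nodes, one output; weights chosen arbitrarily
hidden_weights = [[0.5, -1.2], [2.0, 0.3]]
output_weights = [1.5, -0.8]
y = forward([1.0, 0.0], hidden_weights, output_weights)
assert 0.0 < y < 1.0   # every node output, including the final one, is in (0, 1)
```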

This "tuning" typically takes place by a learning process: the data in the sample are taken, often one at a time (in which case one would typically iterate over the data set multiple times), and some candidate set of network weights (typically initially random) is used to calculate the modeled output from the given input; the given output (associated with the data point) is then compared to the modeled output. The network is designed such that it is relatively efficient to figure out how the modeled output would have differed for a slightly different value of each weight (one effectively has partial derivatives of the output with respect to each weight in the network), and a certain amount of crude nudging is done to the entire network: more gently if the modeled output was already close to the actual target output, more forcefully if it was farther away, but generically in the direction of making the network predict that point a bit better if it sees it again.
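A single-node sketch of that nudging process (the starting weights and learning rate here are arbitrary; a full network applies the same chain-rule logic at every node via backpropagation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nudge(weights, inputs, target, rate=0.5):
    """One tuning step for a lone sigmoid node: compare the modeled output
    to the target and move each weight against the gradient of the squared
    error.  The adjustment is proportional to the error, so a nearly
    correct prediction is nudged only gently."""
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    # chain rule: d((out - target)^2 / 2)/dw_i = (out - target) * out * (1 - out) * x_i
    delta = (out - target) * out * (1.0 - out)
    return [w - rate * delta * x for w, x in zip(weights, inputs)]

weights = [0.1, -0.4]                  # arbitrary starting weights
inputs, target = [1.0, 1.0], 1.0
before = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
for _ in range(100):                   # iterate over the "data set" repeatedly
    weights = nudge(weights, inputs, target)
after = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
assert abs(after - target) < abs(before - target)   # prediction improved
```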

One of the problems with neural networks (and one I'm going to exacerbate rather than ameliorate or exploit) is that the resulting internal nodes typically have no intuitive meaning. You design your network with some flexibility, you go through the data set tuning the weights to the data, and you come back with perhaps a decent ability to predict out-of-sample; but if someone asks "why is it predicting 0.9 this time?" the answer is "well, that's what came out of the network", and it's typically hard to say much else. Neural networks may still be useful in somewhat indirect ways as a tool for understanding your data generating process, but even if they can perfectly model the sample data, they at best provide a descriptive dimensional reduction from which direct inference is essentially impossible.

Now, I've been interested in prediction markets for longer than I have neural networks, though perhaps less intensively, especially in the recent past. Prediction markets typically are intended to aggregate different kinds of information into a sort of best consensus prediction of, typically, the probability of an event. If different people have different kinds of information, then combinatorial markets are valuable; if I have very good reason to believe that the event in question will occur provided some second event occurs, and someone else has very good reason to believe that the second event will occur, then it may be that neither of us is willing to bet in the primary market, but if a second market is set up on the probability of the secondary event, the other guy bids up the probability in that market and I can bid up the primary market, hedging in the secondary market. (A true combinatorial market would work better than this, but this is the basic idea.) In principle, a large, perfectly functioning combinatorial market should be able to make pretty good predictions by aggregating all kinds of different information in the appropriate ways, such that the market itself makes predictions that elude any single participant. (cf. a test project for this sort of thing and some general background on that market in particular.) The more relevant "partial calculation" markets are available, the better the ability to aggregate to a particular event prediction is likely to be.
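A worked version of that two-market example (all the probabilities here are hypothetical): I know the conditional probabilities, the other trader knows the probability of the secondary event, and only the combination determines where the primary market should settle.

```python
# my information: how likely the primary event A is, given secondary event B
p_A_given_B = 0.90
p_A_given_not_B = 0.20
# the other trader's information: how likely B itself is
p_B = 0.80

# Law of total probability: the price a well-functioning combinatorial market
# should converge to, even though no single participant could compute it alone.
p_A = p_A_given_B * p_B + p_A_given_not_B * (1 - p_B)
assert abs(p_A - 0.76) < 1e-12
```

With a secondary market quoting p_B, I can bid the primary market toward 0.76 while hedging my exposure to B, even though I have no direct information about B itself.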

There would be some details to work out, but it seems to me one might be able to create a neural network in which the node outputs are not prespecified functions of the inputs, but are simply the results of betting markets. Participants would be paid a function of their bet, the node's output, and its inputs, as well as the final (aggregate network) realized ("target") result. If you make a bet on a particular node, the incentives should be similar to those in the "tuning" step of the neural network problem: you should be induced to push the output of that node in a direction that makes the markets downstream from you likely to lead to a better overall prediction.
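The text leaves the market mechanism for a node unspecified; one standard candidate is Hanson's logarithmic market scoring rule (LMSR), sketched here for a single binary node. The node's "output" is just its current market price, which, like a neural-network node's output, lies in (0, 1). The class below is my own illustrative construction, not a worked-out design for the full network of markets.

```python
import math

class LMSRNode:
    """One node run as an LMSR market; `b` is a liquidity parameter."""

    def __init__(self, b=10.0):
        self.b = b
        self.q = [0.0, 0.0]      # outstanding "no" and "yes" shares

    def cost(self, q):
        # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(math.exp(q[0] / self.b) + math.exp(q[1] / self.b))

    def price(self):
        """Current probability the market assigns to 'yes': the node output."""
        e0 = math.exp(self.q[0] / self.b)
        e1 = math.exp(self.q[1] / self.b)
        return e1 / (e0 + e1)

    def buy_yes(self, shares):
        """Buy `shares` of 'yes'; the returned cost is what a trader pays
        to push the node's output upward."""
        old = self.cost(self.q)
        self.q[1] += shares
        return self.cost(self.q) - old

node = LMSRNode()
assert abs(node.price() - 0.5) < 1e-12   # uninformative starting price
cost = node.buy_yes(5.0)
assert node.price() > 0.5                # the trade moved the node output up
assert 0 < cost < 5.0                    # and cost less than the shares' face value
```

In the proposed scheme, payouts on these shares would have to be tied (through the downstream markets) to the final realized result rather than to any official settlement of the node itself.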

It's quite possible there's actually no way to do this; I haven't worked it out in even approximate detail. It's also possible that it is possible in principle, but there would be no way to get intelligent participation because of the conceptual problem with neural network nodes that I mentioned before. If it did work, in fact, I suspect it would do so by means of participants gradually attributing meanings to different nodes. If this could be made to work in something like the spirit in which neural networks are usually done, this would improve slightly on the combinatorial markets in that
  1. The intermediate markets would be created endogenously by the market; that is, something the market designer didn't think of as a relevant partial calculation might end up being ascribed to a node because market participants had reason to believe it should be; and
  2. These intermediate markets may not need a clear (external) "fix"; that is, some events are hard to define, but as long as participants have a sense of what an event means, it doesn't have to be made precise.
Let me clarify that second point with an example: perhaps the main event is the Presidential election, and one partial result is whether Romney wins the second debate. One might in principle be able to make this precise — "... according to a poll by CNN immediately following the debate," etc. — but if the payout isn't officially determined by that, and is only officially a function of observable market prices and the final result (who wins the election?), it can take on a subjective meaning that never officially has to be turned into something concrete. Indeed, it may be that different agents would believe the node to mean different (probably highly correlated) things; perhaps some people think it means Romney wins the second debate, and others think it means he wins "the debates". Only the result of the election needs to be formalized.

Tuesday, December 11, 2012

football playoffs

Major college football has been very gradually moving toward a playoff system, with many fans clamoring for a quicker move to a larger playoff. Proposed playoffs are almost always single-elimination. Insofar as one is more likely to determine accurately which team is the best on the basis of more information rather than less, allowing a single loss in an expansive post-season playoff to eliminate a team from contention, in a playoff including a number of teams that lost two or even three games during the regular season, amounts to throwing away information, and makes it less likely, not more, that the winner of the playoff will actually have been the "best" team on a season-wide basis. A compromise idea I've played with is privileging the teams that seem, on the basis of the regular season, most likely to be the best, while allowing lower-ranked teams into the playoff on less advantageous terms; a top team would be permitted to remain in the playoffs after a loss, while a lower-ranked team would not. A couple years ago, I suggested a playoff system to my brother and he informed me that the playoff system for the Australian Football League is essentially what I had proposed. A couple weeks ago, CNNSI assembled a mock committee of the sort of people who might serve on the real committee that will select a college football playoff in a couple of years. If I use this to seed an Australian Football League-style playoff, I get a bracket like
Notre Dame
Georgia | Texas A&M | Texas A&M
Texas A&M | Oregon
Florida | Notre Dame | Alabama
LSU | Stanford | Notre Dame

with game results projected in the first three rounds, in part to clarify the structure of the bracket; two games in the first round pit top-4 teams against each other, and Florida and Notre Dame, by virtue of losing those, are sent across the bracket to their second-chance games against lower-ranked teams that entered on single-elimination terms.  The semifinals here feature Oregon against an SEC team and Notre Dame against an SEC team.
One feature of playoffs in all four "major professional sports" in the US(/Canada) is that each league consists of two "conferences" and the playoffs keep the "conferences" separate; the playoff systems thereby create a championship match (or series) with one team from each half of the league, rather than seeking straightforwardly to pair up the top two teams. In baseball the two halves of the league play with slightly different rules, and in basketball and hockey they have a certain geographical logic, but in the NFL in particular the division is entirely historical. Having "AFC champions" and "NFC champions" I suppose gives the losing team a somewhat more flattering title than "Super Bowl loser", and perhaps a team that "has been to n of the last m AFC championships" even feels it has accomplished more than a team that "has been to n of the last m quarterfinals". Still, it feels a bit hollow to me, and I'd just as soon see a single tournament. In the regular season, at the moment, each team plays 12 games within its conference and only 4 against the other conference; this could be modified, but as a first step, perhaps we should take six teams from each conference into the playoffs, seeded separately, and have them play as follows.
The top two teams from each conference don't get byes, at least not at first.  They are placed in a four-team single-elimination tournament, the winner of which gets a double bye, skipping both the third and the fourth rounds.  The other eight teams play their own games, whose winners "catch" the three teams that fall out of the top-four tournament: the first-round losers play the 4/5 winners in the second round, and the second-round loser plays the "champion" of a four-team 3/6 tournament in the third round, while the winners of the 4/5-vs-1/2 games play each other.  After round 3 there are 3 teams left: the undefeated 1/2 team, and two teams that are each either an undefeated 3–6 seed or a one-loss 1/2 seed. If we let the latter two teams play in round 4, the winner gets a(nother) shot at the undefeated 1/2 team. A 3–6 seed can win the championship, but needs five wins in a row, probably including a few top seeds; a 1/2 seed can win with a loss, but it requires that the team go 4–1. Perhaps a different visualization would be useful: one spot in the Super Bowl is filled by the four-team single-elimination tournament, and the other is filled by
AFC 1 | w | (loser) | w
NFC 1 | w
AFC 3 | w | w
NFC 3 | w
AFC 1 | (loser) | w | w
AFC 4 | w
NFC 1 | (loser) | w
NFC 4 | w
Note that the 1/2 games, listed twice in the table, aren't played twice; a w denotes that the winner of the previous round advances to that spot, while (loser) denotes that the loser from the previous round advances to that spot.

As a couple final remarks,
  • The NFL playoffs as currently constituted are, as far as I know, unique in that there is not a fixed "bracket"; a team that gets a bye into the second round isn't slotted to play the winner of one particular first-round game. What I have produced here is a more traditional "bracket" in that sense. I'm not necessarily opposed to the NFL's system in that regard; this is just what I did.
  • Major college football does have a number of "conference championship" games, all of which take one team from each of two "divisions" of a conference, rather than taking the top two teams regardless of division. This year Ohio State, because of previous misdeeds, was ineligible to play in its conference's championship game, but was declared the champion of its division; the team in its division that finished highest in the standings while not being under institutional sanctions went to the championship game instead. There was some lack of clarity, midway through the season, as to whether Ohio State would be allowed to be the "division champion"; it seems to me that the decision that was made vitiates much of the reason for the structure of the championship game. If the point isn't to match the two "division champions", it should be to match the top two teams. The asymmetric schedule makes a case for some preference toward having teams from different divisions, but in this case the team that went in Ohio State's place finished a full two games behind a team from the other division that didn't make the championship game and that would seem to have had a rather better case for being invited.
Update: Let's do a mock bracket.  (Updated Dec 31.)  I'm adopting a proposal by SI writer Peter King that division champions be guaranteed a playoff spot, but not a top 4 seed. I'm also, naturally, making some guesses about how the rest of the season will play out; that said, my teams are
New England | San Francisco
Houston | Green Bay
which might lead to something like
Denver | San Francisco | (New England) | New England
San Francisco
New England
New England
Houston | Houston | Green Bay
Green Bay | Green Bay
San Francisco
New England
and the winner of the New England vs. Baltimore match gets to play San Francisco in the Super Bowl.  To clarify, a team in parentheses lost its previous game in order to land in that spot.

I will note some features of the bracket that should perhaps have been mentioned before (they aren't specific to this simulation):
  • As long as no lower seed beats an upper seed, teams from the same conference won't play each other until at least the third round; any such matchup must follow a team beating a seed at least as high as itself.
  • Two teams will not play each other a second time (there will be no "rematches") until at least the fourth round (as happens in the mock bracket), at which point there are only three teams left and you're running out of ways to avoid rematches.
Both these points are to say that I've laid it out to create a lot of "mixing", so that teams that are a bit hard to compare before the game — they're in different conferences, so they have few common opponents, and they haven't played before in the tournament at least — are more likely to be paired than teams that are more readily ranked relative to each other.