Thursday, October 22, 2015

security lending and market structure

There is a very active market for "repo" loans, where "repo" is short for "repurchase agreement": what are understood to be collateralized loans are structured as sales with an agreement to repurchase. If I have a bond worth $1020 and I sell it to you for $1000 today while simultaneously agreeing to buy it back from you at $1001 in one month, I am effectively borrowing $1000 at an interest rate of 0.1% per month; if I go bankrupt or otherwise fail to pay you back, you have the bond, and even if the value of the bond has dropped, as long as it has dropped less than about 2%, you're not losing any money. Similarly, you may have cash that you want to put somewhere safe, with a little bit of interest, for a short period of time, and go looking for that transaction; if you're the one initiating it, perhaps you have to lend a bit more (perhaps more than $1020) against the bond to protect the other party against your default, or perhaps you find someone looking to borrow who is willing to overcollateralize the loan as usual. The terms are somewhat malleable.
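To put the arithmetic in one place (the figures are the ones from the example above):

```python
# Repo arithmetic from the example: lend $1000 against a $1020 bond,
# with an agreement to repurchase at $1001 one month later.
bond_value = 1020.0
loan = 1000.0          # sale price today
repurchase = 1001.0    # agreed buy-back price in one month

monthly_rate = repurchase / loan - 1
print(f"implied interest: {monthly_rate:.2%} per month")  # 0.10% per month

# How far can the bond's price fall before the cash lender loses on a default?
cushion = 1 - repurchase / bond_value
print(f"price cushion: {cushion:.2%}")                    # about 1.9%
```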

Another reason for participating in the repo market is that, rather than trying to move cash onto or off of your balance sheet, you have a bond that you want to move onto or off of your balance sheet; instead of looking to borrow or lend cash, you want to borrow or lend the bond, which is just the flip side of the same transaction. It may be possible for you to safely invest money at a (slightly) higher interest rate than the one at which you can borrow in the general repo market, but it is more often the case that a particular bond will enable you to borrow at an even cheaper rate, possibly even a negative one: you sell the bond for $1000 with an agreement to repurchase it for $999 a month from now. When this is the case, it is generally because some other market participants want to borrow that particular security and are willing to pay a premium to do so. If general interest rates are 0.1% per month, a repurchase agreement at -0.1% per month is perhaps best viewed as a loan of the security, secured by cash, with a $2 fee for borrowing the bond for a month. In fact, it is likely that you would take the $1000 and turn around and lend it back into the repo market at that 0.1% per month rate, on net swapping a "special" security for a "general" one on your balance sheet now, but with agreements in place to let you swap them back in a month, clearing $2 in the process.
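Continuing with the same numbers, the net of lending out the special bond at -0.1% and re-lending the cash at the general 0.1% rate:

```python
# Special vs. general collateral, with the numbers from the paragraph above.
loan = 1000.0
special_rate = -0.001   # lend out the special bond: sell at 1000, buy back at 999
general_rate = 0.001    # re-lend the cash at the general repo rate

buy_back = loan * (1 + special_rate)      # 999.00: what you pay to get the bond back
cash_return = loan * (1 + general_rate)   # 1001.00: what the re-lent cash comes back as
print(f"net to the holder of the special bond: ${cash_return - buy_back:.2f}")  # $2.00
```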

The reason one would want to borrow a security (and would be willing to pay a lending fee, or in some sense to forgo interest on money one is lending out, in order to do so) is sometimes that one has a contractual obligation to deliver that particular security to some other party (for example, if I had written a call option on the bond and it has been exercised), but most often (I believe) it is that the would-be borrower expects the security to go down in price (or fears that it will, and is looking to hedge the risk).  In this case, after you (formally) sell them the bond, they will sell it again; if you sell it at $1000 with an agreement to buy it back at $999 a month from now, and they sell it for $1000 but can buy it back at $998 a month from now, they clear $1 on the trade.  (Again, this may be intended to offset a loss that they expect to incur somewhere else if the price does go down; whether they're making an affirmative bet on a drop in the price or hedging an existing risk doesn't make much difference here.)  In this case it seems to me that it would be more natural not to buy the bond in the first place; what they really want is the agreement to sell the bond for $999 in one month, and borrowing and selling the bond is simply the easiest way to get that.
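The borrower's side of the arithmetic, using the prices from this paragraph (the $998 later price is the assumed outcome of the bet):

```python
# The security borrower's cash flows if the price does fall to $998.
pay_for_bond = -1000.0   # buy the bond through the repo
sell_short = +1000.0     # sell it on in the market
buy_back = -998.0        # repurchase it a month later at the lower price
resell_back = +999.0     # deliver it back under the repurchase agreement

print(f"profit: ${pay_for_bond + sell_short + buy_back + resell_back:.2f}")  # $1.00
```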

Up to this point I've been using a pretty plain loan structure for purposes of illustration, but repo loans (and other security loans) are often done differently.  A typical example: we agree on an interest rate for the loan, but not on a fixed maturity; it runs until one of us decides to terminate it, subject to a reasonable notification period, after which we close it out.  The "forward contract" now looks a little stranger as a forward contract, but not especially strange: the price at which it will be executed changes over time, quite possibly linearly (e.g. by 3 cents per day), and (as with the loan) it will be executed at some point in the future, after one of us notifies the other that we wish to close it out.  If we decide that a haircut is appropriate, one of us may post collateral (e.g. $20) with the other.
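A sketch of the open-ended version, using the 3-cents-a-day drift from the example:

```python
# Open-ended repo: the repurchase price accrues by a fixed amount each day
# until one party gives notice and the trade is closed out.
sale_price = 1000.00
accrual_per_day = 0.03   # 3 cents a day, roughly 0.1% per month on $1000

def repurchase_price(days_open):
    return sale_price + accrual_per_day * days_open

for days in (1, 30, 90):
    print(f"after {days:>2} days: {repurchase_price(days):.2f}")
```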

I'm curious as to why there isn't a developed market for these agreements apart from the repo market.  The most likely answer, it seems to me, is that the repo market already serves the money lending and borrowing function, and is quite liquid; separating the lending-and-borrowing market from a market for forward contracts would presumably leave both less liquid than the combined market can be.  Another possibility is related to the fact that if I borrow a security from you so that I can sell it, I'm not selling it back to you; the net result to me is the same as if I had managed to enter into a forward contract with you, but I'm also intermediating the sale of the bond from you to someone else.  The economic exposures are similar to what would result if I had entered the forward agreement with that someone else instead of with you (we could simply cut you out), but I wonder whether there are important reasons why people looking to buy a bond want to buy the bond, rather than enter a forward contract to buy it, while people who want to hold a bond are willing to enter such a forward contract while simultaneously selling the bond.  A possible reason is related to custodial practices: retail investors may not be able to enter such forward agreements, but their brokers may be able to "lend out" securities held on their behalf (subject to various safeguards), so that the party institutionally capable of entering the buy side of the forward contract is also institutionally required to maintain a net zero exposure while doing so.

coordination and carbon prices

Some applied game theorists write in Nature that an international agreement on an effective price of carbon emissions would be more likely to work out than an attempt at an international cap-and-trade scheme, and their arguments seem basically right to me.  In at least some sense, though, I think we could do better.

Suppose, since this seems to be the discussion being had at the moment, that we can treat each country as a rational agent, and suppose each country knows the effective price in each other country. Let $w_i$ denote more or less the size of a country, normalized so that $\sum_i w_i = 1$, and suppose each country wants to maximize $U_i = aP - bP^2 - p_i$, where $P = \sum_i w_i p_i$; this at least captures the idea that each country would like to minimize its own price but (in the relevant range) wants the world price to be higher, subject to diminishing returns. If we each agree to set our price at $c + dP$, where $0 < d < 1$, then if country $i$ tries to cheat by $\delta$, that reduces $P$ directly and thus reduces other countries' prices as well, resulting in an overall decrease in $P$ of $w_i\delta/(1-d)$; country $i$'s own cost is $aw_i\delta/(1-d) - 2bPw_i\delta/(1-d) - \delta$, which is positive if $(a - 2bP)w_i > (1-d)$. If you're trying to optimize $\sum_i w_i U_i$ and you set $d = 1 - w_i$, then country $i$ will want to comply with the optimal price as long as everyone else does.  Different countries will have different $w_i$, and for reasons made clear in the article you really couldn't give different countries different values of $d$ and expect it to work out well; you'd have the same problems currently encountered with the cap-and-trade negotiations.  A reasonable maximum value of $w_i$, however, is about $1/3$, which suggests that setting $d$ to at least $2/3$ would substantially reduce the incentive to cheat.
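To make the incentive calculation concrete, here is a small numerical sketch in Python; the parameter values ($a$, $b$, the weights, and the size of the deviation) are invented for illustration, and the world price is set at the level that maximizes $\sum_i w_i U_i$:

```python
# Sketch of the deviation incentive under the price rule p_i = c + d*P.
# All parameter values are illustrative, not calibrated.

def deviation_cost(a, b, w, d, P, delta):
    """Utility cost to a country of weight w of shading its price by delta
    when everyone follows p = c + d*P and the world price starts at P.
    Positive cost means complying beats cheating."""
    dP = w * delta / (1 - d)           # total fall in the world price P
    return (a - 2 * b * P) * dP - delta

a, b = 3.0, 1.0
P_star = (a - 1) / (2 * b)             # maximizes sum_i w_i U_i = a*P - b*P**2 - P
delta = 0.01                           # size of the attempted cheat

for w in (0.05, 0.2, 1 / 3):           # country sizes; 1/3 is about the plausible maximum
    for d in (0.0, 0.5, 2 / 3, 0.9):
        cost = deviation_cost(a, b, w, d, P_star, delta)
        print(f"w={w:.2f} d={d:.2f} cost of cheating={cost:+.4f}")
# Cheating stops paying for country i once d >= 1 - w_i; with d = 2/3 the
# largest countries are held in line, though the smallest still gain a little.
```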

I suspect two potential problems with my scheme, especially vis-a-vis theirs: 1) mine is a bit more complicated, which they note often empirically impedes agreement and coordination, and 2) mine may be more susceptible to problems with monitoring; countries would have a stronger incentive in my scheme than in theirs (depending on what enforcement mechanisms they envision) to make the rest of the world think their price is higher than it really is.

Friday, October 16, 2015

rational agents in a dynamic game

I've mentioned this before, but I'll repeat from the beginning: Consider a game in which I pick North or South, you pick Up or Down, and then I pick East or West, with each of us seeing the other's actions before choosing our own.  If I pick South, I get 30 and you get 0, regardless of our other actions.  If I pick North and you pick Down, you get 40 and I get 10 regardless of our other actions.  If we play North and Up, then I get 50 and you get 20 if I play West, and I get 30 and you get 60 if I play East.
I play       | You play | I get | You get
-------------|----------|-------|--------
North / East | Up       | 30    | 60
North / West | Up       | 50    | 20
North        | Down     | 10    | 40
South        |          | 30    | 0
We each have access to all of this information before playing, so you can see what will happen; if I play North and you play Up, I will play West, which gives you 20, while if I play North and you play Down, you get 40.  We therefore know that you will play Down, so I get 10 if I play North, and I get 30 if I play South, so we can work out that I will play South.
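For concreteness, here is a minimal backward-induction sketch in Python; the game tree is just a transcription of the table above:

```python
# Backward induction on the little three-move game above.
# Each node is (mover, {action: subtree}); leaves are (my_payoff, your_payoff).
game = ("me", {
    "South": (30, 0),
    "North": ("you", {
        "Down": (10, 40),
        "Up": ("me", {
            "West": (50, 20),
            "East": (30, 60),
        }),
    }),
})

def solve(node):
    """Return (payoffs, play path), assuming each mover maximizes their own payoff."""
    if isinstance(node[0], int):        # leaf: a payoff pair
        return node, []
    mover, moves = node
    idx = 0 if mover == "me" else 1     # which payoff the mover cares about
    best = max(moves, key=lambda a: solve(moves[a])[0][idx])
    payoffs, path = solve(moves[best])
    return payoffs, [best] + path

print(solve(game))  # ((30, 0), ['South']): I play South, as argued above.
```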

This, however, leads to a bit of a contradiction.  "If I play North and you play Up, then I will play West" entails some behavioral assumptions that, while very compelling, seem to be violated if I play North.  If I play North, regardless of what you play, your assumptions about my behavior have been violated; it is, in fact, a bit hard to reason about what will happen if I play North.  If I'm just bad at reasoning, you should probably play Down.  If you're mistaken about the true payoffs — perhaps the numbers I've listed above are dollars, and my actual preferences place some value on your payoff as well — then it might make sense to play Up, depending on what you think my actual payoffs might be.  Perhaps I'm mistaken about your payoffs (in which case you should probably choose Down).

In mechanism design, it is important to distinguish between an "outcome" and a "strategy profile", insofar as leaves on different parts of the decision tree may give the same outcome; but in the approach to game theory that does not separate those, you don't gain much from allowing for irrational behavior, since given any sequence of behavior, you can choose payoffs for the resulting outcome that make that behavior rational.  The easiest way to handle this problem philosophically, then, is to treat the game as being embedded in a game of incomplete information, in which agents are all rational but not quite sure about others' payoffs (or possibly even their own).  In the approach to game theory that I've been trying to take lately, though, where we look at probability distributions over actions and types, and agents may have direct beliefs about other agents' actions, "rationality" becomes a constraint that is satisfied at certain points in the action/type space and not at others, and it's just as easy to suppose players are "almost rational" as that they are "almost certain" of the payoffs.  I wonder whether this would be useful; it might clean up some results in global games, by which I mostly mean results related to Carlsson and van Damme (1993), "Global Games and Equilibrium Selection," Econometrica, 61(5): 989–1018.

liquidity and stores of value

It has sometimes been asserted that money is something of an embarrassment for the economics profession; a lot of the older models especially tend to assume perfect markets (or, more typically, markets that are imperfect in only one or two ways of particular interest at a time), and perfect markets have no need for a unit of account or a medium of exchange.  One of the first models that attempted to explain money, then, was Samuelson's 1958 paper, in which interest rates without money were negative, so that money provided a store of value with a better return than other stores of value.

I've never really appreciated this, because the idea seems wrong; there are a lot of other assets that (at least before the last 7 years) typically have higher returns than money and seem better in every way except for liquidity.  Surely the reason money has social value is that it provides a medium of exchange; in particular, money can be exchanged more readily than other assets for needed goods and services at a moment's notice.

On some level, though, this is a false distinction, or at least one that is blurred in practice.  A treasury bill maturing in three months is a great way of storing value from now until three months from now; it's not quite as good for storing value from now until one month from now.  A dollar, insofar as prices are stable, is a good way of storing value between now and whenever you want.  But insofar as you can sell the treasury bill a month from now for pretty much what you paid for it, it does a pretty good job of that too; the market liquidity of a treasury bill makes it almost as good as cash.  A three-month (non-tradable) CD is much less suitable.
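To put rough numbers on this (the yields here are invented for illustration), the resale value of the three-month bill after one month depends on where yields have gone, but not by much:

```python
# Illustrative arithmetic: how well does a 3-month T-bill store value
# from now until an uncertain sale date one month out? (Made-up yields.)
face = 100.0

def bill_price(annual_yield, months_left):
    # simple discount pricing, good enough for illustration
    return face / (1 + annual_yield) ** (months_left / 12)

buy = bill_price(0.02, 3)                 # pay today for the 3-month bill
for y in (0.01, 0.02, 0.05):              # possible yields a month from now
    resale = bill_price(y, 2)             # two months left to maturity
    print(f"yield={y:.0%}: resale={resale:.3f}, one-month return={resale / buy - 1:+.4%}")
# Unless yields move a lot, the resale price is close to what you paid,
# which is what makes the bill almost as good as cash for this purpose.
```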

If, conditional on the event that you need to make a purchase a month from now, the price at which you can sell an asset is correlated with the price of the good, that asset might actually be a better store of indeterminate-period value than dollars are.  If the correlation is weak or negative, assuming you're risk averse, it's less suitable.  If it's likely that, conditional on a sudden desire for cash, the price at which you can sell is likely to be low, it does a poor job as a tool for precautionary saving — regardless of whether the price at which it could be purchased has fallen as well.  As has been previously noted, you don't so much care, when buying a financial asset, whether the bid-offer spread will be tight when you want to sell, just what the bid per se is likely to be.
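A toy Monte Carlo of that point (every number here is invented): three assets with the same expected resale price, differing only in how that price co-moves with the price of the good you may need to buy:

```python
# Toy simulation: which asset best funds a purchase whose price is random,
# if the asset has to be sold at the moment the need arises?
import random

random.seed(0)
n = 100_000
shortfalls = {"dollars": [], "hedge": [], "anti-hedge": []}
for _ in range(n):
    good = 100 * (1 + random.gauss(0, 0.10))   # price of the good when needed
    shock = good / 100 - 1
    resale = {
        "dollars": 100.0,                                           # fixed nominal value
        "hedge": 100 * (1 + 0.8 * shock + random.gauss(0, 0.03)),   # co-moves with the good
        "anti-hedge": 100 * (1 - 0.8 * shock + random.gauss(0, 0.03)),
    }
    for name, p in resale.items():
        shortfalls[name].append(max(good - p, 0))   # unfunded part of the purchase

for name, s in shortfalls.items():
    print(name, "mean shortfall:", round(sum(s) / n, 2))
# All three assets have the same *expected* resale price, but the positively
# correlated one leaves the smallest expected shortfall.
```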

A point I've been making in various forms in various venues for a while is that the value of a store of value is affected by who the other owners and potential owners of the asset (or even its close substitutes) are; an asset that looks like a good store of value to a certain subset of the population may become a poor one for that subset if its members share a similar set of exposures to liquidity shocks.  If everyone whose last name starts with J faces a liquidity risk that would otherwise be well hedged by owning a stock of beanie babies that could be sold, a large number of beanie babies may well end up owned by people whose last names start with J; if the risk materializes, we're all trying to sell our beanie babies at the same time.  If people acquire assets without considering who else owns what, this sort of "fire sale" risk develops naturally for any liquidity event that is likely to hit a substantial portion of the economy while leaving another substantial portion unscathed: assets that are otherwise well suited to protecting against the risk become concentrated exactly where they are likely to produce a fire sale.  If the rest of the population is able and inclined to step in and buy, the problem may not be insuperable, but in most reasonable models of market structure it will create at least some loss, and if the asset is inherently less attractive to people whose names don't start with J than to people whose names do, their willingness to step in may be minimal.
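A stylized version of the resulting fire-sale arithmetic (the demand-impact parameter and the ownership shares are invented):

```python
# Linear price-impact sketch: the more of the asset that hits the market
# at once, the lower the realized price.
def sale_price(fraction_sold_at_once, impact=0.5):
    return 1.0 - impact * fraction_sold_at_once

# If holders with the same liquidity exposure own 80% of the asset:
print("concentrated ownership:", sale_price(0.8))   # 0.60 on the dollar
# If ownership is spread so that only 10% needs to sell at once:
print("dispersed ownership:", sale_price(0.1))      # 0.95 on the dollar
```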

This, though, is as true of dollars as it is of beanie babies; possibly more so.  Dollars are only valuable insofar as someone else is willing to trade something for them when you need it.  If the residents of a particular country are all trying to spend down savings at the same time, they may find that they drive down the value of their currency to an extent commensurate with their tendency to save in assets denominated in that currency.  It is commonly suggested that Americans don't put enough of their retirement savings abroad; especially for Americans in large age cohorts, this is effectively one of the reasons to diversify globally rather than only investing domestically.

Friday, October 9, 2015

mechanism design and voting

"Voting systems" are mechanisms, but we also design mechanisms for situations that aren't usefully construed as voting; in terms of practically used mechanisms, I'm thinking especially of auctions and other allocation and matching mechanisms.  Typically these allocation mechanisms try to optimize the outcome in some sense; where centralized matching mechanisms have replaced decentralized systems, they often serve to overcome coordination problems and result in Pareto-efficient outcomes, for example, but Pareto-efficiency is famously weak, and it has been shown, for example, that different school-choice algorithms are optimal under different circumstances, even using a single ex ante expected social welfare criterion.

In the literature, there is typically a natural or convenient social welfare criterion, but in many real-life contexts, different people have different ideas about the "right" social welfare criterion, which brings us back to voting mechanisms.  Insofar as people vote on the basis of ideas at all, they vote primarily on the basis of their conception of the "social good", and only to a much smaller extent, if at all, on "self-interest".[1]  One might therefore imagine a two-step procedure in which some mechanism elicits from people their conception of "justice" or "social welfare" in the first step and then asks them for their personal preferences as to their own allocation in the second step, using a mechanism tuned to maximize the criterion selected in the first step.

It is generally the case in theory that a single combined mechanism for doing two things will perform better than multiple separate mechanisms; roughly, if you assume agents are strategic, you sometimes have to "buy off" agents to get them to reveal as much information as possible, and if you combine the mechanisms you can "buy off" the agents in one stage with compensation in the other stage, sometimes at a lower overall cost.  There is some level on which it may be useful to think of proportional representation voting schemes themselves in this way.  Putting aside practical reasons for them related to information-gathering and to gaining buy-in from electoral minorities (avoiding, e.g., criminal behavior in response to laws perceived to be invalid), one might have a higher-order desire that a committee reflect other people's preferences as well as one's own, even if bills supported by the majority and opposed by the minority are going to pass under either system, whether by a close vote of proportionally elected representatives or by a landslide vote in a chamber dominated by the electoral majority.  I suspect there might be other interesting mechanisms that join the two more clearly separate questions, "What is our consensus social goal in terms of heterogeneous and unknown preferences?" and "What are our different preferences, and what outcome therefore maximizes the socially preferred criterion?", even in a purely instrumental kind of set-up.  One caveat to add before posting this, though: I expect the strictly "best" theoretical mechanism in this kind of situation to be weird and complex in some important ways, and thus impractical; it might elucidate more practical conjoined mechanisms, but it might turn out that the best approach in practice is to go back to a two-stage approach in which agents can readily understand each stage.



[1] Interestingly, a lot of people know that they vote primarily on the basis of what is "right" rather than their own self-interest, but believe that most other people, especially their opponents, do not!