Tuesday, November 10, 2015

policy of money as a unit of account

A lot of my conclusions here aren't much different from those of a previous discussion, but I'm going to frame/derive them slightly differently.

Suppose two agents are exogenously matched and given an exogenous date in the future for which they can construct a bilateral derivative; maybe we even require zero NPV, or maybe we allow for some cash transfer now. Either way, if they're infinitely clever (and assuming, e.g., that they don't anticipate future opportunities to insure before that date), I believe the negotiations should leave the ratio of the two agents' marginal costs of utility at that date previsible. If we add constraints, we modify that, but probably in relatively intuitive ways: if we can only condition on certain algebras of events (coarser than what would in principle be measurable at the final date), for example, then there is an expected value on each of those events that should give the same ratio, and if in some states an agent is unlikely to be able to make a payment, that agent is allowed to be better off than the other agent in that state relative to the usual ratio. Further, if there are many pairs of agents doing this, and the agents can be grouped into large classes but must then have very similar contracts, we are effectively also averaging over the agents in each class.
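
The previsibility claim is essentially the classical ("Borch") risk-sharing condition: a Pareto-optimal bilateral contract equates a weighted ratio of the two agents' marginal utilities in every state. A minimal numerical sketch, with illustrative CARA utilities and a made-up Pareto weight (none of these values come from the post itself):

```python
import math

# Two CARA agents share an aggregate endowment e in each state.
# Pareto optimality state by state means maximizing
#   lam*u1(c) + (1-lam)*u2(e-c),
# which equates lam*u1'(c) = (1-lam)*u2'(e-c) in every state, so the
# ratio of marginal utilities u1'/u2' is the same constant across states.

a1, a2 = 2.0, 1.0          # CARA risk-aversion coefficients (illustrative)
lam = 0.4                  # Pareto weight on agent 1 (illustrative)

def u1p(c): return math.exp(-a1 * c)   # u'(c) for u(c) = -exp(-a*c)/a
def u2p(c): return math.exp(-a2 * c)

def share(e):
    """Agent 1's consumption in a state with aggregate endowment e."""
    # Solve lam*u1'(c) = (1-lam)*u2'(e-c) by bisection.
    lo, hi = -50.0, 50.0 + e
    f = lambda c: lam * u1p(c) - (1 - lam) * u2p(e - c)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ratios = []
for e in [1.0, 2.0, 5.0]:          # three states of the world
    c1 = share(e)
    ratios.append(u1p(c1) / u2p(e - c1))

# The ratio of marginal utilities is the same in every state (previsible).
print(ratios)
```

Whatever the aggregate endowment turns out to be, the optimal split leaves u1'/u2' at the constant (1 − λ)/λ, which is what makes the ratio previsible.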

I don't know whether this gets me any closer to an answer, but perhaps it's a useful framework for thinking about monetary policy as the medium of exchange increasingly consists of electronic (even interest-bearing) accounts and centrally-managed money is mostly about the unit of account. Buyers and sellers, and debtors and lenders, still referencing a given unit of account will tend to have certain risk similarities within each class and differences between classes that one can try to optimize: if a surprise causes borrowers more pain than lenders, I try to weaken the unit of account, and if a surprise causes sellers more pain than buyers[1], I try to strengthen it, and everyone ex ante looks at this and says "doing my contract (legal and explicit or customary and implicit) in this unit of account affords me a certain amount of insurance".

[1] A further note on the inclusion of "buyers" and "sellers" here: on some level this only matters for forward contracts, i.e., if we're entering an agreement to an immediate transaction there's none of this sort of uncertainty that resolves itself between the creation of the contract and its conclusion. Parties to a forward contract take on a lot of the properties of borrowers and lenders, insofar as there is a (say) dollar-denominated transfer in the future to which they've committed. Further, in principle borrowers, lenders, and parties to forward contracts can, as above, create their own risk-sharing contract. As a practical matter, of course, this is likely to be impossible to do perfectly, and it's likely that the extent to which it can be done practically leaves a lot of room for a central bank to come in and improve things. This is a usual theory-meets-practice kind of dynamic, especially in monetary theory; somewhat famously, perfect Walrasian economies don't need money, so a useful theory of money will have to figure out what parts of reality outside of Walrasian economics matter, and incomplete contracts would seem to be a biggie.

I believe, though, that more important than difficulties in contracting formally are informal contract-like substances that result from various incompletenesses in information. Buyers and sellers form long-term relationships that may be "at will" for each party, but are formed typically because one or both parties would incur some expense in looking anew for a counterparty each time a similar transaction was to take place. It seems likely to me that this would result in long-term dynamics similar to those of a contract, and is likely to involve prices that are sticky in some agreed-upon unit of account, whereupon a benevolent manager of that unit of account would again be trying to optimize as discussed above.

Wednesday, November 4, 2015

LQRE and quasi-strict equilibrium

A not-terribly-standard but conceivably interesting refinement of Nash equilibrium is "the rational limit of logistic quantal response equilibria" (RLLQRE).  Last night I had myself convinced that it was related to quasi-strict equilibrium; if there is a relationship between the two, though, it's complicated. The simplest precise conjectures that I don't know to be false are (1) any quasi-strict NE has the same payoff as some RLLQRE and (2) any connected set of quasi-strict NE includes an RLLQRE.  It may also be that there is a tighter relationship in 2-player games than more generally; in particular, every game has an RLLQRE, and every 2-player game has a quasi-strict NE, but not every larger game has a quasi-strict NE; it wouldn't surprise me if every RLLQRE in two-player games is quasi-strict, though that obviously can't be true in games that lack a quasi-strict equilibrium.
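
For concreteness, here is a sketch of how one might compute an RLLQRE numerically: solve the logit-QRE fixed point at increasing values of the precision parameter λ, using each solution as the starting point for the next. The game and the λ schedule are illustrative (a symmetric prisoner's dilemma, whose unique equilibrium is strict and hence quasi-strict, so the limit is unambiguous):

```python
import math

# Logit QRE for a 2x2 game: each player's mixed strategy is a logit
# response to the other's, with precision parameter lam.  Tracing the
# fixed point as lam grows approximates the rational limit (RLLQRE).

A = [[3, 0], [5, 1]]                      # row payoffs (Cooperate, Defect)
B = [[3, 5], [0, 1]]                      # column payoffs (symmetric game)

def logit(u, lam):
    m = max(u)                            # subtract the max for stability
    w = [math.exp(lam * (x - m)) for x in u]
    s = sum(w)
    return [x / s for x in w]

def qre(lam, p, q, iters=500):
    for _ in range(iters):
        up = [sum(A[i][j] * q[j] for j in range(2)) for i in range(2)]
        uq = [sum(B[i][j] * p[i] for i in range(2)) for j in range(2)]
        p = logit(up, lam)
        q = logit(uq, lam)
    return p, q

p, q = [0.5, 0.5], [0.5, 0.5]             # start at the centroid (lam = 0)
for lam in [0.1, 0.5, 1, 2, 5, 10, 50]:   # continuation in lam
    p, q = qre(lam, p, q)

print(p, q)   # close to [0, 1] for each player: both defect in the limit
```

In this game the trace converges to the unique (quasi-strict) equilibrium; in games with multiple components of equilibria, which component the principal branch selects is exactly the interesting question.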

Monday, November 2, 2015

supply, demand, and fundamental value of assets

There is a sort of investor (who typically regards Warren Buffett as his idol) who insists that the true value of a security is the amount of cash it can be expected to spin off, with some appropriate discounting of cash flows that are far off or uncertain.  When deciding whether to buy a stock, the thing to do is to figure out this value, then to buy if the price is much lower than that value, to sell if it is above that value, and simply to stay away if it is in between.  Short-term movements in the price are noise, and should simply be ignored.

There are other investors, more typical of hedge funds or (especially before Dodd-Frank) proprietary trading desks on Wall Street, who do worry about the short-term movements; "The long term is just a series of short terms," they might say, and the more prudent of them will repeat Keynes's dictum, "The market can remain irrational longer than you can remain solvent."  "The price of a stock is determined by supply and demand" is not a view against which it is easy to argue.

It seems as though something "fundamental" ought to enter the demand for a stock at least at some level, but it also seems as though new shelf offerings by companies, or even the expirations of lock-up periods, should reduce the equilibrium price of the stock.  How do these reconcile?

Modern asset pricing has in some ways moved closer to traditional economics, in particular to the idea that trade should (to some extent) be driven by mutual gains from trade.  While many people view financial markets as zero-sum, even in the short run that is only true ex post; different market participants may have different risk preferences, whether that means being more or less risk-averse or being exposed to one set of risks instead of another, and at least one socially valuable service of markets is to pair up people who own a stock, and find that current information implies it is likely to be correlated with other risks they bear, with people who don't own it and find that its expected return compensates them for any marginal risk it would add to their portfolios.  The value of the asset in this model is in fact the sum of the cash flows it will yield, with appropriate discounts for cash flows that are far out or uncertain, but the appropriate "discounting" depends on each agent's own risk preferences and exposures.  Even if everyone takes a Buffett-style "fundamental" approach, not everyone will agree on which stocks are "overpriced" and which are "cheap"; the various stocks, in equilibrium, will migrate to the people who are best equipped to handle the associated risks, and away from those who are least able.
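
A stripped-down sketch of that last point, using mean-variance preferences and made-up numbers (the certainty-equivalent formula and all parameter values are illustrative assumptions, not anything from the post): two investors assign different values to the same cash flow purely because of their differing background risks.

```python
import statistics as st

# Two mean-variance investors value the same risky cash flow differently
# because they carry different background risks.  For a small position, a
# certainty-equivalent value is  E[x] - gamma * Cov(x, background wealth).

x     = [8, 12, 9, 11]           # asset cash flow in four equally likely states
bg_A  = [10, 20, 10, 20]         # investor A's background wealth moves with x
bg_B  = [20, 10, 20, 10]         # investor B's background wealth moves against x
gamma = 0.5                      # common risk-aversion coefficient (illustrative)

def cov(u, v):
    mu, mv = st.mean(u), st.mean(v)
    return st.mean([(a - mu) * (b - mv) for a, b in zip(u, v)])

def value(bg):
    return st.mean(x) - gamma * cov(x, bg)

vA, vB = value(bg_A), value(bg_B)
print(vA, vB)    # B, for whom the asset is a hedge, values it more than A
```

Both investors "discount the cash flows"; they just discount them differently, so in equilibrium the shares migrate from A to B.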

If a particular stock has a risk profile that is very different from that of any other stock — if it is not strongly correlated with any other stock, and hedges a risk for which no other stock provides a good hedge — then the agents whose risks have the lowest (signed) correlation with the stock are likely to find it uniquely valuable.  If a stock is fairly strongly correlated with many others available, those will effectively be substitutes (in the demand-theory sense of the word) for it; the demand curve for any particular stock, with the others at fixed prices, will be more elastic, and the price of the stock should be relatively insensitive to its own supply.  It seems reasonable to me to think that, with realistic microstructure and transaction cost assumptions, stocks (or just about anything, really) will tend to be better substitutes for each other in the longer run than in the shorter run; in the short run, various frictions will more likely gum up the ability of agents to substitute between stocks, so that an exogenous, uninformative increase in supply of a stock will cause the price to drop in the short run (relative to the other stocks) and then tend back toward its previous parity, just compensating the marginal new buyer for the related transaction costs (/ liquidity provision).
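
The substitutes point can be made concrete in a simple representative-investor mean-variance model; the model and all parameter values here are illustrative assumptions, not something taken from the post.

```python
# In a simple mean-variance equilibrium, prices satisfy p = mu - gamma*Sigma@s
# for supply vector s.  An uninformative increase ds in the supply of stock 1
# moves its price relative to stock 2 by -gamma*(var1 - cov12)*ds, so the
# better the substitute (the higher the covariance), the smaller the relative
# price move.

gamma = 2.0                       # risk aversion of the representative investor
var   = 0.04                      # both stocks have return variance 0.04

def rel_price_impact(corr, ds=1.0):
    cov12 = corr * var            # covariance between the two stocks
    return -gamma * (var - cov12) * ds

print(rel_price_impact(0.2))      # poor substitute: sizable relative price drop
print(rel_price_impact(0.95))     # close substitute: tiny relative price drop
```

With a close substitute available at a fixed price, almost all of the supply shock is absorbed by substitution rather than by the stock's own relative price.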

Addendum: Related to this is the topic of stock buybacks. Recently some politicians have implied that stock buybacks are a sign of focus on the short-term at the expense of the long-run; if the price of the stock is its fundamental value, though, buying back stock is equivalent to issuing a dividend, and will reduce stock prices relative to what they would have been if the money had instead gone into useful investment. I think the model people have in mind is on some level not dissimilar to the one I've presented here, if perhaps less fleshed out: buying pressure lifts stock prices as people are slow to diversify out of it into similar investments, but ultimately the long-term shareholders are worse off than if real investments had been made instead.

Thursday, October 22, 2015

security lending and market structure

There is a very active market for "repo" loans, where "repo" is short for "repurchase agreement", and what are understood to be collateralized loans are structured as sales with an agreement to repurchase. If I have a bond worth $1020, and I sell it to you for $1000 today while simultaneously agreeing to buy it back from you at $1001 in one month, I am effectively borrowing $1000 at an interest rate of .1% per month; if I go bankrupt or otherwise fail to pay you back, you have the bond, and even if the value of the bond has dropped, as long as it drops less than about 2%, you're not losing any money. Similarly, you may have cash that you want to put somewhere safe with a little bit of interest for a short period of time, and go looking for that transaction; if you're initiating it, perhaps you have to lend a bit more (perhaps more than $1020) against the bond to protect the other party against your default, or perhaps you find someone looking to borrow and willing to overcollateralize the loan as usual. The terms are somewhat malleable.

Another reason for participating in the repo market is that, rather than trying to move cash onto or off of your balance sheet, you have a bond that you want to move onto or off of your balance sheet; instead of looking to borrow or lend cash, you want to borrow or lend the bond, which is just the flip-side of the same transaction. It may be possible for you to safely invest money at a (slightly) higher interest rate than you can borrow in the general repo market, but it is more often the case that particular bonds will enable you to borrow at an even cheaper rate, possibly even negative — you sell the bond for $1000 with an agreement to repurchase it for $999 a month from now. When this is the case, it is generally because some other market participants want to borrow that particular security, and are willing to pay a premium to do so. If general interest rates are .1% per month, a repurchase agreement at -.1% per month is perhaps best viewed as a loan of the security, secured by cash, with a $2 fee for borrowing the bond for a month. In fact, it is likely that you would take the $1000 and turn around and lend it back into the repo market for that .1% per month rate, on net swapping a "special" security for a "general" one on your balance sheet now, but with agreements in place to let you swap them back in a month while clearing $2 in the process.
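
The swap in the last sentence is just arithmetic, but it may be worth laying out the month-end legs explicitly, using the dollar figures from the paragraph above:

```python
# Unwinding the "special"-for-"general" swap a month later: you repurchase
# the special bond at its agreed (negative-rate) price and collect on your
# general repo loan at the market rate.

sale_price       = 1000.0     # you sold the special bond for this today
repurchase_price = 999.0      # agreed buy-back price: -0.1% per month
general_rate     = 0.001      # general collateral rate: +0.1% per month

cash_in  = sale_price * (1 + general_rate)    # $1001 back from the general repo
cash_out = repurchase_price                   # $999 paid to repurchase the special
print(round(cash_in - cash_out, 9))           # -> 2.0, the bond-lending fee
```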

The reason one would want to borrow a security (and would be willing to pay a lending fee — or, in some sense, to forego interest on money one is lending out — in order to do so) is sometimes that one has contractual obligations for some reason to deliver a particular security to some other party (as, for example, if I had written a call option on the bond, and it has been exercised), but most often (I believe) it is because the person looking to borrow it expects it to go down in price (or is afraid it will, and is looking to hedge the risk).  In this case, after you (formally) sell them the bond, they will sell it again; if you sell it at $1000 with an agreement to buy it back at $999 a month from now, and they sell it for $1000, but can buy it back at $998 a month from now, they clear a profit on the trade.  (Again, this may be intended to offset a loss that they expect to incur somewhere else in the case that the price does go down; whether they're making an affirmative bet on a drop in the price or hedging an existing risk doesn't make much difference here.)  In this case it seems to me that it would be more natural not to buy the bond in the first place; what they really want is the agreement to sell the bond for $999 in one month, and this is the easiest way to do that.
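
For completeness, the security borrower's four legs in that example net out to a small profit; a sketch with the same dollar figures:

```python
# The security borrower's four cash flows: take the bond in via repo,
# sell it short, buy it back cheaper in a month, and return it.

take_bond_in   = -1000.0   # cash paid today to take the bond in via repo
return_bond    = +999.0    # cash received in a month when the bond goes back
sell_short     = +1000.0   # proceeds of selling the borrowed bond today
cover_purchase = -998.0    # cost of buying the bond back in a month

print(take_bond_in + return_bond + sell_short + cover_purchase)   # -> 1.0
```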

Up to this point I've been using a pretty plain loan structure for purposes of illustration, but repo loans (and other security loans) are often done differently.  A typical example would be that we agree on an interest rate for the loan, but not on a fixed maturity; it goes until one of us decides to terminate it, subject to a reasonable notification period, after which we close it out.  The "forward contract" now looks a little bit stranger as a forward contract, but not especially strange: the price at which it will be executed changes over time, quite possibly in a linear fashion (e.g. by 3 cents per day), and (as with the loan) will be executed at some point in the future, after one of us notifies the other that we wish to close it out.  If we decide that a haircut is appropriate, one of us may post collateral (e.g. $20) with the other.

I'm curious as to why there isn't a developed market for these agreements absent the repo market.  The answer that seems most likely to me is that the repo market serves the money lending and borrowing function that it serves, and is quite liquid; one market for lending and borrowing and another for forward contracts with a separation between the two would presumably make both markets less liquid than they can be if they're combined.  Another possibility is related to the fact that if I borrow a security from you so that I can sell it, I'm not selling it back to you; the net result to me is the same as if I managed to enter into a forward contract with you, but I'm also intermediating the sale of the bond from you to someone else. The economic exposures are similar to what would result if I had entered the forward agreement with the someone else instead of with you — we could just cut you out of this — but I wonder whether there are important reasons why people looking to buy a bond want to buy the bond, rather than enter a forward contract to buy it, while people who want to hold a bond are willing to enter such a forward contract while simultaneously selling it. A possible reason for this would be related to custodial practices; in particular, retail investors may not be able to enter such forward agreements, but their brokers may be able to "lend out" securities held on their behalf (subject to various safeguards), so that the party that is institutionally capable of entering the buy side of the forward contract is also institutionally required to maintain a net zero exposure while doing so.

coordination and carbon prices

Some applied game theorists write in Nature that an international agreement to an effective price of carbon emissions would be more likely to work out than an attempt at an international cap-and-trade scheme, and their arguments seem basically right to me.  In at least some sense, though, I think we could do better.

Suppose, since this seems to be the discussion being had at the moment, that we can treat each country as a rational agent, and suppose each country knows the effective price in each other country. Let w_i denote more or less the size of a country, normalized so that ∑_i w_i = 1, and suppose each country wants to maximize U_i = aP − bP² − p_i, where P = ∑_i w_i p_i; this at least captures the idea that each country would like to minimize its own price but (in the relevant range) wants the world price to be higher, subject to diminishing returns. If we each agree to set our price at c + dP, where 0 < d < 1, then if country i tries to cheat by δ, that reduces P and thus reduces other countries' prices as well, resulting in an overall decrease in P of w_i δ/(1−d); its own cost is a w_i δ/(1−d) − 2bP w_i δ/(1−d) − δ, which is positive if (a − 2bP) w_i > 1 − d. If you're trying to optimize ∑_i w_i U_i and you set d = 1 − w_i, then country i will want to comply with the optimal price as long as everyone else does.  Different countries will have different w_i, and for reasons made clear in the article you really couldn't give different countries different values of d and expect it to work out well — you'd have the same problems currently encountered in the cap-and-trade negotiations.  1/3, however, is a reasonable maximum value for w_i, which suggests that setting d to at least 2/3 would substantially reduce the incentive to cheat.
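
Using the first-order accounting above, one can check numerically how the sign of the cost of cheating depends on d; the values of a and b below are illustrative, chosen only so that a − 2bP = 1 at the price that maximizes ∑_i w_i U_i.

```python
# First-order accounting from the paragraph above: if country i shaves delta
# off its price, P falls by w_i*delta/(1-d), and the net cost to i is
#   (a - 2bP) * w_i * delta / (1 - d) - delta.
# At the P maximizing sum_i w_i*U_i = a*P - b*P**2 - P we have a - 2bP = 1,
# so cheating is unprofitable exactly when d >= 1 - w_i.

a, b = 3.0, 1.0                  # illustrative parameters
P_star = (a - 1) / (2 * b)       # maximizes a*P - b*P**2 - P; here a - 2b*P_star = 1

def cost_of_cheating(w_i, d, delta=1.0):
    return (a - 2 * b * P_star) * w_i * delta / (1 - d) - delta

w_i = 1 / 3                      # the largest weight contemplated above
print(cost_of_cheating(w_i, d=0.5))      # d < 1 - w_i: negative, cheating pays
print(cost_of_cheating(w_i, d=2 / 3))    # d = 1 - w_i: exactly zero, marginal
print(cost_of_cheating(w_i, d=0.9))      # d > 1 - w_i: positive, cheating costly
```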

I suspect two potential problems with my scheme, especially vis-a-vis theirs: 1) mine is a bit more complicated, which they note often empirically impedes agreement and coordination, and 2) mine may be more susceptible to problems with monitoring; countries would have a stronger incentive in my scheme than in theirs (depending on what enforcement mechanisms they envision) to make the rest of the world think their price is higher than it really is.

Friday, October 16, 2015

rational agents in a dynamic game

I've mentioned this before, but I'll repeat from the beginning: Consider a game in which I pick North or South, you pick Up or Down, and then I pick East or West, with each of us seeing the other's actions before choosing our own.  If I pick South, I get 30 and you get 0, regardless of our other actions.  If I pick North and you pick Down, you get 40 and I get 10 regardless of our other actions.  If we play North and Up, then I get 50 and you get 20 if I play West, and I get 30 and you get 60 if I play East.
I play          You play   I get   You get
South           (any)      30      0
North           Down       10      40
North / East    Up         30      60
North / West    Up         50      20
We each have access to all of this information before playing, so you can see what will happen; if I play North and you play Up, I will play West, which gives you 20, while if I play North and you play Down, you get 40.  We therefore know that you will play Down, so I get 10 if I play North, and I get 30 if I play South, so we can work out that I will play South.
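
The backward induction in the previous paragraph can be checked mechanically; here is a small sketch with the game encoded as a tree (payoff pairs are (mine, yours), as in the setup above):

```python
# Backward induction over the game tree.  Each node is either a terminal
# (me, you) payoff pair or a (player_index, {action: subtree}) pair.

ME, YOU = 0, 1

tree = (ME, {
    "South": (30, 0),
    "North": (YOU, {
        "Down": (10, 40),
        "Up": (ME, {"West": (50, 20), "East": (30, 60)}),
    }),
})

def solve(node):
    """Return the payoff pair reached by backward induction."""
    if isinstance(node, tuple) and isinstance(node[1], dict):
        player, actions = node
        best = max(actions, key=lambda a: solve(actions[a])[player])
        return solve(actions[best])
    return node          # terminal payoff pair

print(solve(tree))   # -> (30, 0): I play South
```

At the last node I prefer West (50 > 30), so you prefer Down (40 > 20), so I prefer South (30 > 10), exactly as argued.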

This, however, leads to a bit of a contradiction.  "If I play North and you play Up, then I will play West" entails some behavioral assumptions that, while very compelling, seem to be violated if I play North.  If I play North, regardless of what you play, your assumptions about my behavior have been violated; it is, in fact, a bit hard to reason about what will happen if I play North.  If I'm just bad at reasoning, you should probably play Down.  If you're mistaken about the true payoffs — perhaps the numbers I've listed above are dollars, and my actual preferences place some value on your payoff as well — then it might make sense to play Up, depending on what you think my actual payoffs might be.  Perhaps I'm mistaken about your payoffs (in which case you should probably choose Down).

In mechanism design, it is important to distinguish between an "outcome" and a "strategy profile" insofar as leaves on different parts of the decision tree may give the same outcome, but in the approach to game theory that does not separate those, you don't gain much from allowing for irrational behavior; given any sequence of behavior, you can choose payoffs for the resulting outcome that make that behavior rational.  The easiest way to handle this problem philosophically, then, is to treat it as being embedded in a game of incomplete information, in which agents are all rational but not quite sure about others' payoffs (or possibly even their own).  In the approach to game theory that I've been trying to take lately, though, where we look at probability distributions of actions and types where agents may have direct beliefs about other agents' actions, "rationality" becomes a constraint that is satisfied at certain points in the action/type space and not at others, and it's just as easy to suppose players are "almost rational" as that they are "almost certain" of the payoffs.  I wonder whether this would be useful; it might clean up some results in global games, by which I mostly mean results related to Carlsson and van Damme (1993): "Global Games and Equilibrium Selection," Econometrica, 61(5): 989–1018.

liquidity and stores of value

It has sometimes been asserted that money is something of an embarrassment for the economics profession; a lot of the older models especially tend to assume perfect markets (or, more typically, markets that are only imperfect in one or two ways of particular interest at a time), and perfect markets have no need for a unit of account or a medium of exchange.  One of the first models that attempted to explain money, then, was Samuelson's 1958 overlapping-generations paper, in which interest rates without money were negative, such that money provided a store of value that gave a better return than other stores of value.

I've never really appreciated this, because the idea seems wrong; there are a lot of other assets that typically (before the last 7 years) have higher returns than money that seem better in every way except for liquidity.  Surely the reason money has social value is that it provides a medium of exchange; in particular, money can be exchanged more readily than the other assets for other needed goods and services at a moment's notice.

On some level, though, this is a false distinction, or at least one that in practice is blurred.  A treasury bill maturing in three months is a great store of value from now until three months from now; it's not quite as good for storing value from now until one month from now.  Insofar as prices are stable, a dollar is a good way of storing value between now and whenever you want; but insofar as you can sell a treasury bill a month from now for pretty much what you paid for it, the bill also does a pretty good job, as its market liquidity makes it almost as good as cash.  A three-month (non-tradable) CD is much less suitable.

If, conditional on the event that you need to make a purchase a month from now, the price at which you can sell an asset is correlated with the price of the good, that asset might actually be a better store of indeterminate-period value than dollars are.  If the correlation is weak or negative, assuming you're risk averse, it's less suitable.  If it's likely that, conditional on a sudden desire for cash, the price at which you can sell is likely to be low, it does a poor job as a tool for precautionary saving — regardless of whether the price at which it could be purchased has fallen as well.  As has been previously noted, you don't so much care, when buying a financial asset, whether the bid-offer spread will be tight when you want to sell, just what the bid per se is likely to be.
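
A mean-variance sketch of the correlation point, with made-up prices (the preference specification and all numbers are illustrative assumptions):

```python
import statistics as st

# You will need to buy a good at a random price g next month, funded by
# selling one unit of a store of value at random price v.  A mean-variance
# certainty equivalent of the surplus v - g is
#   E[v - g] - (gamma/2) * Var(v - g),
# so positive correlation between v and g shrinks the variance penalty.

g       = [9, 11, 10, 10]        # price of the good, equally likely states
v_corr  = [9, 11, 10, 10]        # asset whose resale price tracks g
v_anti  = [11, 9, 10, 10]        # asset whose resale price moves against g
dollars = [10, 10, 10, 10]       # cash: fixed nominal value
gamma   = 1.0                    # risk aversion (illustrative)

def cert_equiv(v):
    surplus = [a - b for a, b in zip(v, g)]
    return st.mean(surplus) - 0.5 * gamma * st.pvariance(surplus)

print(cert_equiv(v_corr), cert_equiv(dollars), cert_equiv(v_anti))
```

The positively correlated asset beats dollars as an indeterminate-period store of value, and the negatively correlated one does worse, matching the ordering in the paragraph above.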

A point I've been making in some forms in various venues for a while is that the value of a store of value is affected by who the other owners and potential owners of the asset (or even its close substitutes) are; if a particular asset looks like a good store of value to a certain subset of the population, it may become a poor store of value for that subset if its members share a similar set of exposures to liquidity shocks.  If all people whose last names start with J face a liquidity risk that would otherwise be well hedged by the possession of a store of beanie babies that could be sold, that could well lead a large number of beanie babies to be owned by people with last names starting with J.  If the risk materializes, they will all be trying to sell their beanie babies at the same time.  If people acquire assets without considering who else owns what, this sort of "fire sale" risk develops naturally for any liquidity event that is likely to affect a substantial portion of the economy while leaving another substantial portion unscathed; some set of assets that are otherwise well-suited to protecting against the risk become concentrated where they are likely to result in a fire sale.  If the rest of the population is able and inclined to step in and buy, this problem may not be insuperable, but for most reasonable market structure models it's likely to create at least some hit, and if the asset is inherently less attractive to people whose names don't start with J than to those whose names do, their willingness to step in may be minimal.

This, though, is as true of dollars as it is of beanie babies; possibly more so.  Dollars are only valuable insofar as someone else is willing to trade something for them when you need it.  If the residents of a particular country are all trying to spend down savings at the same time, they may find that they drive down the value of their currency to an extent commensurate with their tendency to save in assets denominated in that currency.  It is commonly suggested that Americans don't put enough of their retirement savings abroad; especially for Americans in large age cohorts, this is effectively one of the reasons to diversify globally rather than only investing domestically.