Thursday, March 24, 2011

yield-curve targeting

Scott Sumner's big idea is that the Federal Reserve should target expected values of nominal GNP. At his most concrete, he has suggested an automatic program to inject certain amounts of money through open market operations when futures on GNP drop below a certain level, and to withdraw that liquidity when the futures exceed it. I've been thinking about this and related things for a while, and want to put forth a related idea.

First I want to discuss level targeting versus growth targeting (which, in the case of prices, is called "inflation targeting"). I lean toward the former camp because its time-consistency problems are less severe. Under growth targeting, if inflation comes in at 1% against a stated 2% target and you simply continue to assert the 2% target, the market will continue to expect whatever it already expects; expectations can become self-fulfilling in a way that drifts away from your target. Under price-level targeting — targeting price levels along a path with 2% annual growth — if inflation comes in at 1% one year, you aim to make up the shortfall; markets can form better-grounded expectations that, a few years from now, price levels will be around what you've promised, and to the extent that expectations are self-fulfilling they will even help you get there, rather than wandering off in persistent indifference to your stated policy.

Level targeting per se suffers from a different sort of time-consistency problem, though — or perhaps two different, though closely related, such problems. If prices move too far off the target path, the Fed may come under pressure to revise the target; the choice is then between remaining firm, possibly requiring deflation or high levels of inflation, and revising the target, in which case a lot of credibility goes out the window. (This working paper from 8 years ago titled "Tough Policies, Incredible Policies?" notes that imposing costs on oneself to generate credibility means one might actually incur those costs in an extreme event, making a bad situation worse; this is related.) Relatedly, on adopting the regime, a target level has to be decided upon and announced; particularly early on, there will be speculation as to whether the Fed would, at some point, reuse whatever criteria set the initial target path to set a revised one.

What I support therefore lies somewhere between a level target and a growth target: each period, revise the target level by some fraction of the deviation from the target path. If this fraction is 1, you have growth targeting; if it's 0, you have level targeting. With a fraction between 0.03 and 0.05 per quarter — a decay time of 5–8 years — the target is quite rigid over the course of a year or two of ordinary noise, so it invites self-fulfilling forces in its favor, but large deviations are partially accommodated. That makes it more credible that the Fed will continue to maintain the policy in the face of a crisis: it abandons its target slightly, but in a pre-determined way, so markets can form expectations, and those should again generally be stabilizing. Further, this policy could be adopted today and would spit out the same target level as if it had been in use for 25 years. Not only does that mean that "revising" the target, by the same criteria, several years down the road would mean no revision; it also means the Fed builds credibility for the regime more quickly, since it has no more reason to revise its target soon after adoption than it would have midstream, and it can demonstrate that the target level wasn't simply chosen, for political reasons, to be easy to hit.
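As a minimal sketch, the update rule can be written down directly; everything here — the function name, the 5%/year path, the value of `lam` — is my illustrative assumption, not an official calibration:

```python
# Sketch of the partial level-targeting rule described above. All names and
# default values (g, lam) are illustrative assumptions, not official numbers.

def update_target(target, actual, g=0.0125, lam=0.04):
    """Advance the log target level by one quarter.

    target, actual: log nominal GNP (target and realized)
    g:   quarterly growth of the target path (5%/yr here, an assumption)
    lam: fraction of the deviation absorbed into the target each quarter;
         lam=0 is pure level targeting, lam=1 is pure growth targeting
    """
    return target + g + lam * (actual - target)

# After a one-off shock, the gap between target and actual shrinks by a
# factor (1 - lam) each quarter if actual thereafter grows along the path,
# so lam = 0.03..0.05 gives a decay time of roughly 20-33 quarters (5-8 years).
```

The two endpoints recover the pure regimes discussed above, and the geometric decay of the gap is what makes the partial accommodation predictable enough for markets to price.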

I like the idea of targeting nominal GNP better than targeting prices, partly because it feels like a change in real GDP should be met by an accommodating change in the targeted level of inflation — the direction, at least, is right: it makes sense to lower the inflation target when the economy is growing quickly, and to run monetary policy looser than a strict inflation target would allow when the economy is weak. I also like it because I think it can be measured better; price targeting requires deciding which prices to target, and measuring those prices requires hedonic adjustments and the like, while nominal GNP, at least in principle, is simple: just add up dollars spent on everything, regardless of how the goods or services for which they're exchanged have changed from the previous period. (I don't have a strong preference between GNP and GDP, and seem to have gotten sloppy about which I use. They're close — at least as close as the fed funds rate and the T-bill rate, which I'll implicitly conflate shortly — and I expect Scott has better reasons for supporting the one he supports than I would have for either.)

Over the short run, I would rather maintain the practice of targeting short-term interest rates rather than the quantity of money (any money); the difference between the two policies amounts to a difference in how far short-term variations in liquidity demand — as, for example, paychecks clear — are accommodated. It seems likely to me that putting this variation in the quantity, rather than the price, of liquidity will impose less volatility on the real economy. I will, for the moment, simply ignore any problem this creates when interest rates are at 0. This is part of the privilege of having a blog.

What I'm looking at, then, is a regime of calculating how far GNP is from a target that grows at a largely constant rate but absorbs deviations over the course of a business cycle, and setting an interest rate based on the deviation from the trend and on expectations of growth in the near future. If growth has been too low lately, we lower interest rates; similarly, if it's expected to be too low in the near future, we lower interest rates.

What interests me — what triggered this post — is that, once you're in a credible policy regime of setting short-term rates based on recent deviations from your targeted long-term growth path, you don't need to create a market for GNP futures; long-term interest rates embed expected future short-term interest rates, which will depend on expected GNP growth. At that point I can simply make short-term interest rates a function of the current deviation of GNP from its target and of, say, the ten-year treasury yield.

I'm not sure yet what relationship is required between the parameters to ensure determinacy, or to make this most nearly result in actually targeting expected GNP (at some given distance in the future), but I had an idea a few years ago — before I started taking macroeconomics classes — that perhaps the FOMC should direct the open market desk to target a certain steepness for the yield curve, just on the grounds that, hey, if long-term interest rates move down, we probably want short-run rates lower, too. I don't know that "1" is the right coefficient to put on any duration of interest rate (or a weighted average of such interest rates); perhaps it would not be. In any case, I have some aspirations at some point to try to write this down and solve for interest rates in terms of parameters and GNP expectations, but that should probably wait until the summer.
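A toy version of such a rule is easy to write down, though every coefficient in it is a placeholder of mine rather than anything derived or calibrated:

```python
# Hypothetical yield-curve-steepness rule; steepness_target, a, and b are
# illustrative parameters of my own, not calibrated or endorsed values.

def policy_rate(gnp_gap, ten_year_yield, steepness_target=0.015, a=0.5, b=1.0):
    """Set the short rate from the long rate and the current GNP deviation.

    gnp_gap: log GNP minus log target (negative when GNP is below target)
    b: coefficient on the long rate -- b=1 corresponds to targeting a fixed
       curve steepness, though, as noted above, 1 may well not be the right
       coefficient
    """
    return b * ten_year_yield - steepness_target + a * gnp_gap
```

A fall in ten-year yields, or a shortfall of GNP from its target, each pulls the short rate down, which is the qualitative behavior described above; whether parameters like these deliver determinacy is exactly the open question.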

Friday, March 18, 2011

mutual underwriting

A particularly weird idea that's popped up in my head in the last month is on some level an extension of the old idea of a mutual insurance company, wherein policyholders are also the residual owners of the company. In caricature, everyone pays in somewhat more at the beginning of, say, a six-month period than they expect to lose, and everyone gets back a portion of what's left after paying the losses incurred during the period. If the overall risk level turns out higher than initially estimated, people may not get back the rebate they were hoping for, but if it turns out lower, people end up effectively paying a lower premium for the period in which they were covered. They thereby insure their risk by spreading it among their fellow policyholders, remaining exposed to unexpected levels of overall risk, but they don't face the problem of suspecting that the insurance company is being overly conservative in setting its premiums — if that's true, the policyholders ultimately get back the difference.
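The settlement arithmetic in that caricature is just pooling and pro-rata rebating; this sketch makes it concrete (the shortfall handling is my simplification — real mutuals use assessments or reserves):

```python
def settle_period(premiums, losses):
    """Pay the period's losses out of pooled premiums; rebate surplus pro rata.

    premiums: amount each member paid in at the start of the period
    losses:   loss each member incurred during the period
    If losses exceed the pool, payouts are scaled down proportionally --
    a simplification standing in for assessments or reserves.
    """
    pool = sum(premiums)
    total_loss = sum(losses)
    scale = min(1.0, pool / total_loss) if total_loss > 0 else 0.0
    surplus = pool - total_loss * scale
    rebates = [surplus * p / pool for p in premiums]
    payouts = [loss * scale for loss in losses]
    return rebates, payouts
```

With three members paying 100 each and a single 120 loss, everyone gets 60 back, so the effective premium for the period turns out to have been 40.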

Such arrangements require, in some sense, less absolute underwriting, but they still require relative underwriting. If 100 homeowners in identical homes are buying insurance, that's easy; but if 12 of them have propane tanks next to their houses and the other 88 don't, charging everyone the same premium doesn't seem as fair. One solution is to subdivide the groups: let the 88 buy insurance from each other, and the 12 buy insurance from each other, without the cross-subsidy. Each time we subdivide, though, the insurance becomes less useful — I don't have the law of large numbers working for me terribly effectively when there are only 12 of us, since the whole point of buying insurance was not to be exposed to a risk of large loss, and one twelfth of a house is a large loss. And since any two houses are different, at the very least in location (e.g. distance from fire stations), someone has to decide which houses are similar enough that it is better to throw them in the same pool, and which differences are sufficiently salient that the houses should be in different pools, even at the cost of a higher variance of outcomes for the policyholders.*
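The law-of-large-numbers cost of subdividing is easy to quantify under the simplest assumptions — independent losses and equal shares, a toy model of my own rather than anything realistic:

```python
import math

def per_member_std(n, p=0.01, loss=1.0):
    """Std dev of each member's equal share of total losses, for n homes
    each independently destroyed with probability p (loss = one house).

    Total losses ~ Binomial(n, p) * loss; each member pays total / n,
    so the per-member standard deviation is sqrt(n p (1-p)) * loss / n.
    """
    return loss * math.sqrt(n * p * (1 - p)) / n

# Per-member volatility scales as 1/sqrt(n): the 12-home propane pool bears
# about sqrt(100/12), roughly 2.9 times, the per-member volatility of a
# 100-home pool.
```

That factor of nearly three in outcome volatility is what the subdivided policyholders are trading away in exchange for escaping the cross-subsidy.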

The idea that popped into my head is essentially that we let groups decide on their own which groups to join. Obviously simply saying, "Here are two pools: the safe pool, and the risky pool. Which do you want to join?" isn't going to work — the safe people need to be able to exclude the risky people in some fashion. One idea is to cap the number of people in each group, and let members of oversubscribed groups vote on which of the other (attempted) subscribers to keep; this gets close to the self-underwriting flavor that I was looking for when I thought about it. The same sort of arrangement could apply to health insurance, or even to mortgage lending (in something like a credit union), though that introduces other complications.

The details of the voting would be interesting, though. Does everyone in the pool list their favorite 99 co-poolers, and the top 100 get to be in the pool? Or perhaps everyone gets to vote for as many as they like, and the top 100 are in. Or maybe applicants are ranked in order, with a certain number of points for first-place votes, and so on. Or perhaps we should do something recursive: what if the group of the top 100 applicants, as measured by the votes of all applicants, differs from the group of the top 100 applicants as those 100 applicants themselves vote (i.e. excluding the votes of the rejected applicants)? What we'd really like is some sort of stable outcome in which everyone is in a group, and nobody would prefer to be in a different group that would be eager to swap that person in for some other current member. Can we get that?

Well, the answer turns out to be "no". Imagine 4 people: Alice, Bob, Carol, and Doug, to be divided into two groups of two. Alice prefers to be with Bob, Bob with Carol, and Carol with Alice; Doug is the last choice of all three of them. Now consider any prospective grouping: whoever is grouped with Doug is the first choice of someone in the other pair, and those two would each rather be with each other than with their current partners. No matter how the four people are divided into groups, some such pair exists; every possible grouping is, in this sense, unstable.
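The four-person example is small enough to check exhaustively. This snippet takes the preference lists straight from the example above (Doug's own ordering is chosen arbitrarily, since it turns out not to matter) and finds a blocking pair for each of the three possible pairings:

```python
from itertools import combinations

# Preference lists, most-preferred first; Doug is everyone's last choice.
prefs = {
    "Alice": ["Bob", "Carol", "Doug"],
    "Bob":   ["Carol", "Alice", "Doug"],
    "Carol": ["Alice", "Bob", "Doug"],
    "Doug":  ["Alice", "Bob", "Carol"],  # arbitrary; irrelevant to the result
}

def partner(pairing, person):
    """Return person's partner under the given pairing."""
    for a, b in pairing:
        if person == a:
            return b
        if person == b:
            return a

def blocking_pairs(pairing):
    """Pairs who each prefer the other to their current partner."""
    rank = lambda p, q: prefs[p].index(q)
    return [(p, q) for p, q in combinations(prefs, 2)
            if rank(p, q) < rank(p, partner(pairing, p))
            and rank(q, p) < rank(q, partner(pairing, q))]

# The three possible ways to split four people into two pairs.
pairings = [
    [("Alice", "Bob"), ("Carol", "Doug")],
    [("Alice", "Carol"), ("Bob", "Doug")],
    [("Alice", "Doug"), ("Bob", "Carol")],
]

for pairing in pairings:
    print(pairing, "is destabilized by", blocking_pairs(pairing))
```

Every pairing prints a nonempty list of blocking pairs, confirming that no stable grouping exists; this is, in essence, the classic counterexample from the stable-roommates problem.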

For large numbers of people, with large groups, and with highly correlated preferences — that is to say, if people largely agree on who the safest risks are — the probability of this being a problem in actual practice gets very small very quickly. You could probably use just about any system you like to get groups down to 105, let them whittle it down to 100, and you would almost always have a stable alignment. This theoretical curiosity, then, isn't the biggest problem with the idea, though it is, I think, one of the more interesting ones.

The bigger practical problem is that it requires, at least in its most naive formulations, that everyone have an opinion about everyone else's level of riskiness, and an easy means of conveying it. I can imagine ways of getting around it, but on some level underwriting is a service provided by the insurance companies, who are presumably more or less expert at it; I would rather let Geico figure out which of my neighbors are the better risks, and allow me to put my efforts toward blogging about interesting but largely impractical ideas.

* In practice, I imagine everyone is thrown in the same pool with some policyholders asked to pay e.g. 1.5 times as much as others; that clearly still leaves an underwriting problem, and leads less naturally to the idea I'm trying to present.

preserving corporate liquidity in a crisis

Update: Apparently something like this has existed for asset-backed loans.

Buffett's letter got me thinking a bit more about liquidity and solvency, and I've slightly-more-baked an idea I had two years ago, in particular to the point where it now contains a policy prescription.

In another post here I mentioned maturity transformation, noting the reasons corporations are induced to borrow short-term for long-term needs. The problem is that, if you need to roll over loans every day or two, a market event can put you in default even if you're unambiguously solvent. On one level the obvious response to this is "don't do that", but the genuine benefit of being able to borrow short for some of your long-term money seems large, and the social cost of a large, solvent company having a lot of short-maturity debt in a financial crisis is a great deal less than the official penalty, which is bankruptcy. Aside from this is the time-consistency problem; I would prefer a set of rules that our regulators would be willing to actually stick with in a crisis, rather than rules whose ex post consequences are harsh enough that they lack their intended ex ante effect.

At the same time, my leanings are still libertarian, and I prefer that privately negotiated contracts be taken seriously. I also prefer fairly incremental changes to formal rules. At the moment, the regulations relating to publicly traded debt securities are lighter for instruments with maturities of no more than 270 days; I'm proposing that this treatment be extended to instruments with maturities of up to 450 days, provided that such instruments
  1. are callable within 270 days, and
  2. have a yield from the call date to the maturity date that is substantially — say 12 percentage points per year — above the yield to call.
The idea is to allow firms to write into their bonds that, in the event of extreme crisis, these private bond-holders, rather than the public, would be essentially providing emergency funding to the company, but under penalty terms of such a nature that the company will seek to avoid abusing this flexibility, or using it when it is not under simultaneous pressure to borrow at longer durations elsewhere.
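The penalty economics can be made concrete with a hypothetical note (all the numbers here are mine, chosen for round arithmetic): up to 450 days, callable at 270, with a 12-point step-up after the call date.

```python
def total_interest(principal, base_rate, penalty_spread=0.12,
                   call_day=270, maturity_day=450, retired_day=None):
    """Interest paid on the hypothetical step-up note described above.

    base_rate applies through the call date, base_rate + penalty_spread
    afterward; rates are annualized simple interest on a 360-day year
    (an illustrative convention, not any real contract's day count).
    """
    end = maturity_day if retired_day is None else retired_day
    pre = principal * base_rate * min(end, call_day) / 360
    post = principal * (base_rate + penalty_spread) * max(0, end - call_day) / 360
    return pre + post

# On $1mm of paper at 1%: retiring at the 270-day call costs $7,500 of
# interest, while riding it the full 450 days costs $72,500 -- a strong
# inducement to refinance at term as soon as markets allow.
```

The nearly tenfold jump in interest cost is the point: the extension is survivable, but expensive enough that the firm will only lean on it when it genuinely cannot roll the paper.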

If this really operates as intended, it may benefit money-market fund holders who (beneficially) own these bonds. If the company is solvent, this ultimately results in some extra yield; and even for those who are seeking to cash out (perhaps because their own need for liquidity has risen at the same time as everyone else's), the values of the bonds are likely not to fall very much, and may even rise. As I envision it, the primary effect of this sort of clause is to solve a coordination problem in which all lenders are profitable as long as the firm doesn't need to borrow money it can't borrow in the short term, but in which the last ones out lose if it becomes a race to the exit. The firm, facing a suddenly high cost of funds, would be induced to issue a 3 or 5 year bond a month or two later, whenever it can do so at interest rates even a couple of points above what it might hope to pay by waiting longer, because of the penalty rate on the commercial paper, which it would rather retire as soon as it can.

This may have an adverse effect on the credit profile of these instruments. If a firm is in actual trouble — its flagship product turns out to kill its users or something — the holders of these instruments will almost certainly lose money, though in a lot of these situations they would be likely to lose that money anyway.* I expect that, if these instruments started to appear, investors would quickly get used to them and would price the credit risk reasonably, rather than simply refusing to buy them at any price.

While it is certainly possible that these sorts of instruments would lead borrowers to skew more of their borrowing toward the short end and, more generally, to take fewer other steps than perhaps they do now to ensure their access to liquidity, it seems likely to me that any systematic mispricing of these instruments would make them less attractive to borrowers than they should be; they would be the cheaper mechanism for dealing with liquidity concerns only when they truly do less overall harm than the other options available to borrowers. They create a new tool, ultimately, for doing maturity transformation, and for solving some of the market failures currently associated with it.

* I have mixed feelings about the extent to which short- versus long-duration lenders should have effectively different seniority in a bankruptcy claim. Short-duration lenders, in principle, are in a better position to see problems coming, and in that sense are more at fault than long-duration lenders; but the problems often develop over long periods of time, and long-term lenders, through the imposition of covenants for example, might be in a better position than short-term lenders to impose the discipline that keeps borrowers from getting into that trouble in the first place.