Monday, October 10, 2011

natural rate of unemployment

The concept of NAIRU* is one of suspect theoretical validity, I think, in part because we don't really have a good comprehensive model of unemployment and inflation; that said, it often seems to correspond to something real, insofar as high-frequency changes in the unemployment rate seem to lead high-frequency accelerations in nominal wages, suggesting a possibly time-varying unemployment rate above which wage inflation decelerates and below which it accelerates. One of the concerns with the length of the recent employment recession, and in particular with the number of people who have been unemployed for a long time, is that these people lose skills (in various senses of that concept), suggesting perhaps that these phenomena will lead to a rising NAIRU. It seems to me that, framed this way, this might be a testable question: is the low-frequency component of nominal wage acceleration, with the high-frequency relationship to unemployment backed out, predicted by any combination of previous unemployment rates and previous rates of long-term unemployment?


*Non-Accelerating Inflation Rate of Unemployment.
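
To make the question concrete, here is a rough sketch, in Python, of the sort of two-step exercise I have in mind; the series here are random placeholders (real ones would come from the usual wage and unemployment data), and nothing about the specification should be taken as settled.

```python
# A rough sketch of the test described above, on placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 160  # 40 years of quarterly observations
df = pd.DataFrame({
    "wage_growth": rng.normal(3.0, 1.0, n),  # nominal wage growth, % (placeholder)
    "unemp": rng.normal(6.0, 1.5, n),        # unemployment rate, % (placeholder)
    "lt_share": rng.normal(20.0, 5.0, n),    # share unemployed 27+ weeks, % (placeholder)
})

# Wage "acceleration" is the change in wage growth.
df["wage_accel"] = df["wage_growth"].diff()

# Step 1: back out the high-frequency relationship by regressing wage acceleration
# on the change in the unemployment rate.
step1 = sm.OLS(df["wage_accel"], sm.add_constant(df["unemp"].diff()), missing="drop").fit()
df["resid"] = step1.resid

# Step 2: smooth the residual to isolate its low-frequency component, and ask whether
# lagged unemployment and the lagged long-term-unemployment share predict it.
df["low_freq"] = df["resid"].rolling(12, center=True).mean()
X = sm.add_constant(df[["unemp", "lt_share"]].shift(4))
step2 = sm.OLS(df["low_freq"], X, missing="drop").fit()
print(step2.summary())
```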

Monday, October 3, 2011

bad statistics

I was thinking today in my econometrics class that perhaps, a few years down the road, I would like to teach a course in bad econometrics. There would certainly be a unit on data-mining (overfitting) and multiple comparisons; there could be some coverage of the weak-IV literature from the '90s; neglected cointegration, heteroskedasticity, collinearity, and other specification problems; and certainly the prosaic beginner-level material on comparing data series that have been made incomparable, for example by deflating them with different inflation indexes. The hope, of course, is that students would learn to recognize and avoid these errors, but it would appeal to my temperament to try to present it "with a straight face", as it were.
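
As a candidate first lecture for such a course, here is a small Python demonstration of the data-mining and multiple-comparisons unit, presented, as promised, with a straight face; the data are pure noise by construction.

```python
# Regress noise on twenty irrelevant regressors, keep the "significant" ones,
# and admire the spuriously respectable refit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, k = 100, 20
X = rng.normal(size=(n, k))   # twenty regressors, none of which matter
y = rng.normal(size=n)        # the dependent variable is unrelated noise

full = sm.OLS(y, sm.add_constant(X)).fit()
keep = [i for i in range(k) if full.pvalues[i + 1] < 0.10]   # the "specification search"
refit = sm.OLS(y, sm.add_constant(X[:, keep])).fit()

print("regressors surviving the search:", keep)
print("their p-values in the refit model:", np.round(refit.pvalues[1:], 3))
# With twenty tries at the 10% level, a couple of regressors will usually survive,
# and the refit regression looks publishable despite there being no signal at all.
```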

If anyone else has any other suggestions, please offer them.

Wednesday, August 31, 2011

anti-trust law, incentives, and coordination problems

When two companies merge, they sometimes achieve efficiencies of scope or of scale, but where they were previously competitors, the merger may reduce competition. If the former effect dominates the latter, the merger may benefit consumers, but if the latter dominates the former, consumers will be harmed, and frequently the net harm to consumers is likely to exceed the net benefit to other parties (e.g. stockholders). Even where this last inequality does not hold, it is frequently within the ambit of regulators only to consider the effect on consumers, and this may be a defensible principle. Regulators may, however, have less knowledge about the likely effects of the merger than industry sources. It would be nice if there were a way to induce industry sources to reveal useful information to the regulators.

Mankiw pointed out a couple of years ago that, where a merger is likely to benefit consumers, it will do so by lowering prices and raising quality, while where it hurts consumers, the opposite will be true. Either way, competitors of the prospective merger partners will be helped or harmed exactly inversely to the effect on consumers; thus, in this narrow context at least, the anti-business instincts of certain populists are actually well justified. If competing businesses are lobbying in favor of a merger, Mankiw suggested, block it; if they lobby against it, approve it. He noted, though, that it is necessary to keep such a policy quiet; if the companies know their lobbying has these perverse effects, they will no longer lobby in a way that reveals the relevant information. The policy is not, in this sense, incentive compatible.

On the other hand, what just occurred to me is that one might well be able to, openly and publicly, follow the evolution of the stock prices of competitors that are publicly traded. A potential shareholder in a competitor to the prospective merger partners will wish for the regulator to see a drop in share prices on the announcement of the potential merger when the merger would in fact benefit the company, but he still finds it in his own interest to buy ahead of other potential shareholders; similarly, if he would like regulators to see an increase, he may still wish to sell before others do. If all buyers and sellers in a particular stock could form a cartel, they would jointly find an advantage in acting to confuse the regulator; what they might narrowly view as a "tragedy of the commons" may in fact serve, in this case, to enhance the public good.

While there is a tendency to think of coordination problems as a bad thing, in fact they are frequently quite useful, if usually in combating the effects of other coordination problems (or private information problems). Most of forensic accounting, in fact, involves asking different parties to keep track of essentially redundant information; while a single agent might be able to forge all of its own records in a consistent way, getting all of its business associates to forge their records in the same way is more difficult, such that accountants can subpoena everyone's records and find inconsistency in the combined dataset. Indeed, perhaps the most famous illustration of a coordination problem, with the possible exception of the aforementioned tragedy of the commons, is the prisoner's dilemma, in which a mechanism designer has, according to the usual story, explicitly designed the coordination problem in order to turn the agents against themselves.
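
For completeness, here is the prisoner's dilemma in a few lines of Python, with standard textbook payoffs (my numbers, not anything from a particular source), just to make explicit how the designed coordination problem works.

```python
# Payoffs are years NOT served (higher is better), from the usual textbook story.
payoffs = {  # (my move, other's move) -> my payoff
    ("quiet", "quiet"): 3, ("quiet", "defect"): 0,
    ("defect", "quiet"): 5, ("defect", "defect"): 1,
}
for others_move in ("quiet", "defect"):
    best = max(("quiet", "defect"), key=lambda mine: payoffs[(mine, others_move)])
    print(f"if the other prisoner plays {others_move!r}, my best reply is {best!r}")
# Defection is the best reply either way, so the designer gets both confessions,
# even though the prisoners jointly prefer the (quiet, quiet) outcome.
```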

In the case of two agents, cooperation is more likely to obtain, especially where they know each other, than it is with many anonymous agents; "incentive compatibility" as it is usually treated in the literature requires only that each agent find it unprofitable to unilaterally deviate from the prospective equilibrium, supposing that nobody else does so. In actual practice, it seems likely that this condition is insufficient when the set of agents who would need to coordinate to block such an outcome is small and its members know each other well enough, in various senses, to solve their internal coordination problem, possibly creating what manifests itself as a coordination problem on a larger scale.

Friday, August 26, 2011

stimulative corporate tax policy

Last month I turned 35, and noted that I am now old enough to be President. I was asked my platform, and gave sarcastic responses. I do have a few ideas, though, that I think would be worth trying in order to stimulate* the economy; one of these is price-level targeting, which I've mentioned before. The others relate to corporate taxes.

The first has been badly garbled by Blogger, but involves increasing corporate tax rates to 40%, with a credible commitment to then lower rates by 2 percentage points each year until they drop to 30%. (How to make this credible is left unspecified.) The idea, though, is to induce firms to shift profits from the present into the future; if they can find a way to move $10,000 of pre-tax profit from next year to the year after that, they save $200 in taxes. The next proposals give the firm a way to do that.
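
The arithmetic, spelled out (the rate path is the one proposed above; the loop is just a check):

```python
# Rates start at 40% and fall 2 percentage points per year to 30%, so shifting
# pre-tax profit one year later saves 2 cents on the dollar until the rate bottoms out.
rates = [0.40, 0.38, 0.36, 0.34, 0.32, 0.30]
profit = 10_000  # the $10,000 of pre-tax profit from the example

for year in range(len(rates) - 1):
    saving = profit * (rates[year] - rates[year + 1])
    print(f"shift ${profit:,} from year {year} to year {year + 1}: save ${saving:,.0f}")
```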

The second idea is to allow companies to expense all investment for two years, and half of it in the third. Approximately, if a company spends $10,000 on a piece of equipment that will last ten years, it lists $1,000 as an expense each year for ten years; that is what it subtracts from revenue to calculate taxable profit. I'm suggesting that we allow companies to front-load the deductions; if you spend $10,000 on a piece of equipment this year, subtract the whole thing from your revenues. (You would then carry it at 0 value on the books; you would not deduct $1,000 per year in future years, and, if you sell it, the amount for which you sell it would be taxable revenue.)
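
A quick sketch of why the front-loading matters; the 35% tax rate and 6% discount rate here are illustrative assumptions, not part of the proposal.

```python
# Present value of the tax deductions on a $10,000 machine: ten-year straight line
# versus full expensing in year one.
cost, tax_rate, discount = 10_000, 0.35, 0.06

straight_line = sum(tax_rate * (cost / 10) / (1 + discount) ** t for t in range(10))
expensed = tax_rate * cost   # the whole deduction is taken immediately

print(f"PV of tax savings, straight-line: ${straight_line:,.0f}")
print(f"PV of tax savings, expensed now:  ${expensed:,.0f}")
# Same nominal deductions, but front-loading them is worth several hundred dollars
# per machine at any positive discount rate.
```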

The third idea is to cap each company's payroll tax contributions at, say, 80% of its level from some recent previous year; this provision, too, would hold for a few years. A company already paying less than that would, of course, not be affected; for a company at the cap, the margin changes: another $1 in salary paid no longer costs the company $1.075 (or thereabouts). Creating a new $30,000 job costs the company $2,250 less per year than it would have otherwise; creating a new $40,000 job costs $3,000 less; laying off a $40,000 worker will save $3,000 less than it otherwise would have. Further, again because of the declining corporate tax rate, a company that thinks it will want to hire two or three years from now has a bit more incentive to bump up its payrolls now.

Finally, while I view it as a feature that the third idea encourages companies to create higher-paying jobs to some extent, we can also allow the wages of any hourly worker to be deducted as though they were $10 per hour; e.g. if a company pays an employee $7.50 an hour for 20 hours, it lists an expense of $200 on its taxes rather than $150. This creates a little more incentive to create a job at the low end of the scale rather than not to create it, but it also means that raising the employee's pay to $9 per hour brings the company no additional deduction. This one is accordingly a bit dangerous, and is kind of intended to offset any harm done to especially low-skilled workers by recent increases in the minimum wage.
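
A quick check of the arithmetic in the last two proposals, using an assumed 7.5% employer payroll tax rate (the "$1.075 or thereabouts" above) and an illustrative 35% corporate rate:

```python
payroll_tax = 0.075
for salary in (30_000, 40_000):
    print(f"new ${salary:,} job: payroll tax no longer owed = ${salary * payroll_tax:,.0f}")

# The $10-per-hour deduction floor for hourly workers:
corporate_rate = 0.35           # illustrative assumption
wage, hours = 7.50, 20
deduction = max(wage, 10.0) * hours    # deduct as though the wage were $10 per hour
print(f"actual pay ${wage * hours:.0f}, deduction claimed ${deduction:.0f}, "
      f"extra corporate tax saved ${(deduction - wage * hours) * corporate_rate:.2f}")
```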

I should note that the second and third ideas in particular are not entirely my own; expensing investment is naturally what one would do in a more consumption-based tax system, and Mankiw has recently noted that temporary pro-investment tax provisions, such as an investment tax credit, would in many ways lower the effective interest rate used in investment decisions, thus giving a way around the zero bound on nominal interest rates. Singapore actually uses a countercyclical payroll tax, reducing the employer's share when unemployment is high (to reduce the cost of hiring) and raising it when unemployment comes down (to raise the necessary funds over the long term). I've essentially taken the marginal rates all the way to 0, with the 80% cap designed to mimic proposals by people who seek to raise something closer to the amount of revenue raised by payroll taxes at current levels.

* I want to emphasize that "stimulate" is supposed to indicate an emphasis on the short term; I'm not discussing here ideas that would focus on creating a good climate for sustained growth in the long run. The ideas presented, though, should not do great violence to long-run growth, either, and are partly constructed with it in mind.

Wednesday, July 27, 2011

production of capital goods

The whole philosophy of macroeconomics since Keynes more or less invented it has been to aggregate variables like "consumption" and "investment" and ignore differences between different kinds of consumption, as well as structural details about what specific goods are used to produce what other specific goods. The approach has its uses, and macroeconomists have shed some useful light on the workings of the economy this way, but even on what one would think of as a macro basis it seems likely that some details will make differences in some circumstances. One of the big changes over the past generation or two that might be important is the changing nature of the production of capital goods.

Forty years ago, "capital" meant heavy machinery — factory equipment, earth movers, that sort of thing. People who produced capital goods were to a large extent machinists and factory workers. Today a lot of "capital" is software. Software is "nonrival", meaning that producing software for 1,000,000 customers is not much more expensive than producing the same software for 10 customers; it is also the case that a lot of software production builds on previous versions of similar software. "Fixed investment" has been supporting the recovery to a greater extent than in previous recoveries, but in 2011 that means more software and fewer machine tools than it would have 30 years ago.

Institutional capital, which I'm largely leaving out, may also be more important now than it was 40 years ago; more of the labor force consists of people for whom an important part of their value to their current employer is detailed knowledge of coworkers, workplace culture, and procedures than I think was the case forty years ago. This is especially true in the production of modern forms of capital as compared to production of older forms of capital; engineers and computer programmers working on projects too big for any one of them to complete alone are harder to replace with other experienced employees with the same generic training than is the case for machinists. (This is not an absolute truth, but is broadly the case; certainly any sizable company will benefit from employees who know the idiosyncrasies of that company, or even of its particular workplaces. I believe it to be more true, in general, of engineering kinds of work than of machining or factory work.) Related human capital is also likely to be more important for more modern forms of capital than for older forms.

Confident answers are not in the purview of this blog, but it seems reasonably likely to me that this contains a partial explanation for the slow recovery of employment after recessions that has been increasingly witnessed over the past 25 years. When demand shows its first signs of renewal, firms may turn first toward replacing their capital, whose prices are more likely to drop than are wages; the producers of capital themselves need not hire a lot of new workers, nor raise the prices of their products a great deal, until demand is quite substantial, and (especially in times of uncertainty) may be reluctant to increase employment too quickly because of the investment that would require in institutional capital as well.

Friday, July 15, 2011

price changes

I'm interested in markets, and one of the complicated things about markets is that they involve people. Netflix has gotten itself some flak recently for revamping its price structure, breaking apart (as I understand it) two services that were previously bundled and selling them separately (without a joint discount), such that the total price of the original package, for those wishing to replicate it, has gone from $10 per month to $16 per month. One of the comments I saw was "this is too big a price change to implement all at once"; if we take this complaint at face value, it might have created less of a stir if they had offered a $3.50 discount on the bundle for a period of time. This is similar to something I heard at the annual meeting of my cooperative apartment building; we're being hit with some big expenses in a year or two, so the board decided to increase our monthly maintenance payments this past year so that the increase next year won't be one big jump.

I'm curious as to what kinds of price changes strike people as "unfair" and what kinds don't. Stock prices change quite frequently, but the buyers and sellers are very dispersed and anonymous; I think people have less of an emotional "fairness" response to stock price moves than to other kinds. Gold, at least as traded on financial markets, is similar; so, though, is oil. Most people purchase their oil distillates, though, from recognizable brands, and even though they're usually mercenary about it themselves — most people, choosing between Exxon and BP, will go with whichever is cheaper on a given day — they seem to object to higher prices than they're used to. This is likely also a function of the fact that people build their habits around consuming oil distillates at a constant rate, and don't like responding to prices; the demand elasticity of gasoline, especially in the short run, is very low (which is precisely why the price is sometimes so volatile).

If the shop on the corner raises its prices, my understanding is that people tend to regard this more favorably if the retailer's costs have recently gone up than if it's simply an attempt to ration rising demand in the face of potential shortages. (It's worth noting in this context, though, that part of Netflix's decision seems to have been related to costs.)

And, as suggested at the beginning, it may be that increases of a certain size produce a certain amount of sympathy, especially in the face of rising costs, but that there are certain breakpoints past which the customer would respond less viscerally if the change were phased in. What interests me in particular here is to what extent it's an abrupt change in expectations, rather than an abrupt change in prices, that creates the angst. If Netflix had announced this change 18 months ago, to take effect now, would it have produced as much complaint then as it is producing now, or as much complaint now, or would the complaint have been spread out or even reduced? If the old rates had been (credibly) portrayed, as soon as the bundled items were being sold together, as a special, trial offer, would the new price structure have been more readily accepted? (If you give away an item for free for two months, any price you charge afterward is an increase of more than the 60% at issue here, but it would presumably be more readily accepted; there would be an expectation that the free period was a limited-time offer.) I note in this context that O'Hare airport some years back raised its parking rates by announcing, at the beginning of the holiday season, that it was offering "special holiday rates" that equaled the rates in October; they actually raised the rates at the beginning of January by allowing the "special rates" to expire.

Another anecdote: about ten years ago, I was a regular in a sandwich shop, and recognized as such; they increased the price of a sandwich by 10 cents at one point, but comped me a free sandwich when they made the change. I imagine them imagining me thinking, "They're nice people and they like me, so I understand that they have to raise their prices once in a while."

I'm not offering grand theories, but my speculative observations are that upset increases when
  • demand for the good is inelastic
  • price increases result from shortages or rising demand, rather than from cost increases
  • price increases are "big"
  • price increases are unexpected
  • markets are less anonymous.

Wednesday, July 6, 2011

informational complementarities of production

I've had recent occasion to discover that there are certain brands of baby goods whose manufacturers produce a number of different items for babies — my wife could tell you the names of some of the brands. These different items often seem to have relatively little in common other than being small consumer-grade manufactured items; it's not clear why a single company would have a production efficiency in producing this set of goods, or why e.g. Black and Decker couldn't just as logically produce a stroller. (Perhaps there's some valuable cross-product knowledge about the range of shapes and sizes of babies' bodies.) What seems, based only on anecdotal evidence and my own speculation, to drive this is instead a reputational complementarity; it would, in fact, not surprise me to learn that the items are produced by third parties and rebranded. A new mother who studies up on products and develops a sense that a brand of one product is good can transfer that impression to other products under the same brand name.

Friday, June 10, 2011

parimutuel betting

I was a bit shocked when I found out that, when betting on horses, if you make a bet when the odds are 7-2, and a bunch more people make that bet after you did, you can be stuck with a bet at 5-2 odds;* the odds quoted before betting closes are not really a price at which an offer is being made, so much as what in certain finance contexts would be called an "indicative price", which provides a certain amount of information.

The biggest — by far — justification I can see for a parimutuel system is that it's the 1930s and nobody has ever heard of a computer; rather than keep track of who bet what at what price, you just keep track of how much was bet on Runs For Miles, take (most of) the money that was bet, and prorate it. There's less bookkeeping to be done than if you're making bets on different terms with different gamblers, as is done with most Vegas sports betting, and there's no chance whatsoever that the house loses money; if it puts out 7-2 odds and gets a lot of bets, it doesn't just move the odds on future bets to 5-2 to try to attract bets to other horses, it moves the past bets to 5-2, in some sense devaluing their weight on the book.
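
A toy version of the bookkeeping, with made-up pool sizes and a made-up 17% takeout, to show why the odds you saw when you bet aren't the odds you get:

```python
# The track pools all bets, takes its cut, and prorates the rest over the winning
# tickets, so the effective odds depend on the final pool.
pool = {"Runs For Miles": 40_000, "Second Wind": 25_000, "Also Ran": 15_000}
takeout = 0.17

total = sum(pool.values())
net = total * (1 - takeout)
for horse, bet in pool.items():
    payout_per_dollar = net / bet   # what a winning $1 ticket returns
    odds = payout_per_dollar - 1    # quoted "X-1" style
    print(f"{horse}: pays ${payout_per_dollar:.2f} per $1 bet (about {odds:.1f}-1)")
# More money arriving on a horse after you bet shrinks the payout on that horse,
# which is exactly the 7-2-turning-into-5-2 experience described above.
```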

The context in which stock exchanges provide an "indicative price" resembles horse gambling in that everyone is placing orders and they are being accumulated without any actual trades taking place; a single auction is ultimately held based on the orders accumulated. Some of the orders will be limit orders — "Buy at up to $45 per share" — and others will be market orders — "Buy at any price." When the auction actually goes off, a price is determined at which the amount bought will equal the amount sold; if you pick a particular price and there would be more buyers than sellers at that price, you can raise the price until some of the limit orders to buy become inactive and some of the limit orders to sell become active, decreasing the number of effective buy orders and increasing the number of effective sell orders. Before the auction has executed — while orders are still being accumulated — an "indicative price" will be quoted, telling traders what price the auction would produce if no more orders were placed (or, alternately, if all future orders were balanced at that price); if you place a limit order at that price, though, and it moves against you, your order won't be executed.
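
A minimal sketch of that single-price auction, with invented orders, showing how the indicative price would be computed from the book as it stands:

```python
buys = [(45.00, 300), (44.50, 200), (None, 100)]    # (limit price, shares); None = market order
sells = [(44.00, 150), (44.75, 150), (None, 100)]

def executable(price):
    buy_qty = sum(q for p, q in buys if p is None or p >= price)
    sell_qty = sum(q for p, q in sells if p is None or p <= price)
    return buy_qty, sell_qty

# Scan the limit prices themselves for the best-balanced one; that is the number
# an exchange would publish as the indicative price while orders accumulate.
candidates = sorted({p for p, _ in buys + sells if p is not None})
indicative = min(candidates, key=lambda price: abs(executable(price)[0] - executable(price)[1]))
print("indicative price:", indicative, "-> (buy, sell) quantities:", executable(indicative))
# Raising the price knocks out buy limits and brings in sell limits, which is how
# the auction works off a surplus of buyers.
```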

With the proliferation of computers to facilitate the process, I wonder whether potential horse gamblers would be interested in a system in which you could place limit orders. Current horse gamblers are presumably selected for not loathing the current system with a burning fire of rage, but I think at the very least that "let me out if it goes past 3-1" should not be confusing or offensive to people who are currently betting; they wouldn't be required to give a limit price, anyway. The house would still, by setting the price after the bets are in, make sure its books are always balanced, and in fact all bets on a given horse would still ultimately go off at the same odds, so the reporting system wouldn't be different. People who placed bets that ended up ineffective would have to have their money refunded, which could be a logistical nuisance. If it would attract any significant number of new bettors, though, it seems like it could make horse gambling more popular. Which no doubt would be reason enough for some other people to oppose it.


* I'd be inclined to call the system "unfair" except that pretty much anything that's upfront about its unfairness when you're going in seems to me ipso facto not really that unfair. There are probably exceptions to this rule (of what seems to me to be fair or unfair), but they're very uncommon.

This makes it a lot like the Dow Jones Industrial Average.

Kind of like a terms-of-trade effect.

Thursday, March 24, 2011

yield-curve targeting

Scott Sumner's big idea is that the Federal Reserve should target expected values of nominal GNP. When he's been at his most concrete, he has suggested an automatic program to inject certain amounts of money through open market operations when futures on GNP drop below a certain level, and withdraw that liquidity when the futures exceed the level. I've been thinking about this and related things for a while, and want to put forth an idea related to them.

First I want to discuss level targeting versus growth (which, in the case of prices, is called "inflation") targeting. I tend more toward the former camp because the time-consistency problems are less bad there; if inflation comes in at 1% against a stated 2% target and you simply continue to assert a 2% target, the market will continue to expect whatever it continues to expect; you can get self-fulfilling expectations. With a price-level targeting system, targeting price levels along a path with 2% annual growth, if inflation comes in at 1% one year, you aim to make it up; markets can form better-grounded expectations that, a few years from now, price levels will be around what you've promised, and to the extent that expectations are self-fulfilling they will even help you get there rather than wandering off in persistent indifference to your stated policy.

Level targeting per se suffers from a different sort of time-consistency problem, though, or perhaps two different, though closely related, such problems. If prices move too far off the target path, the Fed may be under pressure to revise the target; the choice is then between insisting on remaining firm, possibly requiring deflation or high levels of inflation, or revising the target, in which case a lot of credibility goes out the window. (This working paper from 8 years ago titled "Tough Policies, Incredible Policies?" notes that imposing costs on oneself to generate credibility means one might incur those costs in an extreme event, making the situation worse; this is related.) On a related note, on adopting the regime, a target level has to be decided upon and announced; particularly early on, there will be speculation as to whether the Fed would, at some point, use whatever criteria it used to set the initial target path to set a revised target path.

What I support therefore lies somewhere between a level target and a growth target: revise the target level by some fraction of the deviation from the target path. If this fraction is 1, you have growth targeting; if it's 0, you have level targeting. If you use between 0.03 and 0.05 per quarter — a decay time of 5–8 years — your target is pretty rigid over the course of a year or two of ordinary noise, such that it invites self-fulfilling forces in its favor, but large deviations are partially accommodated, making it more credible that the Fed will continue to maintain the policy in the face of a crisis — abandoning (slightly) its target, but in a pre-determined way, such that markets can form expectations, and those should generally (again) be stabilizing. Further, this policy could be adopted today and would spit out the same target level as if it had been in use for 25 years; not only does this mean that "revising" the target, according to the same criteria, several years down the road would mean no revision, but it means that the Fed builds credibility for the regime more quickly, as it has no more reason to revise its target soon after adoption than it would in midstream, and can demonstrate that the target level wasn't simply chosen to be easier, for political reasons, to hit.
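
Here's a small, deterministic illustration of the mechanics; the 5% annual trend, the 0.04-per-quarter fraction, and the single 5% shortfall are all just illustrative numbers.

```python
# Each quarter the (log) target grows at trend, but is also revised by a fraction
# phi of last quarter's deviation of (log) GNP from the target. phi = 0 is pure
# level targeting; phi = 1 is pure growth targeting.
quarters, trend, phi = 61, 0.0125, 0.04

gnp, target = 0.0, 0.0   # logs, both starting on the path
for t in range(1, quarters):
    target += trend + phi * (gnp - target)        # concede a sliver of last quarter's miss
    gnp += trend + (-0.05 if t == 8 else 0.0)     # on-trend growth except one crisis-sized miss
    if t in (8, 12, 28, 60):
        print(f"quarter {t:2d}: GNP minus target = {gnp - target:+.3f}")
# The 5% miss is still almost entirely on the books a year later, but over the
# following decade the target quietly absorbs most of it.
```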

I like the idea of targeting nominal GNP better than targeting prices, partly because it feels like a change in real GDP should be met by an accommodating change in the level of inflation targeted — that is to say, the direction at least is right: it makes sense to lower the inflation target when the economy is growing quickly and to run monetary policy that is looser than a strict inflation target would give when the economy is weak. I also like it because I think it can be measured better; price targeting requires deciding which prices to target, and measuring those prices requires hedonic adjustments and the like, while nominal GNP, at least in principle, is simple: just add up dollars for everything, regardless of how the goods or services for which they're exchanged have changed from the previous period. (I don't care much about the difference between GNP and GDP, and seem to have gotten sloppy about which I use. They're close — at least as close as the fed funds rate and the T-bill rate, which I'll implicitly conflate shortly — and I expect Scott has better reasons for supporting the one he supports than I would have for either.)

Over the short run, I would rather maintain the practice of targeting short-term interest rates rather than the quantity of money (any money); the difference between the two policies amounts to a difference in accommodation of short-term variations in liquidity demand as, for example, pay checks clear. It seems likely to me that putting this variation in the quantity, rather than price, of liquidity will impose less volatility on the real economy. I will, at the moment, simply ignore any problem this creates when interest rates are 0. This is part of the privilege of having a blog.

What I'm looking at, then, is a regime of calculating how far GNP is from a target that largely grows at a constant rate, but will absorb deviations over the course of a business cycle, and targeting an interest rate based on the deviation from the trend and expectations of growth in the near future. If growth has been too low lately, we lower interest rates; similarly, if it's expected to be too low in the near future, we lower interest rates.

What interests me — what triggered this post — is that, once you're in a credible policy regime of setting short-term rates based on recent deviations from your targeted long-term growth path, you don't need to create a market for GNP futures; long-term interest rates largely reflect expected future short-term interest rates, which will in turn depend on expected GNP growth. At this point I simply make short-term interest rates a function of the current deviation of GNP from its target and of, say, the ten-year Treasury yield.
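
Purely for concreteness, here is one hypothetical rendering of such a rule; the coefficients are placeholders (the next paragraph is about the fact that I don't know what they should be), and, as above, I'm ignoring the zero bound.

```python
def policy_rate(gnp_gap, ten_year_yield, neutral=2.0, a=0.5, b=1.0):
    """gnp_gap: percent deviation of nominal GNP from its target path (negative = shortfall).
    ten_year_yield, neutral: percent. a, b: purely illustrative reaction coefficients."""
    return neutral + a * gnp_gap + b * (ten_year_yield - neutral)

print(policy_rate(gnp_gap=-3.0, ten_year_yield=2.5))   # weak economy, low long rates -> 1.0
print(policy_rate(gnp_gap=+1.0, ten_year_yield=4.5))   # overheating, higher long rates -> 5.0
```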

I'm not sure yet what relationship is required between the parameters to ensure determinacy, or to make this most nearly result in actually targeting expected GNP (at some given distance in the future), but I had an idea a few years ago — before I started taking macroeconomics classes — that perhaps the FOMC should direct the open market desk to target a certain steepness for the yield curve, just on the grounds that, hey, if long-term interest rates move down, we probably want short-run rates lower, too. I don't know that "1" is the right coefficient to put on any duration of interest rate (or a weighted average of such interest rates); perhaps it would not be. In any case, I have some aspirations at some point to try to write this down and solve for interest rates in terms of parameters and GNP expectations, but that should probably wait until the summer.

Friday, March 18, 2011

mutual underwriting

A particularly weird idea that's popped up in my head in the last month is on some level an extension of the old idea of a mutual insurance company, wherein policyholders are also residual owners of the company; in caricature, everyone pays in somewhat more at the beginning of say a six month period than they expect to lose and everyone gets back a portion of what's left after paying losses incurred during the period. If the overall risk level was higher than initially estimated, people may not get back the rebate they were hoping for, but if the overall risk level is lower, people end up effectively paying a lower premium for the period in which they were covered. They thereby insure their risk by spreading it among their fellow policyholders, remaining exposed to unexpected levels of overall risk, but they don't face the problem of perhaps believing that the insurance company is being overly conservative in setting its premiums — if that's true, the policyholders will ultimately get back the difference.

These require, in some sense, less absolute underwriting, but still require relative underwriting; if 100 homeowners in identical homes are buying insurance, that's easy, but if 12 of them have propane tanks next to their houses and the other 88 don't, charging everyone the same premium doesn't seem as fair. One solution here is to subdivide the groups: let the 88 buy insurance from each other, and the 12 buy insurance from each other, without the cross-subsidy. Each time we subdivide, though, the insurance becomes less useful — I don't have the law of large numbers working for me terribly effectively when there are only 12 of us, since the whole point of buying insurance was not to be exposed to the risk of a large loss, and one twelfth of a house is a large loss. And, since any two houses are different, at the very least in location (e.g. distance from fire stations), someone has to decide which houses are similar enough that it is better to throw them in the same pool, and which differences are sufficiently salient that they should be in different pools, even at the cost of a higher variance of outcomes for the policyholders.*

The idea that popped into my head is essentially that we let groups decide on their own which groups to join. Obviously simply saying, "Here are two pools: the safe pool, and the risky pool. Which do you want to join?" isn't going to work — the safe people need to be able to exclude the risky people in some fashion. One idea is to cap the number of people in each group, and let members of oversubscribed groups vote on which of the other (attempted) subscribers to keep; this gets close to the self-underwriting flavor that I was looking for when I thought about it. The same sort of arrangement could apply to health insurance, or even to mortgage lending (in something like a credit union), though that introduces other complications.

The details of the voting would be interesting, though; does everyone in the pool list their favorite 99 co-poolers, with the top 100 getting to be in the pool? Or perhaps everyone gets to vote for as many as they like, and the top 100 are in. Or maybe rank applicants in order, and give a certain number of points for first-place votes, etc. Or perhaps we should do something recursive: what if the group of the top 100 applicants, as measured by the votes of all applicants, differs from the group of the top 100 applicants as those 100 applicants themselves vote (i.e. excluding the votes of the rejected applicants)? What we'd really like is some sort of stable outcome in which everyone is in a group, and nobody would prefer to be in a different group that would be eager to swap that person in for some other current member of the group. Can we get that?

Well, the answer turns out to be "no". Imagine 4 people: Alice, Bob, Carol, and Doug. They are to be divided into two groups of two people. Alice prefers to be with Bob, Bob with Carol, Carol with Alice; Doug is the last choice of all three of them. Now consider a prospective grouping; the person who is grouped with Doug is the first choice of one of the other two, and can go to them and say, "hey, let me join your group." No matter how the four people are divided into groups, there is always someone from each group who would rather be with each other than with their current group; every possible grouping is, in this sense, unstable.
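
A brute-force check of the example, enumerating the three possible pairings and looking for a blocking pair in each:

```python
prefs = {  # each person's ranking of possible partners, most preferred first
    "Alice": ["Bob", "Carol", "Doug"],
    "Bob":   ["Carol", "Alice", "Doug"],
    "Carol": ["Alice", "Bob", "Doug"],
    "Doug":  ["Alice", "Bob", "Carol"],   # Doug's own ranking doesn't matter here
}

def prefers(person, candidate, current):
    return prefs[person].index(candidate) < prefs[person].index(current)

pairings = [
    [("Alice", "Bob"), ("Carol", "Doug")],
    [("Alice", "Carol"), ("Bob", "Doug")],
    [("Alice", "Doug"), ("Bob", "Carol")],
]
for pairing in pairings:
    partner = {a: b for pair in pairing for a, b in (pair, pair[::-1])}
    blocking = [(x, y) for x in prefs for y in prefs
                if x < y and y != partner[x]
                and prefers(x, y, partner[x]) and prefers(y, x, partner[y])]
    print(pairing, "is blocked by", blocking)
# Every pairing prints a non-empty blocking list, so no stable arrangement exists.
```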

For large numbers of people, with large groups, and with highly correlated preferences — that is to say, if people largely agree on who the safest risks are — the probability of this being a problem in actual practice gets very small very quickly. You could probably use just about any system you want to get groups down to 105, let them whittle themselves down to 100, and you would almost always have a stable alignment. This theoretical curiosity, then, isn't the biggest problem with the idea, though it is, I think, one of the more interesting.

The bigger practical problem is that it requires, at least in its most naive formulations, that everyone have an opinion about everyone else's level of riskiness, and an easy means of conveying it. I can imagine ways of getting around it, but on some level underwriting is a service provided by the insurance companies, who are presumably more or less expert at it; I would rather let Geico figure out which of my neighbors are the better risks, and allow me to put my efforts toward blogging about interesting but largely impractical ideas.


* In practice, I imagine everyone is thrown in the same pool with some policyholders asked to pay e.g. 1.5 times as much as others; that clearly still leaves an underwriting problem, and leads less naturally to the idea I'm trying to present.

preserving corporate liquidity in a crisis

Update: Apparently something like this has existed for asset-backed loans.

Buffett's letter got me thinking a bit more about liquidity and solvency, and I've slightly-more-baked an idea I had two years ago, in particular to the point where it now contains a policy prescription.

In another post here I mentioned maturity transformation, noting reasons corporations are induced to borrow short-term for long-term needs. The problem is that, if you need to roll over loans every day or two, a market event can put you in default even if you're unambiguously solvent. There's a level on which the obvious response to this is "don't do that", but it seems that the genuine benefit of being able to borrow short for some of your long-term money is of large value, and that the social cost of a large, solvent company having a lot of short-maturity debt in a financial crisis is a great deal less than the official penalty, which is bankruptcy. Aside from this is the time-consistency problem; I would prefer a set of rules that our regulators would be more willing to actually stick with in a crisis, rather than consequences so harsh ex post that they are unlikely to be imposed, and so lack their intended ex ante effect.

At the same time, my leanings are still libertarian, and I prefer that privately negotiated contracts be taken seriously. I also prefer fairly incremental changes to formal rules. At the moment, the regulations relating to publicly traded debt securities are easier for instruments with maturities of no more than 270 days; I'm proposing that this include instruments with maturities of up to 450 days, provided that such instruments
  1. are callable within 270 days, and
  2. have a yield from the call date to the maturity date that is substantially — say 12 percentage points per year — above the yield to call.
The idea is to allow firms to write into their bonds that, in the event of extreme crisis, these private bond-holders, rather than the public, would be essentially providing emergency funding to the company, but under penalty terms of such a nature that the company will seek to avoid abusing this flexibility, or using it when it is not under simultaneous pressure to borrow at longer durations elsewhere.
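
To put rough numbers on it (all invented): a $10 million note yielding 1% annualized to a 270-day call, with a 13% annualized rate from the call date to final maturity at 450 days, i.e. 12 points of penalty.

```python
face = 10_000_000
yield_to_call, penalty_yield = 0.01, 0.13
call_day, maturity_day = 270, 450   # money-market day count (actual/360) assumed

interest_to_call = face * yield_to_call * call_day / 360
interest_if_extended = face * penalty_yield * (maturity_day - call_day) / 360

print(f"interest if called on schedule:           ${interest_to_call:,.0f}")
print(f"extra interest for the 180-day extension: ${interest_if_extended:,.0f}")
# The extension is expensive enough that a solvent issuer will refinance at term as
# soon as markets allow, but it is far less costly than a technical default.
```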

If this really operates as intended, it may benefit money-market fund holders who own (beneficially) these bonds; if the company is solvent, this ultimately results in some extra yield, and even for those who are seeking to cash out (perhaps because their own need for liquidity has risen at the same time as everyone else's), the values of the bonds are likely not to fall very much, and may even rise. As I envision it, the primary effect of this sort of clause is to solve a coordination problem in which all lenders are profitable as long as the firm doesn't need to borrow money it can't borrow in the short term, but in which the last ones out lose if it becomes a race to the exit. The firm, facing a suddenly high cost of funds, would be induced to issue a 3 or 5 year bond a month or two later, whenever it can do so at interest rates even a couple of points above what it might hope to pay by waiting longer, because of the penalty rate on the commercial paper, which it would rather retire as soon as it can.

This may have an adverse effect on the credit profile of these instruments. If a firm is in actual trouble — its flagship product turns out to kill its users or something — it seems that the holders of these instruments will almost certainly lose money, though it seems likely that in a lot of these situations they would be likely to lose that money anyway.* I expect that, if these instruments started to appear, investors would quickly start to get used to them and would price the credit risk reasonably, rather than simply refusing to buy them at any price.

While it is certainly possible that these sorts of instruments would lead borrowers to skew more of their borrowing toward the short end and, more generally, to take fewer steps than perhaps they do now to ensure their access to liquidity, it seems likely to me that any systematic mispricing of these instruments would make them less attractive to borrowers than they should be; these would be a cheaper mechanism for dealing with liquidity concerns only when they truly do less overall harm than other options available to borrowers. They create a new tool, ultimately, for doing maturity transformation, and for solving some of the market failures currently associated with it.

* I have mixed feelings about the extent to which I think short versus long duration lenders should have effectively different seniority in a bankruptcy claim — short-duration lenders, in principle, are in a better situation to see problems coming, and in that sense are more at fault than long-duration lenders, but the problems often develop over long periods of time, and long-term lenders might be in a better position, through the imposition of covenants for example, than short-term lenders in imposing discipline to make sure the borrowers don't get into that trouble in the first place.

Saturday, February 26, 2011

profits and regulatory regimes

Warren Buffett's annual letter to the shareholders of Berkshire Hathaway is out, and includes the following paragraph, in the discussion of the regulated utilities Berkshire Hathaway owns:
In its electric business, MidAmerican has a comparable record [to those of gas pipelines discussed in the previous paragraph]. Iowa rates have not increased since we purchased our operation there in 1999. During the same period, the other major electric utility in the state has raised prices more than 70% and now has rates far above ours. In certain metropolitan areas in which the two utilities operate side by side, electric bills of our customers run far below those of their neighbors. I am told that comparable houses sell at higher prices in these cities if they are located in our service area.
I presume that the prices charged by regulated utilities with different "service areas" are set by a regulator, likely based on appeals by the utility to raise rates once in a while. "Cost-plus" contracts are not uncommon, even in purely private-market agreements, but they are pretty much ubiquitous in utility regulation, where regulators will set prices to ensure some predetermined return on the capital invested by the utility. The reason for their use in both circumstances is that they are comparatively easy to negotiate, especially if the seller (i.e. the agent initially bearing the cost) is somewhat risk averse.

It also has terrible incentive effects, and, while I think it appeals to some people's sense of fairness, it grates terribly against mine. I have thought that if I were regulating utilities in New York, I would tell them I'm going to tie their prices to the costs of regulated utilities in California, Florida, Montana — not necessarily to say that the cost in New York should be the same as elsewhere in the country (I would allow a multiplicative factor, and perhaps an additive one, rather than simply use the average cost elsewhere), but if the costs go up uniquely in New York, I'm inclined to think the New York utilities are doing something wrong, and if they drop uniquely in New York, I think they're doing something right. So, Con Ed, cut your costs, and you get to keep the bulk of the savings as added profits; if your costs get out of line, don't call me in off the golf course to bail you out of your mess.* This encourages cost reduction, but it also frankly seems fairer to me than the current system where any cost controls are a function of the regulator micromanaging the utility or being reluctant for political reasons to raise prices too quickly.
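
In stylized form, with every number invented, the rule I have in mind looks something like this:

```python
# Tie the allowed New York price to costs observed at comparison utilities elsewhere,
# via a multiplicative factor (and possibly an additive one).
out_of_state_costs = [0.085, 0.092, 0.079, 0.088]   # $/kWh at the comparison utilities
multiplier, adder = 1.25, 0.00                      # New York allowed to be 25% costlier, say

benchmark = sum(out_of_state_costs) / len(out_of_state_costs)
allowed_price = multiplier * benchmark + adder
actual_ny_cost = 0.101                              # what the New York utility actually spends, say

print(f"benchmark cost elsewhere:   ${benchmark:.3f}/kWh")
print(f"allowed New York price:     ${allowed_price:.3f}/kWh")
print(f"margin kept by the utility: ${allowed_price - actual_ny_cost:.3f}/kWh")
# If the utility cuts its own costs the allowed price doesn't move, so it keeps the
# savings; if its costs balloon while the benchmark doesn't, that's its problem.
```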

Even within Iowa, rural areas surely cost more per kilowatt hour to serve than urban areas, and there may be factors between neighborhoods that affect costs, but I wouldn't expect any of these to change by a differential factor of 1.7 over the course of a decade — as Warren Buffett implies, it seems reasonable to infer that at least some, and probably most, of that change is attributable to factors under the control of the respective managements of the two utilities. It would make sense to me for the regulator to raise prices by 3% for MidAmerican, cut them by 7% for the other utility — assuming they're more than 20% different or so in apparently similar neighborhoods — and say to the higher-cost utility, "you figure it out."

Ideally, this is how a competitive market would work. There are reasons to believe that this sort of market can't be made competitive per se, but the textbook competitive market gives us an ideal to aspire to when we're forced to step in, even to the point of setting prices for natural monopolies. In the ideal competitive market, a company is given the price at which it can sell its output and the prices at which it can buy its inputs, can enter and leave the market easily, and gets to produce as much as it can produce profitably. If it can't produce anything profitably, it drops out, leaving the market to firms that can; if it can produce profitably, those profits represent exactly the extent to which the firm is better than the "marginal" firm — one that's right on the edge between entering the market or not, or leaving it or not — at doing what it's doing. This ideal is, of course, not going to be exactly met, especially in the case of regulated utilities (for which free entry and exit isn't going to be anywhere near true), and it would be a good idea at least to figure out how a bankruptcy would be handled if a company is run into the ground, but I think this sort of approach would yield fewer problems than the regulatory system we have now.


* Figuratively. I don't golf.

Wednesday, February 2, 2011

endogenous depreciation with specific capital

This is even less developed than most of the thoughts I post here, but I've been thinking recently about models of specific capital. Macroeconomic models tend to deal with aggregates, so that a factory that produces cars is the same as a stable of machines that produce houses, provided the factory and the machines cost the same amount; Hayek in particular complained about this, emphasizing the limitations on repurposing capital. This came into my head a couple of weeks ago when my macro professor noted that downturns in the economy tend to be short and sharp, with expansions longer and more gentle, and I noted to myself that specific capital with shocks to demand would produce this pattern. What occurred to me yesterday is that, if different forms of capital have different rates of depreciation, the aggregate depreciation rate would tend to increase with uncertainty about future demand: if you're buying or producing a capital good without being sure what demand will look like in 20 years, capital that will depreciate in 20 years looks more attractive, compared to capital that will depreciate in 50 years, than it would absent that uncertainty.
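
A toy calculation of the point, with an invented cash flow, discount rate, and probabilities: compare the expected discounted value of capital that wears out in 20 years against capital that lasts 50, when there is some probability that demand for the output disappears after year 20.

```python
def expected_value(cashflow_per_year, years, p_demand_dies_after_20, r=0.05):
    value = 0.0
    for t in range(1, years + 1):
        survives = 1.0 if t <= 20 else (1.0 - p_demand_dies_after_20)
        value += survives * cashflow_per_year / (1 + r) ** t
    return value

for p in (0.0, 0.5):
    v20 = expected_value(100, 20, p)
    v50 = expected_value(100, 50, p)
    print(f"p = {p:.1f}: 20-year capital worth {v20:.0f}, 50-year capital worth {v50:.0f}, "
          f"ratio {v20 / v50:.2f}")
# As uncertainty about demand beyond year 20 rises, the shorter-lived (faster
# depreciating) capital looks relatively better, pushing up aggregate depreciation.
```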

Monday, January 3, 2011

margins and gerrymanders

One of the fundamental principles of microeconomics that professors attempt to impart to introductory economics students is that what is most often economically relevant is what is taking place on the "margin" — what responds to small changes. For example, if the supply of milk goes down, people who will never buy milk, or will buy exactly half a gallon a week regardless of the price, aren't relevant to the analysis of how much the price will change; what matters is the "marginal consumer", the person who might decrease milk consumption if price goes above some relevant level.

Another tradition of economists is to apply economics to subjects that often aren't considered economic, especially if they are in the social sciences. What I want to discuss here is gerrymandering, but this is an economics blog, so I wrote the first paragraph as an excuse to address that topic.

Consider the task of a state legislature — taken as a unitary agent — that has no interest other than supporting the party of its majority as it draws Congressional districts. (One might hope our elected representatives would be more high-minded; one might expect they would be less high-minded, looking to disproportionately protect incumbents of both parties with the expectation that doing so with Congressional districts will facilitate coordination on doing so with state legislative districts. Abstract away from that.) There are traditionally two different approaches to take: try to put the same number of your party's supporters in each district, in an attempt to sweep the districts, or create a few districts that lean heavily to the other party, so that your party has a safer majority in a majority of districts. On some level there's a continuum here: how many districts do you give to the other party? In Massachusetts, where all 10 representatives are currently Democrats, the correct answer is surely 0 (for the Democrats). In other states, it might make sense to increase that number; each district conceded will, of course, reduce by 1 the number of districts your party could win, but may increase the number it will actually win.

The correct answer, in general, I think, is that you figure out what portion of the electorate in each precinct would be voting for your party when the House of Representatives as a whole is likely to come out about 218-217. In Massachusetts, where Democrats went 10-0 in a heavily Republican year, no district is likely to be vulnerable under any but the most extreme circumstances; a blindly partisan Democrat drawing Massachusetts's 9 districts for 2012 would see little value in conceding a district to the Republicans to make sure to have 8, instead of 7, votes in a tiny minority if the House moves further toward the Republicans. In most states, if there were no other barriers to arbitrary gerrymandering, it would seem like a similar result would obtain, if less dramatically; if one party has control of the redistricting, it is likely that that is the party that would see the most votes in an election year that was close nationally, in which case it would prefer to spread out its supporters in an attempt, for example, to give all 13 of North Carolina's house seats to the Republicans when the rest of the country is breaking 217-205 for the Democrats.*

In Pennsylvania, where Republicans control redistricting and the state leans slightly more toward Democrats than is the case nationally, it would make sense for (again, single-minded, otherwise-unconstrained) Republicans to create 2 safe Democrat seats so that the other 16 seats would give Republicans close to 10 point margins in a year like 2010, with perhaps 1 to 3 point margins in a year when control of the House was on the line.
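
Here is a toy version of that calculation, for a hypothetical state; the 54% statewide share, the 18 districts, the 80-20 packing, and the swings are all invented, and real line-drawers face many constraints this ignores.

```python
def seats_won(districts_conceded, swing, statewide=0.54, n_districts=18, packed=0.20):
    # Your party's share in each conceded district is `packed`; the rest of its
    # statewide support is spread evenly over the remaining districts.
    remaining = n_districts - districts_conceded
    share_elsewhere = (statewide * n_districts - packed * districts_conceded) / remaining
    wins_packed = districts_conceded if packed + swing > 0.5 else 0
    wins_spread = remaining if share_elsewhere + swing > 0.5 else 0
    return wins_packed + wins_spread

for conceded in (0, 2, 4):
    row = [seats_won(conceded, s) for s in (-0.06, -0.03, 0.0, 0.03)]
    print(f"concede {conceded} districts -> seats at swings -6, -3, 0, +3 points: {row}")
# Spreading supporters thin maximizes seats in a decent year but risks losing nearly
# everything in a bad one; conceding a couple of districts buys insurance against that.
```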

In most states, redistricting is not entirely in the control of one party, and even where it is, there is often an ongoing understanding that incumbents from each party are largely to be protected. Voting patterns shift over the course of ten years, as voters move, die or turn 18, or simply change their habits; people drawing district lines may have a form of loss-aversion, or perhaps simply a fear of embarrassment, that would make them reluctant to draw districts that could easily give the entire congressional delegation to the other party at some point in the next decade. There are also factors other than party that play into voting patterns, incumbency not least among them, that will have an impact on this analysis. In addition, it is at least nice to believe that too aggressive gerrymandering would be punished by voters, though I expect in practice that is much smaller than the other constraints.

* In North Carolina, Republicans control the redistricting process, and while North Carolina voted for Obama in 2008, Republican House candidates got a bit over 54% of the two-party vote in 2010, while the figure for the median House district nationwide was around 53%; similarly, Obama won North Carolina by a narrower margin than any other state he won. North Carolina's current districts were gerrymandered by Democrats; of North Carolina's 13 current House seats, 5 were won comfortably (by a margin of at least 7%) by Republicans and 7 by Democrats, with one seat decided by less than 1% and won by the Republican. North Carolina is, however, subject to the Voting Rights Act, and, taking account of that and other constraints, it is likely that Republicans will draw a few heavily Democratic-voting, racial-minority districts and about 8 secure Republican districts.

This year Republicans won more votes than Democrats in House races in Pennsylvania, but only by about 1% of the votes cast.