Wednesday, January 20, 2010

10-Q

The SEC should outlaw quarterly filings, on the grounds that they drive too much of a short-term mindset.

Friday, January 15, 2010

TTL cash

There has been some recent move toward creating a bankruptcy chapter better constructed to handle financial institutions; Luigi Zingales was talking about such things in October of 2008 (search for "Bebchuk" — note that his proposal makes legacy counterparties senior to bondholders, who are nominally pari passu), but Congress now finally seems to be taking an interest. One idea I've been batting around in my head for perhaps a year now, at least as a tool to help with these things, is the creation of a new kind of credit that the government could issue, which I call TTL cash; I know I've shared it with my brother, but I don't think I've mentioned it here.

The idea is that when a highly-connected company (AIG) gets into trouble, the government lets it go bankrupt, but any creditor who is thereby impaired is given $1 in TTL cash for every dollar the creditor lost to the bankruptcy. The TTL cash is essentially nontransferable (and thus useless) except in bankruptcy, where, in the idea's simplest form, the government redeems it for actual money. The government has not bailed out AIG, or its creditors, but it has bailed out the creditors of AIG's creditors, insofar as they are impaired by AIG's failure; the chain reaction that regulators are eager to avoid is arrested.

The slightly more general case would allow different levels of TTL cash, each of which is redeemed for the next one down; the government could decide instead to bail out the creditors of the creditors of the creditors by issuing level 2 TTL cash, redeemable in bankruptcy for level 1 TTL cash, redeemable in bankruptcy for the real thing. "TTL" stands for "time-to-live", a term used in IP, the internet protocol; each time a router forwards an IP packet to another router it decrements a TTL counter, so that if some routing error causes the packet to wander off in the wrong direction or go around in circles, it eventually gets dropped rather than continuing to be passed around.
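To make the mechanics concrete, here is a minimal sketch of how I picture it working; the names, numbers, and pro-rata payout rule are my own illustrative assumptions, not part of any actual proposal.

```python
# A toy model of the mechanism: TTL cash of level n is inert until its holder is
# itself in bankruptcy, at which point level 1 redeems for real money and deeper
# levels step down by one; impaired creditors of the failed firm are issued new
# TTL cash equal to their shortfall. Everything here (pro-rata payouts, the
# numbers in the example) is an illustrative assumption.

def liquidate(assets, ttl_held, liabilities, issue_level=1):
    """assets: ordinary assets of the failed firm; ttl_held: {level: amount} of
    TTL cash it was holding; liabilities: {creditor: amount owed}."""
    estate = assets + ttl_held.get(1, 0.0)        # level-1 TTL becomes real money
    passed_down = {lvl - 1: amt for lvl, amt in ttl_held.items() if lvl > 1}
    recovery = min(1.0, estate / sum(liabilities.values()))
    payouts = {c: owed * recovery for c, owed in liabilities.items()}
    # the quasi-bailout: each dollar of loss is matched by a dollar of TTL cash
    ttl_issued = {c: {issue_level: owed * (1 - recovery)}
                  for c, owed in liabilities.items()}
    return payouts, ttl_issued, passed_down

# AIG fails with 60 of assets against 100 owed to each of two counterparties:
# each recovers 30 in cash plus 70 of level-1 TTL cash, which is worth nothing
# unless that counterparty is itself pushed into bankruptcy.
print(liquidate(60.0, {}, {"bank_a": 100.0, "bank_b": 100.0}))
```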

This is, of course, still a bailout, but on a practical level the moral hazard issues are much reduced from a standard bailout; everyone is responsible for assessing the credit quality of their debtors, at least to a significant extent. One of the major practical strengths of a decentralized economy vis-à-vis a centralized one is that it recognizes the information-handling limits of agents and asks them largely to make decisions based only on local information. If you lend entities money, or even just enter into contracts with them, you have to know something about their financial condition and their other dealings that affect it; but you shouldn't have to know everything about their business, including everything about their potential creditors. Capping this two levels down under certain circumstances seems to me a reasonable moral hazard price to pay for the benefits of a distributed system.

Wednesday, September 30, 2009

clever people

Robert Shiller has long been a fan of increasing the "completeness" of markets, creating more and more derivatives to require that "the market" be explicit about its beliefs; for example, in his 2000 book Irrational Exuberance, he proposed long-dated S&P dividend futures so as to require a market forecast of future dividends and their growth, after which one could see whether anyone really bought the implications of the levels of stock prices. In principle, there are all kinds of problems of both self-delusion and private information that could be solved by more and more derivatives.

Of course, in the last few years it has become clear that relatively simple derivatives, like MBS tranches (or even credit default swaps), seem to have befuddled people well smarter than the median. It's not that a large number of people need to understand the derivatives for them to have their benign effect; up to some solvency limits, the people who do can arbitrage really bad mispricings and should keep things grossly in line. The problem, though, is that the amount of damage people seem to be able to do to themselves, and then, transitively, to their creditors or to people whose reputations are tied up with theirs, is on some level independent of the good these things do. Mortgage credit derivatives did create a market price for mortgage credit risk, and even did help spread and diversify it, and yet some people got themselves into a lot of trouble taking on too much risk that they didn't understand, and dragged a lot of other people into trouble with them.

It's possible the mispriced supersenior mortgage tranches would have been better priced with even more complete markets, but we will never have complete markets (and we wouldn't have the solvency to correct them if we did). I'm a fan of more complete markets in general, but expecting them to solve all of our problems strikes me as a bit like some leftist beliefs in government; the problem, we're told, is that our problems haven't been dealt with by sufficiently clever people, and yet neither the government nor the financial markets are populated entirely, or even mostly, by particularly clever people. Mankind is not perfectible, whether by government or by market.

Tuesday, September 29, 2009

monopolies and consumer surplus

In a competitive market, each firm faces a perfectly elastic demand curve; this means that the consumer surplus due to the existence of this firm is zero, so that having profitable firms stay in business and unprofitable ones exit passes a social cost-benefit analysis; the marginal benefit and cost of the firm's being in business are internalized to the firm. In a situation in which the firm has some monopoly power, however, the firm creates consumer surplus; from a social cost-benefit perspective, any profitable company produces net benefits, but so too may a somewhat unprofitable company, insofar as the consumers have fewer good places to turn if the company goes out of business.
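A toy worked example of the monopoly case, with a linear demand curve and numbers I've made up purely for illustration:

```python
# Linear demand P = 10 - Q, constant marginal cost c = 2, and a fixed cost
# chosen so the firm loses money; the monopolist is the only supplier, so the
# surplus its customers enjoy counts toward the social value of its existence.
def monopoly_example(a=10.0, c=2.0, fixed_cost=18.0):
    q = (a - c) / 2                        # monopoly quantity from MR = MC
    p = a - q                              # monopoly price
    consumer_surplus = 0.5 * q * (a - p)   # triangle under demand, above price
    profit = (p - c) * q - fixed_cost
    return consumer_surplus, profit, consumer_surplus + profit

# (8.0, -2.0, 6.0): the firm loses money, yet its staying in business still
# passes a social cost-benefit test. Under perfect competition the consumer
# surplus attributable to any single firm would be zero, and only the firm's
# own profit would matter.
print(monopoly_example())
```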

Gerrymandering and equal-population districts

On constraining gerrymanderers with convexity requirements (pdf):
a gerrymanderer can always create equal sized convex constituencies that translate a margin of k voters into a margin of at least k constituency wins. Thus even with a small margin a majority party can win all constituencies. Moreover there always exists some population distribution such that all divisions into equal sized convex constituencies translate a margin of k voters into a margin of exactly k constituencies. Thus a convexity constraint can sometimes prevent a gerrymanderer from generating any wins for a minority party.
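The first claim is easy to see arithmetically if you ignore geography entirely (the geometry, i.e. that this can be done with equal-sized convex districts, is the paper's actual contribution); a quick sketch with made-up numbers:

```python
# Back-of-the-envelope version, ignoring geography: spread both parties' voters
# evenly across the districts and a statewide margin of k >= d voters wins every
# one of d districts.
def even_split(majority_votes: int, minority_votes: int, districts: int):
    per_district_margin = (majority_votes - minority_votes) / districts
    wins = districts if per_district_margin > 0 else 0
    return per_district_margin, wins

# e.g. 1,000,005 vs 999,995 voters split over 5 districts: a margin of just
# two voters per district, but the majority party wins all 5 seats.
print(even_split(1_000_005, 999_995, 5))
```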
The current congressional districts in Iowa are a bit wrapped around each other; an initial districting proposal with more "compact" districts was replaced with this one, which had more nearly equal numbers of voters in each district as of the 2000 census. (The numbers in the initial plan were themselves so close that there's simply no way that 10 years of population movements wouldn't expand the variance by a large factor.) As long as we have single-member districts, and political minorities are going to be stuck with a single representative chosen by others in their district, it seems proper to me to favor a bit of homogeneity in each district, and "compactness" may function as a proxy for that. (The "population distribution such that all divisions into equal sized convex constituencies translate a margin of k voters into a margin of exactly k constituencies" is a theoretical curiosity, and is not likely in the world of geographical homophily in which we actually live.)

The absolute equality of district size is something of a misguided fetish. If you drew congressional districts largely at random with only a vague interest in keeping populations within about 50% of each other, I expect that congressional elections would play out similarly to districts that were more punctiliously equalized; if the former were able to be drawn with more homogeneity than the latter, they would leave most people better represented by "their" representative. In actual practice, of course, you would have Democrats drawing more populous Republican districts and vice versa; I think the best argument for keeping Congressional districts approximately the same size is that it places a constraint on gerrymandering. In addition to the homogeneity motive, "compactness" has the virtue of creating an — in some sense random — additional constraint on people who are likely, left to their own devices, to be worse than random. While this paper shows that convexity and equal populations aren't themselves sufficient constraints, I'm still tempted by the intuition that something like convexity, combined with other constraints — probably related to other political lines — would have a salutary effect on protecting us from a self-propagating political class. (That intuition wouldn't have expected the results of this paper, though. If I were precise in my statement, I could well be proved wrong.)

Saturday, July 25, 2009

smoothing data

One of the more interesting developments this summer in my own set of intellectual tics is that I've become increasingly enthusiastic about using atheoretic time series models as smoothing functions. If I want the "smoothed" value for a function at a particular time, I use the atheoretic model to predict its value a few periods out, and I use that predicted value as the smoothed value; particularly if I'm using a model that will always predict a constant value for the series once it's taken more than a couple periods out — e.g. an ARIMA(0,1,n) model will give the same prediction for n periods from now as for n+1, n+2, etc. — then any change in the "long-run" value represents "innovation", i.e. a surprise; a large rise in unemployment claims that results in very little change in the prediction is mostly not new economic news, but simply an expression of the short-term dynamics that were anticipated from previous data points. A model that does a good job of capturing these short-term dynamics should therefore result in predictions that change much less than the series itself does, and so provides a smoother series than the input.
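As a minimal sketch of what I mean, assuming a pandas Series of, say, weekly unemployment-claims data (the series name, the ARIMA(0,1,2) order, and the three-period horizon are just illustrative choices):

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def arima_smooth(series: pd.Series, order=(0, 1, 2), horizon=3, min_obs=30) -> pd.Series:
    """Smooth a series by refitting an atheoretic ARIMA on the data available at
    each point and using its horizon-step-ahead forecast as the smoothed value."""
    smoothed = {}
    for t in range(min_obs, len(series)):
        history = series.iloc[:t + 1]                 # only data known at time t
        fit = ARIMA(history, order=order).fit()
        # with q = 2 and horizon = 3, this is already the model's "long-run" level
        smoothed[series.index[t]] = fit.forecast(steps=horizon).iloc[-1]
    return pd.Series(smoothed)

# smooth = arima_smooth(claims)   # movements in `smooth` are the genuine surprises
```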

For longer-term periods of time, there's probably some philosophical value to separating the short-term smoothed data from a prediction of where the data will be later; in particular, a model that did a very good job of predicting the data five years out would not be suitable for "smoothing" if I'm hoping to use the smoothed data to observe the business cycle. Measurement errors aside, each time scale will have fluctuations that are to be viewed as material and shorter-term fluctuations that count as "noise"; the real purpose of smoothing functions is to eliminate the noise, preserving as much of the "signal" as possible. As long as my projections only go a few periods out, I imagine that's what I'm doing; again, changes in my projection represent changes in inferred "signal", while fully anticipated changes in the data series are identified as being due to noise.

I have, in the past, looked at smoothing functions that require future data points to construct today's value; for example, if I look at data from 2008 and I wish to smooth stock-market prices, my smoothed function might start decreasing substantially in August or September because of the lower values it needs to achieve to match the data in November. If the point is to look at data as it comes in and identify trends early, though, that doesn't work so well; hence my preference for looking at purely backward-looking measures, even when I have forward data sets available to me. There are good economic contexts in which it makes sense to use all available data to try to extract noise from signal and seek dynamics that may not have been ascertainable in real-time; in those situations, atheoretic ARIMA models are probably not your best choice.
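A centered moving average is the sort of two-sided smoother I have in mind here (my example, not something specific I was using); the contrast with a purely trailing version, in pandas terms:

```python
import pandas as pd

def two_sided(series: pd.Series, window: int = 13) -> pd.Series:
    # needs future data points; fine for ex-post analysis, useless in real time
    return series.rolling(window, center=True).mean()

def one_sided(series: pd.Series, window: int = 13) -> pd.Series:
    # uses only past data, so it can be computed as each observation arrives
    return series.rolling(window).mean()
```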

Wednesday, April 8, 2009

liquidity

In an economy with a single medium of exchange, "liquidity" of anything else represents the ability to convert that something else into that medium. If I have two assets, one with a 5% bid-offer spread but very stable, the other more tightly bid but very likely to drop well over 5% before I'm looking to convert it to something else, it seems to me the former better serves any needs that tend to get lumped under the term "liquidity". This is how I model it mentally: as a softish lower bound on the price for which I could sell the asset if the value of money to me (and its associated discount rate, i.e. where selling today might be better than selling at 1% more tomorrow) were to spike.
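One crude way to put numbers on that mental model; the formula and the figures below are mine and purely illustrative:

```python
# Quick-sale floor: mid price, less half the bid-offer spread, less a volatility
# cushion over the horizon in which I might be forced to sell. Both the functional
# form and the parameters are made up for illustration.
import math

def quick_sale_floor(mid: float, spread: float, annual_vol: float,
                     horizon_years: float = 0.25, cushion_sigmas: float = 2.0) -> float:
    half_spread = spread / 2
    vol_hit = cushion_sigmas * annual_vol * math.sqrt(horizon_years)
    return mid * (1 - half_spread) * (1 - vol_hit)

stable_wide    = quick_sale_floor(100.0, spread=0.05, annual_vol=0.04)   # ~93.6
volatile_tight = quick_sale_floor(100.0, spread=0.005, annual_vol=0.40)  # ~59.9
# The wide-spread-but-stable asset offers the higher floor, which is the sense
# in which it better serves what usually gets called "liquidity".
```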

Stock price volatility tends to increase as stock prices drop, and this is usually couched in terms that suggest the latter causes the former, but it makes more sense to me that it would run the other way: part of the value of a stock is its "liquidity", i.e. one's ability to convert it, at little notice, into cash should that be necessary. When uncertainty increases, that soft floor under a quick sale drops, and that, other things equal, is going to reduce the value of the stock.