Wednesday, September 30, 2009

clever people

Robert Shiller has long been a fan of increasing the "completeness" of markets, creating more and more derivatives to require that "the market" be explicit about its beliefs; for example, in his 2000 book Irrational Exuberance, he proposed long-dated S&P dividend futures so as to require a market forecast of future dividends and their growth, after which one could see whether anyone really bought the implications of the levels of stock prices. In principle, there are all kinds of problems of both self-delusion and private information that could be solved by more and more derivatives.

Of course, in the last few years it has become clear that relatively simple derivatives, like MBS tranches (or even credit default swaps), have befuddled people well above the median in sophistication. The benign effect doesn't necessarily require that a large number of people understand the derivatives; up to some solvency limits, some people out there can arbitrage really bad mispricings and should keep things grossly in line. The problem, though, is the amount of damage people can do to themselves and then, transitively, to their creditors, or to people whose reputations are tied up with theirs, damage that is on some level independent of the good these things do. Mortgage credit derivatives did create a market price for mortgage credit risk, and even helped spread and diversify it, and yet some people got themselves into a lot of trouble taking on risk they didn't understand, and dragged a lot of other people into trouble with them.

It's possible the mispriced supersenior mortgage tranches would have been better priced with even more complete markets, but we will never have complete markets (and we wouldn't have the solvency to correct them if we did). I'm a fan of more complete markets in general, but expecting them to solve all of our problems strikes me as a bit like some leftist beliefs in government: the problem, we're told, is that our problems haven't been dealt with by sufficiently clever people, and yet neither the government nor the financial markets are populated entirely, or even mostly, by particularly clever people. Mankind is not perfectible, whether by government or by market.

Tuesday, September 29, 2009

monopolies and consumer surplus

In a competitive market, each firm faces a perfectly elastic demand curve; this means that the consumer surplus due to the existence of any one firm is zero, so that having profitable firms stay in business and unprofitable ones exit passes a social cost-benefit analysis: the marginal benefit and cost of the firm's being in business are internalized to the firm. In a situation in which the firm has some monopoly power, however, the firm creates consumer surplus; from a social cost-benefit perspective, any profitable company produces net benefits, but so too may a somewhat unprofitable company, insofar as consumers have fewer good places to turn if the company goes out of business.
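
To make the asymmetry concrete, a toy calculation (the linear demand curve and constant marginal cost are illustrative assumptions, not anything essential to the argument):

```python
# Consumer surplus from one price-taking firm vs. a monopolist, with linear
# inverse demand p = a - b*q and constant marginal cost c (illustrative
# assumptions).
a, b, c = 10.0, 1.0, 4.0

# Price-taking firm: demand for *its* output is horizontal at the market
# price; if it exits, consumers buy the identical good elsewhere at the same
# price, so the surplus attributable to this one firm is zero.
cs_competitive_firm = 0.0

# Monopolist: maximizes (a - b*q - c) * q, so q* = (a - c) / (2b).
q_star = (a - c) / (2 * b)
p_star = a - b * q_star
cs_monopoly = 0.5 * (a - p_star) * q_star  # triangle under demand, above price

print(f"q* = {q_star}, p* = {p_star}, consumer surplus = {cs_monopoly}")
# If the monopolist exits, that whole triangle is lost, so a social cost-
# benefit test can favor keeping a mildly unprofitable monopolist alive.
```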

gerrymandering and equal-population districts

On constraining gerrymanderers with convexity requirements (pdf):
a gerrymanderer can always create equal sized convex constituencies that translate a margin of k voters into a margin of at least k constituency wins. Thus even with a small margin a majority party can win all constituencies. Moreover there always exists some population distribution such that all divisions into equal sized convex constituencies translate a margin of k voters into a margin of exactly k constituencies. Thus a convexity constraint can sometimes prevent a gerrymanderer from generating any wins for a minority party.
The current congressional districts in Iowa are a bit wrapped around each other; an initial districting proposal with more "compact" districts was replaced with this one, which had more nearly equal numbers of voters in each district as of the 2000 census. (The numbers in the initial plan were themselves so close that there's simply no way that 10 years of population movements wouldn't expand the variance by a large factor.) As long as we have single-member districts, and political minorities are going to be stuck with a single representative chosen by others in their district, it seems proper to me to favor a bit of homogeneity in each district, and "compactness" may function as a proxy for that. (The "population distribution such that all divisions into equal sized convex constituencies translate a margin of k voters into a margin of exactly k constituencies" is a theoretical curiosity, and is not likely in the world of geographical homophily in which we actually live.)
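
Setting geography aside, the arithmetic behind "a small margin can win all constituencies" is easy to sketch: spread both parties' voters evenly across equal-sized districts, and every district reproduces the slim statewide majority (the vote totals below are invented):

```python
# How a gerrymanderer turns a tiny overall margin into winning every district:
# spread both parties' voters evenly, so each district reproduces the slim
# statewide majority. (The paper's contribution is showing a version of this
# survives equal-population convexity constraints; this sketch ignores
# geography entirely, and the vote totals are invented.)
majority_votes, minority_votes = 501_000, 499_000  # statewide margin k = 2,000
districts = 10

maj_per_district = majority_votes // districts  # 50,100
min_per_district = minority_votes // districts  # 49,900

wins = sum(maj_per_district > min_per_district for _ in range(districts))
print(f"majority party wins {wins} of {districts} districts")  # 10 of 10
```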

The absolute equality of district size is something of a misguided fetish. If you drew congressional districts largely at random with only a vague interest in keeping populations within about 50% of each other, I expect that congressional elections would play out similarly to districts that were more punctiliously equalized; if the former were able to be drawn with more homogeneity than the latter, they would leave most people better represented by "their" representative. In actual practice, of course, you would have Democrats drawing more populous Republican districts and vice versa; I think the best argument for keeping Congressional districts approximately the same size is that it places a constraint on gerrymandering. In addition to the homogeneity motive, "compactness" has the virtue of creating an — in some sense random — additional constraint on people who are likely, left to their own devices, to be worse than random. While this paper shows that convexity and equal populations aren't themselves sufficient constraints, I'm still tempted by the intuition that something like convexity, combined with other constraints — probably related to other political lines — would have a salutary effect on protecting us from a self-propagating political class. (That intuition wouldn't have expected the results of this paper, though. If I were precise in my statement, I could well be proved wrong.)

Saturday, July 25, 2009

smoothing data

One of the more interesting developments this summer in my own set of intellectual tics is that I've become increasingly enthusiastic about using atheoretic time series models as smoothing functions. If I want the "smoothed" value for a series at a particular time, I use the atheoretic model to predict its value a few periods out, and I use that predicted value as the smoothed value. This works particularly well with a model that predicts a constant value for the series once it's taken more than a couple periods out; an ARIMA(0,1,n) model, for example, will give the same prediction for n periods from now as for n+1, n+2, etc. Any change in that "long-run" value then represents "innovation", i.e. a surprise: a large rise in unemployment claims that results in very little change in the prediction is mostly not new economic news, but simply an expression of the short-term dynamics that were anticipated from previous data points. A model that does a good job of capturing these short-term dynamics should therefore produce predictions that change much less than the series itself does, and so provides a smoother series than the input.
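
Concretely, the procedure looks something like this (a minimal sketch in Python with statsmodels; the library choice, the parameters, and the synthetic data are mine, for illustration):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def arima_smooth(series: pd.Series, order=(0, 1, 3), steps=4, min_obs=30):
    """For each date t, forecast `steps` periods ahead using only data through
    t. An ARIMA(0,1,q) forecast is constant beyond q steps ahead, so this is
    the model's 'long-run' value given the information available at t."""
    smoothed = pd.Series(index=series.index, dtype=float)
    for t in range(min_obs, len(series)):
        fit = ARIMA(series.iloc[: t + 1], order=order).fit()
        smoothed.iloc[t] = fit.forecast(steps=steps).iloc[-1]
    return smoothed

# Synthetic "claims-like" data: a slow trend plus noisy week-to-week movement
# that the moving-average terms should mostly absorb.
rng = np.random.default_rng(0)
idx = pd.date_range("2007-01-07", periods=120, freq="W")
claims = pd.Series(300 + np.linspace(0, 60, 120) + rng.normal(0, 10, 120),
                   index=idx)
print(arima_smooth(claims).dropna().tail())
```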

For longer time horizons, there's probably some philosophical value to distinguishing the short-term smoothed data from a prediction of where the data will be later; in particular, a model that did a very good job of predicting the data five years out would not be suitable for "smoothing" if I'm hoping to use the smoothed data to observe the business cycle. Measurement errors aside, each time scale has fluctuations that are material at that scale and shorter-term fluctuations that amount to "noise"; the real purpose of a smoothing function is to eliminate the noise while preserving as much of the "signal" as possible. As long as my projections only go a few periods out, I imagine that's what I'm doing; again, changes in my projection represent changes in inferred "signal", while fully anticipated changes in the data series are attributed to noise.

I have, in the past, looked at smoothing functions that require future data points to construct today's value; for example, if I smooth stock-market prices from 2008, my smoothed function might start decreasing substantially in August or September because of the lower values it needs to achieve to match the data in November. If the point is to look at data as it comes in and identify trends early, though, that doesn't work so well; hence my preference for purely backward-looking measures, even when I have forward data sets available to me. There are good economic contexts in which it makes sense to use all available data to separate signal from noise and seek dynamics that may not have been ascertainable in real time; in those situations, atheoretic ARIMA models are probably not your best choice.
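
The contrast in miniature, with plain moving averages standing in for the two kinds of smoother (a sketch; the window length and data are arbitrary):

```python
# Two-sided vs. one-sided smoothing, with moving averages as stand-ins.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
prices = pd.Series(100 + rng.normal(0, 1, 250).cumsum())

# Two-sided: each smoothed point uses ~10 future observations, so a crash at
# time t pulls the smoothed series down *before* t. Fine for hindsight
# analysis, useless for identifying trends as data comes in.
two_sided = prices.rolling(window=21, center=True).mean()

# One-sided: uses only data available at time t; no revision reaches backward.
one_sided = prices.rolling(window=21).mean()

print(pd.DataFrame({"two_sided": two_sided,
                    "one_sided": one_sided}).dropna().head())
```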

Wednesday, April 8, 2009

liquidity

In an economy with a single medium of exchange, the "liquidity" of anything else represents the ability to convert it into that medium. If I have two assets, one with a 5% bid-offer spread but a very stable value, the other more tightly bid but very likely to drop well over 5% before I'm looking to convert it into something else, it seems to me the former better serves any needs that tend to get lumped under the term "liquidity". This is how I model it mentally: as a softish lower bound on the price for which I could sell the asset if the value of money to me (and its associated discount rate, i.e. where selling today might be better than selling at 1% more tomorrow) were to spike.
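
A toy simulation of that comparison (every parameter invented):

```python
# Asset A has a wide 5% bid-offer spread but a stable value; asset B trades
# nearly at mid but with a volatile value. Which gives the firmer floor on
# proceeds if I'm forced to sell a year from now?
import numpy as np

rng = np.random.default_rng(42)
n = 100_000          # simulated forced-sale scenarios
sigma = 0.25         # asset B's assumed annualized volatility

# Asset A: value stays at ~1.0; selling costs the 5% spread.
a_proceeds = np.full(n, 1.0 * (1 - 0.05))

# Asset B: tight 0.2% spread, lognormal value a year out.
b_proceeds = np.exp(rng.normal(0.0, sigma, n)) * (1 - 0.002)

print(f"A: proceeds always {a_proceeds.min():.3f}")
print(f"B: 5th-percentile proceeds {np.percentile(b_proceeds, 5):.3f}")
# The nominally "illiquid" asset A provides the better soft lower bound.
```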

Stock price volatility tends to increase as stock prices drop, and this is usually couched in terms that suggest the latter causes the former, but it makes at least as much sense for causation to run the other way: part of the value of a stock is its "liquidity", i.e. one's ability to convert it, at little notice, into cash should that be necessary. When uncertainty increases, that ability is worth less and, other things equal, so is the stock.

Wednesday, February 25, 2009

frames and prospects

When I think about "prospect theory" and behavioral economics in general, I tend mostly to think about loss-aversion and to be bemused by framing effects, but one of the other reliable findings is that agents who face losses become risk-seeking rather than risk-averse. Someone presented with a sure $25 or a coin-toss for $50 will usually take the bird in hand, but presented with losses of the same magnitude, people will frequently prefer the coin toss — any chance to maybe, possibly reduce the losses.
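
The standard prospect-theory value function reproduces this flip; here is a sketch using Tversky and Kahneman's 1992 parameter estimates (and omitting probability weighting, which isn't needed for the point):

```python
# Prospect-theory value function (Tversky & Kahneman 1992 estimates): concave
# for gains, convex and steeper for losses.
ALPHA, LAMBDA = 0.88, 2.25  # diminishing sensitivity; loss-aversion coefficient

def value(x: float) -> float:
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# Gains: a sure $25 beats a coin toss for $50 (risk aversion).
print(value(25), 0.5 * value(50))      # ~17.0 > ~15.6
# Losses: a coin toss for -$50 beats a sure -$25 (risk seeking).
print(value(-25), 0.5 * value(-50))    # ~-38.2 < ~-35.2
```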

It occurs to me that this is something we observe at the macro level. Insurance companies do well for a while, rates start to come down, they start seeking out riskier insureds and refuse to give up market share in unprofitable lines, and eventually KABOOM! Financial institutions see risk premia or even just interest rates come down, they think they're entitled to higher returns, they go "yield-chasing" (buying up riskier assets) and increase leverage, and eventually KABOOM!

I wonder if there's a good institutional way to check this propensity for making a bad thing worse. Getting insolvent companies into bankruptcy before they can cause harm seems like a good start.

Wednesday, January 28, 2009

factors of production

Are slaves labor or capital?

I was just reading the introduction to Hayek's "The Pure Theory of Capital" (a book I fully expect not to finish), and he ultimately decides to use the term "capital" to mean "the total stock of the non-permanent factors of production". Presumably this includes some "natural resources", insofar as those are depletable; I typically think of "capital" as something in which one can invest. (This is what "human capital" and "physical capital" have in common: each represents the devotion of some economic resources in the past to enhance production in the future. Natural resources, I suppose, represent a decision not to have depleted them faster in the past than we did. Perhaps Hayek's distinction makes sense.)

Of course, I can gear my slaves toward reproduction rather than production of something else, thereby enhancing my future stock of slaves. This isn't so different from human capital in general, though. In many ways, labor simply looks like a particular kind of capital. I wonder how fundamental the "factors of production" are, and how much they rely on ontology to be useful.

Monday, January 26, 2009

The Wealth and Debt of Nations

Consider an international economic system in which there is relatively little trade, and suppose it opens up to trade in goods and to mobility in one and only one factor of production. Assume the different nations have different total factor productivities, due to technology or institutions, but that such differences are factor-neutral. What one would see is that the mobile factor would tend to move toward productive nations, increasing (even further) the marginal product of the other factors in those countries, while reducing the return to the now-abundant mobile factor there. If the domestic "supply" of factors is at all elastic, the domestic supply of the mobile factor should decrease as its supply from foreigners surges.
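
A toy version of the story (the Cobb-Douglas production function, the parameter values, and the assumption that only capital is mobile are all mine, for illustration):

```python
# Cobb-Douglas production Y = A * K^a * L^(1-a), factor-neutral TFP
# difference, labor immobile, capital mobile until its return is equalized.
ALPHA = 0.3
A1, A2 = 1.5, 1.0      # country 1 is the more productive
L1 = L2 = 1.0          # equal, immobile labor forces
K_total = 2.0          # world capital stock; start with 1.0 in each country

def mpl(A, K, L):  # marginal product of labor, i.e. the wage
    return A * (1 - ALPHA) * (K / L) ** ALPHA

# Equalizing returns to capital, A1*a*K1^(a-1) = A2*a*K2^(a-1) with L1 = L2,
# implies K1/K2 = (A1/A2)^(1/(1-a)).
ratio = (A1 / A2) ** (1 / (1 - ALPHA))
K1 = K_total * ratio / (1 + ratio)
K2 = K_total - K1

print(f"autarky wages: {mpl(A1, 1.0, L1):.3f} vs {mpl(A2, 1.0, L2):.3f}")
print(f"open wages:    {mpl(A1, K1, L1):.3f} vs {mpl(A2, K2, L2):.3f}")
print(f"capital:       {K1:.2f} vs {K2:.2f}"
      "  (the productive country imports capital)")
```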

I just read a snippet suggesting that it is inappropriate or confusing that the wealthiest nation on earth should have become (based on net foreign investment) a huge debtor nation. That doesn't strike me as a paradox; it just tells me that capital is more mobile than labor.

Sunday, January 4, 2009

GDP as a welfare proxy

GDP growth is popularly spoken of as though it were the ne plus ultra of economic policy; if growth is high, policy is succeeding, and if it's low, it's failing. Exogenous effects aside, GDP is not a perfect proxy for what economists call "welfare", namely how well off everyone is. One illustration of the discrepancy was recently given by Mankiw; longer ago the misuse of GDP was decried by Bobby Kennedy*. The best defense of the use of GDP in these ways has been that, while it doesn't conceptually capture everything it should, it's likely to correlate with welfare, and that eras of high GDP growth tend to be better for welfare growth than other eras. (I've made this argument myself.)

As Robert Lucas noted, though, correlations can hold under certain policy regimes but not others; in particular, policy tailored to a historical correlation, by creating an incentive for policy-makers to optimize a single (imperfect) measure of welfare rather than (unmeasurable) welfare itself, is likely to reduce that measure's correlation with welfare. Just as a chandelier factory in the USSR, told it would be paid by weight for its product, produced the heaviest chandeliers in the world, a focus on a particular measure will optimize that measure, both in ways that improve what it is supposed to be measuring and in ways that do not. As Mankiw pointed out, it's possible to design stimulus that increases GDP but not welfare. If GDP is being optimized, those forms of stimulus will look like a good idea.

In every popular, simple, short-term policy model of the economy (I'm thinking in particular of a sticky-wages model of the effects of unexpected inflation, but I've also thought in the last couple of days that this is likely true of a simple microeconomic analysis of Keynesian demand-pumping), a boost in GDP comes at the expense of welfare. Unexpected inflation reduces real wages, so that workers work more than they would prefer at that wage; a deficit reduces savings, boosting consumption at the expense of capital accumulation. Other sticky prices or other mechanisms that these models leave out might change things, and certainly a good argument for boosting GDP is the psychological effect it has (the recession-as-a-coordination-problem model). I'm also pretty sure that, in both cases I give above, the GDP boost is first-order while the welfare loss is second-order, so that a small error of analysis could change the qualitative outcome. Still, it seems worth remembering that there is a distinction, and worth occasionally asking whether something targeted at GDP as a proxy for welfare is actually welfare-enhancing. I'm not sure Keynesian stimulus usually or always is.
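
To see the first-order/second-order point, consider a sketch with quadratic disutility of labor, chosen so the envelope logic is exact (the functional form is an illustrative assumption):

```python
# GDP boost first-order, welfare loss second-order. GDP here is output L;
# welfare is output minus the disutility of working.
c = 1.0               # slope of the marginal disutility of labor
L_star = 1.0 / c      # privately optimal labor supply, where welfare'(L) = 0

def gdp(L):     return L
def welfare(L): return L - 0.5 * c * L ** 2

for dL in (0.01, 0.05, 0.10):  # stimulus pushes effort past the optimum
    d_gdp = gdp(L_star + dL) - gdp(L_star)
    d_wel = welfare(L_star + dL) - welfare(L_star)
    print(f"dL={dL:.2f}: GDP {d_gdp:+.4f}, welfare {d_wel:+.5f} (= -c*dL^2/2)")
```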

* I would quibble with some of what Kennedy says, e.g. that GDP counts "destruction of our redwoods and the loss of our natural wonder". GDP doesn't add in the destruction itself; it adds in the products whose production entails those losses, where a better welfare measure would also subtract the environmental losses. In any case, his broad thesis is correct.

Friday, January 2, 2009

time-ordering and information-ordering

There is a famous puzzle, which some googling suggests is known as Newcomb's paradox, involving an expert on human nature (or something) who presents each player of a game with two envelopes, one of which the player knows to contain $1,000. The player is permitted to receive either just the other envelope, or both envelopes; if the expert believes both envelopes will be taken, the second envelope is empty, while if the expert believes that only the second envelope will be taken, then it contains $1,000,000. After observing several other players, for each of whom the expert's prediction was correct, do you choose to accept both envelopes, or do you decline the $1,000 to take just the second?

My answer is that I take only the second envelope. I don't know what's going on in precise detail, but it appears to me that, one way or another, my decision is available to the expert when the envelopes are sealed. In time, the expert acts first and I act second; but, the way the game appears to me, the information I have available when I act is circumscribed (I don't know what's in the second envelope), while the expert's decision is made knowing what I will do. The game, in information order, is that I make my decision, and then the expert fills the envelopes, even though that is not the time-ordering of events.

There are a lot of situations in which uncertainty is of importance in economics, and it is very rarely the case that it matters whether the uncertainty is due to a lack of knowledge about the present or a lack of knowledge about the future. If you and I are stuck together for six hours, and we know that a football game has taken place during that time but neither of us knows how it has gone, it is just as reasonable for us to bet on it at the end of the six hours as at the beginning; in the former case we are betting on an event that has, in a time sense, already happened, but about which we are just as uninformed as if it hadn't taken place yet. Similarly, if I am about to have a test done to determine whether I have a genetic predisposition to some disease, it seems reasonable to ask an insurance company to insure me against an adverse result, provided I don't initially know any more than the insurance company does, even though the genes are already there and the information in some sense already exists.

Studying actions of policy-makers or financial markets is invariably complicated by causal relationships running both directions in time; the stock market may rise because the economy is likely to improve in six months, but the economy may improve because the stock market rose. In that case, the effects likely reintensify their own causes — a "positive feedback loop" — but there are negative (i.e. stabilizing) feedback loops as well. Monetary economists speak of a "price puzzle" when one does a naive analysis of the effect of monetary policy on the economy, where tighter monetary policy seems to be followed by an increase in inflation for a short period of time; this is what one would expect if monetary policy is being done competently — the monetary authority should tighten policy when an increase in inflation is coming. Because the earlier event is being taken on the basis of anticipation of the later event, the causal relationship runs backward in time (though, in these cases, it runs forward as well).

I think the real-world solutions to a lot of game theory conundrums (incidentally, I've done less reading on this than I should) involve effects of this nature. People will work out that a repeated Prisoners' dilemma can yield cooperation, at least for a while, so long as future results aren't discounted too heavily relative to current ones, or some such; but, while time-preferences can be screwy and extreme, this story usually seems to require an implausibly high discount factor to generate the results you see in experiments (or real life), and almost certainly isn't in accord with how the agents themselves would describe their rationales. They might talk in moral terms, but it seems likely to me that a certain amount of what is going on is that people know that other people are somewhat cooperative, and, especially in real life, they believe they can tell "what kind of person" some counterparty to some arrangement is. Insofar as one can be read ahead of time, one is at least partially precommitted before the game formally begins.
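
For reference, the textbook version of the discounting story, a grim-trigger strategy in an infinitely repeated Prisoners' dilemma (the payoff numbers are arbitrary):

```python
# Grim trigger: with stage payoffs T > R > P > S, cooperation is sustainable
# iff the discount factor delta >= (T - R) / (T - P).
T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation, reward, punishment, sucker

def cooperation_sustainable(delta: float) -> bool:
    cooperate_forever = R / (1 - delta)
    defect_once = T + delta * P / (1 - delta)  # then mutual punishment forever
    return cooperate_forever >= defect_once

print((T - R) / (T - P))             # threshold: 0.5 with these payoffs
print(cooperation_sustainable(0.6))  # True
print(cooperation_sustainable(0.4))  # False
```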

microspeculation

I first used the term "microspeculation" to refer to people topping off their gas tanks in advance of a coming hurricane; clearly some of the demand for gasoline was being driven not by immediate need weighed against the current price, but by the expectation that future needs could be met better now than at the price at which any gasoline might be available a few days later. The phenomenon is much more pervasive, though, in less blatant forms; many people will buy something because its price is lower than what they consider typical for the item, and, to the extent that this is rational, it is often on the presumption that one would like to consume the item occasionally at the given price, and that the given price is low relative to the alternative prices at which one might be able to buy it in the future. Perhaps more clearly, someone will forgo a purchase at a higher price than was expected, not because that price exceeds the item's value to the purchaser, but in anticipation of being able to get a better deal later. If the current price were expected to prevail for a long period of time, the buyer would be better off purchasing immediately, but instead holds off in speculation that the price will come down. By and large, microspeculation is characterized by its size (small), by its pervasiveness, and by the lack of intention to sell; one is substituting a purchase at one time for a purchase at another time, rather than performing an actual sale on the visible market.

For goods that can be stored in a straightforward manner, microspeculation can be effected by "stocking up" on an item at what seems to be a temporarily low price; insofar as an item cannot be stored, the only way this comes into play is in a taste for diversification over time. Consumption of fresh fruit, for example, may be more sensitive to price changes in the short run than in the long run because one can gain more pleasure from eating apples during some portion of the year if one is comparatively deprived of them the rest of the year than if consumption is steady. Canned fruit, on the other hand, can be more readily stored, and its short-run price elasticity can be driven higher (relative to its long-run price elasticity) because one can "stock up" in anticipation of rising prices; while the response of fresh fruit sales to price changes is limited by one's willingness to consume it now, canned fruit can be purchased now to be consumed later.
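
The stocking-up calculus in miniature (all numbers invented):

```python
# Buy a storable good now at a temporarily low price whenever the expected
# discount beats the cost of carrying it until it's needed.
current_price = 2.00          # sale price per can today
expected_future_price = 2.50  # what I expect to pay when I'd otherwise buy
storage_cost = 0.05           # per can per month (shelf space, spoilage risk)
months_until_consumption = 3

buy_now_cost = current_price + storage_cost * months_until_consumption
print(f"buy now: {buy_now_cost:.2f}; wait: {expected_future_price:.2f}")
print("stock up" if buy_now_cost < expected_future_price else "wait")
# Fresh fruit can't be carried this way, so for it this channel is closed and
# short-run demand responds only through willingness to eat more of it now.
```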

To some extent this is a question of technical substitutability; it is easy to turn a can of canned fruit now into a can of canned fruit tomorrow (just wait 24 hours), while fresh fruit will start to deteriorate, and cannot be so substituted. This is different from the question of substitution in consumption, but from the standpoint of the visible market it looks the same, and any attempt to measure substitution in consumption is likely to capture this as well.

While some price-stickiness is surely "behavioral", in the sense that it probably can't be put on a strictly rational basis, I imagine that a fair amount of it originates in microspeculation, with market participants responding much more elastically to changes in conditions than they would if they believed that every change was permanent.