I've recently read in a macroeconomics textbook a comment that the development of rational expectations models was necessary because other expectations models are ad hoc and not well grounded in theory. This is, as a historical matter, largely true of the models that Lucas critiqued 34 years ago; I'm not sure it's necessarily the case of any model that eschews expectations that are fully statistically accurate in the sense that "rational expectations" means to modern economists.*
By way of illustration, I was playing, a month ago, with a rational expectations model rather like the common modern Dynamic Stochastic General Equilibrium models; it linearized to a set of equations equating linear combinations of variables at a given time with linear combinations of the agent-expected values of the same variables at the next time. As is the case in modern DSGE models, the coefficients of these linear equations were somewhat complicated functions of underlying parameters. Using rational expectations, agent-expected values were set to statistically-expected values, given the underlying parameters and the distribution of exogenous shocks. The linearized system, with this rational expectations assumption, could be fairly easily solved by finding stable and unstable modes, associating control variables with convergent expressions in expectations of the future and state variables with convergent expressions in past shocks. This is all standard in modern macroeconomics.
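As a concrete, hedged illustration of that solution method (a generic sketch of my own, not the model I was playing with), suppose the linearized system can be written E_t[y_{t+1}] = M y_t with the predetermined state variables ordered before the control variables. A minimal Blanchard-Kahn-style solver, using a plain eigendecomposition rather than the generalized Schur decomposition usually needed when the leading matrix is singular, might look like this; all names and shapes are illustrative:

```python
import numpy as np

# Minimal sketch: solve E_t[y_{t+1}] = M @ y_t with y_t = (states k_t, controls c_t).
# Bounded solutions require the unstable modes to be zero, which pins the controls
# down as a linear function of the states.
def solve_re(M, n_states):
    n_controls = M.shape[0] - n_states
    eigvals, V = np.linalg.eig(M)
    W = np.linalg.inv(V)                      # rows are left eigenvectors of M
    unstable = np.abs(eigvals) > 1.0
    if unstable.sum() != n_controls:          # Blanchard-Kahn counting condition
        raise ValueError("no unique stable solution")
    Wu = W[unstable, :]                       # these rows must annihilate y_t
    Wu_k, Wu_c = Wu[:, :n_states], Wu[:, n_states:]
    F = -np.linalg.solve(Wu_c, Wu_k)          # policy function: c_t = F @ k_t
    T = M[:n_states, :n_states] + M[:n_states, n_states:] @ F
    return np.real(F), np.real(T)             # states follow k_{t+1} = T @ k_t + shocks
```

The returned T is the point picked up in the next paragraph: once the controls are substituted out, the states follow a first-order linear law of motion.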
Because state variables are expressed as linear combinations of one-period-before state variables plus shocks, however, it is the case in this model that the vector of state variables follows a VAR(1) process. If the state variables are directly observed, it's not remotely "ad hoc" for agents to form expectations based on a VAR(1) regression, especially if the relationship between the coefficients is such that, for any VAR(1) coefficient, there will be a set of underlying parameters that supports that coefficient. More generally, it would seem reasonable to expect that agents would infer the underlying parameters econometrically from past data. It is my understanding — though I am not perfectly clear on this — that rational expectations rejects this.
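To make the "not remotely ad hoc" point concrete, here is a toy sketch (with a transition matrix and sample size of my own choosing, not anything from the model above) of agents recovering the VAR(1) coefficient by ordinary least squares from observed states and forming their one-step-ahead expectation:

```python
import numpy as np

# Simulate x_{t+1} = T_true @ x_t + shocks, then recover T by OLS.
rng = np.random.default_rng(0)
T_true = np.array([[0.9, 0.1],
                   [0.0, 0.5]])
x = np.zeros((500, 2))
for t in range(1, 500):
    x[t] = T_true @ x[t - 1] + 0.1 * rng.standard_normal(2)

X_lag, X_next = x[:-1], x[1:]
T_hat = np.linalg.lstsq(X_lag, X_next, rcond=None)[0].T   # VAR(1) coefficient estimate
expectation = T_hat @ x[-1]                               # the agents' E_t[x_{t+1}]
```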
It may be that this is rejected because, with data going back far enough, agents in a model of this sort would have arbitrarily close estimates for the parameters. In this case, it's reasonable to note that the model, not being perfect or perfectly comprehensive, is, at best, an approximation that works well over finite periods of time, similar to what high-energy physicists would call an "effective field theory". The underlying parameters may be robust to the Lucas critique, at least within a reasonable domain, and yet not perfectly stationary. It can be useful, even without bounded rationality, to suppose that expectations would be formed over a finite window, or one that weights more recent observations more heavily; with bounded rationality, of course, such an alteration to the model requires no other justification. In any case, asking agents to form expectations in a situation in which they are uncertain about the deep parameters of the model, and infer them only through the observation of macroeconomic variables, reintroduces nonlinearities that are very different from those that were linearized away in the first place, and it seems likely to me that they would produce interesting behavior, whether or not the resulting model actually proved to fit the data better than the rational expectations models.
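One simple way to formalize "weights more recent observations more heavily" is discounted least squares with a forgetting factor; this sketch reuses the simulated data from the previous snippet, and the factor of 0.98 is purely illustrative:

```python
# Discounted least squares: geometrically down-weight older observations so the
# estimate can track slow drift in the underlying parameters.
beta = 0.98
w = np.sqrt(beta ** np.arange(len(X_lag) - 1, -1, -1))    # weight 1 on the newest pair
T_hat_recent = np.linalg.lstsq(w[:, None] * X_lag,
                               w[:, None] * X_next, rcond=None)[0].T
```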
* I don't want to get sucked into recounting a full history of macroeconomics and macroeconometrics over the last 50 years; I will say that there are nice attributes of the assumption of rational expectations, and, as is so often the case, I tend to feel that its most ardent proponents understate its shortcomings, but its most ardent opponents underappreciate its benefits, and almost always fail both to appreciate its history and to actually understand the somewhat limited scope of what it is used to mean.
Friday, December 31, 2010
Thursday, December 23, 2010
maturity transformation and the firm
One of the primary roles that has traditionally been ascribed to banks is "maturity transformation"; interest rates for long-term loans are higher than those for short-term loans, presumably because (this sort of thing is often the cause of price differences) borrowers are more interested in borrowing for long periods and savers are more interested in lending for short periods. Banks borrow short and lend long, making money on the spread and matching the long-duration borrowers with short-duration savers.
There's a blog post from two months ago newly making the rounds questioning the benefits of maturity transformation, in part arguing that 1) more and more savings are now in the form of retirement planning, and thus are longer-dated, 2) that a lot of borrowing is by companies for working capital — and, though he doesn't make this observation, more investment these days goes into software and electronics, i.e. 3-5 year duration capital, and relatively less into giant factories, i.e. 40 year duration capital, than was the case a generation or two ago — and 3) some of the borrowers are starting to borrow on the short end for the same reason banks traditionally have.
Well, point 3 I would argue is largely a substitution effect in response to prices; he's not so much arguing that there is no mismatch as that the pricing difference it sustains will be a bit more slack. Points 1 and 2 suggest that there should, quantitatively, be less mismatch. This, too, should tend to show up in yield curves, and, while it's hard to separate the noise from the signal, it's not clear that this is true either. It's hard to see his story in the macro data, and it's hard for me to imagine there's anything worth doing about it whether it is or not. (Not that there's anything wrong with that; a lot of interesting ideas are worthwhile even if they don't have immediate practical impact.) Ultimately he concedes — that this is a concession, too, is not his observation — that
the most significant proportion of the difference between long-end and short-end rates comes from the interest rate differential which most banks hedge out to a large degree (ironically with pension funds and insurers).

Which is to say, it is not ultimately so much banks that are doing the maturity transformation but "to a large degree" "with pension funds and insurers"; the long-term savers are finding the long-term borrowers, with banks as intermediaries.
It struck me, on reading this, that one obvious way in which savers, "who generally have a preference to be able to access their funds quickly," can lend long while maintaining this is negotiability of the loans, i.e. that a liquid market for corporate bonds essentially allows the corporate borrower to lock in a rate while the lender, though not shielded from all risks, is at least able to sell the bonds for cash fairly quickly; as long as the need for cash is idiosyncratic, the lender is reasonably likely to get back about the amount lent (saved) with some accrued interest to boot. For a bank to lend instead of the ultimate saver amounts to taking what might otherwise be a market transaction and mediating it inside a firm.
This brings us to Ronald Coase, who will be celebrating his 100th birthday next week. What determines whether activity is undertaken within an organized economic entity or between entities is a function of the relative costs of transactions versus management; if finding buyers for your bonds (or a bond issuer for your spare cash) is more expensive than managing a bank, then people can be expected to save at and borrow from the bank, while if it's relatively cheap to work through the bond market, that's what we should expect to happen. As it happens, much of the relevant "transaction cost" here is likely to be informational, related to credit risks — which is perhaps why that is one of the risks that is apparently, in practice, borne by the banks. It also seems that the bank-management cost is more likely to scale with the size of the borrower than the cost of a bond placement is — a big borrower will be well-known, easier for lenders to appraise — which suggests that bank lending should predominate for small businesses and individuals, with big companies borrowing more from decentralized financial markets — which, again, seems to be what we see.
Tuesday, November 9, 2010
the value of money
I wonder whether there has been an attempt to estimate empirically the NPV of the liquidity value of a dollar in cash.
search theory and adverse supply
I heard a hardware store owner on the radio saying that, the high unemployment rate notwithstanding, he's having trouble finding candidates who have the skills he needs -- knowing their way around tools, paint, etc., to be able to help customers. It seems superficially plausible that an increase in the number of people looking for jobs would adversely impact an employer if it induces applicants to look for jobs for which they are less well suited, imposing higher information costs on the employer.
Friday, September 17, 2010
money and bounded rationality
The three functions of money, in freshman economics, are as a store of value, a unit of account, and, last but most, a medium of exchange. Writers occasionally construct examples of something serving one purpose but not another, but by and large crippling any of these functions will to some extent cripple all of them. This is especially true in the case of storing value. If a prospective currency can't store value, then users have to use it in a hurry, and the buying and the selling that a medium of exchange is supposed to allow to be separated are forced to be more temporally proximate; in the extreme, if you have to spend your money within minutes of getting it, you're largely stuck looking for someone who wants to buy what you want to sell and vice versa, and you're effectively back to a barter economy. Similarly, it becomes less useful as a unit of account if its value can't be counted on.
Money as a store of value is also a great technology for savings. A (closed) economy as a whole can "save" only by investing resources in capital — physical capital, intellectual capital (technology), human capital (education) — but an individual can "save" by lending to others who wish to borrow, and in a large economy with a well-run central bank and so on, this typically works better.* If I have a good year, and expect next year will be worse, I can simply pile up cash; if I get paid a lump sum for a contract job, I can spread my spending over the time until my next job. If I want to make a big purchase, I don't have to make a big sale at the same time; I can save up ahead of time — or afterward, by borrowing at the time of purchase and then paying back the loan. The ability to choose between spending today or tomorrow is as valuable as the ability to choose between apples and grapefruit.
It's often noted, and, especially recently, quite notable, that people who are bad at saving money often lose much of this ability. If you don't have the self-control to let hundreds of dollars go unspent for a few weeks, you can't trade spending now for spending later. (If you buy durable goods, you can stretch some of your consumption into the future, but not quite as effectively.) The money, in this case, doesn't provide an effective store of value. This also, as I noted in the first paragraph, will affect its ability even to function as a medium of exchange.
I recently read an example of this; unfortunately, I don't now remember it. It was something along the lines, though, of a person who had trouble saving, and was engaging in a certain amount of barter (and running into the attendant double-coincidence-of-wants problems) due to an inability to build up even enough "working cash" to engage in what I think of as normal quotidian economic activity. If I remember or otherwise reacquire the example, I'll probably update this post.
* It occurs to me I've taken for granted that holding onto currency is the equivalent of lending to the central bank (or "bank of issue", if we still had non-central banks of issue), and that keeping money in a bank account is making a loan to that bank. If that's not obvious, take my word for it.
Tuesday, September 14, 2010
search theory and prostitution
There's a subfield of macroeconomics dealing with "search theory", though aside from the mathematics (which resembles macroeconomics) it's an issue that's closer in nature to microeconomics. Most macro models seem largely to ignore it, at least as a direct matter; there may be sticky prices or market power layered onto a model — features that might come from search issues but are typically simply taken as exogenous. There's a certain amount of demand, a certain productive capacity, and the people who want to buy stuff buy stuff from the people who want to sell stuff.
For a lot of goods, that's not an especially good description (though, to be fair, how good a description it is depends on exactly what you care about, and for a lot of macroeconomic treatments it may be adequate). One of the primary roles of advertising, and one of the benefits of being a large (and long-lived) firm, is in the ability of people who want to buy what you're selling to find you. If you open a new business selling erasers, and somebody needs to buy an eraser in your first month in business, there's a good chance they don't know about you. If they've walked by your store a few times a week for the past several years, seen your ads on the subway, and maybe bought an eraser or two from your shop in the past, when they need an eraser, they know where they can get one. Being the answer to the question, "Hey, do you know where I can get an eraser?" is enormously valuable capital.
For illicit markets, it's that much harder; you want people looking to buy to be able to find you, but you don't want the police to. This can be handled in a few different ways. One is ambiguity; you generate a signal that is understood, but perhaps is not explicit enough to be grounds for arrest, or, ideally, even especially heavy levels of suspicion. Word of mouth, where your position in the market is known disproportionately to people whom you have reason to trust, also helps, as do repeat business relationships.
A new paper by Steven Levitt and Sudhir Venkatesh, which apparently I'm not supposed to cite, but let's hope this is okay, discusses the market for prostitution in Chicago.
Even for a given sex act, however, the prices paid by black customers are systematically lower than for other customers. These differences appear to be attributable to price discrimination on the part of the prostitutes.
In a perfectly competitive market, "price discrimination" is unsustainable; if you try to charge less than the going rate to some customers and more than the going rate to others, you only get the customers whom you're charging less. There are a fair number of pimps and prostitutes in the given neighborhoods, and while it's possible they engage in collusion (tacit or explicit, conscious or not), this seems likely to be evidence of the search problem; if you charge a white customer more, it's not that easy for him to go find another seller who will charge him the lower rate, so your only real competition is with his going without.
There have been some arrests of Indian actresses for prostitution, in part, it seems, because being an actress is one of these ambiguous signals:
Because of the sexualized roles they play, and the fact that many are in scandalous "live-in relationships"— meaning they move in with boyfriends before marriage—the blanket assumption is that all actresses are available for a price. This is obviously false, but it's an illusion that has been exploited by savvy pimps who have created a market for B-list and C-list "starlets"—often unsuccessful actresses from questionable backgrounds—for men who want to have what's sold as a glamorous sexual experience.

Being an actress is not per se actionable by the police, but it might help someone looking for prostitution to find you.
How information of this nature makes it through an economy is of particular interest to me, and is one of the things I can imagine myself studying over the next five years.
Tuesday, August 31, 2010
math -- adjoints and preimages
This post is in large part me trying to remember math from 15 years ago that might be useful to me.
If I have sets V and W, I can define duals V* and W* as function spaces, V*={g|g:V→R}, i.e. the space of functions from the set into (in this case) R. Given a function f:V→W, then given w*∈W*, we can define v*∈V*: ∀ v∈V, v*(v)=w*(f(v)). Thus f implies a map f*:W*→V*, the adjoint of f.
In the case in which V and W have extra structure, I usually define my dual to be a restricted set of the duals defined above; in particular, if V and W are vector spaces, I can define V* and W* to include only linear maps. Then any linear map from V to W implies a linear adjoint from W* to V*. There exist isomorphisms between V and V*; choosing such an isomorphism is equivalent to choosing something of an inner product, 〈v1,v2〉=v1*(v2), though one would typically want to restrict the choice of isomorphism such that this is symmetric. Defined this way, V** is (canonically isomorphic to) V, at least for finite dimensions.
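A quick numerical sanity check of the vector-space case, under the usual identification of R^n with its dual via the standard inner product (the dimensions here are arbitrary): the adjoint of v ↦ Av is represented by the transpose.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))     # a linear map f: R^4 -> R^3
v = rng.standard_normal(4)
w = rng.standard_normal(3)

# <f(v), w> in W equals <v, f*(w)> in V, with f* represented by A.T
assert np.isclose(np.dot(A @ v, w), np.dot(v, A.T @ w))
```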
I also don't need to use R; in particular (and the main point of this post), if I use instead the set {0,1} then V* is (canonically isomorphic to) the set of subsets of V (the relevant subset being the set of elements such that v*(v)=1), and the adjoint f*:W*→V* takes sets in W to their preimages (under f).
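And a small sketch of the {0,1}-valued case, with a toy function of my own choosing: pulling an indicator function back along f picks out exactly the preimage.

```python
def adjoint(f):
    """Given f: V -> W, return f*: (W -> {0,1}) -> (V -> {0,1})."""
    return lambda w_star: (lambda v: w_star(f(v)))

V = {1, 2, 3, 4, 5}
f = lambda v: v % 2                      # f: V -> {0, 1}
S = {1}                                  # a subset of W
chi_S = lambda w: 1 if w in S else 0     # its indicator function

chi_preimage = adjoint(f)(chi_S)
print({v for v in V if chi_preimage(v) == 1})   # {1, 3, 5} == f^{-1}(S)
```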
Monday, June 21, 2010
credit risk on treasury securities
I've heard for 20 years that US debt carries no credit risk because the government can just print dollars if it ever gets into trouble. This, on a large scale, would be expected to create massive inflation, reducing the real value of debt denominated in nominal dollars in addition to the direct effect of using printed dollars to pay whatever debt is coming due. The whole set of phenomena goes under the term "monetizing the national debt", and its possibility is part of the reason Treasury securities are considered "risk-free".
Of course, if you're owed $1 and are being repaid with $1 that buys what you expected 10¢ to buy you, you might well object to the notion that you faced no "risk". Indeed, you're in the same practical situation as if the Treasury had defaulted and declared that it would only pay 10¢ on the dollar to holders of its debt. In a technical sense the former is not "default", while the latter is, and the former possibility is not subsumed under the concept of "credit risk", while the latter is. (It's worth noting that, in the former event, dollar-denominated bonds issued by, e.g., GE would be paying 10¢ on the dollar in purchasing value as well, even with no complicity from GE; this is a logical reason to keep the concepts separate.)
Massive inflation would wreak a great deal of havoc on the economy, of course; the textbook purposes of "money" are to serve as a medium of exchange, a unit of account, and a store of value, but once its value becomes unreliable it becomes less useful as a unit of account and even its ability to serve as a medium of exchange is crippled, as people become more and more reluctant to hold it and tend to buy things less because they want them and more because they're available when one finds oneself in possession of cash. Hyperinflationary environments see a move toward barter, with the "double coincidence of wants" problem that money is supposed to solve, and make intermediation between savings and investment much more difficult, screwing up capital allocation.
Technical default, of course, would scare off future lenders, but so would inflationary "default"; indeed, if it affects lenders (and potential lenders' expectations of their subsequent treatment) in exactly the same way, it should scare them off to exactly the same extent, at least to first order. At higher orders, the fact that inflation destroys the tax base in ways that default does not — and this tax base underlies whatever value lenders can actually be paid back with — suggests that lenders should be more averse to the inflationary scenario than the default scenario.
If I were at the Treasury, and were abruptly confronted with the choice of monetizing a large chunk of debt or defaulting on it, I would prefer to default. If we're locking the government out of the capital markets, we might as well not take the whole economy down, too. If bond markets were rational, they would respond to this news by lowering (slightly) the interest rate the government pays on its debts. I'm not entirely sure, though, that they are.
Monday, May 17, 2010
Malthusian traps -- jobs
Jobs are in a Malthusian trap. Well, not so much right now, but over the long term:
if you want to predict how many jobs the economy will create in the next two decades, I'd steer you to a demographic model. How many more people will want to be in the work force in 20 years than there are now? That's about how many jobs will be created.

Jobs, then, are constrained by the scarce resource of workers.
Most creatures in world history are and have been in Malthusian traps, with occasional exceptions. Immediately after a mass die-out, there will be fewer limitations on growth, supposing that an animal's food supply is returning more explosively than the animal is. Similarly, there has been a mass die-off of jobs in the past couple years, and so from a short-term perspective it might be reasonable to regard jobs as being free to grow as quickly as they can ... reproduce. Well, I'm not sure how useful the analogy really was, anyway.
Thursday, May 13, 2010
Tiebout sorting and the rights of transients
My brother, an MIT graduate, ran for the city council of Cambridge, MA, among other things criticizing a comment by another candidate that the local college students shouldn't vote because they were only there for a short period of time. On some level this felt fallacious to me, insofar as the college students represent a particular set of interests; if there are 20,000 students at any given time, the 20,000 students who are there might be expected to represent the 20,000 students who will be there in 10 years, and the short time that any one student is there is (exactly) balanced by the large number of different students rotating through.
On the other hand, especially as government gets more local, I can see a compelling case for restricting the franchise to people who have been there a while. In particular, I think we should do so where Tiebout sorting might be expected to operate well — where people have reasonable choice among and information about different communities in which to live — and where the near term is to be traded against the long term in a significant way. Where there are decisions to be made with long-term ramifications — should we raise taxes to build more classroom space? — allowing people to move in, vote for the short-term expedient, and move out before the long-term (relative) cost of that decision is to be borne results in all communities emphasizing the short-term expedients, while requiring that people live in a place for a few years before they vote — or perhaps buy property, or otherwise commit themselves to the community — allows for real sustainable differences in priority (some people want lower taxes and less spending on education, even including the long-term ramifications, while others want more), where people with different preferences can sort themselves as Tiebout described.
MIT students have a certain amount of flexibility to live in Boston or Somerville, even if we don't consider choosing a different school to be a real choice; on the other hand, MIT institutionally has much less such flexibility, and the students, faculty, and so on will be affected by the city's policies. It would be good to have that represented somehow, but on many issues, I can see the value to excluding the transients from the decision.
Wednesday, May 12, 2010
my bizarre market ideas
Teacher evaluations in New York will finally be allowed to be based partially on results, which is a step in a positive direction, if not without possible flaws. One fear I've heard expressed is that grading teachers based on student performance will make teachers all the more averse to taking on difficult students, and I certainly think that students' incoming performance should be taken into account as well. If it isn't, using exit test scores becomes more arbitrary than if it is, and if teachers can maneuver to select their students, that obviously creates an incentive problem. One of the ideas I've had floating in my head is not to take direct account of students' previous performance, but instead to allow teachers (or schools) to select their students by bidding on them. Each teacher/school sets, for each prospective student, a standard for success, above which the teacher/school is rewarded in some sense and below which the teacher/school is punished in some sense, and the student goes to the teacher/school willing to bid the highest level of performance for that student.
I could actually defend this against some of the first objections that occur to me, and I'm not averse to doing so, but one of the points of this particular blog is an emphasis on brainstorming, and I have other things I want to do with my time right now.
Tuesday, May 11, 2010
Election Reform in the UK
One of the chief planks in the platform of the Liberal Democrats in the UK is a meta-plank, taking a position not on an issue directly but on the political process by which people who decide such things are selected. The Liberal Democrats have a middling amount of support spread widely over the UK, and routinely win fewer than 10% of seats available, while their candidates in aggregate get closer to 25% of the votes cast. They would like to, in effect, be able to take some of the votes that one of their candidates gets and shift them to other of their candidates, so as to elect more members of parliament.
Their preferred method of doing so is one of my favorite methods as well, Single Transferable Vote. Imagine that you group 10 districts together into one district that elects 10 people, subject to the rule that each voter gets one vote. Any candidate that gets at least 9.1% of the vote is guaranteed to win one of the positions. On a first pass, candidates might split the vote; perhaps two very similar candidates each get about 5% of the vote. If the one with fewer votes drops out, leaving those voters to go to their second choice, they're likely to push the other candidate to 10%. Similarly, if one candidate gets 20% of the vote, some of those voters might be swayed to move to another, similar candidate, such that these 20% can elect 2 members of parliament instead of just one. Single Transferable Vote is a way of doing much of the necessary strategy automatically.
Instant Runoff Voting, which the Liberal Democrats call Alternative Vote, is the same process for a single member district. In this case, though, it simplifies to a process that many people find easier to understand: voters turn in ballots listing their top choice, their second choice, and so on; if no candidate has a majority of first-choice ballots, the candidate receiving the fewest votes is dropped from the race, and votes for that candidate are redistributed to the highest remaining choice, until a single candidate gets a majority. The Liberal Democrats have offered, as their second choice, that (for example) 10 districts would choose 15 members of parliament, with 10 elected using IRV (one in each district) and another 5 awarded to parties that received fewer members of parliament from those 10 districts than their proportion of the vote; if 3 parties each get 1/3 of the vote in those 10 districts taken as a whole, and two of them elect 4 members each in the district elections while the other only elects 2, then that last party gets to name another 3 members to parliament, and the other two each name 1 more. The exact numbers can be changed around, but this is roughly the way members of the Scottish Parliament are elected. I dislike it because I think it gives parties too much power, but the UK is pretty far gone in that direction anyway.
For a single member district, IRV is worse in a number of ways than Condorcet's method and its variants. A "Condorcet winner" is a candidate who would get more votes than each opponent in a two-way race against any of the other candidates on the ballot. If approximately 2/5 of the voters list their preferences as A, B, then C -- where A, B, and C are candidates -- and approximately 2/5 list C, B, then A, while 1/5 list B, A, C, then A will win an IRV election, while B would beat candidate A (with 60% of the vote) if C dropped out, or if C's voters ranked B first on their ballots in spite of their true preferences. If one of the objections to the current system is that people are given a single vote and often find it in their interest to vote for someone who is not their first choice, then IRV doesn't fully solve that problem. As might be clear from the example, Condorcet's method tends to pick compromise candidates; it also fully solves the "vote splitting" problem, in that if there is a Condorcet winner when a lot of candidates are running, then having some of those candidates drop out can't affect the winner of the race. IRV, like the first-past-the-post system, is prone to drop candidates because other candidates are like them. Still, I think IRV would be an improvement over FPTP.
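For what it's worth, the worked example is easy to check mechanically; here is a toy sketch (my own minimal implementations, not any standard library) of IRV and of a Condorcet-winner test run on that 2/5, 2/5, 1/5 profile:

```python
from collections import Counter

ballots = [("A", "B", "C")] * 2 + [("C", "B", "A")] * 2 + [("B", "A", "C")] * 1

def irv_winner(ballots):
    remaining = set(ballots[0])
    while True:
        # Each ballot counts for its highest-ranked candidate still in the race.
        tallies = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tallies.most_common(1)[0]
        if 2 * votes > len(ballots):
            return leader
        remaining.remove(min(remaining, key=lambda c: tallies.get(c, 0)))

def condorcet_winner(ballots):
    candidates = set(ballots[0])
    for c in candidates:
        # c wins if it beats every other candidate head-to-head.
        if all(2 * sum(b.index(c) < b.index(d) for b in ballots) > len(ballots)
               for d in candidates - {c}):
            return c
    return None     # a Condorcet winner need not exist

print(irv_winner(ballots))        # A
print(condorcet_winner(ballots))  # B
```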
core inflation
It seems to me that one potential problem with using "inflation excluding food and energy" as a measure of core inflation is that food and energy both have particularly low income elasticities of demand. The core inflation measured may be more pro-cyclical than overall inflation. Perhaps it's not a big enough effect to be a concern; it should also be noted that any effect that is primarily linear with the overall level of inflation, as this may be, would simply suggest a different calibration by economic agents for the use of the information, and wouldn't actually change the informational content of the data.
Sunday, April 25, 2010
rent-seeking cycles
I wonder to what extent loss-aversion causes wars and other conflict. In times of growth, there seems to be relatively less in the way of attempts at rent-seeking, but when things grow scarce — not in any uniform absolute sense, but compared to where they were before — the rent-seeking picks up. For any given agent, it seems to me that whether rent-seeking is optimal is likely to be independent of whether the resources available from productive activity are growing or shrinking, at least supposing the rents to be sought are likely to be growing or shrinking at the same time.
It could be heterogeneity; perhaps the people whose fortunes are suffering the most are the ones doing the rent-seeking. Given that and incomplete information, perhaps people take the growth or recession of their own fortunes over periods of time as informative of their relative position; if everyone's fortunes become dimmer, but everyone only knows that their own fortunes are dimmer, they try to plunder their neighbors' fortunes, not knowing that they, too, have been shrinking.
Saturday, April 17, 2010
The Nature of the Firm
I've been reading my Coase lately.
Agency costs are sometimes raised as a limit to the size of a firm; as a firm gets larger, agency costs get worse, and for small firms agency costs are tolerated to realize the benefits of lower transaction costs of other natures. I wonder, though, to what extent agency costs could be a reason for a firm, rather than a net cost.
Suppose I'm producing a finished product from an intermediate created by someone else. I'm bad at determining the quality of the intermediate good; if the finished product is shoddy, I don't know whether I screwed up or whether he did. Someone else, though, is pretty good at motivating the other guy, in one way or another, to produce higher quality intermediates. By joining his firm, I'm outsourcing to him the job of handling the agency costs I face in the vertically separated market structure.
Insurance companies that deal with companies often actively help the companies reduce their risks; rather than pay $50,000 to the insurance company for insurance, the firm pays $40,000 and gives the insurance company the authority to inspect the premises, upgrade the sprinklers, and improve the security system. It doesn't make sense for each company to separately involve itself in becoming expert in loss mitigation of this nature, so they outsource it to a firm that happens to have a great deal of financial interest in loss mitigation both for this company and for others like it. Any company will have to deal with agency costs, though, and it similarly makes sense for people whose expertise lies elsewhere to allow a firm to deal with their agency costs for them.
Tuesday, April 13, 2010
institutions and folk theorems
I think a lot of institutions are responses to the folk theorem; a repeated game admits a large number of possible equilibria, for various senses of the term "equilibrium", and institutions are a way to coordinate on one of them.
Thursday, February 11, 2010
teaching
If I'm ever teaching introductory economics, I think I might get all of my exams from the internet. I'll print excerpts of blog posts, news articles, and the like, and ask the student to itemize the flaws in the economic reasoning.
Wednesday, January 20, 2010
10-Q
The SEC should outlaw quarterly filings, on the grounds that they drive too much of a short-term mindset.
Friday, January 15, 2010
TTL cash
There has been some recent move toward creating a bankruptcy chapter better constructed to handle financial institutions; Luigi Zingales was talking about such things in October of 2008 (search for "Bebchuk" — note that his proposal makes legacy counterparties senior to bondholders, who are nominally pari passu), but there now seems to finally be an interest in this in Congress. One idea I've been batting around in my head for perhaps a year now, at least as a tool to help with these things, is the creation of a new kind of credit that the government could issue, which I call TTL cash; I know I've shared it with my brother, but I don't think I've mentioned it here.
The idea is that when a highly-connected company (AIG) gets into trouble, the government lets it go bankrupt, but any creditor who is thereby impaired is given $1 in TTL cash for every dollar the creditor lost to the bankruptcy. The TTL cash is essentially nontransferable (and thus useless) except in bankruptcy, where, in the idea's simplest form, the government redeems it for actual money. The government has not bailed out AIG, or its creditors, but it has bailed out the creditors of AIG's creditors, insofar as they are impaired by AIG's failure; the chain reaction that regulators are eager to avoid is arrested.
The slightly more general case would allow different levels of TTL cash, each of which is redeemed for the next one down; the government could decide instead to bail out the creditors of the creditors of the creditors by issuing level 2 TTL cash, redeemable in bankruptcy for level 1 TTL cash, redeemable in bankruptcy for the real thing. "TTL" stands for "time-to-live", a term used in IP, the internet protocol; each time a router forwards an IP packet to another router it decrements a TTL counter, so that if some routing error causes the packet to wander off in the wrong direction or go around in circles, it eventually gets dropped rather than continuing to be passed around.
This is, of course, still a bailout, but on a practical level the moral hazard issues are much reduced from a standard bailout; everyone is responsible for assessing the credit quality of their debtors, at least to a significant extent. One of the major practical strengths of a decentralized economy vis-à-vis a centralized one is that it recognizes the information-handling limits of agents and asks them largely to make decisions based only on local information. If you lend entities money, or even just enter into contracts with them, you have to know something about their financial condition and their other dealings that affect it; if you have to know everything about their business, including everything about their potential creditors, that advantage is lost. Capping this two levels down under certain circumstances seems to me a reasonable moral hazard price to pay for the benefits of a distributed system.