My most recent post invited, in a couple of places, tangents on "market manipulation". I decided to make that a separate post.
There has been some recent suggestion (mostly by uninformed parties) that corporate stock repurchases constitute "market manipulation", i.e. an attempt to artificially drive up the price of the company's stock. There is, in fact, apparently a safe-harbor provision in the relevant regulations, dating back to 1980, indicating that a company is presumed not to be manipulating the market in its stock provided that its purchases constitute no more than a particular fraction of the trade volume over a particular period of time, which suggests that the idea is not entirely novel. Relatedly, over the last ten years there have been complaints that countries, especially China, are "manipulating" the market for their currencies, and there are provisions in various trade agreements that apparently forbid that, apparently also without defining it particularly well.
My visceral response to the China accusation in particular, the first time I heard it, was "of course they're manipulating the market; that's their prerogative", with a bit of surprise that it was considered untoward; under certain circumstances intervention in the foreign exchange market seems like the easiest way to implement monetary policy, and I kind of think the US should have tried buying up assets in Iceland, India, Australia, and New Zealand at various times in the last seven years when the currencies of those countries have had moderately high interest rates. The purpose here is not to create prices that are incorrect; it's to change the underlying value of the currency. The same is true of stock repurchases, at least classically; repurchase of undervalued shares increases the value per share of the remaining shares, with the price rising as a consequence.
This is basically the distinction I make in these contexts; I have some sympathy for rules against trying to drive the price away from the value, whereas influencing the value of assets is often on some level the essential job of the issuer of those assets. While "value" is even more poorly defined than "price", this motivation is frequently somewhat better defined, if perhaps hard to witness; if a company releases false information, that will affect the price and not the value, whereas a repurchase, especially one that is announced in advance and performed in a reasonably non-disruptive way, is more likely an attempt to influence the value of the asset, which is well within the range of things the company ought to be doing.
Wednesday, September 30, 2015
prices and market liquidity
Assets don't have prices; trades have prices, and offers to trade have prices. When a market has a lot of offers to buy and a lot of offers to sell at very nearly the same price, the asset can be said informally to have that price (with that level of precision), and if trades take place frequently (compared to how often the price changes by an amount on the order of the relevant precision) you can (for most purposes) reasonably cite the most recent trade price as the asset price, but in corner cases it's worth remembering that that is emergent and only partial.
Most "open-ended" mutual funds these days allow you to buy or sell an arbitrary number of shares of the mutual fund from the mutual fund itself, which (respectively) creates or destroys those shares, with the transaction executed shortly after the market close that follows the placing of the order; that is, if you place an order on a weekend before a Monday holiday, your transaction takes place late Tuesday afternoon, just as if you had placed it early Tuesday afternoon. The price at which the transaction takes place is the "Net Asset Value" (NAV); the fund calculates the value of all of the assets it owns, divides it by the number of shares outstanding, and buys or sells the shares at that per-share value. For large-cap stock mutual funds, this almost always works quite smoothly, and it's extraordinary for it to work poorly; the assets have closing auctions on various stock exchanges that tend to be fairly competitive and result in official "closing" prices that are fairly unbiased and accurate predictors of the price at which the asset may next be bought or sold (or, indeed, the prices at which they may have been bought or sold earlier that afternoon). These assets have prices to a sufficient extent to make this rule work.
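As a minimal sketch of the NAV mechanics just described (with invented holdings, prices, and share counts, not data from any real fund):

```python
# NAV as described above: the total value of the fund's assets divided
# by the number of fund shares outstanding. All numbers are invented
# for illustration.
def net_asset_value(holdings, closing_prices, shares_outstanding):
    """holdings: {asset: quantity}; closing_prices: {asset: closing-auction price}."""
    total_value = sum(qty * closing_prices[asset] for asset, qty in holdings.items())
    return total_value / shares_outstanding

holdings = {"AAA": 1000, "BBB": 500}           # shares of each stock the fund holds
closing_prices = {"AAA": 10.00, "BBB": 20.00}  # official closing prices
nav = net_asset_value(holdings, closing_prices, shares_outstanding=2000)
# (1000*10.00 + 500*20.00) / 2000 = 10.00 per fund share
```

Any order placed before the close transacts at this figure, computed from that day's closing prices, which is why the rule works well exactly when those closing prices are meaningful.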
There has been increasing concern lately about other kinds of funds, which own assets that do not have very well-defined prices, and here's an example of a fund whose client doesn't like how it made up the prices, though I tend to think the procedure was reasonable. The client sold a very large number of shares back to the fund; the fund would have to sell assets (at the very least to get back to its usual level of cash holdings), and, to value some of the assets that were particularly hard to sell, it asked a few dealers how much it could sell them for. It got, naturally, a lower price than it would have had to pay to buy them, or than the price at which they had last traded; this resulted in a lower NAV than one of those higher prices would have.
It seems to be conventional to use the "last-traded price" in many contexts where that isn't a particularly unbiased predictor of where the asset can be sold in the future; if bonds have dropped in price over the last couple of days, and a mutual fund holds a fair number of bonds that haven't traded in that time, the "last-traded price" is an overestimate of any meaningful "price" the bond has now, and fund holders redeeming at the (inflated) NAV will be overpaid, to the disadvantage of continuing fund-holders. A bank — I think it was Barclays; I should have made a note — has indicated that, for the high-yield bond mutual funds it was looking at, this problem could be solved by letting sellers (i.e. people redeeming their mutual fund holdings) choose between a 2% discount in the price and a 30-day delay in settlement, either compensating the fund (i.e. the other, continuing investors in the fund) for the adverse selection problem or waiting until bond prices have updated.
While this is mostly intended as a solution to the adverse selection problem, it also somewhat mitigates the problem I'm noting here; a fund with 30 days' advance notice can sell assets more carefully than one trying to raise a lot of cash before 4:30 this afternoon.
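To make the seller's trade-off concrete, here is a sketch of the choice under that proposal; the 2% discount and 30-day delay come from the text, while the NAV figures (a stale, inflated NAV today versus an updated one later) are invented for illustration:

```python
# A redeeming holder either takes today's NAV less a 2% discount, or
# waits 30 days and settles at the NAV prevailing then. The discount
# compensates continuing holders when today's NAV is stale and inflated.
def redemption_proceeds(shares, nav_today, nav_in_30_days, *, immediate, discount=0.02):
    if immediate:
        return shares * nav_today * (1 - discount)
    return shares * nav_in_30_days

# Invented numbers: a stale NAV of 10.00 overstates value; prices update to 9.60.
take_discount = redemption_proceeds(100, 10.00, 9.60, immediate=True)   # about 980
wait_30_days = redemption_proceeds(100, 10.00, 9.60, immediate=False)   # 960
```

Note that if the stale NAV is inflated by more than the discount, as in these invented numbers, redeeming immediately is still the better deal for the seller, which is why a fixed discount mitigates rather than eliminates the adverse selection.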
I kind of think that funds (especially with illiquid assets) should quote bid-offer spreads that reflect the bid-offer spreads of the underlying assets; if all of the assets have bids that are 95% of their offers, the fund-wide spreads might be somewhat tighter than that, reflecting the fund's discretion in choosing which assets to sell and a level of liquidity support (cash holdings or perhaps a short-term line of credit, if that's allowed); they should probably, as with most active exchanges, depend to some degree on traded volume, so that large orders to sell would face larger discounts, and they should probably allow orders to buy to net off against orders to sell before hitting either the bid or the offer. What I'm mostly looking at is essentially a closing auction with the fund filling imbalances; as I may have noted before, open-ended funds with fees for orders that create imbalances and closed-ended funds that make markets in their own stocks kind of converge on each other, and that's what seems like the right approach to me.
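A minimal sketch of the netting idea, with an invented NAV and spread: buy and sell orders cross against each other at the NAV, and only the net imbalance trades against the fund at its quoted bid or offer:

```python
# Orders to buy and sell the fund's shares offset at NAV; the residual
# imbalance hits the fund's bid (net selling) or offer (net buying),
# so one-sided flows bear the cost of the underlying illiquidity.
def cross_at_close(buy_shares, sell_shares, nav, spread=0.02):
    bid = nav * (1 - spread / 2)
    offer = nav * (1 + spread / 2)
    matched = min(buy_shares, sell_shares)   # crossed at NAV, no spread paid
    imbalance = buy_shares - sell_shares
    imbalance_price = offer if imbalance > 0 else bid
    return matched, imbalance, imbalance_price

matched, imbalance, price = cross_at_close(1000, 1500, nav=10.00)
# 1000 shares cross at 10.00; the net 500 shares sold trade at the 9.90 bid
```

The further refinement suggested in the text, spreads that widen with order size, would amount to making `spread` a function of the imbalance rather than a constant.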
Thursday, September 24, 2015
shortcomings of mathematical modeling
Over the course of the twentieth century, accelerating in the second half, the discipline of economics became increasingly mathematical, to the chagrin of some people. I myself think it has gone too far in that direction in many ways, but I feel like some of the critiques aren't exactly on point.
One of the benefits — perhaps most of the benefit — of mathematical modeling is that it forces a precision that is often easier to avoid in purely verbal arguments. This precision in certain contexts allows one to make deductions about the behavior of the model that go beyond what is intuitive, for better or worse — most of the time it will ultimately drag intuition along with it. Certainly if you want a computer to simulate your model, it needs to be precise enough for the computer to simulate it. Further, any model in which forces counteract each other in interesting ways is going to have to be quantitative to some degree to be useful; if you want to know what effect some shock will have on the price of a good, and the shock increases both supply and demand for the good, then unless your model has enough detail to tell you which effect is bigger, you can't even tell whether the price will go up or down.
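The point about counteracting forces can be made concrete with linear supply and demand curves (all coefficients invented): a shock that raises both shifts the price up or down depending on which shift is larger, so a purely qualitative model cannot even give you the sign.

```python
# Linear demand Q = a - b*P and supply Q = c + d*P clear where
# a - b*P = c + d*P, i.e. P = (a - c) / (b + d). Raising a (demand)
# and c (supply) together moves P in a direction that depends on the
# relative sizes of the two shifts.
def equilibrium_price(a, b, c, d):
    return (a - c) / (b + d)

base = equilibrium_price(100, 1.0, 20, 1.0)                  # 40.0
demand_shift_bigger = equilibrium_price(120, 1.0, 25, 1.0)   # 47.5: price rises
supply_shift_bigger = equilibrium_price(105, 1.0, 40, 1.0)   # 32.5: price falls
```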
I think there are basically four problems one encounters with the mathematization, or perhaps four facets of a single problem; all of them are to varying degrees potential problems with verbal arguments as well, but they seem to affect mathematical arguments differently.
- It is tempting to use a model that is easy to understand rather than one that is correct.
- All models are wrong, but some models are useful; if you're studying something interesting, it is probably too complex to understand in full detail, and an economic model that requires predicting who is going to buy a donut on a given day is doomed on both fronts. The goal in producing a model (mathematical or otherwise) is to capture the important effects without making the model more complicated than necessary to do so. There are sometimes cases in which models that are too complex are put forth, but the cost there tends to be obvious: a model that is too complex won't shed much light on the phenomena being studied. The other error — leaving out details that are important but hard to include in your model — is more problematic, in that it can leave you with a model that is easy to understand but invites you to believe it tells you more about the real world than it really does.
- It can be ambiguous how models line up with reality.
- I've discussed here before the shortcomings of GDP as a welfare measure, and have given elsewhere a fuller discussion of related measures of production, welfare, and economic activity; most macroeconomic models have only a single variable that represents something like "aggregate output", and when people compare their models to data they almost always identify that variable as "GDP", which is, I think, almost always wrong. One of the proximate triggers for this post was a discussion of an inflation model that made this identification where Net Domestic Product was probably the better measure; if you're comparing data from the seventies to data today — before and after much "fixed investment" became computers and software instead of longer-duration assets — one isn't necessarily a particularly good proxy for the other. Similarly, models will tend to have "an interest rate", "an inflation rate", etc., and it's not clear whether you should use T-bills, LIBOR, or even seasoned corporate bonds for the former, or the PCE price index, the GDP deflator, or something else for the latter.
- One can write models that leave out important considerations.
- One of the principles of good argumentation — designed to seek the truth rather than score points — is that one should address one's opponents' main counterarguments. This is as true for mathematical arguments as for verbal ones. I occasionally see a paper on some topic that is the subject of active public policy debate in which the author says, "to answer the question, we built a model and evaluated the impact of different policies," and the model simply excludes the factor at the heart of the arguments of one of the two camps. Any useful model is a simplification of reality, but a useful model will necessarily include any factors that are important, and an argument (mathematical or verbal) that ignores a major counterargument (mathematical or verbal) should not be taken to be convincing.
- Initiates and noninitiates, for different reasons, may give models excessive credence.
- People who don't deeply understand models sometimes accept models that make their presenters look smart. I like to think that most of the people who produce mathematical models understand their limitations, but there is certainly a tendency in certain cases for people who have a way of understanding the world to lean too heavily on it, and there is a real tendency in academia in particular for people who have extensively studied some narrow set of phenomena to think of themselves as experts on far broader matters.
Ultimately, it may be that the best argument against excessive math in economics is that it has sometimes crowded out other ways of thinking; having some mathematical papers that are related to economics is a good thing, but if papers that do mathematics that is far removed from economics are displacing economic arguments that are hard to put in mathematical terms, then the discipline has moved well past the optimum, which almost certainly has a diversity of approaches.
Sunday, September 13, 2015
truncated proportional representation
It's been a while since I've had a voting-systems post. I'm going to propose a voting system for electing a small panel of people; it attempts to give voice to a somewhat broad range of opinions, but also allows blocs of voters to exclude candidates whom they really dislike. The original hope was that this would lead to something of a consensus panel, though that probably depends a bit on what sort of electorate you have; in some of my generic frameworks it leads to representation that is somewhat uniform but with the extremes cut off. The actual distribution is probably typically smoother than that, with centrists disproportionately elected but even somewhat extreme characters occasionally winning; still, "truncated proportional representation" seems like a useful concise name for the time being. This will be a dynamic voting system, which is to say it is to be used in an environment in which it is practical to allow voters to vote, for results to be tabulated, and for voters to change their votes. Unlike my previous such system, this one may actually invalidate some previously valid votes along the way, so that voters are to a greater degree "forced" to change their votes.
To begin, allow each voter to vote for, against, or neutral on each candidate; each candidate accrues +5 for each vote in favor and -4 for each vote against. This is generically strategically identical to approval voting, in which voters (with probability 1, under certain assumptions) will never be neutral on a candidate. After some period of time, however, a cap on the number of votes for/against is imposed; at each point in time the maximum number of votes in favor of candidates equals the maximum number of votes against candidates, and that maximum is gradually reduced from unlimited down to 1. Votes for more than one candidate, or against more than one candidate, will thus be dropped at some point before the final vote, but may help voters to coordinate on preferred candidates in the meantime. At the end, the panel consists of the top net vote recipients.
Note that one could well jump straight to the final vote; if an environment makes dynamic systems impractical but has other means of disseminating information, especially strategic information, the earlier phases may cost more in practicality than they add in usefulness. The dynamic mechanism is intended to increase the likelihood of convergence to a good equilibrium.
For the models I tend to use, if there are a lot more candidates than positions, many of those candidates will converge toward zero in both votes for and votes against. Some smaller number of candidates, still typically bigger than the size of the ultimate panel by at least one candidate, will remain "relevant". These candidates will tend to include a number of centrists receiving relatively few votes in favor but even fewer votes against, with a disproportionately smaller number of candidates who are more polarizing, with more votes in favor and more votes opposed.
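The final tally can be sketched as follows (ballots and candidate names are invented, and the for/against cap is assumed to have already been enforced while votes were being revised):

```python
from collections import defaultdict

# Each ballot is a (favor, against) pair of candidate sets; a candidate
# scores +5 per vote in favor and -4 per vote against, and the top net
# scorers fill the panel.
def tally(ballots, panel_size):
    scores = defaultdict(int)
    for favor, against in ballots:
        for candidate in favor:
            scores[candidate] += 5
        for candidate in against:
            scores[candidate] -= 4
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:panel_size]

ballots = [({"A"}, {"C"}), ({"A"}, set()), ({"B"}, {"A"})]
panel = tally(ballots, panel_size=2)  # A: +6, B: +5, C: -4
```

With these invented ballots the moderately polarizing A (two for, one against) and the inoffensive B both make the panel, while C, with a vote against and none in favor, does not.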