Friday, December 11, 2015

non-recourse unsecured debt

This is one of those ideas that is not at all well thought-out and is probably a bad idea, but is here because it struck me as interesting when it popped into my head and maybe it can inspire a better idea.

The Obama administration (I believe) has implemented an income-based repayment program for federal student loans; even if you have a lot of debt and low income, you don't have to pay more than 10% of your income toward the loans.  Student loans are special in some ways; most notoriously, to some extent on the premise that they're secured by your education which can't be repossessed, they can't generally be discharged in bankruptcy.[1]  In practice (and I assume from a formal legal standpoint) it's unsecured debt, but the ability of the lender to come after your assets if you aren't paying on the original official schedule has been curtailed.

Now, one of the nasty things about the design of our welfare systems, though it's much improved from a generation ago, is the speed at which benefits sometimes "phase out" as income goes up. Under AFDC, from 1935–1996, if you made an extra $3000 in a year, your benefits were cut by at least $2000, and for most of those 61 years your benefits were actually cut by the full $3000; there was no particular point in gaining work experience or making similar investments that might help you ultimately get out of poverty.  The successor program to AFDC is much more variable from state to state, but phase-out rates are (I believe) universally lower than 67%, usually no higher than 50%.  SNAP, however, phases out at a 30% rate, which might not seem too bad, but this means that if you earn an extra $100 and are on both programs, you may lose $50 in TANF benefits and $30 in SNAP benefits.  Some programs, like Medicaid, are even worse, where if you're $1 below the eligibility threshold, making an extra $2 can cost you your health care; Obamacare subsidies, depending on the circumstances, have a similar structure, where they will decrease gradually until you get to a certain point, but drop discontinuously to zero at a particular threshold.  These health insurance cliffs both seem like bad program design, but often even reasonably designed phase-out rules become problematic when they're phasing out together.  I've suggested that one be able to elect to split 50/50 with the IRS any portion of one's income in exchange for having it officially removed from income for all tax and welfare benefit calculations; this would provide something of a safety valve where, if one found oneself in an income range with 80% in aggregate phaseouts or just above the Medicaid cutoff, one could get 30% of the extra earned dollars back or pay a small amount to get your health insurance back.  More importantly, one would be able to take on extra work without worrying that one is risking eligibility for these programs.

My thought, then, is that perhaps there are other debts we would want to treat similarly to student loan debt, but we might want to lump together with it.  Debtors with those kinds of debt could pay 60% off the top to be at a lower income level for purposes of taxes, benefits, and also income-based debt payment plans, with $10 of each $60 earmarked for creditors, but perhaps more than that if the debtor/taxpayer is not in an extensive phaseout income range.  The debts could include student loans, fines, and judgments from courts, including back child support; in all cases, debts that one wants to see paid, but with the understanding that someone with little reason to work isn't going to be paying them off.


[1]I feel better about this rule in Chapter 7 than Chapter 13, where I would at least be inclined to allow them to be reduced, but that's a bit off topic.

Thursday, December 10, 2015

tax incidence

I've been thinking a bit about "value added" taxes, which are a substantial source of revenue in many European countries, but not in the United States.  The "value added" of a company is essentially the revenue it takes in minus the expenses it pays to other companies; equivalently, it is the profits of the company plus the money it spends paying its employees. [1]  The total production of the economy is then the sum of the "value added" by different economic units.  If markets are competitive, the price of a good is the cost of producing it, and a 20% value added tax translates ultimately to a 25% increase in the final price[2] above the other costs of production; for example, if XYZ corp sells widgets for $3 a piece, with $1 in capital costs (including depreciation), $1 in labor costs, and $1 for inputs purchased from ABC corp, and ABC has no suppliers,[3] then (if supply is inelastic) a 20% VAT tax raises the final price to $3.75, of which $2.50 is value added and $1.25 is paid to ABC; XYZ thus pays 50 cents per widget in taxes, and ABC pays 25 cents per widget in taxes.  It is hoped that the reader will see (or trust) that the result is similar when supply is elastic.

Because of the argument given, the VAT is typically viewed as equivalent to a consumption tax; it gets collected along the supply chain, but is equivalent to, in this case, a 25% tax applied at the end.  At the risk of being wrong — and, note, that is well in the spirit of this blog — it seems to me that a 20% tax on corporate profits combined with a 20% flat income tax on the workers is also equivalent.[4]  If corporations are paying a 36% tax and workers are paying a 20% tax, replacing that with a 20% corporate tax and a 20% value added tax with no income tax seems likely to be equivalent.[5]

There is always, with tax policy, the question of true economic incidence of taxes, which especially in the long-run is likely to be independent of legal incidence; one of the reasons a lot of people like corporate taxes, and a lot of wonks don't, is that it is officially paid by companies, and it's not entirely clear who the actual payers are. (Some quick searching pops up this 2005 working paper on the subject; my recollection is that recent research is unable to exclude, insofar as the question is well-defined, the proposition that about one third of it is borne each by the shareholders or owners of the company, the employees of the company, and the customers of the company, though I don't have a source for that; it probably would depend on the industry, and of course over the entire economy consumers and employees and shareholders are not remotely mutually exclusive groups.)  Possibly because of my American bias, or the fact that I'm just not in that sort of literature generally, I haven't to my recollection seen much analysis of how much of a value-added tax actually falls on consumers, ultimately, and how much is absorbed by someone else.


[1]As with many economic concepts, it gets rough around the edges, so that the first definition I gave is not entirely "equivalent" to the second.  The money a company pays to a company providing its employees health insurance wouldn't be subtracted from "value added"; that's part of paying your employees, only in-kind.  Is the money spent on air conditioning for your office an expenditure on externally-produced inputs or an implicit labor cost?  A reasonable argument could be made for the latter, but I'm sure it's never treated that way.  Whatever the definition of "value added", the tax base for the value added tax is typically closer to the former definition than the latter, so that a company pays "value added tax" on exactly that portion of its revenue on which no other company is paying "value added tax".

[2]The age old confusion about percentages rears its head here; the upshot is that the tax is 20% of the cost with taxes and is 25% of the cost without taxes, so e.g. an item that ends up costing $5 includes $1 in taxes and $4 of "other".

[3]To keep things simple.

[4]Here's where perhaps it's worth emphasizing again that in practice the two definitions given previously for "value added" are effectively very similar.

[5]On the corporate side, $1 before taxes becomes .8×.8=.64 cents after taxes, just as without the VAT.

Sunday, December 6, 2015

endowments, public commitment, and coordination

I somewhat avoid the news here, but the news hook here is pretty tangential; Mark Zuckerberg recently announced that he's giving away most of his fortune to an LLC, not a tax-deductible organization.  One of the reasons for that is the 5% rule; most tax-deductible organizations, as part of the terms of their tax classification, have to spend at least 5% of their endowments every year on expenses that are fairly directly relevant to their official charitable purpose.  I don't know whether non-profit universities are typically incorporated differently or are given an explicit exemption, but they are generally exempt; there has been some talk, at the periphery of American political discussion, of removing the exemption.  In all cases, the idea is that an organization that gets some kind of special exemption from tax laws shouldn't be allowed to simply stockpile and invest an ever increasing endowment without substantial ongoing evidence that it is serving a socially beneficial purpose.

Why do universities (and other organizations) build up endowments in the first place?  Imagine two universities, Typical University which establishes an endowment early in its existence and uses some of it over time, in addition to ongoing donations, tuition, etc., to pay for programming, and Paygo University, which happens to receive in donations each year an amount equal to what TU receives in donations plus what TU withdraws from its endowment; each then funds the same programming.  Presumably they do the same social good; the endowment that TU has at any given time is the present value of the amount of donations TU received in excess of those PU received in the past, where the discount rate is the rate-of-return on the endowment.  If TU is simply letting its endowment pile up, then it has received more in donations, while doing less good, but presumably, one hopes, has enhanced its ability to do good in the future; if TU is pulling substantial funds out of its endowment to fund programs, then there is some sense in which it has not so much taken in more donations than PU, but took them in sooner, perhaps largely taking them in at its foundation while taking in less than PU ever since.

The obvious (at least to me) reason to build up a foundation is to smooth variations in both fund-raising and expenditure; often new buildings are accompanied by special fund-raising campaigns, but there will be times (e.g. capital expenditures) when cash-flow expenses are lumpy and times (recessions, or simply random fluctuation) when contributions are lower than usual, and it makes some sense to have an endowment to smooth that out.  Even ignoring the special capital-spending campaigns (and naming rights that are often a part of that), though, this isn't nearly enough to explain the endowments at most large universities.  If they are smoothing over time and saving for precautionary reasons, they are smoothing over generations and protecting themselves from cataclysms.

It may well be that (especially large) endowments are better at investing money than the donors are, in which case it might make sense to accumulate a large endowment for that reason — the donors can be encouraged to give sooner than they would naturally, perhaps discounting at a lower private discount rate than the university's discount rate — and I neglect that possibility, except for this sentence, not because I think it's unlikely to be important but because I don't think it's as interesting as my other idea.  The other idea, though, is that smoothing over generations and cataclysms avoids coordination failures in which the various participants in a university community — donors, students, professors, and quite possibly others — worry that the university could run into trouble in several years, and thereby avoid it, initially to a small degree, but then, as the prophesy begins to fulfill itself, to an ever greater degree.  A large endowment forestalls that possibility in some ways and serves as a coordination device in others; the number itself makes the university look not just sturdy but reputable.  I wonder, in fact, whether university endowments might be an example of the "overhoarding of liquidity" that Tirole has mentioned as a theoretical possibility that is probably of little practical importance in the settings in which we think of it in those terms.

Friday, December 4, 2015

market safety

It is moderately well-known — Arrow's impossibility theorem is better known, but the Gibbard-Satterthwaite theorem is probably more apposite — that there's no ideal way to aggregate preferences into a jointly optimal outcome, so we're left making tradeoffs of different features when we design systems for coordinating group decisions, such as voting systems and market systems.  One real-world criterion that isn't even part of the impossibility results is "simplicity", partly because that can be hard to formally define; still, it is certainly the case that people process information in certain ways that work better if they find a system to be simple and intuitive than if they don't.  One of the practical consequences of this is that the revelation principle, even if useful for theoretical understanding of constraints, is in some practical sense not something that can be put into practice; the revelation principle says that the best possible aggregation system is in practice equivalent to some "strategy-proof" system, wherein each agent reports all of its private information and the mechanism is such that it is incentive-compatible for them to do so, but in practice even developing the information to report is too complex for realistic agents, and the resulting direct mechanism is often unintuitive to laymen in certain ways in order to understand the constraint.

A good example, perhaps, is the Myerson-Satterthwaite (same Satterthwaite) result for two agents trying to trade an object.  One of them owns it, and has a value of it between $10 and $20, and the other places on it a value between $10 and $20 as well.  As far as I and the buyer know, the seller's value is uniformly distributed in that range, and as far as the seller and I know, the buyer's value is uniformly distributed in that range, but the buyer and seller each know their own valuations.  How do I design an "efficient" mechanism — determining, as a function of the private values, whether and at what price the buyer buys the object?  "Efficiency" here is just measured as the private value of the agent who ends up owning the object, and I'd like to simply give it to whoever has a higher value, but because the price at which it trades would have to be a generally increasing function of the reported values, the buyer will tend to understate the value (and the seller would tend to overstate it) unless doing so substantially reduces the probability of a profitable trade.  They find a fairly generally applicable rule, even when distributions aren't uniform, and it's a bit complicated, though also elucidating; what's relevant for my purposes now, though, is that with uniform distributions it turns out to be equivalent to the nonfatuous Bayes-Nash equilibrium of the mechanism "each side states a price and, if the buyer's price is higher, trade at the midpoint."  It is not the case, in this latter mechanism, that each agent's stated price will be equal to the private value — the buyer will certainly shade low and the seller will shade high — but strategically sophisticated traders will buy in exactly the same circumstances as in the direct mechanism, and for the same prices.

Realistic agents may not be strategically sophisticated, but it's hard to tell which direction that cuts; there are human subject experiments (Kagel, Harstad, and Levin (1987): "Information Impact and Allocation Rules in Auctions with Affiliated Private Values: A Laboratory Study," Econometrica, 55(6): 1275–1304) in which subjects seem to find it harder to simply report their own value, even when they are given it, than to do the shading they're used to doing in small bilateral trade situations, and that's when they have been given their value — in the real world, agents asking themselves "how much is this worth to me?" are surely less likely to find it easy to give the right number. They aren't used to this task; they're used to (at a supermarket) deciding whether they are willing to trade at a given price or (at a bazaar, e.g. at an arts or crafts fair) to making a conservative bid.  In a lot of these situations one side or the other may gain an advantage from being better informed or more strategically sophisticated, but the gains tend to be small and not to too badly impair the interests of people who are toward the low end in information or sophistication.

Some simple mechanisms, though, do not have this property.  I've noted that my biggest problem with the Borda count is not that the best strategy isn't to list candidates in order of preference — just as I don't think you're "lying" if you offer to pay $10 for an item for which you would willingly pay $20 — but that even if all of the agents in a Borda count vote are unrealistically well-informed about strategic information, near equilibrium, if one candidate's voters are somewhat more informed than others, that candidate will generally win — essentially without regard to the candidate's popularity.  Systems like approval voting might require some strategic awareness, but once most agents are somewhat aware of what other agents' preferences are, being a lot more knowledgeable than the others only helps under exceptional circumstances.  Often it is, in fact, reasonable to expect the agents generally to be somewhat more aware of each others' preferences than the mechanism designer is, or can reasonably take into account; for example, if there are three candidates, one of whom is the last choice of 90% of voters, the Condorcet winner is likely to win a first-past-the-post vote, while an informed mechanism designer might find it awkward to publicly and formally declare the irrelevant candidate to be irrelevant.  This is a situation in which the mechanism works, and does so in part by letting voters use strategic information that the designer cannot use in a more direct fashion.

What triggered this post, though, was the concept of "spoofing" in the financial markets, and whether or not spoofing is bad. My first visceral response is that, if some agents are making inferences from the public bids and offers of other agents, it's on them if the information content of those bids and offers is other than what they think it is — even if it's other than what they think it is by the design of the people placing those bids and offers.  Let the market seek its strategic equilibrium.  With markets, perhaps the best analysis is to figure out whether this impedes the functions of the market — moving risky assets to their highest-value owners, with information discovery as part of the process of doing that — and that end may well be better served by a rule against spoofing that is nebulous around the edges but, in practice, is often not that hard to discern.  One other criterion to consider, though, is whether the strategic equilibrium that the market would find in the absence of such a rule is one in which agents would find it profitable to devote a lot of resources to gaining strategic information (as opposed to fundamental information), which, in the voting context, I consider to be one of the very most important considerations in evaluating a system.

Tuesday, November 10, 2015

policy of money as a unit of account

A lot of my conclusions here aren't much different from those of a previous discussion, but I'm going to frame/derive them slightly differently.

Suppose two agents are exogenously matched and given an exogenous date in the future for which they can construct a bilateral derivative; maybe we even require zero NPV, or maybe we allow for some cash transfer now, but either way if they're infinitely clever (and assuming e.g. that they don't anticipate future opportunities to insure before that date, etc.) then I believe the negotiations should leave the ratio of marginal costs of utility for the two agents at that date pre-visible. If we add constraints we modify that, but probably in relatively intuitive ways; if we can only condition on certain algebras of events (coarser than what would in principle be measurable at the final date), for example, then there's an expected value on each of those events that should give the same ratio, and if in some states an agent is unlikely to be able to make a payment, that agent is allowed to be better off than the other agent in that state relative to the usual ratio. Further, if there are a bunch of pairs of agents doing this, and the agents can be put into large classes, but need then to have very similar contracts, I'm probably doing more averaging over agents in each class.

I don't know whether this gets me any closer to an answer, but perhaps this is a useful framework for thinking about monetary policy as the medium of exchange consists increasingly of electronic (even interest-bearing) accounts and centrally-managed money is mostly about the unit of account. Buyers and sellers and debtors and lenders still referencing a given unit of account will tend to have certain risk similarities intraclass and differences interclass that one can try to optimize; if a surprise causes borrowers more pain than lenders, I try to weaken the unit of account, and if a surprise causes sellers more pain than buyers[1], then I try to strengthen the unit of account, and everyone ex ante looks at this and says "doing my contract (legal and explicit or customary and implicit) in this unit of account affords me a certain amount of insurance".


[1] A further note on the inclusion of "buyers" and "sellers" here: on some level this only matters for forward contracts, i.e. if we're entering an agreement to an immediate transaction there's none of this sort of uncertainty that resolves itself between the creation of the contract and its conclusion. Parties to a forward contract take on a lot of the properties of borrowers and lenders, insofar as there is a (say) dollar-denominated transfer in the future to which they've committed. Further, in principle borrowers, lenders, and parties to forward contracts can, as above, create their own risk-sharing contract. As a practical matter, of course, this is likely to be impossible to do perfectly, and it's likely that the extent to which it can be done practically leaves a lot of room for a central bank to come in and improve things. This is a usual theory-meets-practice kind of dynamic, especially in monetary theory; somewhat famously, perfect Walrasian economies don't need money, so a useful theory of money will have to figure out what parts of reality outside of Walrasian economics matters, and incomplete contracts would seem to be a biggie.

I believe, though, that more important than difficulties in contracting formally are informal contract-like substances that result from various incompletenesses in information. Buyers and sellers form long-term relationships that may be "at will" for each party, but are formed typically because one or both parties would incur some expense in looking anew for a counterparty each time a similar transaction was to take place. It seems likely to me that this would result in similar long-term dynamics to a contract, and is likely to involve prices that are sticky in some agreed-upon unit of account, whereupon a benevolent manager of that unit of account would again be trying to optimize as discussed above.

Wednesday, November 4, 2015

LQRE and quasi-strict equilibrium

A not-terribly-standard but conceivably interesting refinement of Nash equilibrium is "the rational limit of logistic quantal response equilibria".  Last night I had myself convinced that it was related to quasi-strict equilibrium; if there is a relationship between the two, though, it's complicated. The simplest precise conjectures that I don't know to be false are (1) any quasi-strict NE has the same payoff as some RLLQRE and (2) any connected set of quasi-strict NE includes a RLLQRE.  It may also be that there is a tighter relationship in 2 player games than more generally; in particular, every game has an RLLQRE, and every 2-player game has a quasi-strict NE, but not every larger game has a quasi-strict NE; it wouldn't surprise me if every RLLQRE in two-player games is quasi-strict, but that obviously can't be true in games that lack a quasi-strict equilibrium.

Monday, November 2, 2015

supply,demand, and fundamental value of assets

There is a sort of investor (who typically regards Warren Buffett as his idol) who insists that the true value of a security is the amount of cash can be expected to spin off, with some appropriate discounting of cash flows that are far off or uncertain.  When deciding whether to buy a stock, the thing to do is figure out this value, and then to buy if the price is much lower than that value, and to sell if it is above that value, and simply stay away if it is in between.  Short-term movements in the price are noise, and should simply be ignored.

There are other investors, more typical of hedge funds or (especially before Dodd-Frank) proprietary trading desks on Wall Street, who do worry about the short-term movements; "The long term is just a series of short terms," they might say, and the more prudent of them will repeat Keynes's dictum, "The market can remain irrational longer than you can remain solvent."  "The price of a stock is determined by supply and demand" is not a view against which it is easy to argue.

It seems as though something "fundamental" ought to enter the demand for a stock at at least some level, but it also seems as though new shelf offerings by companies or even the expirations of lock-up periods should reduce the equilibrium price of the stock.  How do these reconcile?

Modern asset pricing has in some ways moved closer to traditional economics, and in particular the idea that trade should (to some extent) be driven by mutual gains from trade; while many people view financial markets as zero-sum, even in the short-run that is only true ex post; different market participants may have different risk preferences, whether that means more or less risk-averse or exposed to one set of risks instead of another, and at least one socially valuable service of markets is to allow people who own a stock and find that current information implies that it is likely to be correlated with other risks they have to be paired up with people who don't own it who find that its expected return compensates them for any marginal risk it would give to their portfolios.  The value of the asset in this model is in fact the sum of the cash flows it will yield, with appropriate discounts for cash flows that are far out or uncertain — but the appropriate "discounting" depends on each agent's own risk preferences and exposures.  Even if everyone takes a Buffett-style "fundamental" approach, not everyone will agree on which stocks are "overpriced" and which are "cheap"; the various stocks, in equilibrium, will migrate to the people who are best equipped to handle the associated risks, and away from those who are least able.

If a particular stock has a risk-profile that is very different from that of any other stock — if it is not strongly correlated with any other stock, and hedges a risk for which no other stock provides a good hedge — then the agents whose risks have the lowest (signed) correlation with the stock are likely to find it uniquely valuable.  If a stock is fairly strongly correlated with many others available, those will effectively be substitutes (in the demand-theory sense of the word) for it; the demand curve for any particular stock, with the others at fixed prices, will be less elastic; the price of the stock should be relatively insensitive to its own supply.  It seems reasonable to me to think that, with realistic microstructure and transaction cost assumptions, stocks (or just about anything, really) will tend to be better substitutes for each other in the longer-run than in the shorter-run; in the short run, various frictions will more likely gum up the ability of agents to substitute between stocks, so that an exogenous, uninformative increase in supply of a stock will cause the price to drop in the short run (relative to the other stocks) and then tend back toward its previous parity, just compensating the marginal new buyer for the related transaction costs (/ liquidity provision).

Addendum: Related to this is the topic of stock buybacks. Recently some politicians have implied that stock buybacks are a sign of focus on the short-term at the expense of the long-run; if the price of the stock is its fundamental value, though, buying back stock is equivalent to issuing a dividend, and will reduce stock prices relative to what they would have been if the money had instead gone into useful investment. I think the model people have in mind is on some level not dissimilar to the one I've presented here, if perhaps less fleshed out: buying pressure lifts stock prices as people are slow to diversify out of it into similar investments, but ultimately the long-term shareholders are worse off than if real investments had been made instead.

Thursday, October 22, 2015

security lending and market structure

There is a very active market for "repo" loans, where "repo" is short for "repurchase agreement", and what are understood to be collateralized loans are structured as sales with an agreement to repurchase. If I have a bond worth $1020, and I sell it to you for $1000 today while simultaneously agreeing to buy it back from you at $1001 in one month, I am effectively borrowing $1000 at an interest rate of .1% per month; if I go bankrupt or otherwise fail to pay you back, you have the bond, and even if the value of the bond has dropped, as long as it drops less than about 2%, you're not losing any money. Similarly, you may have cash that you want to put somewhere safe with a little bit of interest for a short period of time, and go looking for that transaction; if you're initiating it, perhaps you have to lend a bit more (perhaps more than $1020) against the bond to protect the other party against your default, or perhaps you find someone looking to borrow and willing to overcollateralize the loan as usual. The terms are somewhat malleable.

Another reason for participating in the repo market is that, rather than trying to move cash onto or off of your balance sheet, you have a bond that you want to move onto or off of your balance sheet; instead of looking to borrow or lend cash, you want to borrow or lend the bond, which is just the flip-side of the same transaction. It may be possible for you to safely invest money at a (slightly) higher interest rate than you can borrow in the general repo market, but it is more often the case that particular bonds will enable you to borrow at an even cheaper rate, possibly even negative — you sell the bond for $1000 with an agreement to repurchase it for $999 a month from now. When this is the case, it is generally because some other market participants want to borrow that particular security, and are willing to pay a premium to do so. If general interest rates are .1% per month, a repurchase agreement at -.1% per month is perhaps best viewed as a loan of the security, secured by cash, with a $2 fee for borrowing the bond for a month. In fact, it is likely that you would take the $1000 and turn around and lend it back into the repo market for that .1% per month rate, on net swapping a "special" security for a "general" one on your balance sheet now, but with agreements in place to let you swap them back in a month while clearing $2 in the process.

The reason one would want to borrow a security (and would be willing to pay a lending fee — or, in some sense, to forego interest on money one is lending out — in order to do so) is sometimes that one has contractual obligations for some reason to deliver a particular security to some other party (as, for example, if I had written a call option on the bond, and it has been exercised), but most often (I believe) it is because the person looking to borrow it expects it to go down in price (or is afraid it will, and is looking to hedge the risk).  In this case, after you (formally) sell them the bond, they will sell it again; if you sell it at $1000 with an agreement to buy it back at $999 a month from now, and they sell it for $1000, but can buy it back at $998 a month from now, they clear a profit on the trade.  (Again, this may be intended to offset a loss that they expect to incur somewhere else in the case that the price does go down; whether they're making an affirmative bet on a drop in the price or hedging an existing risk doesn't make much difference here.)  In this case it seems to me that it would be more natural not to buy the bond in the first place; what they really want is the agreement to sell the bond for $999 in one month, and this is the easiest way to do that.

Up to this point I've been using a pretty plain loan structure for purposes of illustration, but repo loans (and other security loans) are often done differently.  A typical example would be that we agree on an interest rate for the loan, but not on a fixed maturity; it goes until one of us decides to terminate it, subject to a reasonable notification period, after which we close it out.  The "forward contract" now looks a little bit stranger as a forward contract, but not especially strange: the price at which it will be executed changes over time, quite possibly in a linear fashion (e.g. by 3 cents per day), and (as with the loan) will be executed at some point in the future, after one of us notifies the other that we wish to close it out.  If we decide that a haircut is appropriate, one of us may post collateral (e.g. $20) with the other.

I'm curious as to why there isn't a developed market for these agreements absent the repo market.  The answer that seems most likely to me is that the repo market serves the money lending and borrowing function that it serves, and is quite liquid; one market for lending and borrowing and another for forward contracts with a separation between the two would presumably make both markets less liquid than they can be if they're combined.  Another possibility is related to the fact that if I borrow a security from you so that I can sell it, I'm not selling it back to you; the net result to me is the same as if I managed to enter into a forward contract with you, but I'm also intermediating the sale of the bond from you to someone else. The economic exposures are similar to what would result if I had entered the forward agreement with the someone else instead of with you — we could just cut you out of this — but I wonder whether there are important reasons why people looking to buy a bond want to buy the bond, rather than enter a forward contract to buy it, while people who want to hold a bond are willing to enter such a forward contract while simultaneously selling it. A possible reason for this would be related to custodial practices; in particular, retail investors may not be able to enter such forward agreements, but their brokers may be able to "lend out" securities held on their behalf (subject to various safeguards), so that the party that is institutionally capable of entering the buy side of the forward contract is also institutionally required to maintain a net zero exposure while doing so.

coordination and carbon prices

Some applied game theorists write in Nature that an international agreement to an effective price of carbon emissions would be more likely to work out than an attempt at an international cap-and-trade scheme, and their arguments seem basically right to me.  In at least some sense, though, I think we could do better.

Suppose, since this seems to be the discussion being had at the moment, that we can treat each country as a rational agent, and suppose each country knows the effective price in each other country. Let wi denote more or less the size of a country normalized so that ∑i wi = 1 and suppose each country wants to maximize Ui = aP-bP2-pi where P=∑ wipi ; this at least captures the idea that each country would like to minimize its own price but (in the relevant range) wants the world price to be higher, but subject to diminishing returns. If we each agree to set our price at c+dP, where 0<d<1, then if country i tries to cheat by δ, that reduces P and thus reduces other countries' prices as well, resulting in an overall decrease of δ/(1-d); its own cost is awiδ/(1-d)-2bPwiδ/(1-d)-δ, which is positive if (a-2bP)wi>(1-d). If you're trying to optimize ∑wi Ui and you set d=1-wi, then country i will want to comply with the optimal price as long as everyone else does.  Different countries will have different wi, and for reasons made clear in the article you really couldn't give different countries different values of d and expect it to work out well — you'd have the same problems currently encountered with the cap-and-trade negotiations.  1/3, however, is a reasonable maximum value, which suggests that setting d to at least 2/3 would substantially reduce the incentive to cheat.

I suspect two potential problems with my scheme, especially vis-a-vis theirs: 1) mine is a bit more complicated, which they note often empirically impedes agreement and coordination, and 2) mine may be more susceptible to problems with monitoring; countries would have a stronger incentive in my scheme than in theirs (depending on what enforcement mechanisms they envision) to make the rest of the world think their price is higher than it really is.

Friday, October 16, 2015

rational agents in a dynamic game

I've mentioned this before, but I'll repeat from the beginning: Consider a game in which I pick North or South, you pick Up or Down, and then I pick East or West, with each of us seeing the other's actions before choosing our own.  If I pick South, I get 30 and you get 0, regardless of our other actions.  If I pick North and you pick Down, you get 40 and I get 10 regardless of our other actions.  If we play North and Up, then I get 50 and you get 20 if I play West, and I get 30 and you get 60 if I play East.
I playYou playI getYou get
North / EastUp3060
North / WestUp5020
NorthDown1040
South300
We each have access to all of this information before playing, so you can see what will happen; if I play North and you play Up, I will play West, which gives you 20, while if I play North and you play Down, you get 40.  We therefore know that you will play Down, so I get 10 if I play North, and I get 30 if I play South, so we can work out that I will play South.

This, however, leads to a bit of a contradiction.  "If I play North and you play Up, then I will play West" entails some behavioral assumptions that, while very compelling, seem to be violated if I play North.  If I play North, regardless of what you play, your assumptions about my behavior have been violated; it is, in fact, a bit hard to reason about what will happen if I play North.  If I'm just bad at reasoning, you should probably play Down.  If you're mistaken about the true payoffs — perhaps the numbers I've listed above are dollars, and my actual preferences place some value on your payoff as well — then it might make sense to play Up, depending on what you think my actual payoffs might be.  Perhaps I'm mistaken about your payoffs (in which case you should probably choose Down).

In mechanism design, it is important to distinguish between an "outcome" and a "strategy profile" insofar as leaves on different parts of the decision tree may give the same outcome, but in the approach to game theory that does not separate those, you don't gain much from allowing for irrational behavior; given any sequence of behavior, you can choose payoffs for the resulting outcome that make that behavior rational.  The easiest way to handle this problem philosophically, then, is to treat it as being embedded in a game of incomplete information, in which agents are all rational but not quite sure about others' payoffs (or possibly even their own).  In the approach to game theory that I've been trying to take lately, though, where we look at probability distributions of actions and types where agents may have direct beliefs about other agents' actions, "rationality" becomes a constraint that is satisfied at certain points in the action/type space and not at others, and it's just as easy to suppose players are "almost rational" as that they are "almost certain" of the payoffs.  I wonder whether this would be useful; it might clean up some results in global games, by which I mostly mean results related to Carlsson and van Damme (1993): "Global Games and Equilibrium Selection," Econometrica, 61(5): 989–1018.

liquidity and stores of value

It has sometimes been asserted that money is something of an embarrassment for the economic profession; a lot of the older models especially tend to assume perfect markets (or, more typically, markets that are only imperfect in one or two ways of particular interest at a time), and perfect markets have no need for a unit of account or a medium of exchange.  One of the first models that attempted to explain money, then, was Samuelson's 1958 paper in which interest rates without money were negative, such that money provided a store of value that gave a better return than other stores of value.

I've never really appreciated this, because the idea seems wrong; there are a lot of other assets that typically (before the last 7 years) have higher returns than money that seem better in every way except for liquidity.  Surely the reason money has social value is that it provides a medium of exchange; in particular, money can be exchanged more readily than the other assets for other needed goods and services at a moment's notice.

On some level, though, this is a false distinction, or at least one that in practice is blurred.  A treasury bill maturing in three months is a great store of value for storing value from now until three months from now; it's not quite as good for storing value from now until one month from now.  Insofar as prices are stable, a dollar is a good way of storing value between now and whenever you want.  Insofar as you can sell a treasury bill for pretty much what you paid for it a month from now, it does a pretty good job, though; the market liquidity of a treasury bill makes it almost as good as cash.  A three month (non-tradable) CD is much less suitable.

If, conditional on the event that you need to make a purchase a month from now, the price at which you can sell an asset is correlated with the price of the good, that asset might actually be a better store of indeterminate-period value than dollars are.  If the correlation is weak or negative, assuming you're risk averse, it's less suitable.  If it's likely that, conditional on a sudden desire for cash, the price at which you can sell is likely to be low, it does a poor job as a tool for precautionary saving — regardless of whether the price at which it could be purchased has fallen as well.  As has been previously noted, you don't so much care, when buying a financial asset, whether the bid-offer spread will be tight when you want to sell, just what the bid per se is likely to be.

A point I've been making in some forms in various venues for a while is that the value of a store of value is affected by who the other owners and potential owners of the asset (or even its close substitutes) are; if a particular asset looks like a good store of value to a certain subset of the population, it may become a poor store of value for that subset of the population if that subset is characterized by a similar set of exposures to liquidity shocks.  If all people who have a last name starting with J face a liquidity risk that would otherwise be well hedged by the possession of a store of beanie babies that could be sold, that could well lead a large number of beanie babies to be owned by people with a last name starting with J.  If the risk materializes, we're all trying to sell our beanie babies at the same time.  If people acquire assets without considering who else owns what, this sort of "fire sale" risk develops naturally for any liquidity event that is likely to affect a substantial portion of the economy while leaving another substantial portion of the economy unscathed; some set of assets that are otherwise well-suited to protecting against the risk become concentrated where they are likely to result in a fire sale.  If the rest of the population is able and inclined to step in and buy, this problem may not be insuperable, but for most reasonable market structure models it's likely to create at least some hit, and if the asset is inherently less attractive to people whose names don't start with J than people who do, their willingness to step in may be minimal.

This, though, is as true of dollars as it is of beanie babies; possibly more so.  Dollars are only valuable insofar as someone else is willing to trade something for them when you need it.  If the residents of a particular country are all trying to spend down savings at the same time, they may find that they drive down the value of their currency to an extent commensurate with their tendency to save in assets denominated in that currency.  It is commonly suggested that Americans don't put enough of their retirement savings abroad; especially for Americans in large age cohorts, this is effectively one of the reasons to diversify globally rather than only investing domestically.

Friday, October 9, 2015

mechanism design and voting

"Voting systems" are mechanisms, but we also design mechanisms for situations that aren't usefully construed as voting; in terms of practically used mechanisms, I'm thinking especially of auctions and other allocation and matching mechanisms.  Typically these allocation mechanisms try to optimize the outcome in some sense; where centralized matching mechanisms have replaced decentralized systems, they often serve to overcome coordination problems and result in Pareto-efficient outcomes, for example, but Pareto-efficiency is famously weak, and it has been shown, for example, that different school-choice algorithms are optimal under different circumstances, even using a single ex ante expected social welfare criterion.

In the literature, there is typically a natural or convenient social welfare criterion, but in many real-life contexts, different people have different ideas about the "right" social welfare criterion — which brings us back to voting mechanisms.  Insofar as people vote on the basis of ideas at all, people vote primarily on the basis of their conception of the "social good", and only to a much smaller extent, if at all, on "self-interest".[1]  One might therefore imagine a two-step procedure in which some mechanism elicits from people their conception of "justice" or "social welfare" in the first step and then asks them for their personal preferences as to their own allocation in the second step, using a mechanism tuned to maximize the criterion selected in the first step.

It is generally the case in theory that a single combined mechanism for doing two things will perform better than multiple separate mechanisms; roughly, if you assume agents are strategic, you sometimes have to "buy off" agents to get them to reveal as much information as possible, and if you combine the mechanisms you can "buy off" the agents in one stage with compensation in the other stage, sometimes at a lower overall cost.  There's some level on which it may be useful to think of proportional representation voting schemes themselves in this way; putting aside practical reasons for them related to information-gathering and gaining buy-in from electoral minorities (avoiding e.g. criminal behavior in response to laws perceived to be invalid), one might have a higher-order desire that a committee reflect other people's preferences as well as one's own, even if bills supported by the majority and opposed by the minority are going to be passed under either system, whether by a close vote of proportionally elected representatives or a landslide vote in a chamber dominated by the electoral majority.  I suspect there might be other interesting mechanisms that join what are more clearly separate "What is our consensus social goal in terms of heterogeneous and unknown preferences?" and "What are our different preferences, and what outcome therefore maximizes the socially preferred criterion?" questions even in a purely instrumental kind of set-up.  One caveat to add before posting this, though, is that I expect the strictly "best" theoretical mechanism in this kind of situation to be weird and complex in some important ways, and thus impractical; it might elucidate more practical conjoined mechanisms, but it might turn out that the best approach in practice is to go back and use a two-stage approach in which agents can readily understand each stage.



[1] Interestingly, a lot of people know that they vote primarily on the basis of what is "right" rather than their own self-interest, but believe that most other people, especially their opponents, do not!

Wednesday, September 30, 2015

market manipulation

My most recent post invited, in a couple of places, tangents on "market manipulation".  I decided to make that a separate post.

There has been some recent suggestion (mostly by uninformed parties) that corporate stock repurchases constitute "market manipulation", i.e. an attempt to artificially drive up the price of their stock.  There is, in fact, apparently a safe harbor provision in relevant regulations dating back to 1980 that indicate that a company is presumed not to be manipulating the market in its stock provided that its purchases constitute no more than a particular fraction of the trade volume over a particular period of time, which suggests that the idea is not entirely novel.  Relatedly, over the last ten years there have been complaints that countries, especially China, are "manipulating" the market for their currencies, and there are provisions in various trade agreements that apparently forbid that, apparently also without particularly well defining it.

My visceral response to the China accusation in particular, the first time I heard it, was "of course they're manipulating the market; that's their prerogative", with a bit of surprise that it was considered untoward; under certain circumstances intervention in the foreign exchange market seems like the easiest way to implement monetary policy, and I kind of think the US should have tried buying up assets in Iceland, India, Australia, and New Zealand at various times in the last seven years when the currencies of those countries have had moderately high interest rates.  The purpose here is not to create prices that are incorrect; it's to change the underlying value of the currency. The same is true of stock repurchases, at least classically; repurchase of undervalued shares increases the value per share of the remaining shares, increasing their value, with the price rising as a consequence.

This is basically the distinction I make in these contexts; I have some sympathy for rules against trying to drive the price away from the value, whereas influencing the value of assets is often on some level the essential job of the issuer of those assets.  While "value" is even more poorly defined than "price", this motivation is frequently somewhat better defined, if perhaps hard to witness; if a company releases false information, that will affect the price and not the value, whereas a repurchase, especially one that is announced in advance and performed in a reasonably non-disruptive way, is more likely an attempt to influence the value of the asset, which is well within the range of things the company ought to be doing.

prices and market liquidity

Assets don't have prices; trades have prices, and offers to trade have prices.  When a market has a lot of offers to buy and a lot of offers to sell at very nearly the same price, the asset can be said informally to have that price (with that level of precision), and if trades take place frequently (compared to how often the price changes by an amount on the order of the relevant precision) you can (for most purposes) reasonably cite the most recent trade price as the asset price, but in corner cases it's worth remembering that that is emergent and only partial.

Most "open-ended" mutual funds these days allow you to buy or sell an arbitrary number of shares of the mutual fund from the mutual fund itself, which (respectively) creates or destroys those shares and sells them shortly after the market closing that follows the placing of the order; that is, if you place an order on a weekend before a Monday holiday, your transaction takes place late Tuesday afternoon, at the same time as if you place it early Tuesday afternoon.  The price at which the transaction takes place is the "Net Asset Value" (NAV); the fund calculates the price of all of the assets it owns, divides it by the number of shares outstanding, and buys or sells the shares pro rata.  For large cap stock mutual funds, this works quite smoothly almost always, and it's extraordinary for it to work poorly; the assets have closing auctions on various stock exchanges that tend to be fairly competitive and result in official "closing" prices that are fairly unbiased and accurate predictors of the price at which the asset may next be bought and/or sold (or, indeed, the prices at which they may have been bought or sold earlier that afternoon).  These assets have prices to a sufficient extent to make this rule work.

There has been increasing concern lately about other kinds of funds, which own assets that do not have very well-defined prices, and here's an example of a fund whose client doesn't like how it made up the prices, but I tend to think the procedure was reasonable. The client sold a very large amount of shares back to the fund — the fund would have to sell assets (at the very least to get back to its usual level of cash holdings), and to value some of the assets that were particularly hard to sell, asked a few dealers how much it could sell them for. They got, naturally, a lower price than they would have had to pay to buy them, or the price at which they last traded; this resulted in a lower NAV than one of those higher prices would have.

It seems to be conventional to use "last-traded price" in many contexts where that isn't a particularly unbiased predictor of where the asset can be sold in the future; if bonds have dropped in price over the last couple days, and a mutual fund has a fair number of bonds that haven't traded in that time, the "last-traded price" is an overestimate of any meaningful "price" that the bond has now, and fund holders redeeming at the (inflated) NAV will be overpaid, to the disadvantage of continuing fund-holders. A bank — I think it was Barclays; I should have made a note — has indicated that, for the high-yield bond mutual funds it was looking at, this problem could be solved by letting sellers (i.e. people redeeming their mutual fund holdings) choose between a 2% discount in the price or a 30 day delay in settlement, either compensating the fund (i.e. the other, continuing investors in the fund) for the adverse selection problem or waiting until bond prices have updated.
While this is mostly intended as a solution to the adverse selection problem, it also somewhat mitigates the problem I'm noting here; a fund with 30 days' advanced notice can sell assets more carefully than one trying to raise a lot of cash before 4:30 this afternoon.

I kind of think that funds (especially with illiquid assets) should quote bid-offer spreads that reflect the bid-offer spreads of the underlying assets; if all of the assets have bids that are 95% of their offers, the fund-wide spreads might be somewhat tighter than that, reflecting the fund's discretion in choosing which assets to sell and a level of liquidity support (cash holdings or perhaps a short-term line of credit, if that's allowed); they should probably, as with most active exchanges, depend to some degree on traded volume, so that large orders to sell would face larger discounts, and they should probably allow orders to buy to net off against orders to sell before hitting either the bid or the offer.  What I'm mostly looking at is essentially a closing auction with the fund filling imbalances; as I may have noted before, open-ended funds with fees for orders that create imbalances and closed-ended funds that make markets in their own stocks kind of converge on each other, and that's what seems like the right approach to me.

Thursday, September 24, 2015

shortcomings of mathematical modeling

Over the course of the twentieth century, accelerating in the second half, the discipline of economics became increasingly mathematical, to the chagrin of some people.  I myself think it has gone too far in that direction in many ways, but I feel like some of the critiques aren't exactly on point.

One of the benefits — perhaps most of the benefit — of mathematical modeling is that it forces a precision that is often easier to avoid in purely verbal arguments. In certain contexts this precision allows one to make deductions about the behavior of the model that go beyond what is intuitive, for better or worse — most of the time it will ultimately drag intuition along with it.  Certainly if you want a computer to simulate your model, it needs to be precise enough for the computer to simulate it.  Further, any model in which forces are counteracting each other in interesting ways is going to have to be quantitative to some degree to be useful; if you want to know what effect some shock will have on the price of a good, and the shock increases both supply and demand for the good, then unless your model has enough detail to say which effect is bigger, you can't even tell whether the price will go up or down.
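As a worked example of that last point (the linear functional forms are my own choice, made for simplicity): with demand $q = a - bp$ and supply $q = c + dp$, the equilibrium price is $p^* = (a - c)/(b + d)$, so a shock that raises demand by $\Delta a$ and supply by $\Delta c$ moves the price by
$$\Delta p^* = \frac{\Delta a - \Delta c}{b + d},$$
whose sign depends entirely on which shift is bigger; a model without magnitudes cannot even sign the effect.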

I think there are basically four problems one encounters with the mathematization, or perhaps four facets of a single problem; all of them are to varying degrees potential problems with verbal arguments as well, but they seem to affect mathematical arguments differently.
It is tempting to use a model that is easy to understand rather than one that is correct.
All models are wrong, but some models are useful; if you're studying something interesting, it is probably too complex to understand in full detail, and an economic model that requires predicting who is going to buy a donut on a given day is doomed on both fronts: it will be neither correct nor easy to understand. The goal in producing a model (mathematical or otherwise) is to capture the important effects without making the model more complicated than necessary to do so.  There are cases in which models that are too complex are put forth, but the cost there tends to be obvious; a model that is too complex won't shed much light on the phenomena being studied.  The other error — leaving out details that are important but hard to include in your model — is more problematic, in that it can leave you with a model that can be understood and that invites you to believe it tells you more about the real world than it really does.
It can be ambiguous how models line up with reality.
I've discussed here before the shortcomings of GDP as a welfare measure, and have elsewhere a fuller discussion of related measures of production, welfare, and economic activity. Most macroeconomic models will have only a single variable that represents something like "aggregate output", and when people try to compare their models to data they almost always identify that variable with "GDP", which I think is almost always wrong; one of the proximate triggers for this post was a discussion of an inflation model that made this identification where Net Domestic Product was probably the better measure, and if you're comparing data from the seventies to data today — before and after much "fixed investment" became computers and software instead of longer-duration assets — one isn't necessarily a particularly good proxy for the other. Similarly, models will tend to have "an interest rate", "an inflation rate", etc., and it's not clear whether you should use T-bills, LIBOR, or even seasoned corporate bonds for the former, or the PCE price index, the GDP deflator, or something else for the latter.
One can write models that leave out important considerations.
One of the principles of good argumentation — designed to seek the truth rather than score points — is that one should address one's opponents' main counterarguments. This is as true for mathematical arguments as for verbal ones. I occasionally see a paper on some topic that is the subject of active public policy debate in which the author says, "to answer the question, we built a model and evaluated the impact of different policies," and the model simply excludes the factor at the heart of the arguments of one of the two camps. Any useful model is a simplification of reality, but a useful model will necessarily include any factors that are important, and an argument (mathematical or verbal) that ignores a major counterargument (mathematical or verbal) should not be taken to be convincing.
Initiates and noninitiates, for different reasons, may give models excessive credence.
People who don't deeply understand models sometimes accept models that make their presenters look smart. I like to think that most of the people who produce mathematical models understand their limitations, but there is certainly a tendency in certain cases for people who have a way of understanding the world to lean too heavily on it, and there is a real tendency in academia in particular for people who have extensively studied some narrow set of phenomena to think of themselves as experts on far broader matters.
As I noted, these problems can be present to some degree even in non-mathematical arguments; certainly Keynes talked about "interest rates" without always specifying which ones, he made tacit assumptions that were crucial to his predictions and weren't well spelled out, and he seems to have been very confident about predictions that haven't always panned out, all without mathematics.  (To the extent that we learn mathematical "Keynesian" economics in introductory macroeconomics, the math was largely introduced by Hicks a few years later attempting to make Keynes's arguments clearer and more precise.)

Ultimately, it may be that the best argument against excessive math in economics is that it has sometimes crowded out other ways of thinking; having some mathematical papers that are related to economics is a good thing, but if papers doing mathematics that is far removed from economics are displacing economic arguments that are hard to put in mathematical terms, then the discipline has moved well past the optimum, which almost certainly involves a diversity of approaches.

Sunday, September 13, 2015

truncated proportional representation

It's been a while since I've had a voting systems post.  I'm going to propose a voting system for a small panel of people that attempts to give voice to a somewhat broad range of opinions, but also allows blocs of voters to exclude candidates whom they really dislike.  The original hope was that this would lead to something of a consensus panel, though that probably depends a bit on what sort of electorate you have; in some of my generic frameworks it leads to representation that is somewhat uniform but with the extremes cut off; the actual distribution is probably typically smoother than that, with centrists disproportionately elected but even somewhat extreme characters occasionally winning, but "truncated proportional representation" seems like a useful concise name for the time being.  This will be a dynamic voting system, which is to say it is to be used in an environment in which it is practical to allow voters to vote, for results to be tabulated, and for voters to change their votes.  Unlike my previous such system, this one may actually invalidate some previously valid votes along the way, such that voters are to a greater degree "forced" to change their votes.

To begin, allow each voter to vote for, against, or neutral on each candidate; each candidate accrues +5 for each vote in favor and -4 for each vote against.  This is generically strategically identical to approval voting, in which voters (with probability 1 under certain assumptions) will never be neutral on a candidate.  However, after some period of time, a maximum number of votes for/against is imposed; at each point in time the maximum number of votes in favor of candidates is equal to the maximum number of votes against candidates, and that maximum is gradually reduced from infinity to 1.  Votes for more than one candidate or against more than one candidate will be dropped at some point before the final vote, but may help voters to coordinate on preferred candidates in the meantime.  At the end the panel consists of the top net vote recipients.
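A sketch of the final-stage tally (ballot format and candidate names are invented for illustration; by this stage each voter has at most one vote for and one vote against):

```python
# Each ballot is (candidate_voted_for, candidate_voted_against);
# either slot may be None. Scoring: +5 per vote for, -4 per vote against.
def tally(ballots, candidates, panel_size):
    scores = {c: 0 for c in candidates}
    for vote_for, vote_against in ballots:
        if vote_for is not None:
            scores[vote_for] += 5
        if vote_against is not None:
            scores[vote_against] -= 4
    # The panel consists of the top net vote recipients.
    return sorted(candidates, key=lambda c: -scores[c])[:panel_size]

ballots = [("A", "C"), ("A", None), ("B", "C"), ("C", "A")]
print(tally(ballots, ["A", "B", "C"], panel_size=2))   # ['A', 'B']
```

The earlier, uncapped stages are the same arithmetic, just with longer lists of votes per ballot.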

Note that one could well jump straight to the final vote; if an environment makes dynamic systems impractical but has other means of disseminating information, especially strategic information, the earlier phases may be more impractical than useful.  The dynamic mechanism is intended to increase the likelihood of convergence to a good equilibrium.

For the models I tend to use, if there are a lot more candidates than positions, many of those candidates will converge toward zero in both votes for and votes against.  Some smaller number of candidates, still typically exceeding the size of the ultimate panel by at least one, will remain "relevant".  These candidates will tend to include a number of centrists receiving relatively few votes in favor but even fewer votes against, along with a smaller number of more polarizing candidates, with more votes in favor and more votes opposed.

Thursday, June 11, 2015

bond fund liquidity

There has been a lot of metaphorical ink spilled over the last year or so about liquidity in the bond market; my understanding is that for most bond issues buying and selling a small quantity incurs roughly the same transaction costs as it did ten years ago, or possibly less, but buying and selling a large quantity is a lot harder and more expensive; no single dealer will take your entire trade without a significant price penalty, and even breaking up your trade and going to multiple dealers is probably not going to get you in and out of your position at the sort of spread that was expected several years ago.  The big concern, though, seems to be with tail risk, namely
  1. at some point in some/many/all important issues, the ability of holders to sell will suddenly disappear altogether, perhaps at a point at which many holders would like to be able to sell
  2. bond funds in particular, which are substantially short a kind of "liquidity risk", will find themselves trying to liquidate a lot of bonds in a hurry in response to a spate of redemptions.
These are obviously tied to each other, and discussion of the first (more general) concern should distinguish between the closely related phenomena of "everyone trying to sell at the same time" and "suddenly nobody willing to buy". Microstructure models typically distinguish between "informed" sellers and what are often called "liquidity" sellers, and in an idealized world one might hope that a wave of people selling because they all happen to need cash would be met, without a large price movement, by a lot of people who don't need cash buying, while one might have substantially less expectation that new investors would buy in response to a wave of selling caused by the expectation of a price drop. In reality, insofar as one of the attractions of a "liquid" investment over an "illiquid" one is that it can become cash if cash is suddenly needed, if the expected value of the effective bid conditional on one's needing cash is low, one doesn't much care whether that's because the market is thin and transaction costs are high or because one's own need for cash is strongly correlated with fundamental risks underlying the asset — that is to say, this distinction is probably important to people trying to understand market dynamics, but may not matter to any individual investor.

Let's talk about bond funds, though.  Like banks, they seem to be intermediating uncomfortably between liquid liabilities and (increasingly) illiquid assets, and this works well (and even creates value) as long as the imbalances between inflows and outflows are small and gradual, but seems susceptible to a run; if I think a lot of other people are going to ask for withdrawals, leading to a heavily discounted sale of assets, I may be inclined to get out first.  It's a little bit softer here, and a little bit more like the fire sale than the bank run in that I probably don't get all of the non-run value if I sell near the beginning of a run and won't lose all of my value if I don't, but it's more like the run than the fire sale in that the fund is probably selling assets on the basis of their liquidity rather than their long-term value, and thus real value is being destroyed instead of being largely reallocated to brave long-term buyers.

One of the arcana I remember learning when I was doing Series 7 back in like 2006 or 2007 is that open-ended mutual fund shares can be transferred, though they usually aren't. "Open-ended" mutual funds are the usual kind — you expect to go to them (perhaps through a broker), give them some new money for them to invest, and get some new shares from them — shares that didn't exist before. Later you take your shares back to them, and they figure out how much they're worth, and they give you cash, and the shares disappear. In principle, unless my memory is wrong or this has changed, you could buy or sell the shares from your cousin Fred at whatever price you and he agreed on (though I'm not sure who the relevant transfer agent would be or how the transfer would otherwise be effected).  I don't know whether there is a prohibition on their being listed on an exchange; certainly it's not usually done, but I don't believe it couldn't be done if a mutual fund company thought it served some purpose.

A "closed-end" fund is an idea that has become a lot less popular in the last generation or two, and was a bit more like a traditional stock-issuing company in that it would issue stock in an IPO, invest the proceeds, and return some of the earnings to investors as dividends, while investors bought and sold its stock on an exchange; what distinguished it was only that it would "invest the proceeds" in other publicly traded securities rather than a "real business".  I don't know whether it is allowed to buy or sell its own stock in the secondary market, or under what conditions; presumably it could do a shelf offering, or announce a repurchase, and I think I saw something a month or two ago suggesting that a closed ended fund dedicated to gold had some stated policy of buying back its stock if the price fell too far below the underlying value of the gold.  In practice a lot of closed-ended funds trade at substantial discounts to the assets they own, and sometimes they trade above the value of their assets, and a case could be made for their responding to these deviations by liquidating in a strategic manner and buying back stock when it is below their NAV and selling new shares and investing the proceeds when their stock is above NAV.  In both cases one would expect a soft collar, rather than an absolute peg of the price to the underlying assets, that would take account of transaction costs; in particular, a large drop in the price of the fund shares should lead to sales of liquid assets, but not to a fire sale of illiquid assets; the level of desperation with which assets are sold could be tailored to the extent and persistence of the discount of the fund price to its assets value.

An open-ended mutual fund with listed shares could do something similar from sort of the opposite direction; in particular, it could announce that, if it is faced with a lot of redemption requests, it will start throttling them, liquidating in a responsible fashion over days or weeks but not necessarily by 4PM.  Investors who really need to cash out now can sell at some discount in the market, where more patient investors who trust that the fund will eventually liquidate assets at a decent price will buy the fund's shares. (I offered a similar idea for bank runs a few years ago.)  Similarly, when a sudden flow of investment leads to the fund's being closed to new investment (as sometimes happens), the more bullish bulls can buy at a premium on the market, and the fund can sell new shares as opportunities to deploy the cash become available.

Either or both of these seem likely to mitigate the liquidity mismatch that bond funds face; if you want to look into the details, I suspect that taxes and management fees (and the associated incentives for increasing assets under management) would be fertile sources of devils.  In principle, when one is looking to sell an illiquid asset, one faces a trade-off between the speed at which the asset can be sold and the price at which it can be sold, and that trade-off would ideally match the discount rate of the ultimate investor; how to set up the rules to make that most likely to work out in practice is a question whose answer is likely to require more care than is within the bailiwick of this blog.

It does occur to me, since starting this post, that this is related to my attempted dissertation chapter on shortages and market structure, and should probably even be incorporated into it.

Monday, May 18, 2015

strategy and theories of mind

Economists are sometimes criticized for using expected-utility maximizers in their models more because that gives precise (in some sense) predictions than because it matches the way in which individual people typically make decisions.  Economists who use expected-utility maximizers in their models sometimes respond, at least in part, "Well, yeah."  Particularly where "framing effects" seem to be important, "behavioral economics" can often give contradictory predictions, depending on how you read the situation, and at other times gives no prediction at all; there are many more ways to be irrational than to be rational, and particularly if your criticism ends at "people are sometimes irrational", it is completely useless.

While there is, in fact, at least some consistency to the way in which people deviate from traditional economic models, I think exactly this potential variety in ways to deviate — even ways to deviate just slightly — may well be important to understanding aggregate behavior.  Since at least Kreps et al. (1982)[1], it has been reasonably well-known that a population of rational agents with a very small number of irrational[2] agents embedded in it may behave very differently from a population of which all members are commonly known to be rational. In that paper the rational agents knew exactly how many people were irrational and in what way — they had, in fact, complete behavioral models of everyone, with the small caveat that they didn't know which handful of their compatriots followed the "irrational" model and which followed the "rational" model. In the real world, we have a certain sense that people generally like some things and behave in certain ways and dislike other things and don't behave in other ways, but there is far less information than even in the Kreps paper.

It is interesting, then, that in a lot of lab experiments, the deviations from the predictions of game theory take place "off-path" — that is, a lot of the deviations involve subjects responding rationally to threats that don't make sense. Perhaps the simplest example is the "ultimatum game"; two subjects are offered (say) $20, with the first subject to propose a division ("I get $15, you get $5"), and the second subject to accept or refuse the split — with both subjects walking away empty-handed if the split is refused. This is done in an anonymous way, as essentially a single, isolated interaction; gaining a reputation, in particular, is not a potential reason to refuse the offer. Different experiments in different contexts find somewhat different results, but typically the first subject proposes to keep 3/5 to 2/3 of the pot, and the responder accepts the offer at least 80% of the time. It is certainly the case that respondents will refuse offers of positive amounts of money, especially if the offer is much less than one third of the pot, but the deviation from the game-theoretic equilibrium that is most often observed is that the offeror offers "too much", in response to the (irrational) threat that the respondent will walk away from the money that is offered. This does not require that they be generous or have strong feelings about doing the "right" thing, or that they hold a universally-applicable theory of spite, only (if they are themselves "rational") that they believe that some appreciable fraction of the population is much likelier to reject a smaller (but positive) offer than a larger offer.
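To see how a small fraction of spiteful responders can rationalize generous offers, here is a toy calculation; the rejection curve is entirely my own invention, chosen only to make offers below a third of the pot increasingly risky:

```python
# Toy ultimatum game: a rational offeror facing a population in which
# some responders reject "insulting" offers. The rejection probability
# below is a made-up function of the responder's share, for illustration.
POT = 20.0

def reject_prob(offer):
    # Hypothetical: offers below 1/3 of the pot are increasingly risky.
    shortfall = max(0.0, 1/3 - offer / POT)
    return min(1.0, 3.0 * shortfall)

def expected_keep(offer):
    """Offeror's expected payoff from offering `offer` to the responder."""
    return (POT - offer) * (1.0 - reject_prob(offer))

# Search offers in $0.50 steps from $0 to $20.
best = max((expected_keep(o), o) for o in [x * 0.5 for x in range(41)])
print(best)   # (13.1625, 6.5): offer about a third, keep about two thirds
```

With this (made-up) rejection curve, the offeror's expected payoff peaks near a one-third offer, far above the minimal positive offer that subgame perfection with universally "rational" responders would predict, matching the pattern described above.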

Game theoretic agents typically have fairly concrete beliefs about the other agents' goals, and from that typically formulate fairly concrete beliefs about other agents' actions. There may be very stylized situations in which people do that, but I think people typically use heuristics to make decisions, in somewhat more reflective moments make decisions after using heuristics to guess what other agents are likely to do, and only very occasionally circumscribe those heuristics based on somewhat well-formulated ideas of what the other agents actually know or think. The reason people don't generally formulate hierarchies of beliefs is that they aren't useful; a detailed model of what somebody else thinks yet another person believes another person wants is going to be wrong, and is not terribly robust. The heuristics are less fragile, even if they don't always give the right answer either, and provide a generally simpler and more successful set of tools with which to live your life.


[1] Kreps, D., P. Milgrom, J. Roberts, and R. Wilson (1982), "Rational Cooperation in the Finitely Repeated Prisoners' Dilemma," Journal of Economic Theory, 27: 245–252.

[2] I feel the need to add here that even with their "irrational" agents, it is possible (easy, in fact) to write down a utility function that these agents are maximizing — that is, they are "rational" in the broadest sense in which economists use the term. Economists often do, sometimes in spite of protestations to the contrary, suppose not only that agents are expected utility maximizers but that they dislike "work", they don't intrinsically value their self-identity or their identity to others (though they will value the latter for instrumental reasons), etc. Often, without these further stipulations, mainstream economic models don't give "precise" predictions in the sense I asserted at the beginning of this post; in the broad sense of the term "rational", there may be a smaller set of ways to be rational than to be irrational, but there are a lot of ways to be rational as well, and restricting this set can be important if you want your model to be useful. For this post I mostly mean "rational" in the more narrow sense, but it should be noted that challenges to this sense of "rational" are much less of a threat to large swaths of economic epistemology than the systematic violations of expected utility maximization are.

Thursday, April 23, 2015

heterogeneity and aggregation

This post will have less content than usual, so I'll try to keep it short.  This post is mostly here to jog my memory if I look at it in the future.

One of my interests is in how heterogeneity has important effects on aggregate economic variables that get lost in the "representative agent" framework.  One somewhat well-known example is borrowing constraints; if they bind different agents differently, those agents may behave, in aggregate, in a way that is very different from how any single agent might be expected to behave.  There's a lot of literature on the idea that many of the homebuyers driving the recent housing price bubble were acting when they did in part because their borrowing constraints had recently been eased.  Other agents may, in this sort of model, still play a role in magnifying what would be a small bubble if it were left only to the agents whose borrowing constraints were loosened; agents bid up prices in anticipation of each other.  Depending on the response functions, a small initial impetus may be all that is needed to cause a dramatic change.  (The multiplier may even be locally infinite in some sense where you have a no-longer-stable equilibrium.)

Thursday, April 16, 2015

Interpretations of Probability

I've been doing some reading (at Stanford's philosophy portal, among other places) and thinking about the meaning of probability — well, to some large degree on-and-off for at least 15 years, but a bit more "on" in the last month again.  The page I linked to sorts concepts into three groups, which it describes as "a quasi-logical concept", "an agent's ... graded belief", and "an objective concept" that I will conflate with one of its examples, the "frequentist" idea.  My own interpretation is that these ideas form a nexus around the "subjective" and "frequentist" ideas, with the formal mathematical calculus of probability connecting them to each other in important ways.  What follows are mostly my own thoughts, though clearly building on the ideas of others; that said, I'm sure there is a lot of relevant philosophical literature that I have never seen, and even much that I have seen that I have not understood the way it was meant.

I'll start by referencing a theorem related to rational agent behavior.  The upshot is that under reasonable assumptions, rational agents behave in such a way as to maximize "expected utility", where by "reasonable" I mean not that anybody behaves that way, but that if you could demonstrate to a reasonably intelligent person that they had not behaved that way, they would tend to agree that they had made a mistake somewhere. "Utility" is some numerical assignment of values to outcomes, and "expected utility" is its "expected value" under some mathematically consistent system of probabilities. The theorem, then, is that if a person's decisions are all in some normatively-appealing sense consistent with each other, there is some way of assigning probabilities and some way of assigning "utility" values such that those decisions maximized expected utility as calculated with those probabilities and utilities.
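In symbols (this is the standard representation-theorem statement, not anything novel): the claim is that there exist a probability $p$ over states $s$ and a utility function $u$ over outcomes such that, for any two available acts $f$ and $g$, the agent chooses $f$ over $g$ exactly when
$$\sum_{s} p(s)\, u\big(f(s)\big) \;\geq\; \sum_{s} p(s)\, u\big(g(s)\big),$$
with $p$ and $u$ recovered from the pattern of choices rather than assumed in advance.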

A related result that gets a lot of use in finance is that if "the market" isn't making any gross mistakes — again, in a normatively-appealing way, but also in a way that seems likely to at least approximately hold in practice — then there is some system of probabilities and payouts such that the price of an asset is the expected value of the discounted future cash flows associated with that asset.  In finance texts it is often emphasized that this system of probabilities — often called the "risk-neutral measure" — need not be the "physical measure", and indeed most practitioners expect that it will put a higher probability on "bad outcomes" than the "physical measure" would.  The "physical measure" here is often spoken of as an objective probability system in a way that perhaps sits closer to the "frequentist" idea, but if the market functions well and is made up mostly of rational agents whose behaviors are governed by similar probability measures, the "physical measure" used in models will tend to be similar to those.  The point I like to make is that the "physical measure", in a lot of applications, turns out not to matter for finance; the risk-neutral measure is all you need.  Further, the risk-neutral measure seems philosophically clearer; it's a way of describing the prices of assets in the market, and, implicitly, even prices of assets that aren't in the market.[1] It should be noted, though, that the "physical measure" is what people prefer for econometrics, so when one is doing financial econometrics one often needs both.
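In the usual notation (a textbook statement; I'm assuming a constant discount rate $r$ for simplicity): there is some risk-neutral measure $\mathbb{Q}$ under which the price today of an asset paying $X_T$ at time $T$ is
$$P_0 = \mathbb{E}^{\mathbb{Q}}\!\left[\frac{X_T}{(1+r)^T}\right],$$
and the absence of gross mistakes guarantees that some such $\mathbb{Q}$ exists, not that it is unique (see footnote [1]).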

These contexts, in which a set of numbers on possible events has all of the mathematical properties of a probability system but need not correspond tightly to what we think of as "probability", play a role in my thinking.[2]

I think the most common definitions you would get for "probability" from the educated layman would fit into the frequentist school; the "probability" of an event is how often it would occur if you ran the same experiment many times.  Now, the law of large numbers is an inevitable mathematical consequence of just the mathematical axioms of probability; if a "draw" from a distribution has a particular value with an assigned probability, then enough independent draws will, with a probability as close to 1 as you like, give that particular value with a frequency as close to the assigned probability as you like.  If you and I assign different probabilities to the event but use the laws of probability correctly, then if we do the experiment enough times, I will think it "almost impossible" that the observed frequency will be close to your prediction, and you will think it "almost impossible" that it will be close to my prediction.  Unless one of us assigns a probability of 0 or 1, though, any result based on a finite number of repetitions cannot be completely ruled out; inferring that one of us was wrong requires at some point deciding that (say) 1×10^-25 is "practically 0" or 1−1×10^-25 is "practically 1". For any level of precision you want (but not perfect precision), and for as small a probability (but not actually 0) as you insist on before declaring a probability "practically zero", there is some finite sample size that will allow you to "practically" determine the probability with that precision. So this is how I view the "frequentist" interpretation of probability: the laws of probability are augmented by a willingness to act as though events with sufficiently low probability are actually impossible.[3]
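A quick numerical illustration (the sample size, tolerance, and threshold are arbitrary choices of mine): for a fair coin, the probability that the observed frequency of heads misses 0.5 by more than a given tolerance can be pushed below any "practically zero" threshold by taking enough flips.

```python
import math

def tail_prob(n, p, eps):
    """P(|k/n - p| > eps) for k ~ Binomial(n, p), computed in log space."""
    lo, hi = n * (p - eps), n * (p + eps)
    log_pmf = lambda k: (math.lgamma(n + 1) - math.lgamma(k + 1)
                         - math.lgamma(n - k + 1)
                         + k * math.log(p) + (n - k) * math.log(1 - p))
    return sum(math.exp(log_pmf(k)) for k in range(n + 1) if k < lo or k > hi)

# With 20,000 flips, a deviation of more than 0.05 from 0.5 is rarer
# than 1e-25: "practically impossible" at that threshold.
print(tail_prob(20_000, 0.5, 0.05))
```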

More often, my own way of thinking about probabilities is closer to the "subjective" probability; "a probability" is a measure of my uncertain belief, and the tools of probability are a set of tools for managing my ignorance.  It is necessarily a function of the information I do have; if you and I have different information, the "correct" probability to assign to an event will be different for me than for you.[4]  If one of us regularly has more (or more useful) information than the other, then one of us will almost certainly, over the course of many probability assessments, put a higher probability on the series of outcomes that actually occurs than the other will; that is to be expected, insofar as my ignorance as to whether it will rain is in part an ignorance of information that would allow me to make a better forecast.  There is a tie-in here to the frequentist interpretation as I cast it in the previous paragraph, related to the assertion, often attributed to Mark Twain, that "history doesn't repeat itself, but it rhymes": not only is it impossible to take an infinite number of independent draws from a distribution, it is impossible to take more than one with any reliability. At least sometimes, however, we may do multiple experiments that are the same as far as we know — that is, we can't tell the difference between them, aside from the result. If we count as a "repetition" those events that looked the same in terms of the information we have[5], then we might have enough "repetitions" to declare that it is "practically impossible" that the probability of an observation, conditional on the known information, lies outside of a particular range.

One last interpretation of probability, though, is on some level not to interpret probability.  (One might call this the "nihilist interpretation".)  A fair amount of the "interpretations of probability" program seems oriented around the idea that whether an event "happens" or not, or whether something is "true" or not, is readily and cleanly understood, and there is some push to get probabilities close to 0 or 1, since we feel like we understand those special cases.  We know, though, that our senses and minds are unreliable; everything we know about the world outside ourselves, we know with a probability that is, in honesty, strictly between 0 and 1.  As we get close to 0, or close to 1, as a practical matter, the remaining distance will make no practical difference — it can't.  But those parts of the world that are practically described by probabilities are in reality on a continuum with those we can practically treat as certain or impossible; all of them consistently follow the laws of mathematics and nature, with 0 and 1 as, at the very best, special cases.



[1] If there are a lot of relevant possible assets that "aren't in the market", the risk-neutral measure may not be unique, i.e. there may be several different systems of probability that are consistent with the mathematical rules of probability and market prices; the conditions for existence are more practically plausible than the conditions required for uniqueness. Sometimes you might wish to price a hypothetical asset whose price depends on which of the available risk-neutral measures you use, in which case existing prices will not fully guide you.

[2] As is noted at the Stanford link, there is some sense in which mass and volume can be made to behave according to the laws of probability; it is probably important to my philosophical development that the systems of "probability" I give in the text are closer in ineffable spirit to the common idea of "probability" than that.

[3] To some extent I'm restricting my discussion to "discrete" probability distributions to avoid having to talk much about "measurability", and to some extent I have failed here; if you flip a fair coin 100 times, any series of outcomes has a probability of less than 1×10^-25. There are 161700 different series that contain 3 heads and 97 tails; if I don't distinguish between any of those 161700 different outcomes, then the probability of that single aggregate "3 heads" event is bigger than 10^-25, even though any single way of doing it is not. If I insist on rounding the probability of each possible outcome to 0, then it is certain that an "impossible" outcome will result, but if I say "there are 101 measurable events, one for each possible number of heads," then the probability of an "impossible" outcome is extremely low (in this case, there are 6 such "impossible" outcomes, and they are, taken together, "impossible"). Ultimately you would probably want to take account of how many different events you want to distinguish when you're deciding what threshold you're rounding to 0; if you want to distinguish 10^25 different events, then a probability threshold substantially smaller than 10^-25 should be used.
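These counts are easy to check directly; a quick verification script, nothing more:

```python
from math import comb

n = 100
pmf = [comb(n, k) / 2**n for k in range(n + 1)]   # fair-coin head counts

print(comb(n, 3))        # 161700 sequences with exactly 3 heads
print(pmf[3] > 1e-25)    # True: the aggregate "3 heads" event survives
impossible = [k for k in range(n + 1) if pmf[k] < 1e-25]
print(impossible)        # [0, 1, 2, 98, 99, 100]: the 6 "impossible" events
print(sum(pmf[k] for k in impossible))   # their tiny total probability
```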

[4] In some sense, this is what the Stanford site calls "objective probability", insofar as I'm asserting a "right" and "wrong" notion. What might be a conditional probability from the standpoint of the "objective" probability idea — that is, the probability conditional on the information we know — is what I'm thinking of here as my "prior" probability, along with an assertion that what from the "objective probability" standpoint would be a "prior" probability isn't actually meaningful.

[5] This, too, is basically "measurability", which is perhaps unavoidable in any non-trivial treatment of probability, even with finite "sample spaces".

monetary policy and the theory of money

I have several dollars on top of my dresser, but most of my money (in pretty much any sense in which economists regularly use the word) exists as electronically-recorded liabilities of financial institutions.  For most of my bills, it is more convenient to pay them out of the intangible money than out of the tangible money. Supposing we can still count the zero-interest-rate environment that has persisted for more than six years as "abnormal", we have mostly shifted to a medium of exchange that pays interest, and the trajectory of technology (both information technology and financial technology) is toward more of that.

Traditional explanations of how monetary policy works often run more or less like this: the Fed controls short-term interest rates, which affect the trade-off people make between holding their money in more liquid versus less liquid forms, and if they increase the amount they have in more liquid forms they spend more.[1] As the most liquid form of money starts to pay interest at a rate that moves more or less one-to-one with other interest rates, we face something of a paradox; the interest rate is effectively zero in terms of the actual medium of exchange, and the "interest rate" that the Fed targets simply measures the rate at which the value of the dollar declines relative to that medium.

If people at that point are largely using interest-bearing deposits and funds as the actual store of value and medium of exchange in the economy, to what extent does this "dollar" whose value declines relative to it even matter?  At least at first, it can continue to serve as a unit of account.  Indeed, at this point it seems to have retained that function in the United States, even as it has largely lost the others; even where you see contracts with "indexing" of some sort, it's far more often to a price index than to something connected to interest rates per se.  Perhaps over time contracts could start to have future cash flows stipulated in terms of the amount of money that would be in a bank account at that point in time if a specified amount had been deposited at the beginning of the contract, but there's no logical reason why the new medium of exchange would need to take over this last function of money.

Thus the dollar, increasingly, serves only as a unit of account, and will maintain its relevance only if it continues to serve for many purposes as a better unit of account than some alternative.[2] What makes a good unit of account is not necessarily the same thing that makes a good store of value or medium of exchange.  To the extent that the two diverge, this new separation is in fact liberating for the Fed; it can focus on making the dollar a good unit of account, possibly allowing more volatility in its value than would be optimal if it were also a widespread store of value.

A business, for example, will typically have inputs that it purchases as it goes along, but will also require long-term inputs into the production process — a lease on a retail store, for example. (Employees, who may in principle be freely dischargeable at will, are probably in practice at least somewhat long-term inputs, due to firm-specific knowledge and training and the costs of hiring and firing.) It will also produce products that may include short-term sales, longer-term contracts to supply clients, or both. It is likely that there will be some "duration mismatch" between inputs and outputs. In each case where the company is locked into a decision years ahead of time, it risks a change in circumstances; if it is mostly selling as it goes along, it might wish to respond to an unexpected drop in demand by finding a way to cut production costs, but if it is selling mostly by long-term contract and has to buy its inputs day-to-day, it is subject to an increase in costs that it can't pass along. To the extent that it can specify prices in long-term contracts in terms of a unit of account that will drop in value if demand for its product goes down, or increase in value as competition for its supplies goes up, it will be easier for the company to responsibly engage in this business.  A central bank that is trying to optimize its currency for use as a unit of account, therefore, will tend to devalue its currency when the economy in general is slowing down and increase its value (at least relative to expectations) when the economy is especially robust.  These kinds of fluctuations in the value of actual holdings of the currency — long-term, as a store of value, or even short-term, as a medium of exchange, between the sale of one good or service and the purchase of another — will tend to make it less useful for those purposes.  In a world where the central bank doesn't have to trade off these costs against the benefits of a countercyclical unit of account, it can focus on providing a better unit of account, while the other functions of money are provided elsewhere.


[1] There are (perhaps more compelling) arguments related to intertemporal substitution as well, but note that those explanations implicate the real interest rate rather than the nominal interest rate. You therefore need a story about how inflation and interest rates are simultaneously determined, and in particular why a decision by the Fed to raise interest rates would reduce inflation expectations. These stories and explanations typically themselves come back to a "liquidity effect", so we're left with the same conundrum as the role of non-interest-bearing money atrophies.

[2] To some extent, as long as the government is using it as a unit of account — specifying tax liabilities, contract payments, and social security benefits in dollars, and even taxing the deviation between our new electronic currency and the dollar as "interest income" — it can be kept relevant by fiat.