Tuesday, August 9, 2016

simplified heuristics and Bellman equations

An idea I've probably mentioned is that certain behavioral biases are perhaps simplifications that, on average, at least in the sort of environment in which the human species largely evolved, work very well.  We can write down our von Neumann / Morgenstern / Friedman / Savage axioms and argue that a decision-maker that is not maximizing expected utility (for some utility and some probability measure) is, by its own standards, making mistakes, but the actual optimization, in whatever sense it's theoretically possible with the agent's information, may be very complicated, and simple heuristics may be much more practical, even if they occasionally create some apparent inconsistencies.

Consider a standard dynamic programming (Bellman) style set-up: there's a state space, and the system moves around within the state space, with a transition function specifying how the change in state is affected by the agent's actions; the agent gets a utility that is a function of the state and the agent's action, and a rational agent attempts to choose actions to optimize not just the current utility, but the long-run utility.  Solving the problem typically involves (at least in principle) finding the value function, viz. the long-run utility associated with each state; where one action leads to a higher (immediate) utility than another but favors states that have lower long-run utility, the magnitudes of the two effects can be compared.  The value function comprises all the long-run considerations you need to make, and the decision-making process at that point is superficially an entirely myopic one, trying in the short run to optimize the value function (plus, weighted appropriately, the short-run utility) rather than the utility alone.
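A minimal sketch of that set-up, with every specific (states, transition, utilities) invented for illustration: value iteration finds the value function, after which the optimal decision at each state really is myopic.

```python
import numpy as np

# Toy problem; all numbers here are invented for illustration.
n_states, n_actions, gamma = 5, 2, 0.9
rng = np.random.default_rng(0)
utility = rng.normal(size=(n_states, n_actions))                     # u(s, a)
transition = rng.integers(0, n_states, size=(n_states, n_actions))   # T(s, a)

# Value iteration: V(s) = max_a [ u(s, a) + gamma * V(T(s, a)) ]
V = np.zeros(n_states)
for _ in range(1000):
    V = np.max(utility + gamma * V[transition], axis=1)

# With V in hand, the long-run-optimal decision is superficially myopic:
policy = np.argmax(utility + gamma * V[transition], axis=1)
```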

A problem that I investigated a couple of years ago, at least in a somewhat simple setting, was whether the reverse problem could be solved: given a value function and a transition rule, can I back out the utility function?  It turns out that, at least subject to certain regularity conditions, the answer is yes, and that it's generally mathematically easier than going in the usual direction.  So here's a project that occurs to me: consider such a problem with a somewhat complex transition rule, and suppose I can work out (at least approximately) the value function, and then I take that value function with a much simpler transition function and try to work out a utility function that gives the same value function with the simpler transition function.  I have a feeling I would tend to reach a contradiction; the demonstration that I can get back the utility function supposed that it was in fact there, and if there is no such utility function I might find that the math raises some objection.  If there is such a utility function that exactly solves the problem, of course, I ought to find it, but there seems to me at least some hope that, even if there isn't, the math along the way will hint how to find a utility function, preferably a simple one, that gives approximately the same value function.  This, then, would suggest that a seemingly goal-directed agent pursuing a comparatively simple goal would behave the same way as the agent pursuing the more complicated goal.
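To make the reverse problem concrete in the simplest possible case (all specifics invented, and utility assumed to depend only on the state, which is far more restrictive than the general problem): the Bellman equation can simply be rearranged to give the utility in terms of the value function and the transition rule.

```python
import numpy as np

# Simplest case: utility depends only on the state, so the Bellman
# equation reads
#   V(s) = u(s) + gamma * max_a V(T(s, a)),
# and given V and T the utility falls out by rearrangement.
n_states, n_actions, gamma = 6, 2, 0.9
rng = np.random.default_rng(1)
transition = rng.integers(0, n_states, size=(n_states, n_actions))
V_given = rng.normal(size=n_states)       # a posited value function

# Back out the utility function:
u_recovered = V_given - gamma * np.max(V_given[transition], axis=1)

# Sanity check: V_given satisfies the Bellman equation under u_recovered.
V_check = u_recovered + gamma * np.max(V_given[transition], axis=1)
assert np.allclose(V_check, V_given)
```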

cf. Swinkels and Samuelson (2006): "Information, evolution and utility," Theoretical Economics, 1(1): 119--142, which pursues the idea that a cost in complication in animal design would make it evolutionarily favorable for the animal to be programmed directly to seek caloric food, for example, rather than assess at each occasion whether that's the best way to optimize long-run fecundity.

Wednesday, July 27, 2016

policing police

This is a bit outside the normal bailiwick of this blog, but is the sort of off-the-wall, half-baked idea that seems to fit here at least in that way.

Police work, at least as done in modern America, requires special authority, sometimes including the authority to use force in ways that wouldn't be allowed to a private citizen.  Sometimes the police make mistakes, and it is important to create systems that reduce the likelihood of that, but allowance also needs to be made for the fact that they are human beings put in situations where they are likely to believe they lawfully have certain authority; if a police officer arrests an innocent man, the officer will face no legal repercussions, while a private citizen would, even if the private citizen had "reasonable cause" to suspect the victim.  It is appropriate that this leeway be granted, at least as regards legal repercussions; if a particular police officer shows a pattern of making serious mistakes, even clearly well-intended ones, it is just common sense[1] that that officer should be directed to more suitable employment, but being an officer trying to carry out the job in good faith should be a legal defense to criminal charges.

That extra authority, though, comes — morally if not legally — with a special duty not to intentionally abuse it.  This is the case not least because the task of police work is much more feasible where citizens largely trust that an order appearing to come from a police officer is lawful than where they don't.  A police officer in Alabama was reported, not long ago, to have sexually assaulted someone he had detained, and in a situation like that the initial crime is compounded by the societal cost of eroding the trust people have that the officer is at least trying to be on the side of the law.  This erosion of trust is also the primary reason that impersonating a police officer is a serious crime.[2]  I propose, then, upon a showing of mens rea in the commission of a serious crime by a police officer while using that office to facilitate the crime, that the officer be fired retroactively — and brought up additionally on impersonation charges.[3]

[1] I mean, it should be.  My impression is that it is too difficult to remove bad cops, but that's not an especially well-informed impression.

[2] Pressed to give secondary reasons, they would also line up pretty well between impersonating an officer and abusing the office.

[3] This policy would have an interesting relationship to the "no true Scotsman fallacy"; no true police officer would intentionally commit a heinous crime, and we'll redefine who was an officer when if we have to to make it true.

Tuesday, July 26, 2016

liquidity and efficiency of goods and services

Years ago, I went to a barber and got a haircut that took no more than five minutes.  I go with simple haircuts, and he had basically run some clippers over my head and used scissors to blend what was left.  At first, I was a bit taken aback, and thought that perhaps I should tip less than usual (and indeed wondered whether I should be charged less than usual altogether), but very quickly realized that this was perverse; the haircut I had received was not, in the context of my preferences, inferior in any way to other haircuts I have received, and I'm better off having the other (say) 15 minutes of my time to (say) squander writing blog posts on the internet.  Ceteris paribus, we both benefit from his having finished more quickly; I left my usual tip, leaving the pecuniary terms of trade unchanged from those in which we both lose more time.

Liquidity, like speed, is a benefit to both the buyer and the seller; both are a bit hard to analyze with supply and demand for this reason.  (My go-to deep neoclassical model, from Arrow-Debreu, treats a quick haircut as a different service from a slow haircut, and as such treats them as different markets, but they are such close substitutes that it's obviously useful to treat them as in some sense "almost" the same market.)  There may well be other ways in which different instances of a good or service differ in ways such that the quality that is better for the buyer is naturally better for the seller as well.  My interest especially is in market liquidity, and I wonder whether distilling out this aspect provides useful models for some of the important phenomenology around that.

Tuesday, July 12, 2016

risk and uncertainty

A century ago, an economist named Frank Knight wrote a book, Risk, Uncertainty and Profit, in which by "risk" he meant what economists generally alternate between calling "risk" and "uncertainty" today, and by "uncertainty" he meant something economists haven't given as much attention in the past seventy years, but have tended to call "ambiguity" when they do.[1]  The distinction is how well the relevant ignorance can be quantified: a coin toss is "risky", rather than "ambiguous", because we have pretty high confidence that the "right" probability is 50%, while the possibility of a civil war in a developed nation in the next ten years is perhaps better described as "ambiguous".  (The Wikipedia page on the Ellsberg paradox gives a good illustration of the distinction.)  Weather in the next few days would have been "ambiguous" when Knight wrote, but was becoming risky, and is well quantifiable these days.

Perhaps one of the reasons the study of ambiguity fell out of favor, and has largely stayed there for more than half a century since then,[2] is that a strong normative case for the assignment of probabilities to events was developed around World War II; in short, there is a set of appealing assumptions about how a person would behave that imply that they would act so as to maximize "expected utility", where "utility" is a real-valued function of the outcome of the individual's actions and "expected" means some kind of weighted average over possible outcomes.  In perhaps simpler terms, if a reasonably intelligent person who understands the theorem were presented with actions that person had taken that were not all consistent with expected utility maximization, that person would probably say, "Yeah, I must have made a mistake in one of those decisions," though it would probably still be a matter of taste as to which of the decisions was wrong.

To be a bit more concrete, suppose an entrepreneur is deciding whether or not to build a factory.  The factory is likely to be profitable under some scenarios and unprofitable under others, and the entrepreneur will not know for sure which will obtain; if certain risks are likelier than some threshold, though, building the factory will have been a bad idea, and if they're less likely, then it will have been a good idea.  Whether or not the factory is built, then, implies at least a range of probabilities that the entrepreneur must impute to the risks; an entrepreneur making other decisions that are bad for any of those probabilities is making a mistake somewhere, such that changing multiple decisions guarantees a better outcome, though which decision(s) should be changed may still be up for debate (or reasoned assessment).  The rejoinder, then, to the assertion that a probability can't be put on a particular event, is that often probabilities are, at least implicitly, being put on unquantifiable events; it is certainly not necessarily the case that the best way to make those decisions is to start by trying to put probabilities on the risks, but it probably is worth trying to make sure that there is some probabilistic outlook that is consistent with the entire schedule of decisions, and, if there isn't, to consider which decisions are likely to be in error.[3]
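To put invented numbers on the factory example: suppose the factory earns 5 if the risk doesn't materialize and loses 10 if it does; the decision to build then implies an imputed probability on the risk below a computable threshold.

```python
# Invented payoffs: the factory earns 5 if the risk does not materialize
# and loses 10 if it does.  Building is a good idea exactly when
#   -10 * p + 5 * (1 - p) > 0,  i.e.  p < 5 / (5 + 10) = 1/3.
gain, loss = 5.0, 10.0
p_star = gain / (gain + loss)    # break-even probability of the bad scenario

def build_is_good(p):
    return -loss * p + gain * (1 - p) > 0

# Observing that the factory was built implies the entrepreneur's
# implicit probability on the risk is below the threshold.
assert build_is_good(0.2) and not build_is_good(0.4)
```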

There is a class of situations, though, in which something that resembles "ambiguity aversion" makes a lot of sense, and that is being asked to (in some sense) quote a price for a good in the face of adverse selection.  If, half an hour after a horse race, you remark to someone "the favored horse probably won," and she says, "You want to bet?", then, no, you don't.  In general, I should suppose that other people have some information that I don't, and if I expect that they have a lot of information that I don't, then my assessment of the value of an item or the probability of an event may be very different if I condition on some of their information than if I don't; if I set a price at which I'm willing to sell, and can figure out from the fact that someone is willing to buy at that price that I shouldn't have sold at that price, I'm setting the price too low, even if it's higher than I initially think is "correct".
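A deliberately extreme toy version of the horse-race story, with all parameters invented: the counterparty offers the bet only when she knows the favorite lost, so conditioning on the offer completely reverses my assessment.

```python
import numpy as np

# Toy adverse selection: I believe the favorite won with probability 0.7;
# the other party saw the race with probability 0.9 and offers the bet
# only when she knows the favorite lost.  All parameters invented.
rng = np.random.default_rng(3)
n = 100_000
favorite_won = rng.random(n) < 0.7
she_saw_race = rng.random(n) < 0.9
offers_bet = she_saw_race & ~favorite_won

# Unconditionally, my claim is usually right...
assert abs(favorite_won.mean() - 0.7) < 0.01
# ...but conditional on the offer, I lose the bet every time.
assert favorite_won[offers_bet].mean() == 0.0
```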

In a lot of contexts in which people seem to be avoiding "ambiguity", this may well fit a model of a certain willingness to accept others' probability assessments; e.g. I'm not willing to bet at any price on a given proposition because, conditional on others' assessments, my assessment is very close to theirs.

[1] There's a nonzero chance that I have his terms backward, but that nonzero chance is hard to quantify; in any case, the concepts here are what they are, and I'll try to keep my own terminology mostly consistent with itself.

[2] I'm pretty sure Pesendorfer and/or Gul, both of whom I believe are or were at Princeton, have produced some models since the turn of the millennium attempting to model "ambiguity aversion", and I should probably read Stauber (2014): "A framework for robustness to ambiguity of higher-order beliefs," International Journal of Game Theory, 43(3): 525--550.  This isn't quite my field.

[3] In certain institutional settings, certain seemingly unquantifiable events may be very narrowly pinned down; I mostly have in mind events that are tied to options markets.  If a company has access to options markets on a particular event, it is likely that there is a probability above which not buying (more) options is a mistake, and another below which not selling (more) options is a mistake, and those probabilities may well be very close to each other.  If you think you should build a factory, and the options-implied probability suggests you shouldn't, buying the options instead might strictly dominate building the factory; if you think you shouldn't and the market thinks you should, your best plan might be to build the factory and sell the options.

liquidity and coordination

A kind of longish article from three years ago tells the story of a wire-basket maker adapting his company in response to foreign competition.  One of the responses is to serve clientele with specific, urgent needs:
"The German vendor had made this style of product for them for over 20 years," says Greenblatt, "and quoted them four months to make the new version." Marlin said it could do the job in four weeks. And it delivered. "If a car company doing a model-year changeover can get the assembly line going faster, the value of that extra three months of production is enormous," says Greenblatt. "The baskets are paid for in a couple hours."
I've described a "liquidity shock" as a sudden spike in a person's idiosyncratic short-term discount rate: a dollar today is suddenly a lot more valuable than a dollar a month or two from now. In this case, there's an incredibly steep discount rate for a real good: a basket in four weeks is a lot more valuable than a basket in four months.  Drilling in just a bit more, the origin of this is a problem of coordinating different elements of the production process: while it could have been anticipated a year earlier that some kind of basket would be needed, by the time the specifications are available, the other parts of the production plan are being implemented as well, and you need them all (with the same specifications) to come together as quickly as possible.  So here we have something of a liquidity shock created by something of a coordination problem (though neither of those words is being used exactly as I usually use them), combining two of my favorite phenomena.

Tuesday, June 14, 2016

market liquidity and price discovery

Two consequences associated by convention with market liquidity are a liquidity premium (the asset's price is higher than it would be if it didn't have a liquid market) and market efficiency (in many ways, but I have in mind at the moment price discovery — the price is more indicative of the correct "fundamental" price if the market is liquid than if it's not).  I've spent much of my life in the last couple of years formalizing the idea that the liquidity premium can be negative — if something is consumed regularly and can be stored, people may be willing to pay less if they trust their ability to buy it as they need it than if they worry that the market is unreliable — but it's worth noting a way in which market liquidity can also impede price discovery.

In [1], Camerer and Fehr note that a situation with strategic complementarity is more susceptible to irrational behavior than a situation with strategic substitutes; that is, if our action spaces can be ordered such that each of our best responses is higher the higher others' actions are, then I am likely to worry more about other people's actions than if my best response is lower the higher their actions are.  For concreteness, consider a symmetric game in which actions are real numbers and my payoff is −(a−λ<a>)², where <a> denotes the average of everyone's action and λ is some real number; for λ=1, any outcome in which everyone picks the same action is an equilibrium, and I care deeply what other people are doing, while for λ=0 my best response is 0 independent of what others are doing.  For λ=0.9, the only equilibrium is one in which everyone chooses 0, but if I think there's a general bias toward positive numbers, I may be better off choosing a positive number — and thereby contributing to that bias.  If λ<0, then if I expect a general bias, I'm better off counteracting that bias; even a relatively small number of agents who are somewhat perceptive about the biases of themselves and others will tend to move the group average close to its equilibrium value.
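A quick simulation of that game (parameters invented): for a large number of agents, each best reply is approximately λ times the others' average action, so iterated best responses contract toward the all-zeros equilibrium at rate |λ| when |λ| < 1, quickly when λ is negative and slowly when λ is near 1.

```python
import numpy as np

# Payoff -(a_i - lam * <a>)**2 with N agents; parameters invented.  For
# large N, each agent's best reply is approximately lam times the average
# of the others' actions, so the group mean decays like lam**t.
def iterate_best_response(lam, n_agents=100, n_rounds=50, seed=2):
    rng = np.random.default_rng(seed)
    a = rng.normal(loc=1.0, scale=0.1, size=n_agents)  # initial positive bias
    for _ in range(n_rounds):
        others_mean = (a.sum() - a) / (n_agents - 1)
        a = lam * others_mean                  # simultaneous best replies
    return a.mean()

# The bias decays slowly for lam = 0.9 and almost instantly for lam = -0.5.
assert abs(iterate_best_response(0.9)) < 0.01
assert abs(iterate_best_response(-0.5)) < 1e-6
```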

Now consider an asset with a secondary market; in general the value of an asset to a buyer is the value of holding it for the amount of time the buyer plans to hold it, plus the value of being able to sell it at the time and price at which the buyer expects to sell it.  In a highly liquid financial market, especially one in which a lot of the traders expect to hold their asset for a short period of time, an agent deciding whether or not to buy will base the willingness to pay very sensitively on the price at which the asset is expected to be sold some time later.  If the market becomes less liquid, it makes less sense to buy with the intention of holding for a very short period of time; the value of owning the asset is a larger fraction of the total value of buying it.  λ is still positive, but much less close to 1; I still care what other people will pay for it when I sell, but, at least as a relative matter, the value it has to me as I hold it is rather more important.  More to the point, though, I expect the buyer to whom I sell it to make a similar calculation; the price at which I am able to sell it will depend more on what I expect it to be worth to the next owner to own it, and less on what I think the next seller thinks the seller after that will pay for it.  The cycle in which we care more about 15th-order beliefs than about direct beliefs in fundamentals is more attenuated the harder the asset is to sell.
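A crude way to quantify that attenuation, with invented parameters: if the price is the holding value plus some resale weight times the expected resale price, then iterating the pricing equation forward puts weight on k-th order beliefs proportional to the resale weight raised to the k.

```python
# Invented parameters: price = holding value + resale_weight * (expected
# next price), iterated forward; the weight on k-th order beliefs about
# fundamentals is resale_weight**k.
def belief_weights(resale_weight, k_max=15):
    return [resale_weight ** k for k in range(k_max + 1)]

liquid = belief_weights(0.95)     # short holding periods, easy resale
illiquid = belief_weights(0.50)   # holding value dominates

# In the liquid market, 15th-order beliefs still carry real weight;
# in the illiquid one they are essentially irrelevant.
assert liquid[15] > 0.4       # 0.95**15 is about 0.46
assert illiquid[15] < 1e-4    # 0.50**15 is about 3e-5
```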

It's worth noting that James Tobin suggested a tax in the market for foreign exchange for reasons related to this.

Addendum: Scheinkman and Xiong (2003): "Overconfidence and Speculative Bubbles," Journal of Political Economy, 111(6): 1183--1219 seems to be relevant, too.

[1] Camerer and Fehr (2006): "When does 'Economic Man' Dominate Social Behavior?," Science, 311: 47--52

Monday, June 6, 2016

money illusion and dipping into capital

I think that I use the term "money illusion" somewhat differently from how many writers use it, though I think my use is slightly more appropriate to the ordinary use of those words separately and is a more useful and coherent phenomenon.  In either case, the essential point is that a dollar five years from now is different from a dollar now, and that mistakes can be made by decision-makers who assume otherwise.  One of the ways in which this manifests itself is in a maxim against "dipping into capital", which holds that retirees, endowed non-profits, and those people from Jane Austen novels who "had" an income of so-many-pounds per year unconnected to any employment, should only spend the "income" derived from retirement savings / endowment / wherever that money came from, and never sell down the asset.  There are surely circumstances in which that's a good rule of thumb for boundedly rational agents to avoid worse mistakes, but it seems in part to suppose that one thereby has "the same amount" of capital later as one has initially.  In solving a Ramsey problem with a perfectly liquid instrument of savings, however, there is no distinction between "principal" and "interest", and if a risk-free asset pays an interest rate that varies with time, the optimal solution will typically involve selling some of the asset to increase consumption at certain times and buying more of it at other times to save for later; the exact result will depend on other details, but even if the dollar amount of savings tends to return to some constant level over long periods of time, it is rarely optimal to keep it exactly constant all the time.
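The log-utility case makes the point starkly: in the standard finite-horizon problem with log utility, the optimal plan consumes a fraction of wealth that depends only on the time remaining, not on the interest rate, so at low rates it routinely "dips into capital".  A sketch with invented parameters:

```python
import numpy as np

# Finite-horizon Ramsey problem with log utility (parameters invented):
# maximize sum_t beta**t * log(c_t) subject to w' = (1 + r_t) * (w - c).
# The optimal plan consumes a wealth fraction that depends only on the
# time remaining, not on the interest rate:
#   c_t = w_t * (1 - beta) / (1 - beta**(T - t))
beta, T, w = 0.96, 30, 100.0
rates = 0.02 + 0.01 * np.sin(np.arange(T))   # invented time-varying rates
dipped = []
for t in range(T):
    c = w * (1 - beta) / (1 - beta ** (T - t))
    dipped.append(c > rates[t] * w)          # spending more than "income"?
    w = (1 + rates[t]) * (w - c)

# At these low rates, the optimal plan dips into capital every period.
assert all(dipped)
```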

A related phenomenon is "reaching for yield": when investors, especially bond investors, often in something like denial, view the interest rates available on safe investments as insufficient, they may become more inclined to buy the bonds of riskier companies, which tend naturally to pay higher rates of interest, until, of course, they don't.[1]  While this is often done by investors who just seemingly can't really believe that interest rates are as low as they are, and feel entitled to the interest rates that prevailed in the Carter administration, sometimes the people who do this are people drawing a line between "capital" and "income", and looking to turn a higher portion of the expected return into a form that their rules of thumb will allow them to spend.  They would often, perhaps generally, be better off buying a bond with a 3% yield and selling off 1% of their holdings each quarter[2] than buying a bond with a 7% yield so that they don't have to sell it.
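Some invented-numbers arithmetic on that comparison, ignoring the transaction-cost caveat in footnote 2:

```python
# Invented numbers, ignoring transaction costs: extract roughly 7% of a
# 100-dollar position per year from a 3% bond by selling 1% of the
# holding each quarter, rather than holding a riskier 7% bond.
holding, cash = 100.0, 0.0
for quarter in range(4):
    cash += holding * 0.03 / 4      # quarterly share of the 3% coupon
    sale = holding * 0.01           # sell 1% of the position
    cash += sale
    holding -= sale

# About 6.9 dollars of cash per 100 held, at the cost of a slowly
# shrinking position rather than extra default risk.
assert 6.5 < cash < 7.5
assert 95.0 < holding < 97.0
```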

This post is loosely triggered by a badly flawed column at wsj.com yesterday; a somewhat more coherent version of its argument, though, is that an environment of low interest rates encourages income investors to buy stocks with higher dividend yields[3] and thereby reduces investment as companies use cash to raise their dividends instead of spending it on research and development.[4]  One of my favored models of a liquidity shock, especially when thinking about things intuitively, is as a suddenly high private discount rate on cash; I suddenly need dollars very urgently, such that putting them off until next month or even perhaps next week is very costly to me in some sense, so that I'll take $1000 now instead of $1100 next week or $1300 next month; in particular, if I have an asset that I think is, in some longer-run sense, worth more than $1100, I might find liquidating it in a hurry at $1000 to be better than whatever consequences would befall me if I spend time trying to find a better price.  More generally, I've treated the ability to sell an asset as giving it some "just in case" value; it's more attractive than an otherwise similar illiquid asset because I don't know whether there will be a liquidity shock.  The ability to sell all at once if necessary, though, and the ability to sell a bit at a time according to a previously anticipated schedule, are likely at least to be closely related, if not, in presence of systemic events, to be exactly the same.
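The arithmetic implicit in those numbers: preferring $1000 now to $1100 next week implies a weekly discount rate of at least 10%, which is enormous on an annualized scale.

```python
# Preferring $1000 now to $1100 next week implies a weekly discount
# rate of at least 10%; compounded, that's an astronomical annual rate.
weekly_rate = 1100 / 1000 - 1                # 10% per week
annualized = (1 + weekly_rate) ** 52 - 1     # compounded over a year

assert abs(weekly_rate - 0.10) < 1e-9
assert annualized > 100      # i.e. more than 10,000% per year
```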

Let's incorporate the "don't dip into capital" mentality by supposing that, even if assets could be sold easily for cash, the owner won't sell them; they are, if you will, practically illiquid because of the owner's behavioral biases, even though the market exists.  With realistically incomplete markets, the marginal discount rate on a serviceable amount of cash for most people most of the time is likely to track general interest rates reasonably well; somewhere between what you get from relatively safe savings and the rate at which you could borrow if you needed to.[5]  If it weren't in that range, you'd presumably borrow or save more or less than you do.  If you're intent on putting most of your money into assets that you refuse to sell, though, the private discount rate and the general market rates have considerable room to diverge, and the conditional optimization problem requires that you discount future cash flows at your own, private discount rate — which, if the cash flows tend to be very long term, is likely to be somewhat high.[6]  If dividend yields are low, presumably because the markets think the values of stocks lie more in their payouts in the distant future than in their payouts in the next year or two, then if most of your income is from stocks that you refuse to sell, your private valuation of a stock that you're considering buying (and, we suppose, holding forever) should be driven by expected dividends discounted at that private rate — thus valuing high dividend yields at a higher fraction of their market value than low dividend yields.  The same would apply to bonds; this, then, may be a serviceable "reaching for yield" kind of model to use where we think we have agents of that sort we want to incorporate into our financial/economic model.
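A Gordon-growth sketch of that last claim, with invented numbers: two stocks with the same market price, one low-yield and growing, one high-yield and flat, valued by a never-sell investor at a higher private discount rate.

```python
# Invented numbers: the market discounts dividend streams at 5%; a
# never-sell investor discounts at a private rate of 8%.  Gordon growth:
# price = dividend / (rate - growth).
def gordon(dividend, growth, rate):
    return dividend / (rate - growth)

market_rate, private_rate = 0.05, 0.08

# Low-yield stock: dividend 1, growing 4% (market price 100, yield 1%).
low_mkt = gordon(1.0, 0.04, market_rate)
low_prv = gordon(1.0, 0.04, private_rate)
# High-yield stock: dividend 5, no growth (market price 100, yield 5%).
high_mkt = gordon(5.0, 0.0, market_rate)
high_prv = gordon(5.0, 0.0, private_rate)

# The never-sell investor values the high-yield stock at a much larger
# fraction of its market price (62.5% versus 25%).
assert abs(low_mkt - 100.0) < 1e-9 and abs(high_mkt - 100.0) < 1e-9
assert high_prv / high_mkt > low_prv / low_mkt
```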

[1] Until 30 years ago, the extra interest that risky companies paid on their bonds actually did more than make up for the risk of default by a sizeable margin; a diversified portfolio of such bonds, able to absorb losses on a few of them, would come out ahead on the extra income payments.  That gap generally tightened in the 1980s, and, while it has still been generally positive in the last 25 years, it's more a strategy that gets a reasonable premium for a risk than a reliable way to secure a high return.  That said, of course some high-yield bonds will in fact make all of their scheduled payments without default, but it shouldn't be forgotten that there's a reason they're trading at a higher yield to maturity than the safer portions of the bond market.

[2] I mean, this is false, and in an important way; corporate bonds are much more expensive to buy and sell than large-cap stocks, for example, and I'll note later in passing how illiquidity might justify the behavior to some degree.  The right approach is probably to buy the 3% bond, but not with all of your money; keep some of your money in shorter-term instruments that mature as you'll need the cash.

[3] Note that here the caveat in the previous footnote has much less force; you can sell down stock holdings somewhat gradually with relatively little in the way of transaction costs.

[4] A bit off topic, but a quick list of problems with the column: 1) it notes that companies are buying back their shares, which runs exactly counter to the idea that they're turning long-run share value into income; holders have to sell their shares to receive the payouts; 2) the corporate sector as a whole is holding a lot of cash on balance sheets right now, even after giving some of it to shareholders; a failure to engage in R&D is not plausibly due to a shortage of cash caused by shareholder payouts; 3) those low interest rates also enable most companies to borrow money cheaply, financing either payouts to shareholders or R&D or both that way, and it isn't reasonable to imagine that higher payouts to confused shareholders are anywhere near the scale needed to cancel out that effect.

[5] This range may be big compared to some things, especially in the short term, but on the scale of years will be at least reasonably well defined.

[6] E.g. if you "have an income" that is safe and expected to grow for decades, you might be motivated to smooth your consumption in time, spending more now and less later, rather than living on beans now and in luxury later, even if you had to pay a somewhat high interest rate in order to borrow to move that consumption sooner.