Thursday, December 22, 2016

local optimization

Among the methodological similarities between physics and economics is the frequent use of optimization techniques; in fact, both disciplines often involve optimizing something they call a "Lagrangian", though the meaning of that word is rather different in the two subjects!  In both cases, though, there's some sense in which what is frequently sought is a partial optimum, rather than a full global optimum.

Suppose you have a marble in a bowl on a table, and you want to figure out where it goes.  Roughly speaking, you expect it to seek to lower its potential energy.  Usually, though, it will go toward the middle of the bowl, even though it would get a lower potential energy by jumping out of the bowl onto the floor.  Quantum systems tend to "do better" at finding a global minimum than classical systems; liquid helium, in its superfluid state, will actually climb out of bowls to find lower potential energy states.  Even quantum systems, though, often end up more or less in states where the first-order conditions are satisfied, rather than in states solving the actual global optimization problem.  This is perhaps most elegantly captured with path integrals: you associate a quantum mechanical amplitude with each point in your state space, make it undulate as a function of the optimand, and integrate it; wherever the optimand isn't stationary, the amplitude cancels itself out, leaving only the effect of its piling up where the optimand satisfies the first-order conditions.
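That cancellation can be illustrated numerically (a toy sketch of my own, with an arbitrary phase scale N, not anything from the physics literature): an oscillatory integral whose phase has a stationary point retains a contribution from the neighborhood of that point, while one whose phase is nowhere stationary nearly cancels away.

```python
import numpy as np

# Toy stationary-phase demonstration: integrate exp(i*N*f(x)) over [-2, 2].
# Where f'(x) != 0 the oscillation cancels itself out; the integral is
# dominated by the neighborhood of any point where f'(x) = 0.
N = 200.0
x = np.linspace(-2.0, 2.0, 200001)
dx = x[1] - x[0]

def oscillatory_integral(f_vals):
    return np.sum(np.exp(1j * N * f_vals)) * dx

with_stationary = oscillatory_integral(x**2)         # f(x) = x^2 is stationary at x = 0
no_stationary = oscillatory_integral(3.0 + 2.0 * x)  # f'(x) = 2 everywhere

# |with_stationary| comes out near sqrt(pi/N); |no_stationary| is an order
# of magnitude smaller, since only the endpoints fail to cancel.
print(abs(with_stationary), abs(no_stationary))
```

The first magnitude matches the standard stationary-phase estimate sqrt(π/N); the second is boundary residue only.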

In economics and game theory, "equilibrium" will typically maximize agents' utility functions, each subject to variation only in the corresponding agent's choice variables; externalities are, somewhat famously, left out.  I'm tempted to try to apply a path-integral technique, but in game theory in particular the optimum is often at a "corner solution" where a constraint binds, and where the optimand therefore doesn't satisfy the usual first-order conditions.  Something complicated with Lagrange multipliers might be a possibility, but I suspect the use of (misleadingly named) "smoothed utility functions" will effectively do the same thing more easily.  I might then try to integrate "near" an equilibrium, but only in the dimensions corresponding to one particular agent's choice variables.

I wonder whether I can make something useful of that.

Wednesday, December 14, 2016

dividing systems

This will be a bit different, and may well not be terribly original, but I want to think about some epistemological issues (perhaps with some practical value) associated with dividing up complex systems into parts.

In particular, suppose a complex system is parameterized by a (large) set of coordinates, and the equations of motion are such as to minimize an expression in the coordinates and their first time derivatives, as is typical of Lagrangians in physics; I'll simply refer to it as the Lagrangian going forward, though in some contexts it might be a thermodynamic potential or a utility function or the like.  A good division amounts to separating the coordinates into three subsets, say A, B, and C, where A and C at least are nonempty and the (disjoint) union of the three is the full set of coordinates of the system.  Given values of the coordinates (and their derivatives) in A and B, we can calculate the optimal value of the Lagrangian by optimizing over the values that C can take, and similarly, given B and C, we can optimize over A; this yields effective Lagrangians for (A and B) and (B and C) respectively.  Where this works best, though, is where the optimizing coordinates in C (in the first case) or A (in the second) depend only on the coordinates in B; conditional on B, A and C are independent of each other.  This works even better if B is fairly small compared to both A and C, and might in that case be quite useful even if the conditional independence is only approximate.

In general there will be many ways to write the Lagrangian as L_A + L_B + L_C + L_AB + L_BC + L_AC + L_ABC, with each term depending only on coordinates in the indicated sets, but it will be particularly useful to write a Lagrangian this way if the last two terms are small or zero.  If we are "optimizing out" the C coordinates, the effective Lagrangian for A and B is L_A + L_B + L_AB plus what we get from optimizing L_C + L_BC + L_AC + L_ABC; this will depend only on B if the terms with both A and C are absent.  Thus on some level a good decomposition of a system is one in which the Lagrangian can be written as L_AB + L_B + L_BC, where I've absorbed the previous L_A and L_C terms into the first and last terms; for given evolutions of the B variables, the A variables will optimize L_AB and the C variables will optimize L_BC, and these two optimizations can be done separately.
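A minimal numerical version of this (my own illustrative quadratic, not from the post): take scalar coordinates a, b, c with no term coupling a to c; then the c that optimizes the Lagrangian given b does not depend on a at all.

```python
import numpy as np

# L = L_AB + L_B + L_BC with L_AB = (a-b)^2, L_B = b^2, L_BC = (c-b)^2.
# No term involves both a and c, so "optimizing out" c given b is
# independent of a: conditional on B, the A and C subsystems decouple.

def best_c(a, b):
    cs = np.linspace(-5.0, 5.0, 10001)  # brute-force grid over c
    L = (a - b) ** 2 + b ** 2 + (cs - b) ** 2
    return cs[np.argmin(L)]

# The optimal c tracks b (here c = b) whatever a happens to be.
print(best_c(a=0.0, b=1.0), best_c(a=3.0, b=1.0))
```

With an L_AC or L_ABC term present, the two calls would generally return different values, and the effective Lagrangian for (A and B) would no longer depend on B alone.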

Wednesday, September 28, 2016

short sales

Matt Levine this morning writes (ctrl-F "blockchain") about what short sales would look like on a blockchain, and it's pretty straightforwardly correct; you lift the process we have now, with all the sales taking place the way they currently do, onto a blockchain, and get some of the additional transparency that comes with it.  Fungibility on the blockchain is a bit less than it is without that transparency; one of the things his passage specifically addresses is that right now, if people "own" 110% of the outstanding shares of an issue, nobody knows whether their shares are among the 10% that in some sense don't count.

One of the things that's highlighted here, though, is that the short-sale concept is perhaps not what you would create if you were designing the market system top-down from whole cloth:
Just transfer 10 shares from B to A, in exchange for a smart contract to return them, and then sell those shares from A to C over the blockchain. Easy as blockchain. C now owns the shares on the blockchain's ledger, while A also "owns" them in the sense that she has a recorded claim to get them back from B.
This is how short-selling works; if A wants to sell short, A borrows the stock from someone (B) and then sells it to someone else (C).  If you introduce brokers, the way our current system works, the actual beneficial owner B won't even know that the shares have been lent out; both B and C think they own the shares.  The big change the blockchain makes is that, at least in principle, B can see that the "shares" B owns are actually an agreement by A to deliver them in the future.

There's some sense in which the borrow and the sale are superfluous, though; the promise to (re-)deliver in the future is what you're trying to create by doing a short sale.  What you would expect, from first principles in the absence of market-structure concerns, to be the way to get there is to let C buy the shares from B while A enters a forward contract with B, or, if C is just as happy to be on the receiving end of a forward contract, to leave B out of it altogether and have a forward contract for A to deliver shares to C.  There are exchanges for stocks, and a less centralized market for lending securities, and these grew up (one and then the other) to facilitate short sales; in our current world, then, it's hard (especially for retail customers) to enter bilateral forward contracts, and the institutions for effecting the same result are set up to facilitate it in a somewhat baroque manner.  If you're moving to blockchain for settlement, and need to change the structure of the market to accommodate that, then
A blockchain would need to do something similar: let some people create new securities on the blockchain, but carefully control who gets that access.
doesn't seem to me like my first choice approach; what would make more sense to me would be a market in which buyers see offers to enter into forward contracts as well, and where the borrow gets left out altogether.

Tuesday, August 9, 2016

simplified heuristics and Bellman equations

An idea I've probably mentioned is that certain behavioral biases are perhaps simplifications that, on average, at least in the sort of environment in which the human species largely evolved, work very well.  We can write down our von Neumann / Morgenstern / Friedman / Savage axioms and argue that a decision-maker that is not maximizing expected utility (for some utility and some probability measure) is, by its own standards, making mistakes, but the actual optimization, in whatever sense it's theoretically possible with the agent's information, may be very complicated, and simple heuristics may be much more practical, even if they occasionally create some apparent inconsistencies.

Consider a standard dynamic programming (Bellman) style set-up: there's a state space, and the system moves around within the state space, with a transition function specifying how the change in state is affected by the agent's actions; the agent gets a utility that is a function of the state and the agent's action, and a rational agent attempts to choose actions to optimize not just the current utility, but the long-run utility.  Solving the problem typically involves (at least in principle) finding the value function, viz. the long-run utility that is associated with each state; where one action leads to a higher (immediate) utility than the other but favors states that have lower long-run utility, the magnitudes of the effects can be compared.  The value function comprises all the long-run considerations you need to make, and the decision-making process at that point is superficially an entirely myopic one, trying in the short-run to optimize the value function (plus, weighted appropriately, the short-run utility) rather than the utility alone.
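A minimal sketch of that setup (with made-up numbers of mine; a two-state, two-action deterministic problem): value iteration produces the value function, after which the optimal action is chosen "myopically" against short-run utility plus the discounted value of the resulting state.

```python
import numpy as np

# u[s, a]: immediate utility of action a in state s (illustrative numbers).
u = np.array([[1.0, 0.0],    # state 0: action 0 pays a little now
              [0.0, 2.0]])   # state 1: action 1 pays more
# T[s, a]: deterministic next state; action 1 always moves toward state 1.
T = np.array([[0, 1],
              [0, 1]])
gamma = 0.9  # discount factor on long-run utility

# Value iteration: repeatedly apply the Bellman operator until V converges.
V = np.zeros(2)
for _ in range(1000):
    V = np.max(u + gamma * V[T], axis=1)

# With V in hand, decision-making is superficially myopic: optimize the
# immediate utility plus the (appropriately weighted) value function.
policy = np.argmax(u + gamma * V[T], axis=1)
print(V, policy)  # V = [18, 20]; state 0 forgoes its immediate payoff of 1
```

Note that in state 0 the short-run utility favors action 0, but the value function makes the long-run comparison: 0.9·20 = 18 beats 1 + 0.9·18 = 17.2.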

A problem that I investigated a couple of years ago, at least in a somewhat simple setting, was whether the reverse problem could be solved: given a value function and a transition rule, can I back out the utility function?  It turns out that, at least subject to certain regularity conditions, the answer is yes, and that it's generally mathematically easier than going in the usual direction.  So here's a project that occurs to me: consider such a problem with a somewhat complex transition rule, and suppose I can work out (at least approximately) the value function, and then I take that value function with a much simpler transition function and try to work out a utility function that gives the same value function with the simpler transition function.  I have a feeling I would tend to reach a contradiction; the demonstration that I can get back the utility function supposed that it was in fact there, and if there is no such utility function I might find that the math raises some objection.  If there is such a utility function that exactly solves the problem, of course, I ought to find it, but there seems to me at least some hope that, even if there isn't, the math along the way will hint how to find a utility function, preferably a simple one, that gives approximately the same value function.  This, then, would suggest that a seemingly goal-directed agent pursuing a comparatively simple goal would behave the same way as the agent pursuing the more complicated goal.
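In the deterministic, single-action case the inversion is just a rearranged Bellman equation; here is a sketch with hypothetical numbers of mine (not the actual problem I investigated):

```python
import numpy as np

# Forward: V(s) = u(s) + gamma * V(nxt(s)).  Given the value function V
# and the transition rule, back out the utility function:
#   u(s) = V(s) - gamma * V(nxt(s)).
gamma = 0.9
V = np.array([18.0, 20.0])   # a posited value function
nxt = np.array([1, 1])       # transition rule: both states move to state 1

u = V - gamma * V[nxt]       # recovered utility function

# Check that the recovered u regenerates V under this transition rule.
print(u, np.allclose(V, u + gamma * V[nxt]))
```

With a choice among actions, or with the wrong transition rule, the analogous system can be overdetermined, which is the sense in which the math might "raise an objection" and point toward an approximating utility function instead.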

cf. Swinkels and Samuelson (2006): "Information, evolution and utility," Theoretical Economics, 1(1): 119--142, which pursues the idea that a cost in complication in animal design would make it evolutionarily favorable for the animal to be programmed directly to seek caloric food, for example, rather than assess at each occasion whether that's the best way to optimize long-run fecundity.

Wednesday, July 27, 2016

policing police

This is a bit outside the normal bailiwick of this blog, but is the sort of off-the-wall, half-baked idea that seems to fit here at least in that way.

Police work, at least as done in modern America, requires special authority, sometimes including the authority to use force in ways that wouldn't be allowed to a private citizen.  Sometimes the police make mistakes, and it is important to create systems that reduce the likelihood of that, but allowances also need to be made for the fact that they are human beings put in situations where they are likely to believe they lawfully have certain authority; if a police officer arrests an innocent man, the officer will face no legal repercussions, while a private citizen would, even if the private citizen had "reasonable cause" to suspect the victim.  It is appropriate that this leeway be given, at least as to legal repercussions; if a particular police officer shows a pattern of making serious mistakes, even if they are clearly well-intended, it is just common sense[1] that that officer should be directed to more suitable employment, but being an officer trying to carry out the job in good faith should be a legal defense to criminal charges.

That extra authority, though, comes — morally if not legally — with a special duty not to intentionally abuse it.  This is the case not least because the task of police work is much more feasible where the citizens largely trust that an order appearing to come from a police officer is lawful than where they don't.  A police officer in Alabama was reported, not long ago, to have sexually assaulted someone he had detained, and in a situation like that the initial crime is additional to the societal cost of eroding the trust people have that the officer is at least trying to be on the side of the law.  This erosion of trust is also the primary reason that impersonating a police officer is a serious crime.[2]  I propose, then, upon a showing of mens rea in the commission of a serious crime by a police officer using that office to facilitate the crime, that the officer be fired retroactively — and additionally brought up on impersonation charges.[3]

[1] I mean, it should be.  My impression is that it is too difficult to remove bad cops, but that's not an especially well-informed impression.

[2] Pressed for secondary reasons, I'd note that those, too, line up pretty well between impersonating an officer and abusing the office.

[3] This policy would have an interesting relationship to the "no true Scotsman" fallacy; no true police officer would intentionally commit a heinous crime, and we'll redefine who was an officer when, if we have to, to make it true.

Tuesday, July 26, 2016

liquidity and efficiency of goods and services

Years ago, I went to a barber and got a haircut that took no more than five minutes.  I go with simple haircuts, and he had basically run some clippers over my head and used scissors to blend what was left.  At first, I was a bit taken aback, and thought that perhaps I should tip less than usual (and indeed wondered whether I should be charged less than usual altogether), but very quickly realized that this was perverse; the haircut I had received was not, in the context of my preferences, inferior in any way to other haircuts I have received, and I'm better off having the other (say) 15 minutes of my time to (say) squander writing blog posts on the internet.  Ceteris paribus, we both benefit from his having finished more quickly; I left my usual tip, leaving the pecuniary terms of trade unchanged from those in which we both lose more time.

Liquidity, like speed, is a benefit to both the buyer and the seller; both are a bit hard to analyze with supply and demand for this reason.  (My go-to deep neoclassical model, from Arrow-Debreu, treats a quick haircut as a different service from a slow haircut, and as such treats them as different markets, but they are such close substitutes that it's obviously useful to treat them as in some sense "almost" the same market.)  There may well be other ways in which different instances of a good or service differ in ways such that the quality that is better for the buyer is naturally better for the seller as well.  My interest especially is in market liquidity, and I wonder whether distilling out this aspect provides useful models for some of the important phenomenology around that.

Tuesday, July 12, 2016

risk and uncertainty

A century ago, an economist named Frank Knight wrote a book, Risk, Uncertainty and Profit, in which by "risk" he meant what economists generally alternate between calling "risk" and "uncertainty" today, and by "uncertainty" he meant something economists haven't given as much attention in the past seventy years, but have tended to call "ambiguity" when they do.[1]  The distinction is how well the relevant ignorance can be quantified; a coin toss is "risky", rather than "ambiguous", because we have pretty high confidence that the "right" probability is 50%, while the possibility of a civil war in a developed nation in the next ten years is perhaps better described as "ambiguous".  (The Wikipedia page on the Ellsberg paradox is relevant here.)  Weather in the next few days would have been "ambiguous" when Knight wrote, but was becoming risky, and is well quantifiable these days.

Perhaps one of the reasons the study of ambiguity fell out of favor, and has largely stayed there for more than half a century since then,[2] is that a strong normative case for the assignment of probabilities to events was developed around World War II; in short, there is a set of appealing assumptions about how a person would behave that imply that they would act so as to maximize "expected utility", where "utility" is a real-valued function of the outcome of the individual's actions and "expected" means some kind of weighted average over possible outcomes.  In perhaps simpler terms, if a reasonably intelligent person who understands the theorem were presented with actions that person had taken that were not all consistent with expected utility maximization, that person would probably say, "Yeah, I must have made a mistake in one of those decisions," though it would probably still be a matter of taste as to which of the decisions was wrong.

To be a bit more concrete, suppose an entrepreneur is deciding whether or not to build a factory.  The factory is likely to be profitable under some scenarios and unprofitable under others, and the entrepreneur will not know for sure which will obtain; if certain risks are likelier than some threshold, though, building the factory will have been a bad idea, and if they're less likely, then it will have been a good idea.  Whether or not the factory is built, then, implies at least a range of probabilities that the entrepreneur must impute to the risks; an entrepreneur making other decisions that are bad for any of those probabilities is making a mistake somewhere, such that changing multiple decisions would guarantee a better outcome, though which decision(s) should be changed may still be up for debate (or reasoned assessment).  The rejoinder, then, to the assertion that a probability can't be put on a particular event, is that often probabilities are, at least implicitly, being put on unquantifiable events; it is certainly not necessarily the case that the best way to make those decisions is to start by trying to put probabilities on the risks, but it probably is worth trying to make sure that there is some probabilistic outlook that is consistent with the entire schedule of decisions, and, if there isn't, to consider which decisions are likely to be in error.[3]

There is a class of situations, though, in which something that resembles "ambiguity aversion" makes a lot of sense, and that is being asked to (in some sense) quote a price for a good in the face of adverse selection.  If, half an hour after a horse race, you remark to someone "the favored horse probably won," and she says, "You want to bet?", then, no, you don't.  In general, I should suppose that other people have some information that I don't, and if I expect that they have a lot of information that I don't, then my assessment of the value of an item or the probability of an event may be very different if I condition on some of their information than if I don't; if I set a price at which I'm willing to sell, and can figure out from the fact that someone is willing to buy at that price that I shouldn't have sold at that price, I'm setting the price too low, even if it's higher than I initially think is "correct".

In a lot of contexts in which people seem to be avoiding "ambiguity", this may well fit a model of a certain willingness to accept others' probability assessments; e.g. I'm not willing to bet at any price on a given proposition, because, conditional on others' assessments, my assessment is very close to theirs.

[1] There's a nonzero chance that I have his terms backward, but that nonzero chance is hard to quantify; in any case, the concepts here are what they are, and I'll try to keep my own terminology mostly consistent with itself.

[2] I'm pretty sure Pesendorfer and/or Gul, both of whom I believe are or were at Princeton, have produced some models since the turn of the millennium attempting to model "ambiguity aversion", and I should probably read Stauber (2014): "A framework for robustness to ambiguity of higher-order beliefs," International Journal of Game Theory, 43(3): 525--550.  This isn't quite my field.

[3] In certain institutional settings, certain seemingly unquantifiable events may be very narrowly pinned down; I mostly have in mind events that are tied to options markets.  If a company has access to options markets on a particular event, it is likely that there is a probability above which not buying (more) options is a mistake, and another below which not selling (more) options is a mistake, and those probabilities may well be very close to each other.  If you think you should build a factory, and the options-implied probability suggests you shouldn't, buying the options instead might strictly dominate building the factory; if you think you shouldn't and the market thinks you should, your best plan might be to build the factory and sell the options.

liquidity and coordination

A kind of longish article from three years ago tells the story of a wire-basket maker adapting his company in response to foreign competition.  One of the responses is to serve clientele with specific, urgent needs:
"The German vendor had made this style of product for them for over 20 years," says Greenblatt, "and quoted them four months to make the new version." Marlin said it could do the job in four weeks. And it delivered. "If a car company doing a model-year changeover can get the assembly line going faster, the value of that extra three months of production is enormous," says Greenblatt. "The baskets are paid for in a couple hours."
I've described a "liquidity shock" as a sudden spike in a person's idiosyncratic short-term discount rate: a dollar today is suddenly a lot more valuable than a dollar a month or two from now. In this case, there's an incredibly steep discount rate for a real good: a basket in four weeks is a lot more valuable than a basket in four months.  Drilling in just a bit more, the origin of this is a problem of coordinating different elements of the production process: while it could have been anticipated a year earlier that some kind of basket would be needed, by the time the specifications are available, the other parts of the production plan are being implemented as well, and you need them all (with the same specifications) to come together as quickly as possible.  So here we have something of a liquidity shock created by something of a coordination problem (though neither of those words is being used exactly as I usually use them), combining two of my favorite phenomena.

Tuesday, June 14, 2016

market liquidity and price discovery

Two consequences associated by convention with market liquidity are a liquidity premium (the asset's price is higher than it would be if it didn't have a liquid market) and market efficiency (in many ways, but I have in mind at the moment price discovery — the price is more indicative of the correct "fundamental" price if the market is liquid than if it's not).  I've spent much of my life in the last couple of years formalizing the idea that the liquidity premium can be negative — if something is consumed regularly and can be stored, people may be willing to pay less if they trust their ability to buy it as they need it than if they worry that the market is unreliable — but it's worth noting a way in which market liquidity can also impede price discovery.

In [1], Camerer and Fehr note that a situation with strategic complementarity is more susceptible to irrational behavior than a situation with strategic substitutes; that is, if our action spaces can be ordered such that each of our best responses is higher the higher others' actions are, then I am likely to worry more about other people's actions than I would if my best response were lower the higher others' actions are.  For concreteness, consider a symmetric game in which actions are real numbers and my payoff is -(a-λ<a>)², where <a> denotes the average of everyone's action and λ is some real number; for λ=1, any outcome in which everyone picks the same action is an equilibrium, and I care deeply what other people are doing, while for λ=0 my best response is 0 independent of what others are doing.  For λ=0.9, the only equilibrium is one in which everyone chooses 0, but if I think there's a general bias toward positive numbers, I may be better off choosing a positive number — and thereby contributing to that bias.  If λ<0, then if I expect a general bias, I'm better off counteracting that bias; even a relatively small number of agents who are somewhat perceptive about the biases of themselves and others will tend to move the group average close to its equilibrium value.
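Treating each agent as negligible relative to the average (a simplifying assumption of mine), the best response to an anticipated average <a> is λ<a>, so naive best-response dynamics move the population average as mean → λ·mean; a few lines make the contrast between λ near 1 and λ<0 concrete:

```python
# Payoff -(a - lam*<a>)^2: an individual's best response to an anticipated
# average m is lam*m, so under naive best-response dynamics the population
# average evolves as m -> lam*m each round.
def iterate_mean(lam, start=1.0, rounds=20):
    m = start
    for _ in range(rounds):
        m = lam * m
    return m

# Strategic complements (lam = 0.9): an initial bias decays very slowly,
# so beliefs about others' behavior stay important for many rounds.
# Strategic substitutes (lam = -0.5): the bias is stamped out quickly.
print(iterate_mean(0.9), iterate_mean(-0.5))
```

After twenty rounds the λ=0.9 population still sits at 0.9²⁰ ≈ 0.12, well away from the equilibrium at 0, while the λ=-0.5 population is within a millionth of it.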

Now consider an asset with a secondary market; in general the value of an asset to a buyer is the value of holding it for the amount of time the buyer plans to hold it, plus the value of being able to sell it at the time and price at which the buyer expects to sell it.  In a highly liquid financial market, especially one in which a lot of the traders expect to hold their asset for a short period of time, an agent deciding whether or not to buy will base the willingness to pay very sensitively on the price at which the asset is expected to be sold some time later.  If the market becomes less liquid, it makes less sense to buy with the intention of holding for a very short period of time; the value of owning the asset is a larger fraction of the total value of buying it.  λ is still positive, but is much less close to 1; I still care what other people will pay for it when I sell, but at least as a relative matter the value it has to me as I hold it is rather more important.  More to the point, though, I expect the seller to whom I sell it to make a similar calculation; the price at which I am able to sell it will be more dependent on what I expect it to be worth to the next owner to own it, and less on what I think the next seller thinks the seller after that will pay for it.  The cycle in which we care more about 15th order beliefs than direct beliefs in fundamentals is more attenuated the harder the asset is to sell.

It's worth noting that James Tobin suggested a tax in the market for foreign exchange for reasons related to this.

Addendum: Scheinkman and Xiong (2003): "Overconfidence and Speculative Bubbles," Journal of Political Economy, 111(6): 1183--1219 seems to be relevant, too.

[1] Camerer and Fehr (2006): "When Does 'Economic Man' Dominate Social Behavior?," Science, 311: 47--52

Monday, June 6, 2016

money illusion and dipping into capital

I think that I use the term "money illusion" somewhat differently from how many writers use it, though I think my use is slightly more appropriate to the ordinary use of those words separately and is a more useful and coherent phenomenon.  In either case, the essential point is that a dollar five years from now is different from a dollar now, and that mistakes can be made by decision-makers who assume otherwise.  One of the ways in which this manifests itself is in a maxim against "dipping into capital", which holds that retirees, endowed non-profits, and those people from Jane Austen novels who "had" an income of so-many-pounds per year unconnected to any employment, should only spend the "income" derived from retirement savings / endowment / wherever that money came from, and never sell down the asset.  There are surely circumstances in which that's a good rule of thumb for boundedly rational agents to avoid worse mistakes, but it seems in part to suppose that one thereby has "the same amount" of capital later as one has initially.  In solving a Ramsey problem with a perfectly liquid instrument of savings, however, there is no distinction between "principal" and "interest", and if a risk-free asset pays an interest rate that varies with time, the optimal solution will typically involve selling some of the asset to increase consumption at certain times and buying more of it at other times to save for later; the exact result will depend on other details, but even if the dollar amount of savings tends to return to some constant level over long periods of time, it is rarely optimal to keep it exactly constant all the time.

A related phenomenon is "reaching for yield": when investors, especially bond investors and often in denial, view the interest rates available on safe investments as insufficient, they may become more inclined to buy the bonds of riskier companies, which tend naturally to pay higher rates of interest, until, of course, they don't.[1]  While this is often done by investors who just seemingly can't really believe that interest rates are as low as they are, and feel entitled to the interest rates that prevailed in the Carter administration, sometimes the people who do this are people drawing a line between "capital" and "income", and looking to turn a higher portion of the expected return into a form that their rules of thumb will allow them to spend.  They would often, perhaps generally, be better off buying a bond with a 3% yield and selling off 1% of their holdings each quarter[2] than buying a bond with a 7% yield so that they don't have to sell it.
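To put rough numbers on that comparison (a stylized sketch assuming the bond trades at par throughout, and ignoring the transaction costs flagged in footnote [2]): coupons on a 3% bond plus the proceeds from selling 1% of the position each quarter generate roughly the same annual cash as a 7% bond held intact.

```python
# Stylized annual cash flow: coupon income plus sale proceeds, with the
# bond assumed to stay at par (no price moves, no transaction costs).
def annual_cash(principal, coupon_rate, sell_frac_per_quarter):
    cash, pos = 0.0, principal
    for _ in range(4):
        cash += pos * coupon_rate / 4        # quarterly coupon on remaining position
        sale = pos * sell_frac_per_quarter   # sell a slice at par
        cash += sale
        pos -= sale
    return cash, pos

# 3% coupon + selling 1% per quarter vs. a 7% coupon with no sales:
print(annual_cash(100.0, 0.03, 0.01))  # roughly $6.90 cash, ~$96 still invested
print(annual_cash(100.0, 0.07, 0.00))  # $7.00 cash, $100 still invested
```

The cash received is nearly identical; the difference is that the first position shrinks slightly each year while the second carries materially more credit risk.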

This post is loosely triggered by a badly flawed column published yesterday; a somewhat more coherent version of its argument, though, is that an environment of low interest rates encourages income investors to buy stocks with higher dividend yields[3] and thereby reduces investment as companies use cash to raise their dividends instead of spending it on research and development.[4]  One of my favored models of a liquidity shock, especially when thinking about things intuitively, is as a suddenly high private discount rate on cash; I suddenly need dollars very urgently, such that putting them off until next month or even perhaps next week is very costly to me in some sense, so that I'll take $1000 now instead of $1100 next week or $1300 next month; in particular, if I have an asset that I think is, in some longer-run sense, worth more than $1100, I might find liquidating it in a hurry at $1000 to be better than whatever consequences would befall me if I spent time trying to find a better price.  More generally, I've treated the ability to sell an asset as giving it some "just in case" value; it's more attractive than an otherwise similar illiquid asset because I don't know whether there will be a liquidity shock.  The ability to sell all at once if necessary, though, and the ability to sell a bit at a time according to a previously anticipated schedule, are likely at least to be closely related, if not, in the presence of systemic events, exactly the same.

Let's incorporate the "don't dip into capital" mentality by supposing that, even if assets could be sold easily for cash, the owner won't sell them; they are, if you will, practically illiquid because of the owner's behavioral biases, even though the market exists.  With realistically incomplete markets, the marginal discount rate on a serviceable amount of cash for most people most of the time is likely to track general interest rates reasonably well; somewhere between what you get from relatively safe savings and the rate at which you could borrow if you needed to.[5]  If it weren't in that range, you'd presumably borrow or save more or less than you do.  If you're intent on putting most of your money into assets that you refuse to sell, though, the private discount rate and the general market rates have considerable room to diverge, and the conditional optimization problem requires that you discount future cash flows at your own, private discount rate — which, if the cash flows tend to be very long term, is likely to be somewhat high.[6]  If dividend yields are low, presumably because the markets think the values of stocks lie more in their payouts in the distant future than their payouts in the next year or two, then if most of your income is from stocks that you refuse to sell, your private valuation of a stock that you're considering buying (and, we suppose, holding forever) should be driven by expected dividends discounted at that private rate — thus valuing high-dividend-yield stocks at a higher fraction of their market value than low-dividend-yield ones.  The same would apply to bonds; this, then, may be a serviceable "reaching for yield" kind of model to use where we think we have agents of that sort we want to incorporate into our financial/economic model.

[1] Until 30 years ago, the extra interest that risky companies paid on their bonds actually did more than make up for the risk of default by a sizeable margin; a diversified portfolio of such bonds that could absorb losses on a few of them would come out ahead on the extra income payments.  That gap generally tightened in the 1980s, and, while it has still been generally positive in the last 25 years, the strategy now tends to earn a reasonable premium for a risk rather than providing a reliable way to secure a high return.  That said, of course some high-yield bonds will in fact make all of their scheduled payments without default, but it shouldn't be forgotten that there's a reason they trade with a higher yield to maturity than the safer portions of the bond market.
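The arithmetic behind this is back-of-envelope; here's a sketch with assumed (not historical) numbers: safe bonds yielding 5%, a diversified risky portfolio yielding 8%, a 2% annual default rate, and 40 cents on the dollar recovered from defaults.

```python
# Back-of-envelope arithmetic with assumed (not historical) numbers:
# does the extra yield on a diversified high-yield portfolio cover
# expected default losses?

safe_yield = 0.05      # yield on safe bonds
risky_yield = 0.08     # promised yield on the risky portfolio
default_rate = 0.02    # fraction of risky bonds defaulting per year
recovery = 0.40        # fraction of face value recovered on a default

# Survivors pay principal plus the full yield; defaulters return only
# their recovery value.
expected_risky = ((1 - default_rate) * (1 + risky_yield)
                  + default_rate * recovery) - 1
print(f"expected risky return {expected_risky:.2%} vs safe {safe_yield:.2%}")
```

With these numbers the expected return still beats the safe yield; tighten the spread or raise the default rate and the cushion disappears, which is the narrowing described in the footnote.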

[2] I mean, this is false, and in an important way; corporate bonds are much more expensive to buy and sell than large-cap stocks, for example, and I'll note later in passing how illiquidity might justify the behavior to some degree.  The right approach is probably to buy the 3% bond, but not with all of your money; keep some of your money in shorter-term instruments that mature as you'll need the cash.

[3] Note that here the caveat in the previous footnote has much less force; you can sell down stock holdings somewhat gradually with relatively little in the way of transaction costs.

[4] A bit off topic, but a quick list of problems with the column: 1) it notes that companies are buying back their shares, which runs exactly counter to the idea that they're turning long-run share value into income; holders have to sell their shares to receive the payouts; 2) the corporate sector as a whole is holding a lot of cash on balance sheets right now, even after giving some of it to shareholders; a failure to engage in R&D is not plausibly due to a shortage of cash caused by shareholder payouts; 3) those low interest rates also enable most companies to borrow money cheaply, financing either payouts to shareholders or R&D or both that way, and it isn't reasonable to imagine that higher payouts to confused shareholders are anywhere near the scale needed to cancel out that effect.

[5] This range may be big compared to some things, especially in the short term, but on the scale of years will be at least reasonably well defined.

[6] E.g. if you "have an income" that is safe and expected to grow for decades, you might be motivated to smooth your consumption in time, spending more now and less later, rather than living on beans now and in luxury later, even if you had to pay a somewhat high interest rate in order to borrow to move that consumption sooner.

Thursday, June 2, 2016

prices and appraisals

I'm fond of commenting that assets don't have prices; transactions have prices, and offers to transact have prices, but the best one can hope for in assigning a "price" to an asset is to expect that one can reasonably predict approximately what it will trade for, or would trade for, under some almost true counterfactual.[1]

A judge in Delaware has declared that Dell shareholders whose shares were taken in a leveraged buyout a few years ago were underpaid, and it would be in the spirit of Levine's commentary, though he doesn't quite do so himself, to note that their voting against the buyout at $13.75 per share is prima facie evidence that they valued the shares at a price above that, though possibly for strategic rather than fundamental reasons.[2]  The judge apparently argued that the $13.75 was based on a correct determination of long-run prospects, but that the bidder's cost of equity is higher than the sellers', and he thus calculated a "value" for the shares based on the bidder's assessment of future value and an imputed cost of equity for the sellers.

Let's repeat again that nobody was bidding a higher price than $13.75.  The value of the shares to an individual will depend on that individual's risk-preferences, cost of equity (related to risk-preferences), and assessment of the prospects for the shares;[3] insofar as an effective market mechanism gets the shares to their highest-valued owner, the market price would be the highest value that any owner, given those criteria, places on the shares.  As Levine notes,
This buyout didn't create value by changing Dell's business model; it created value by changing Dell's ownership -- by moving the shares from people who mostly didn't value them that highly (public markets) to people who did (private-equity buyers).
Some potential buyers may have a lower cost of equity (and thus would put a higher value on it), but a lower assessment of the prospects, or a lower ability to handle some idiosyncratic risks associated with them; others might have higher assessments or better risk-profiles to take on the risks but a higher cost of equity.  The judge seems to have hypothesized a non-existent bidder with the combined attributes that would maximize the value, without regard for the fact that the bidder is, in fact, non-existent.

I'm not sure that some kind of judicial overruling of a take-over price is never warranted, but it seems to me that procedural protections are on much firmer philosophical ground; the fact that there were a couple of apparently independent bids, and that the winning bidder here seems to have been assiduously fair in the process of securing the votes of the majority of shareholders, should in this case have been dispositive.  Especially insofar as the buyer's control affects its prospects,[4] "you have to pay what someone who doesn't exist would pay to buy a company you control" just doesn't make any sense.

[1] In particular, there's a legal dictum that "a fair price is what a willing buyer would pay a willing seller", which is sensible insofar as it has content, but that's not very insofar; the dictum is typically applied where there isn't both a willing buyer and a willing seller, at least not at the same price, and the price at which a hypothetical willing buyer and willing seller would trade depends entirely on why they are willing to buy and sell. The dictum might, therefore, be occasionally useful in framing analysis, but it never really contributes very much toward determining what a "fair price" ought to be.

[2] This is the classic hold-up problem, which eminent domain and buy-out procedures are supposed to mitigate; it's worth noting, perhaps here, that this judicial appraisal procedure itself is among the counter-protections in the buy-out process.

[3] It perhaps gets too confusing to note here that the "prospects for the shares" in fact consisted of their being bought by Dell at a price that would be overruled three years later by a Delaware judge.

[4] Though this appears in at least some sense not to have been a big part of this deal, the arguments about short-termism and R&D belie that at least somewhat.

Wednesday, June 1, 2016

financing illiquid assets

I remember, many many years ago, being surprised to learn that the credit rating of a mortgage borrower was as important to underwriters as it was;[1] the loan, after all, is secured by an asset worth 25% more than the loan,[2] and it seemed to me that the focus would be on verifying the value of the asset, conditional on which it wouldn't really matter all that much if the borrower defaulted.  The problem is, even in what I'm going to date myself by calling "normal times", seizing collateral and selling it is a real nuisance, and is not what banks specialize in; they welcome the backstop, sure, but not least because it strengthens the borrower's interest in repaying the loan.  The bank really just wants a borrower who's going to repay the debt.

Matt Levine recently commented that the very essence of collateralized lending is lending against illiquid assets — viz. assets that would be annoying to repossess and sell; for liquid assets, the owner can raise funds by selling the asset.  This comment is more surprising to me than it is wrong, which is not to say that it isn't at least frequently somewhat wrong, starting with the housing market, where the whole point of your garden-variety mortgage is that you want to own the asset, and not that you have some other need for funds for which you might prefer to temporarily liquidate it.  What he had in mind is the banking industry; both banks with abstract assets and other large companies with physical capital frequently use somewhat illiquid assets as collateral where, in an idealized world of perfect markets and divisible assets they might sell off a third of a factory and buy it back when they no longer needed the cash.[3]  Some of the models I've played with emphasize the extent to which an asset's value increases because it can be sold — because of its market liquidity — but notwithstanding some literature on the extent to which assets gain value because they can be used as collateral,[4] I maybe haven't appreciated enough the extent to which the ability to sell an object and the ability to use it as collateral can substitute for each other.

[1] This was before it wasn't, which was before it now is again.

[2] Again, back when people put down 20%, at least for the mortgage loans under discussion here.

[3] In fact, the most frequent uses of "collateralized borrowing" in finance actually tend to use very liquid collateral, but they are not legally "collateralized borrowing" at all; they are repurchase agreements: the sale of an asset and a simultaneous agreement to buy it back shortly thereafter.

[4] The clear example is Kiyotaki and Moore (1997): "Credit Cycles," Journal of Political Economy, 105(2): 211--48, though Geanakoplos has a whole oeuvre that hovers closely around this, and Gary Gorton is probably worth mentioning, too.

Tuesday, May 31, 2016

who supplies liquidity

A BIS empirical working paper on market liquidity provision includes in its abstract the following finding:
We find that proprietary traders, be they fast or slow, provide liquidity with contrarian marketable orders, thus helping the market absorb shocks, even during a crisis, and they earn profits while doing so.
A "marketable" order is one that "takes" liquidity.  So what does this mean?

"Liquidity", even if we've narrowed the topic to "market liquidity", is not at all a single thing.  An investor looking to buy or sell an asset for a few months, especially in a large quantity, can thereby push the market up or down to some degree; the proprietary traders to which they refer presumably are skilled at recognizing the signatures of those distortions, and will tend to enter the market on the other side, such that over the course of a day the net purchase of the proprietary traders and the longer-term investor combined is closer to zero.  The proprietary traders may well exit their positions a week later; they are specialized in more granular timing than the longer-term traders are.  They, however, enter and exit their positions by trading directly with even shorter-term traders, who will typically close their own positions the same day that they take them, and often the same hour.  Many sorts of traders, thus, effectively provide liquidity to those trading on larger time scales than themselves, taking it from those trading on shorter time scales than themselves.

Thursday, May 26, 2016

liquidity and power law behavior

Because I just can't help myself, I was reading about self-organized criticality the other day.  That subject is not particularly precisely defined, but there's a nexus of models in which events emergently take place on different scales with a frequency that obeys some power law with respect to the scale.  People certainly find power laws in economics, and I've thought of reasons an economic system might be naturally driven toward critical points, but aside from a brief and particularly fuzzy attempt 25 years ago, there hasn't been much SOC in the economics literature.

I also, you may have noted, have an abiding interest in "liquidity", in its various guises.  One of the key characteristics of liquidity is its heterogeneity; "liquidity shocks", for example, can be idiosyncratic or systemic — or, possibly, somewhere in between.  If I suffer an idiosyncratic liquidity shock, I can sell some assets to other people to raise cash for my sudden needs.  If everyone suffers a liquidity shock at the same time, then nobody else particularly wants to buy; at best, I'm selling at much lower prices to people whose sudden liquidity needs aren't quite as dire as mine.  I haven't seen literature that mentions in-between events, but it would seem reasonable to me that both the contagion effects of liquidity shocks and the triggers of liquidity shocks would have some of the local-interactions-with-threshold-effects characteristics that SOC models tend to have, and would exhibit some of the power law behavior that tends to result.
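The canonical toy model in this area is the Bak–Tang–Wiesenfeld sandpile, and a minimal sketch of it (this is the generic model, not anything from the economics literature) shows the basic mechanism: local interactions with a threshold generating events on all scales.

```python
import random
from collections import Counter

# A minimal Bak-Tang-Wiesenfeld sandpile: drop grains one at a time on a
# small grid; any site holding 4 or more grains topples, sending one grain
# to each neighbor (grains fall off the edge). The number of topplings set
# off by a single grain is the "avalanche size"; SOC models like this tend
# to produce avalanches on all scales, roughly power-law distributed.

N = 20
grid = [[0] * N for _ in range(N)]
random.seed(0)

def drop(i, j):
    """Add one grain at (i, j), relax the pile, return the avalanche size."""
    grid[i][j] += 1
    topples = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue  # already relaxed via a duplicate stack entry
        grid[x][y] -= 4
        topples += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N:
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return topples

sizes = Counter()
for _ in range(20000):
    s = drop(random.randrange(N), random.randrange(N))
    if s:
        sizes[s] += 1

# Small avalanches vastly outnumber large ones, but large ones do occur.
print(sizes[1], max(sizes))
```

Most drops cause nothing or a single toppling, but occasionally one grain sets off an avalanche spanning much of the grid; the loose analogy would be liquidity shocks that usually stay idiosyncratic but occasionally cascade into something systemic.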

Friday, May 13, 2016

liquidity, wealth, and consumer behavior

The Atlantic has a contribution to the "it's expensive to be poor" genre of article, and one of the things worth noting about this and most other examples is that the benefit here is generally to liquidity rather than net wealth; I think, though am not sure, that the price-discrimination related reasons that it can be more expensive to be wealthy than poor would track more with net wealth than with liquidity; they would probably track consumption more closely than either.

Thursday, April 28, 2016

models and language

All models are wrong, but some models are useful; the purpose of a mathematical model is to simplify a complicated phenomenon so that it can be more easily and thoroughly understood, and a good model should therefore be simple enough to be readily understood and complex enough to display the phenomena of interest, making it reasonably clear what the origins of those phenomena are. The two kinds of mistakes that can be made from forgetting that All models are wrong, but some models are useful are
  • using a model outside of its domain of applicability — forgetting that it is wrong
  • criticizing a model solely for being wrong, without specifying that it substantially fails to serve the purpose to which it's being put.
For both reasons, it is typically useful for a model to be clear about its domain of applicability; it should be clear where the approximations are going to break down.

Humans are model-builders; while we use a lot of simple behavioral models that other organisms use, we are better than[1] other organisms at managing hypotheticals, especially hypotheticals that are fairly well outside of our direct experience, in part because we tend to carry around deeper models of the world with which we interact.  This comes through in our uniquely[2] human language; perhaps the barest is the use of labels, hierarchical categories ("animal" includes "dog" includes "that kind of little dog that looks like a mop with its handle missing"), and abstraction ("three apples plus two apples equals five apples" and "three oranges plus two oranges equals five oranges"; for that purpose, number can be abstracted away from the thing being counted), but it's perhaps on better display in the use of metaphorical language.  What's key about a metaphor is not that the thing being compared to the other thing is identical to it, merely that it shares certain characteristics that are relevant to some purpose; All metaphors are wrong, but some metaphors are useful, and they're perhaps more useful when it's clear in what ways they are useful and in what ways they are wrong. This is true, also, of categorical systems; if we classify movies, for example, into action movies, comedies, etc., we may sometimes find a movie that seems to sit on the edge of one category or another, or that clearly doesn't fit any of the categories we had before, or clearly fits into multiple categories that we might have thought of as largely disjoint from each other. Typically the category system will be of some use as long as the exceptions aren't too common, and as long as the use to which it's being put isn't too brittle when an exception comes along.

There are two somewhat more concrete examples I want to end with.  The less concrete is that an analogy between X and Y will sometimes be met with "You can't compare X to Y," typically in a situation in which X and Y are in fact very similar in the relevant way but different in an obvious but irrelevant way.  I'm certainly careful not to use "Nazis" as "Y" because I expect to trigger this fallacy, but even there it comes off to me as a sign that the respondent either isn't paying much attention or is more interested in some kind of point-scoring debating game than in furthering a serious discussion.  The more concrete example I want to give has to do with religion; in particular, the terms Muslim and Christian.  There's a certain politeness in taking at face value a person's own label[3] for his or her own religious beliefs, and that certainly seems like as good a way to handle edge cases as any, but it also seems to me that the labels become uselessly circular[4] if the term Muslim means "person who considers him/herself to be a Muslim".  Asking whether ISIS is "Islamic" is ultimately a semantic question; asking whether it is more useful to have a term for most Muslims that includes ISIS or one that excludes ISIS is at least somewhat clearer when it's clear what the relevant "use" is.  In any case, if we do call ISIS "Islamic" that certainly doesn't mean that their beliefs or practices are exactly those of other Muslims (and, accordingly, the fact that those beliefs and practices aren't exactly the same doesn't by itself mean we shouldn't call them Islamic).  I similarly see the occasional assertions that, because someone has identified him or herself as Christian, it is mandatory that they adopt a particular vision of Christianity, or it is "hypocritical" if they don't.  Christianity is not so narrow a category that the use of that label implies precisely a set of moral beliefs, but it is occasionally a broadly useful label nonetheless.

[1] almost?  I'm not up on animal cognition research.

[2] again, I think uniquely; perhaps not quite, though certainly I mean something more by "language" than would admit simple alarm calls etc.; "language" in my mind requires an ability to express at least some degree of abstraction.  In fact, if you're not slightly critical of my claim that language comprises model-building on the grounds that it's at least very nearly tautological, then I'm not being clear about what I mean by those words.

[3] There's a big issue, too, of our not just using labels but, for the purposes of language, having to use shared labels, i.e. we need to be using labels in approximately the same way, or at least to be able largely to understand the labels each of us is using. For the time being, I'm relegating that to this footnote.

[4] in principle. In practice, a person who doesn't fall at least close to the usual category is unlikely to claim to belong to it, which is probably at least part of why we so often fall back on it.

Tuesday, April 19, 2016

terms of trade

Suppose I have (at least) 3 oranges and you have (at least) 5 apples, and I'd consider trading three of my oranges for four apples to be a net improvement to me, and you'd consider trading five of your apples for three oranges to be a net improvement to you.  We have a situation, then, where we ought to be able to find a mutually beneficial trade, but there is a real danger that we get hung up on that fifth apple; even if our relative values are common knowledge — so that, among other things, we both know that we ought to do some kind of trade — it's quite possible we end up unable to agree to a deal, if we both dig in our heels, I insisting that you give me five apples for my oranges, and you insisting that I accept only four.
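The parable in code, treating the stated valuations as hard thresholds (any finer-grained preferences are assumed away):

```python
# Enumerate the apple counts that could be exchanged for my three oranges
# such that both parties see a net improvement, given the valuations in
# the text: I require at least four apples; you'll give at most five.

my_minimum_apples = 4    # fewest apples I'd accept for three oranges
your_maximum_apples = 5  # most apples you'd give for three oranges

mutually_beneficial = [a for a in range(1, 11)
                       if my_minimum_apples <= a <= your_maximum_apples]
print(mutually_beneficial)  # → [4, 5]
```

The entire bargaining range is two prices; a deal at either makes both of us better off, and the heel-digging is a fight over which of us captures the one-apple surplus.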

This looks like an economic model, and it is that, but it's also a model of politics, and while I think people have a tendency every four years to think that they're in the ugliest presidential race ever more from poor historical perspective than from a monotonic decline in the state of American politics, it really does feel as though the "digging in our heels" bit has become worse in the last several years, especially (but not exclusively) on the right.  I also think, though, that there's less common knowledge of values than people often believe, which creates more problems, the way that my insisting on six apples would in the opening parable.

I don't think it's inconsistent for me to add, though, that I also feel as though a common and vapid form of political discourse involves essentially decrying any attempt by the other side to negotiate terms of trade; in particular, when controversial riders get attached to popular legislation, especially but not only when those riders are topical, that seems entirely fair[1] as part of negotiating a deal.  I've noted elsewhere on this blog how I might change procedures such that these gambits would be less capable of blocking consensus bills, but certainly given the rules we have, something well between unilateral disarmament and complete obstinacy should be achievable.

[1] in and of itself.  Demanding huge concessions for passing a popular or urgent bill is antisocial, while adding something that one side mildly dislikes is very different.

Tuesday, April 5, 2016

homeowners associations

One of the striking ways in which my views have changed since (say) I turned 30 is that I'm a lot more pessimistic about local government in practice than I used to be.  The principle of subsidiarity still has great theoretical appeal to me, however, and I've had some ideas in the past several years, many of which have other features I would have found unattractive in my twenties as well, to try to mitigate the problems that local governments often face.

While not exactly a local "government", homeowners associations are of particular interest in this context in that there are certain sorts of problems to which they seem like obvious and even necessary solutions, and yet I think they often highlight the worst of local government.  A large part of the problem is sort of an adverse selection or "attractive nuisance" feature they have, which is tied to the fact that the knowledge that someone is interested in serving on the homeowners' association tends to imply that that is not a person you would want to serve on the homeowners' association; they may attract some people who have a sense of duty that is not entirely misplaced, but they also draw anyone with an inclination toward officiousness, a certain kind of status-seeking, or peculiar axes to grind.  The equilibrium here is that they be held in check by the constraint that normal civic-minded people find the prospect of getting elected, attending the meetings, and providing whatever other service is entailed slightly more obnoxious than putting up with the current board, which thereby consists primarily of people with at least slightly antisocial motivations.

One solution is something akin to Athenian democracy: part of your "homeowners association dues" is the obligation to occasionally serve on the board, which consists of a somewhat random sample of homeowners, which, as I noted, is likely to result in a majority that is at least less pathological than the group that would volunteer.  I don't hate that solution, but I have in mind another set of solutions, driven by the same idea that what one needs is a system for attracting candidates for office who are motivated more strongly by something other than telling their neighbors what to do.  What's particularly interesting is how squarely these three solutions fly in the face of the sort of thing that various progressive (in the best-preserved century-old sense of the term) and "good government" forces would tend to put forward:
  1. Allow, encourage, and maybe require that some of the board members come from outside the community;
  2. Circumscribe the job such that actually performing it is as unburdensome as possible while getting the actually needed tasks handled;
  3. Provide a salary for the job at a level that is at least on the brink of ridiculously generous.
The latter two points should be clear in light of the preceding commentary; money may not be the motivation you would have as your first choice, but it's likely to be less malign than many intrinsic motivations, and, provided that it's sufficient to draw enough candidates to have a competitive election, its drawbacks are ultimately not insuperable. These provisions make the job more attractive to normal people, and hopefully enough normal people that the winning candidates are somewhat representative of the people electing them, rather than the people who were motivated enough to volunteer. The first provision is not there with the expectation that the average person from outside the community is a better candidate than the average person in the community, but simply the expectation that there are more of them; inducing forces to draw candidates from outside of petty neighborly disputes is part of it, but this provision, too, is more concerned with drawing enough candidates to get a competitive race from which the homeowners can meaningfully elect a proper subset.

So there's my advice: turn it into a well-paid sinecure open to outsiders in order that it best serve the electorate.

Saturday, March 26, 2016

regressions on selected populations

Suppose there are two different sets of factors that affect some kind of outcome, and each moves toward optimizing the outcome, but in a noisy way.  I'm imagining two gradual processes, each affecting a subset of factors that affect the outcome, drifting generally toward values of those factors that increase some real-number-valued function of the factors; the function itself may move, somewhat gradually and smoothly, requiring the processes to chase it.  We might imagine them as multi-dimensional factors, but the language might be easier and more concrete if we imagine that each of them is simply a real number as well; x and y follow stochastic processes that drift in a direction of higher f(x,y).

Suppose, though, that x tends to move more quickly than y; it will, at any given time, tend to be closer to its optimal value.  Imagine, now, an ensemble of these, each trying to optimize the same function but otherwise moving independently of each other.  Cross-sectional differences in y will tend to capture differences in the extent to which the y values have adapted to the most recent function; if the optimal value of y has been increasing, there will be a pronounced tendency for higher values of y in the ensemble to be closer to the optimal value of y, with perhaps some overshooting, but, if y adjusts slowly compared to the rate at which its optimal value changes, probably relatively few points that have too high a value of y.  Values of x, on the other hand, will be scattered more or less randomly around the optimal value of x; if the noise now exceeds the residual mismatch between the average value of x and its current optimum, then the highest values of f will be associated with values of x near the middle of the distribution, rather than near one of its ends.

This is all assuming more or less the right ratios — that adjustment of x isn't too slow compared to changes in its optimum, that adjustment in y is fairly slow compared to changes in its optimum, and that in some practical sense they are each intrinsically of similar importance to f(x,y) relative to the noise in each variable.  The upshot, though, is that if we do a standard linear regression on how f varies with x and y within the observed cross-section, we find that x has little to no linear effect, while y does have one; even if we add nonlinear terms to the regression, x contributes only second-order terms, while y has both first- and second-order terms, and in general if you could choose to know either x or y from a datapoint sampled from the population, knowing y would enable you to predict f(x,y) with much more accuracy than knowing x would.
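A quick simulation illustrates the point; the distributions here are made up to stand in for the ensemble at a moment in time, with x centered on its optimum and y still lagging below a risen optimum.

```python
import numpy as np

# Made-up cross-section standing in for the ensemble: f is maximized at
# (x, y) = (0, 0); x has caught up with its optimum, so it's scattered
# symmetrically around 0, while y is still chasing an optimum that has
# been rising, so it lags below 0.

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)    # centered on its optimal value
y = rng.normal(-2.0, 1.0, n)   # mostly short of its optimal value
f = -(x ** 2) - (y ** 2) + rng.normal(0.0, 0.5, n)

# Ordinary least squares of f on a constant, x, and y.
X = np.column_stack([np.ones(n), x, y])
beta, *_ = np.linalg.lstsq(X, f, rcond=None)
print(f"x coefficient {beta[1]:+.2f}, y coefficient {beta[2]:+.2f}")
```

The x coefficient comes out near zero (its effect on f is purely second-order in this cross-section), while the y coefficient is strongly positive: knowing y predicts f, and knowing x barely does, even though f depends on both symmetrically.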

There's literature suggesting that parenting is less important to a child's outcomes than genes are.  This is, however, conditional on most parents' trying to be decent parents; it is certainly the case that environment can matter, as we see with the victims of Romanian orphanages and other examples of violence and neglect.  It seems likely to me that parenting isn't, in some sense, "less important" than genes; it's just that most people have mostly done a decent job of adjusting their parenting to modern times, such that observed variations tend to have second-order effects, while our genes haven't caught up in the same way, and that differences there tend to be first-order.

Thursday, March 24, 2016


Three and a half years ago, I was looking into the idea of doing research on people's desire for privacy.  I kind of gave up, though others have not, and perhaps I'll go back in that direction some day.  Today's post is triggered by recent news from my school that a student has a strain of meningitis, which information the school may be required to make public, though of course other regulations require that they not announce which student.  This seems like a useful example in which to lay out some spare thoughts I've had over the years.

The essential point is that there would be some public health justification for releasing the name of the building in which the student lives, the classes the student attends, possibly even student organizations in which the student has been active.  If you release all of those, it seems likely that the student would be uniquely identified.  Privacy is, at first order, less of a concern if you release any single one of those pieces of information; what I mostly want to note here is that, however you might try to place value on the student's privacy, one of the primary costs of releasing one of those pieces of information is that you thereby make it more expensive to release one of the others (and, similarly, more costly if one of them comes out by accident).  While there is perhaps some extent to which different information can be released to different people, insofar as the information is likely to spread, it seems likely that their decision not to release any of those extra bits of identifying information is probably the right cost-benefit decision, regardless of what legal liability rules HIPAA might impose as well.
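A toy illustration with entirely made-up records: no single attribute identifies the student, but the intersection of all of them does.

```python
# Made-up student records: each single attribute leaves several candidates,
# but releasing building, class, and club together pins down one student.

students = [
    {"name": "A", "building": "North", "course": "Bio101",  "club": "Chess"},
    {"name": "B", "building": "North", "course": "Bio101",  "club": "Crew"},
    {"name": "C", "building": "North", "course": "Chem201", "club": "Chess"},
    {"name": "D", "building": "South", "course": "Bio101",  "club": "Chess"},
    {"name": "E", "building": "South", "course": "Chem201", "club": "Crew"},
]

def matches(**attrs):
    """Names of students consistent with every released attribute."""
    return [s["name"] for s in students
            if all(s[k] == v for k, v in attrs.items())]

print(matches(building="North"))   # several candidates
print(matches(club="Chess"))       # several candidates
print(matches(building="North", course="Bio101", club="Chess"))  # exactly one
```

Releasing the building alone is nearly harmless on its own; its real cost is that it turns a later release of the class or the club into a unique identification, which is the interdependence of costs described above.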

Friday, March 4, 2016


I'm making less of an effort than usual with this post to be interesting to people who aren't me.  I'm just formalizing the idea of "unravelling" of certain employment markets, in particular markets where there's some sense in which it's best for all firms and workers to be making offers that are in some sense simultaneous, but where individual agents might have an incentive to cheat, making deals before the thick market: under what conditions might a firm make a take-it-or-leave-it offer to a prospective employee before the market rather than waiting for the market?  After all, it should be clear that, if the offer were really enticing, the employee would have accepted it in the regular market, and if it's not enticing enough, the employee won't accept it anyway.

Suppose there are two prospective firms hiring an employee, and let's examine the consequences of their making offers sequentially or simultaneously (in the sense of game theory).  Ex ante, the employee attaches probabilities p1 and p2 of getting offers from them, and gets utility u1 or u2 from accepting an offer. (The value of unemployment is set to 0, and may be assumed lower than either ui.)  If firm 1 makes an offer that must be accepted or declined before receiving a response from firm 2, then declining the offer is worth p2u2; conditional on firm 1's offer, the expected payoff is
max{u1,p2u2} or p2u2.
If the offers are simultaneous, then those conditional values are
max{u1, u1(1-p2)+p2u2} or p2u2.
If the first firm fails to make an offer, the structure of the game is immaterial. Similarly, if the first firm is the worker's first choice (u1 > u2), the structure is immaterial. If the worker gets an offer from the first firm but u2 > u1, then the simultaneous-game payoff, which can be written as u1+max{0, p2(u2-u1)}, is u1+p2(u2-u1)=p2u2+u1(1-p2); this exceeds the sequential-game payoff max{u1, p2u2} by min{p2(u2-u1), u1(1-p2)}.

The first firm gets the hire in the sequential game whenever it does in the simultaneous game, but also when
  • It makes an offer
  • firm 2 makes or would have made an offer
  • u2 > u1 > p2u2
where the last condition is the condition in which the worker would choose firm 2 over firm 1 if given both choices but will choose firm 1 rather than take the chance that the other offer isn't coming.  This condition essentially captures the intuition of the last sentence of the first paragraph; by exploiting the worker's uncertainty as to whether a better offer would be forthcoming, the employer can induce the worker to take the "bird in the hand", but as the probability of another offer gets close to 1, there's an increasingly narrow range of offers the worker would accept now but not later.
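The payoff algebra above can be checked numerically; here is a minimal sketch, where the parameter values (u1 = 0.7, u2 = 1.0, p2 = 0.6) are my own illustration, chosen so that the unraveling condition u2 > u1 > p2u2 holds:

```python
# A minimal numerical check of the payoff comparison above. The parameter
# values are illustrative, chosen so that u2 > u1 > p2*u2 (the unraveling case).

def sequential_payoff(u1, u2, p2):
    """Worker must accept or decline firm 1's offer before hearing from firm 2."""
    return max(u1, p2 * u2)

def simultaneous_payoff(u1, u2, p2):
    """Worker sees firm 1's offer and, with probability p2, firm 2's as well."""
    return max(u1, u1 * (1 - p2) + p2 * u2)

u1, u2, p2 = 0.7, 1.0, 0.6
assert u2 > u1 > p2 * u2  # prefers firm 2 with both offers, but takes firm 1 early

# With u2 > u1, the simultaneous game pays more by min{p2(u2-u1), u1(1-p2)}.
gap = simultaneous_payoff(u1, u2, p2) - sequential_payoff(u1, u2, p2)
assert abs(gap - min(p2 * (u2 - u1), u1 * (1 - p2))) < 1e-12
```

With these numbers the sequential game pays 0.7 (the worker takes the bird in the hand) while the simultaneous game pays 0.88; the difference is what the early-offering firm exploits.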

the value of money

Let's continue on the idea of eggs as a medium of exchange.  In particular, while they make a decent store of value for a short period of time, they're terrible at storing value for very long; even if you boil them, they won't be a way to stockpile wealth on the scale of a generation or something.  They hold their value well enough, though, that one could reasonably use them to facilitate a couple of trades before they end up with their final consumer; eventually, though, their value is that of a straight consumable.

Now consider an agricultural economy with limited currency of other sorts such that eggs pick up at least some of the slack.  The consumer of the eggs presumably sees some gain from trade in the final transaction; that person values the eggs at least as highly as what is being exchanged for them.  The penultimate owner presumably values what is being received from the final owner more highly.  Now consider the trade between the antepenultimate owner and the penultimate owner of the eggs.  We suppose
  • the transaction couldn't have happened had the eggs not been available as a medium of exchange
  • there was a substantial gain from trade in the eyes of both parties
  • if it is common knowledge that the eggs are deteriorating, the terms of trade should reflect this.
Young eggs, in particular, with three or four transactions left in them, might be worth more than eggs with only one or two transactions left in them — "eggs" may not be a unit of account (by which I really mean "standard of value"), exactly, even though they're a medium of exchange. At all points, though, the eggs can acquire a liquidity premium in excess of the value that any consumer puts on them, essentially incorporating some of the gains from the trades they will facilitate into their initial value.

Now let's go back to money that is expected to last, in some practical sense, forever.  The social value of a dollar is the value of the transactions it can facilitate; a dollar that is return-dominated is, in some neutral unit of account, depreciating, but as long as the present discounted value of the gains from trade of the trades it will mediate is at least (say) a few dollars, the dollar can kind of steal that.  The more slowly the dollar deteriorates, the more marginal trades it can intermediate; a dollar that depreciates quickly will only be accepted as payment if the gains from trade are large.
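As a toy sketch of that last point (the numbers and the functional form are entirely my own, not anything rigorous): suppose a unit of money mediates one trade per period, each generating some gain from trade, until it deteriorates, and value the money at the discounted sum of those gains.

```python
# Toy model: the liquidity value a medium of exchange can "steal" is the
# present value of the gains from the trades it will mediate before it fails.

def liquidity_premium(gain_per_trade, discount, survival):
    """PV of gains from trades mediated before the medium deteriorates.

    discount: one-period discount factor; survival: per-period probability
    the medium remains usable. Sums gain * (discount*survival)**t for t >= 1.
    """
    x = discount * survival
    return gain_per_trade * x / (1 - x)

# A slowly deteriorating medium captures many marginal trades; a quickly
# deteriorating one is only worth accepting when gains from trade are large.
assert liquidity_premium(1.0, 0.95, 0.99) > liquidity_premium(1.0, 0.95, 0.50)
```

The comparison at the end mirrors the claim in the paragraph above: slower deterioration means more marginal trades intermediated.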

Tuesday, March 1, 2016

barter and money

One of the classic virtues of money, as related by Jevons, is the reduction of the "double coincidence of wants" problem; I have noted, in this regard, that good markets attempt to mitigate the remaining "single coincidence of wants" problem.

Today is my grandmother's 90th birthday, and I was talking to her on the phone this evening when conversation turned to the low-level functioning of the economy of Iowa in the 1930s.  One of the things she noted was that barter was more common in the rural areas than in town, in part because — very much not her words — a lot of farm products are of at least some value to a sufficiently wide range of people that the double coincidence problem doesn't have an insuperable amount of bite; chicken eggs might not be your top choice of good to receive in exchange for what you're selling (likely labor), but they're likely to be of some positive use value to you, and, even if they aren't, they are similarly likely to be of even higher use value to someone from whom you would like to buy something, so that you might well be willing to accept them in trade anyway.

Monday, February 1, 2016

liquidity, solvency, and Ponzi schemes

I think I've noted here that fractional-reserve banking will fit most natural definitions of "Ponzi scheme" — "We have 125 $100 deposits, of which 26 or more will certainly be withdrawn in the next 10 years, but we're going to make a 10-year loan for $10,000, and it's okay because we'll get money from new depositors with which to pay the old depositors." Sure. — and it occurs to me that there's some smaller degree of this in a lot of other contexts. The working cash a business keeps on hand is typically not a huge multiple of the rate at which it acquires new supplies, and often a business will count on being able to collect on some of its receivables in order to pay some of its payables; simply sitting on enough cash to pay all "current" liabilities is bad practice, at least when it earns a much lower interest rate than the company's cost of capital, and yet there is something of a Ponzi feel even to that.
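The arithmetic in the quoted caricature checks out; as a trivial sketch, using only the figures from the quote above:

```python
# The fractional-reserve caricature's arithmetic (figures from the quote above).
deposits = 125 * 100        # 125 deposits of $100 each
loan = 10_000               # the 10-year loan
reserves = deposits - loan  # cash remaining on hand
assert deposits == 12_500
assert reserves == 2_500    # enough to repay only 25 of the 125 depositors

# If 26 or more depositors withdraw before the loan matures, the bank needs
# new deposits (or asset sales) to pay the old ones -- the Ponzi-ish feature.
assert 26 * 100 > reserves
```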

I might note right now that many classic Ponzi schemes are "actual" fraud, in that the person promoting one knows that there is no value being generated within the scheme and that the "investors" have no legitimate likelihood, at least on average, of making money; for more sophisticated designs (or dumber promoters), the promoter may actually believe that there is a social benefit to it.  Where there is some actual economic activity, such as in "multi-level marketing" schemes in which the participants do actually do some sales but make most of their money (if any) from recruiting new marketers who pay a "franchise fee" to join the scheme, it can be quite hard to tell what legitimate expectations the class of marketers might have.  With much fraud, stupidity is a defense — by which I mean that as long as you sincerely believe the claims you're making, you aren't guilty of fraud.[1]  Ponzi schemes, however, are constructively fraudulent; if you're found to have been promoting an illegal Ponzi scheme, it doesn't matter whether you believed in it.  The multi-level marketing schemes in particular are fairly controversial, with some courts deciding some schemes are illegal pyramid schemes and other courts deciding other schemes are not.  None of the courts particularly investigates, in any case, whether the promoters are particularly competent at basic arithmetic.

A lot of the Ponzi-like set-ups that are clearly on the good side of the law, though, have in common that the Ponzi character is one of liquidity-mismatch rather than a solvency issue; there is some public consensus that these sorts of practices are being used by entities that almost always have assets on hand that, in some intermediate or long-term sense, are worth more than the liabilities.  If the stream of new cash were to come to a sudden stop and the firm unable to pay cash on a timely basis to claimants to whom it was due, it would not be because the firm was "insolvent", but because it was unable to quickly or efficiently sell assets.  Distinguishing between "illiquidity" and "insolvency" is a famously fraught problem, especially for financial or other firms with a lot of short-term liabilities;[2] saying that my Ponzi problem is perhaps answerable in terms of solvency versus liquidity is not, in practice, a solution to the problem, but a hopefully enlightening observation that it's related to another not-entirely-solved problem.

[1]New Jersey consumer fraud laws actually do not have this provision; even perfectly good faith inaccuracies leave you liable not only for compensation but for tripled damages. It's also worth noting that, even if the law to which you're subject does have such a provision, if what you said was ignorant enough, a jury may not believe you.

[2]GE, somewhat famously, spent many years leading up to The Crisis borrowing a lot of short-term debt, rolling it over when it came due, while building large capital equipment that took a long time to build and sell.  As the markets began to unravel, GE Capital was actually providing about 40% of the company's profits, but even the industrial portion of GE was operating a lot like a bank in some ways, and was subject to the same difficulties when it became hard to borrow money.

Monday, January 11, 2016

thoughts on capital

While I can't quickly locate it, I'm sure I've written about GDP and related measures in the past; in particular, while the calculation of GDP subtracts out the costs of most inputs into final goods (so, for example, when cloth is produced, and a suit is produced from that cloth, the value of the cloth is not double-counted), it does not subtract out the capital that is used.  This isn't such a big deal when the capital being used is a sewing machine, which can produce suits worth many times the cost of the machine before it wears out, but is a bigger deal when much of the capital being used is computers and software that become obsolete within five years — and, for purposes of measuring growth, is an even bigger deal when the economy is switching to a large degree from the former to the latter.  From the beginning of 1980 to the beginning of 2010, the growth in GDP exceeded the growth in Net Domestic Product — i.e. GDP minus capital depreciation — by an average of 0.25 percentage points per year, as depreciation as a fraction of GDP doubled.

There's an introductory microeconomics model in which the amount that a firm produces is a function of the quantity of labor used and the quantity of capital used, and a profit-maximizing firm will hire more workers as long as the "marginal product" — how much more can be produced with an additional worker — exceeds the cost of employing that extra worker.  Similarly, capital is employed at a rate such that the marginal product of capital is equal to its cost.  The cost of capital is typically denoted by the letter r, which in other contexts is used for the real interest rate, and sometimes sources will go so far as to call it the interest rate — but this is wrong.  It will include financing costs, indeed, but will also include depreciation; if I start the year with a $10,000 piece of equipment, and end the year with an $8,000 piece of equipment, using the capital has cost me both the financing cost (say $500) of the $10,000 [1] and the extra $2,000; in particular, if one imagined that there was a perfect market for buying and selling partially depreciated equipment, I could literally just buy it at the beginning of the year, sell it at the end, and it would be quite clear that I need to make an extra $2,500 for having used that equipment in order to justify its temporary ownership.[2]
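That user-cost arithmetic can be written out directly; the figures here are the paragraph's own ($10,000 machine, $500 financing cost, $2,000 of depreciation):

```python
# The user cost of capital from the paragraph above: the cost of employing
# capital for a year is financing cost plus depreciation, not interest alone.
price_start = 10_000   # equipment value at the start of the year
price_end = 8_000      # value at the end of the year
financing = 500        # interest paid, or return forgone, on the $10,000
depreciation = price_start - price_end
user_cost = financing + depreciation
assert depreciation == 2_000
assert user_cost == 2_500

# Equivalently: buy at 10,000, pay 500 to finance the year, sell at 8,000.
assert price_start + financing - price_end == user_cost
```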

Again, in these models, the production function takes capital and labor as its arguments; one occasionally sees land or natural resources added as an argument, but one rarely sees "intermediate goods" or the like — cloth, for example, for the tailor.  Obviously there is some sense in which that's a major oversight; a tailor couldn't double the production of suits with the same initial allocation of cloth.  "Production", here, is taken to mean something like "the value of production minus the cost of the materials" — a sort of value added by the labor and the capital to the other inputs to get the outputs.  Adding this up for all economic units, whether they are producing capital goods, final consumption goods, or intermediate goods, gets us to GDP; if you were either to exclude production of capital goods, or were to subtract out depreciation, you would get, at least in a long-run average, the Net Domestic Product.  As it stands, though, GDP is the better publicized figure, even though it essentially double-counts the capital used in production of goods.[3]

In these models one often sees the expression rK called the "capital share of income" or some such; similarly, wL is the "labor share of income", where w is for "wage" and L is for "labor".  It was noted in the early-to-middle twentieth century that the income share of labor was uncannily stable[4] over time, and Cobb and Douglas wrote down an economy-wide aggregate production function, now called the Cobb-Douglas production function, that more or less explains that.  What that function would predict, as we move toward capital that depreciates more quickly, is that r would increase, the amount of capital per unit labor being used would decrease, and wL would continue to constitute the same fraction of GDP as before.  Insofar as can be discerned from the data, this is actually not what has happened; while it's still hard to definitively declare a break in the trend, wL seems to have started to decrease somewhat as a fraction of GDP — but has maintained a fairly stable portion of NDP.  In the standard simplistic sense, workers are getting the same fraction of net production as they were in the twentieth century, but a smaller fraction of the "production" that the usual "production functions" are measuring.
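A minimal sketch of why that functional form pins down the labor share: this uses the standard textbook form Y = A·K^alpha·L^(1-alpha) with a competitive wage equal to the marginal product of labor (textbook theory, not anything specific to this post's data).

```python
# Check that Cobb-Douglas technology fixes the labor share wL/Y at 1 - alpha,
# regardless of productivity A or the quantities of capital K and labor L.

def labor_share(A, K, L, alpha):
    Y = A * K**alpha * L**(1 - alpha)   # Cobb-Douglas output
    w = (1 - alpha) * Y / L             # competitive wage = marginal product of labor
    return w * L / Y

# The share is 1 - alpha = 0.7 no matter how K, L, or A vary.
for A, K, L in [(1.0, 10.0, 50.0), (2.0, 100.0, 50.0), (2.0, 1000.0, 7.0)]:
    assert abs(labor_share(A, K, L, 0.3) - 0.7) < 1e-12
```

This is exactly the stability-of-the-labor-share fact the paragraph says the function "more or less explains"; the puzzle in the data is that the measured share has drifted relative to GDP while staying stable relative to NDP.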

I will also note here, though it's something of an undeveloped tangent, that rK is often viewed as though K should have units of dollars and r should have the units of an interest rate; I've adopted that above, mostly because it's standard.  Much of the model goes through unchanged, though, if we interpret K as the real units of capital (assumed, at least initially, to be implausibly homogeneous) and r as either units of output per unit time per unit capital, or even as dollars per unit time per unit capital if we incorporate the price of the output (but not the capital) into it.  This approach seems more useful to me as we attempt to improve the ability of our models to link financial developments with the real economy; in particular, the price of capital — not r, but the actual dollar cost of capital equipment — may change relative to the price of the firm's output, and it's going to be hard to treat the effect of that on the firm's behavior if you've bound it up implicitly with the quantity of capital.

[1]That is to say, the interest paid on a $10,000 loan to buy it, the return that would otherwise have been earned on $10,000 that was used to buy it, or some combination of these, perhaps with an adjustment for risk.

[2]I should perhaps let the complications go to a greater extent than I am, but will note that (1) accounting depreciation is an estimate of true economic depreciation, which, in the absence of perfect markets, may not be easy to make precise, especially over short periods of time, and (2) in the real world, where there aren't such markets, it will still work out; if and only if the use of the machine is worth its cost over the period of its ownership and use, there will be some way to attribute its depreciation over time such that its implied value started at its purchase price, ended at its disposal price (if any), and provided capital services equal to the capital costs along the way.  Depreciation will end up being a real cost, and often an important one, for any asset that decreases in value over time as it gets used.

[3]Surely part of the reason is that it is simpler.  Consider another wrinkle: your personal car.  It may, in fact, be a piece of equipment that is worth $10,000 at the beginning of one year and $8,000 at the beginning of the next; perhaps, in addition to gas, maintenance, etc., owning the car for that year in some sense costs $2,500.  We typically consider the car to be a "final good", but on some level the final good is your use of the car — a distinction that could in some sense be made of apples as well, but is less useful for goods that get purchased and used up quickly.  A careful accounting would perhaps include the $2,500 in "imputed rent" that you paid to use the car — and here I'll note briefly that some countries charge homeowners taxes on the "imputed rent" associated with the use of their own houses — but would then also subtract out the depreciation of your car.  It's easier and basically equivalent to count cars as being "consumed" when they are purchased rather than as they depreciate.  What we want to subtract out are the capital costs associated with the production of something else that is purchased later — the capital cost is part of its cost of production.  If you use your car for business, for example, it becomes capital used for the production of something else.  How much do you use it for personal purposes and how much for business?  If you have a car that costs $12,500 per year, and could be just as productive with one that costs $2,500, isn't $10,000 of that basically just your consumption?  Economics is simpler in theory than in practice sometimes; you can start to get a feel for why the corporate tax code is so complicated.

[4]It bounces up and down a bit, which becomes important later in the paragraph, but generally returns to its long-run average within several years.