Friday, June 2, 2017

liquidity, ETFs, and price indices

One of the big topics I see running through my current and planned research is the idea of prices as essentially emergent phenomena that have a less precise existence in a lot of contexts than we often pretend.  Transactions have precise prices; offers to transact have precise prices.  Assets with centralized, fairly liquid markets have fairly precise prices; a firm offer to sell at a particular price more or less refutes any price higher than it, and a firm offer to buy (a bid) at a particular price more or less refutes any lower price, so two such offers close to each other, especially with large sizes, pretty well pin down the market price of even a moderately sized trade.  Smaller-cap stocks often have only loosely defined prices.  Real estate generally has something an appraiser pulled out of his hat.

Corporate bonds don't have liquid markets, or centralized ones (even in the sense that stocks do under Reg NMS).  Corporate bonds are an important asset class, and there are price indices that attempt to give a sense of whether the prices are going up or down and by how much.  There are various ways of doing this, and they aren't uniformly terrible; many of these indices are in some reasonable sense more or less correct, to about the degree that such a thing can be correct.  There are some contexts in which they run into the problem that they're weighted sums of numbers that don't actually exist, though.

Exchange-traded funds (ETFs) are more or less, as the name has it, exchange-traded shares of mutual funds.  Unlike closed-ended funds, the number of shares is variable, and essentially determined by the market; unlike open-ended funds, most of the activity consists of buyers buying existing shares from sellers.  Unlike either, at least in their most common configuration, their holdings are publicly known and basically static; a share of the fund may represent .08 shares of one stock, .03 shares of another, and so on.  The ETF has a sponsor, and a redemption size; if the redemption size is 75,000 shares, and you go to the sponsor with 75,000 shares of the fund, the sponsor will exchange them for 6,000 shares of the first stock, 2,250 of the second, and so on; conversely, if you take those underlying stocks to the sponsor you can receive 75,000 shares of the fund in their place.  Even if all the stocks involved have very liquid markets, you can occasionally have a small difference between the price at which the ETF is trading and the weighted sum of the prices of the underlying stocks, but if the difference gets at all appreciable, big institutional investors will start buying whichever is cheap and selling whichever is expensive, and the sponsor will end up creating or redeeming shares.  Because everyone knows this is likely to happen, it doesn't need to happen very often; if traders believe that a big purchase or sale of ETF shares is a short-term event, they may sell or buy, partially hedging with a subset of the underlying basket, in the expectation that the prices will return to parity.  (If you google something like "stabilizing speculation gold points" you can probably find a description of how this worked under the gold standard when the "sponsor" was a central bank and gold was being exchanged for currency.)
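To make the arithmetic concrete, here's a toy sketch in Python; the basket weights and the 75,000-share redemption unit are the ones above, while the prices, and therefore the premium, are numbers I've made up for illustration.

```python
# Toy illustration of ETF creation/redemption arbitrage arithmetic.
# The basket weights (.08 and .03 shares per ETF share) and the 75,000-share
# redemption unit come from the text; the prices below are hypothetical.

redemption_size = 75_000                         # ETF shares per creation/redemption unit
basket = {"STOCK_1": 0.08, "STOCK_2": 0.03}      # shares of each stock per ETF share

stock_prices = {"STOCK_1": 50.00, "STOCK_2": 120.00}   # hypothetical market prices
etf_price = 7.70                                       # hypothetical ETF price

# Value of the underlying basket per ETF share.
basket_value = sum(w * stock_prices[s] for s, w in basket.items())
premium = etf_price - basket_value

print(f"basket value per ETF share: {basket_value:.2f}")   # .08*50 + .03*120 = 7.60
print(f"ETF premium per share:      {premium:+.2f}")        # 7.70 - 7.60 = +0.10

# If the premium is big enough to cover trading costs, an arbitrageur buys a
# redemption unit's worth of the basket, delivers it to the sponsor for 75,000
# new ETF shares, and sells those shares at the (rich) ETF price.
print(f"gross profit per creation unit: {premium * redemption_size:,.2f}")
```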

Once in a while ETFs need to change their holdings; an ETF that tracks the S&P 500 stock index adjusts its holdings when the constituents of the index change.  There are bond ETFs, too, and they mostly work in unsurprising ways; in the case of bond funds in particular, bonds will mature every now and then, and rather than just pay out the cash the fund typically adds some new issue and continues its existence.  In the case of corporate bond ETFs, though, the underlying securities often lack liquid markets, and often only really trade once or twice a week; the ETFs are often a lot more liquid than many of the underlying issues, which is a big part of the value that the ETFs add.

Curiously, though, there are some people who are troubled by this, particularly where the ETF is attempting to track a bond index.  Even if the ETF itself is liquid (and thus has a fairly well-defined price), the price of the ETF may fail to track the index. This is a failure of the index, not the ETF.  Indeed, one great value the ETF is adding in this scenario is that it is giving a better measure of the index; a lot of "incorrect" values for the basket are being "refuted", including possibly one calculated by the index methodology, which most likely (in this scenario) is leaning too heavily on stale prices — too much pretense that the "price" of a bond that hasn't traded since Tuesday is the price at which it traded on Tuesday.

I would propose, in fact, that where an index (basket) can support a more liquid market than many of its constituents, the best way to define the index may be as the "price" of an associated ETF.  There are ways of tracking repeated sales (the Case-Shiller index does a pretty good job of this), but most of the time, if there's a clear conflict between the price of the ETF and the calculated value of the index, I'm betting that the price of the ETF is more current and more correct.
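To make the stale-price problem concrete, here's a toy comparison, with the bonds, weights, and prices all invented, between an index computed from last-trade prices and the value implied by a (hypothetically) liquid ETF on the same basket.

```python
# Toy comparison of a stale-last-trade bond index with an ETF-implied value.
# All bonds, weights, and prices are invented for illustration.

basket_weights = {"BOND_1": 0.5, "BOND_2": 0.3, "BOND_3": 0.2}

last_trades = {          # (last trade price, days since it printed)
    "BOND_1": (101.2, 0),
    "BOND_2": (99.8, 3),
    "BOND_3": (102.5, 6),
}

# A conventional index leans on the last trade, however stale it is.
stale_index = sum(w * last_trades[b][0] for b, w in basket_weights.items())

# A liquid ETF on the same basket trades continuously; its price embeds the
# market's current view of all three bonds.  (Hypothetical number.)
etf_implied = 99.9

print(f"stale-price index : {stale_index:.2f}")
print(f"ETF-implied value : {etf_implied:.2f}")
print(f"gap (more likely the index's error than the ETF's): {etf_implied - stale_index:+.2f}")
```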

Wednesday, March 8, 2017

business cycles through the years

Tens of thousands of years ago, "productivity" was pretty much a function of weather, and "business cycles", insofar as the term can be applied, followed productivity very closely, both in time and in linear correlation.  In the last couple centuries of the second millennium, a drop in expected future demand led to a pullback in investments, both in capital and in new labor relationships, resulting in unemployment, resulting in a drop in demand.  That we aren't living in caves eating berries owes much to our stockpile of capital, both physical and intellectual, but the greater the share of our economy that is devoted to producing durable goods (including capital), the more sensitive our near-term production is to expectations of slightly less near-term production.  That sensitivity leads both to a greater variation in production resulting from a particular exogenous fluctuation in productivity and to a greater variation in production resulting from nothing at all.

I think I've mentioned a couple of times that I believe the most underappreciated macroeconomic stylized fact of my lifetime is the doubling of the portion of GDP going to depreciation, and I've even suggested (I believe) that this is part of why the 1990 and 2000 recessions were long and shallow rather than sharp and steep; perhaps without the shift away from producing long-duration capital, the 2008-2009 collapse would have been closer to the scale of 1929-1933, better response by the Federal Reserve notwithstanding.  Perhaps this is related to the general deceleration in growth over that time; perhaps it will also, though, improve our ability to make forecasts going forward, insofar as they depend more on "fundamentals" and less on self-fulfilling expectations.

Thursday, December 22, 2016

local optimization

Among the methodological similarities between physics and economics is the frequent use of optimization techniques; in fact, both disciplines often involve optimizing something they call a "Lagrangian", though the meaning of that word is rather different in the two subjects!  In both cases, though, there's some sense in which what is frequently sought is a partial optimum, rather than a full global optimum.

Suppose you have a marble in a bowl on a table, and you want to figure out where it goes.  Roughly speaking, you expect it to seek to lower its potential energy.  Usually, though, it will go toward the middle of the bowl, even though it would get a lower potential energy by jumping out of the bowl onto the floor.  Quantum systems tend to "do better" at finding a global minimum than classical systems; liquid helium, in its superfluid state, will actually climb out of bowls to find lower potential energy states.  Even quantum systems, though, often end up more or less in states where the first-order conditions are satisfied, rather than at the true global optimum.  This is perhaps most elegantly achieved with path integrals; you associate a quantum mechanical amplitude with each point in your state space, make it undulate as a function of the optimand, and integrate it, and wherever the optimand isn't locally stationary the amplitude cancels itself out, leaving only the effect of its piling up where the optimand satisfies the first-order conditions.
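Here's a small numerical illustration of that cancellation, using a made-up optimand rather than any particular physical system; the rapidly oscillating amplitude integrates to almost exactly the standard stationary-phase term, which keeps only the neighborhood of the point where the first-order condition holds.

```python
# Stationary-phase illustration: for large lam, the oscillatory integral of
# exp(i * lam * f(x)) is dominated by the neighborhood of f'(x0) = 0; away
# from that point the integrand cancels itself out.
import numpy as np

lam = 2000.0                      # large parameter: rapid undulation

def f(x):                         # made-up optimand; stationary point at x0 = 0.3
    return (x - 0.3) ** 2

x = np.linspace(-5.0, 5.0, 2_000_001)
dx = x[1] - x[0]
numeric = np.sum(np.exp(1j * lam * f(x))) * dx        # brute-force integral

# Standard stationary-phase approximation at x0, with f''(x0) = 2:
x0, f2 = 0.3, 2.0
approx = np.exp(1j * lam * f(x0)) * np.sqrt(2 * np.pi / (lam * f2)) * np.exp(1j * np.pi / 4)

print("numeric integral     :", numeric)
print("stationary-phase term:", approx)
print("relative difference  :", abs(numeric - approx) / abs(approx))   # small for large lam
```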

In economics and game theory, "equilibrium" will typically maximize agents' utility functions, each subject to variation only in the corresponding agent's choice variables; externalities are, somewhat famously, left out.  I'm tempted to try to apply a path-integral technique, but in game theory in particular the optimum is often at a "corner solution" where a constraint binds, and where the optimand therefore doesn't satisfy the usual first-order conditions.  Something complicated with Lagrange multipliers might be a possibility, but I suspect the use of (misleadingly named) "smoothed utility functions" will effectively do the same thing, but more easily.  I might then try to integrate "near" an equilibrium, but only in the dimensions corresponding to one particular agent's choice variables.

I wonder whether I can make something useful of that.

Wednesday, December 14, 2016

dividing systems

This will be a bit different, and may well not be terribly original, but I want to think about some epistemological issues (perhaps with some practical value) associated with dividing up complex systems into parts.

In particular, suppose a complex system is parameterized by a (large) set of coordinates, and the equations of motion are such as to optimize an expression in the coordinates and their first time derivatives, as is typical of Lagrangians in physics; I'll simply refer to it as the Lagrangian going forward, though in some contexts it might be a thermodynamic potential or a utility function or the like.  A good division amounts to separating the coordinates into three subsets, say A, B, and C, where A and C at least are nonempty and the (disjoint) union of the three is the full set of coordinates of the system.  Given values of the coordinates (and their derivatives) in A and B, we can calculate the optimal value of the Lagrangian by optimizing over the values that C can take, and similarly given B and C we can optimize over A, giving effective Lagrangians for (A and B) and (B and C) respectively.  Where this works best, though, is where the optimizing coordinates in C (in the first case) or A (in the second) depend only on the coordinates in B; conditional on B, A and C are independent of each other.  This works even better if B is fairly small compared to both A and C, and might in that case even be quite useful if the conditional independence is only approximate.

In general there will be many ways to write the Lagrangian as L_A + L_B + L_C + L_AB + L_BC + L_AC + L_ABC, with each term depending only on coordinates in that set, but it will be particularly useful to write a Lagrangian this way if the last two terms are small or zero. If we are "optimizing out" the C coordinates, the effective Lagrangian for A and B is L_A + L_B + L_AB plus what we get from optimizing L_C + L_BC + L_AC + L_ABC; this will depend only on B if the terms with both A and C are absent.  Thus on some level a good decomposition of a system is one in which the Lagrangian can be written as L_AB + L_B + L_BC, where I've absorbed previous L_A and L_C terms into the first and last terms; for given evolutions of B variables, the A variables will optimize L_AB and the C variables will optimize L_BC and these two optimizations can be done separately.
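Here's a small symbolic sketch of that point, with an invented quadratic Lagrangian standing in for the real thing; because there's no term coupling A and C directly, optimizing out the C coordinate yields something that depends only on B, so the A-and-B problem never has to track C at all.

```python
# Toy symbolic check of the decomposition L = L_AB + L_B + L_BC.
# a, b, c stand in for the A, B, C coordinate blocks; the quadratic forms
# below are invented purely for illustration.
import sympy as sp

a, b, c = sp.symbols("a b c", real=True)

L_AB = (a - b) ** 2                   # couples A and B only
L_B = b ** 2                          # B alone
L_BC = (c - 2 * b) ** 2 + c ** 2      # couples B and C only; no direct A-C term

L = L_AB + L_B + L_BC

# "Optimize out" the C coordinate: solve dL/dc = 0 for c and substitute back.
c_star = sp.solve(sp.diff(L, c), c)[0]
L_eff = sp.simplify(L.subs(c, c_star))

print("optimal c given b            :", c_star)          # depends on b only, not on a
print("effective Lagrangian for A, B:", sp.expand(L_eff))
# The c-dependent piece collapses into a function of b alone, so the A-versus-B
# problem can be studied without tracking C.
```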

Wednesday, September 28, 2016

short sales

Matt Levine this morning writes (ctrl-F "blockchain") about what short sales would look like on a blockchain, and it's pretty straightforwardly correct; you lift the process we have now, with all the sales taking place the way they do, onto a blockchain, and you get some of the additional transparency that comes with it. Fungibility on the blockchain is a bit less than it is without that transparency; one of the things being addressed specifically in his passage is that right now, if people "own" 110% of the outstanding shares of an issue, nobody knows whether their shares are among the 10% that in some sense don't count.

One of the things that's highlighted here, though, is that the short-sale concept is perhaps not what you would create if you were designing the market system top-down from whole cloth:
Just transfer 10 shares from B to A, in exchange for a smart contract to return them, and then sell those shares from A to C over the blockchain. Easy as blockchain. C now owns the shares on the blockchain's ledger, while A also "owns" them in the sense that she has a recorded claim to get them back from B.
This is how short-selling works; if A wants to sell short, A borrows the stock from someone (B) and then sells it to someone else (C).  If you introduce brokers, the way our current system works, the actual beneficial owner B won't even know that the shares have been lent out; both B and C think they own the shares.  The big change the blockchain makes is that, at least in principle, B can see that the "shares" B owns are actually an agreement by A to deliver them in the future.

There's some sense in which the borrow and sale are superfluous, though; the promise to (re-)deliver in the future is what you're trying to create by doing a short sale.  What you would think, from first principles in the absence of market structure concerns, would be the way to get there is to let C buy the shares from B while A enters a forward contract with B, or, if C is just as happy to be on the receiving end of a forward contract, to leave B out of it altogether and have a forward contract from A to deliver shares to C.  There are exchanges for stocks, and a less centralized market for lending securities, and these grew up (one and then the other) to facilitate short sales; in our current world, then, it's hard (especially for retail customers) to enter bilateral forward contracts, and the institutions for effecting the same result are set up to facilitate it in a somewhat baroque manner.  If you're moving to a blockchain for settlement, and need to change the structure of the market to accommodate that, then
A blockchain would need to do something similar: let some people create new securities on the blockchain, but carefully control who gets that access.
doesn't seem to me like my first choice approach; what would make more sense to me would be a market in which buyers see offers to enter into forward contracts as well, and where the borrow gets left out altogether.
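As a toy sketch, and emphatically not a model of any real blockchain or clearing system, here are the two designs side by side; in the borrow-and-sell version B's holding silently becomes a claim on A, while in the forward-contract version B keeps its shares and C simply holds A's promise of future delivery.

```python
# Toy ledger comparing two ways of creating short exposure; entirely schematic.
from dataclasses import dataclass, field

@dataclass
class Ledger:
    shares: dict = field(default_factory=dict)   # who holds shares outright
    claims: list = field(default_factory=list)   # (promisor, promisee, qty) future deliveries

def borrow_and_sell(ledger, a, b, c, qty):
    """Current structure: A borrows qty shares from B, then sells them to C."""
    ledger.shares[b] -= qty
    ledger.shares[c] = ledger.shares.get(c, 0) + qty
    ledger.claims.append((a, b, qty))            # B now holds A's promise to return shares

def forward_contract(ledger, a, c, qty):
    """Alternative: skip the borrow; A simply promises future delivery to C."""
    ledger.claims.append((a, c, qty))

led = Ledger(shares={"B": 10})
borrow_and_sell(led, "A", "B", "C", 10)
print(led)    # C holds 10 shares outright; B holds a claim on A for 10

led2 = Ledger(shares={"B": 10})
forward_contract(led2, "A", "C", 10)
print(led2)   # B keeps its shares; C holds a forward claim on A for 10
```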

Tuesday, August 9, 2016

simplified heuristics and Bellman equations

An idea I've probably mentioned is that certain behavioral biases are perhaps simplifications that, on average, at least in the sort of environment in which the human species largely evolved, work very well.  We can write down our von Neumann / Morgenstern / Friedman / Savage axioms and argue that a decision-maker that is not maximizing expected utility (for some utility and some probability measure) is, by its own standards, making mistakes, but the actual optimization, in whatever sense it's theoretically possible with the agent's information, may be very complicated, and simple heuristics may be much more practical, even if they occasionally create some apparent inconsistencies.

Consider a standard dynamic programming (Bellman) style set-up: there's a state space, and the system moves around within the state space, with a transition function specifying how the change in state is affected by the agent's actions; the agent gets a utility that is a function of the state and the agent's action, and a rational agent attempts to choose actions to optimize not just the current utility, but the long-run utility.  Solving the problem typically involves (at least in principle) finding the value function, viz. the long-run utility that is associated with each state; where one action leads to a higher (immediate) utility than another but favors states that have lower long-run utility, the magnitudes of the effects can be compared.  The value function comprises all the long-run considerations you need to make, and the decision-making process at that point is superficially an entirely myopic one, trying in the short run to optimize the value function (plus, weighted appropriately, the short-run utility) rather than the utility alone.
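Here's a minimal value-iteration sketch of that setup, with a tiny made-up state space, transition rule, and utility; once the value function is in hand, the policy really is computed myopically from it.

```python
# Minimal Bellman / value-iteration sketch on a made-up toy problem: states
# 0..4, actions "stay" or "step" (right); utility is higher in higher states
# but stepping costs a little now.  The value function folds the long-run
# payoff of moving right into a one-step, myopic comparison.
import numpy as np

n_states, beta = 5, 0.9
actions = ("stay", "step")

def transition(s, a):
    return s if a == "stay" else min(s + 1, n_states - 1)

def utility(s, a):
    return float(s) - (0.4 if a == "step" else 0.0)

V = np.zeros(n_states)
for _ in range(500):                      # iterate the Bellman operator to a fixed point
    V_new = np.array([
        max(utility(s, a) + beta * V[transition(s, a)] for a in actions)
        for s in range(n_states)
    ])
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# Myopic decision rule using the value function.
policy = [max(actions, key=lambda a, s=s: utility(s, a) + beta * V[transition(s, a)])
          for s in range(n_states)]
print("value function:", np.round(V, 3))
print("policy implied by V:", policy)     # "step" until the top state, then "stay"
```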

A problem that I investigated a couple of years ago, at least in a somewhat simple setting, was whether the reverse problem could be solved: given a value function and a transition rule, can I back out the utility function?  It turns out that, at least subject to certain regularity conditions, the answer is yes, and that it's generally mathematically easier than going in the usual direction.  So here's a project that occurs to me: consider such a problem with a somewhat complex transition rule, and suppose I can work out (at least approximately) the value function, and then I take that value function with a much simpler transition function and try to work out a utility function that gives the same value function with the simpler transition function.  I have a feeling I would tend to reach a contradiction; the demonstration that I can get back the utility function supposed that it was in fact there, and if there is no such utility function I might find that the math raises some objection.  If there is such a utility function that exactly solves the problem, of course, I ought to find it, but there seems to me at least some hope that, even if there isn't, the math along the way will hint how to find a utility function, preferably a simple one, that gives approximately the same value function.  This, then, would suggest that a seemingly goal-directed agent pursuing a comparatively simple goal would behave the same way as the agent pursuing the more complicated goal.
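As a naive illustration of the reverse direction (a much cruder construction than whatever the regularity conditions really require), note that given a value function V, a discount factor, and a transition rule T, setting u(s, a) = V(s) - beta*V(T(s, a)) makes V an exact fixed point of the Bellman equation; it's degenerate, in that it leaves every action indifferent, but it shows which way the computation runs.

```python
# Naive "reverse Bellman" sketch: back a utility out of a given value function.
# The transition rule and the target value function are made up; the recovered
# utility is the degenerate u(s, a) = V(s) - beta * V(T(s, a)), which makes
# every action indifferent but reproduces V exactly.
import numpy as np

n_states, beta = 5, 0.9
actions = ("stay", "step")

def transition(s, a):                     # a (hypothetical) simple transition rule
    return s if a == "stay" else min(s + 1, n_states - 1)

V_target = np.array([0.0, 1.0, 2.5, 4.0, 6.0])   # some given value function (made up)

def recovered_utility(s, a):
    return V_target[s] - beta * V_target[transition(s, a)]

# Check: value iteration with the recovered utility converges back to V_target.
V = np.zeros(n_states)
for _ in range(500):
    V = np.array([
        max(recovered_utility(s, a) + beta * V[transition(s, a)] for a in actions)
        for s in range(n_states)
    ])
print("target value function   :", V_target)
print("recovered value function:", np.round(V, 3))
```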

cf. Samuelson and Swinkels (2006), "Information, evolution and utility," Theoretical Economics, 1(1): 119–142, which pursues the idea that a cost of complexity in animal design would make it evolutionarily favorable for the animal to be programmed directly to seek caloric food, for example, rather than assess on each occasion whether that's the best way to optimize long-run fecundity.

Wednesday, July 27, 2016

policing police

This is a bit outside the normal bailiwick of this blog, but is the sort of off-the-wall, half-baked idea that seems to fit here at least in that way.

Police work, at least as done in modern America, requires special authority, sometimes including the authority to use force in ways that wouldn't be allowed to a private citizen.  Sometimes the police make mistakes, and it is important to create systems that reduce the likelihood of that, but allowance also needs to be made for the fact that they are human beings put in situations where they are likely to believe they lawfully have certain authority; if a police officer arrests an innocent man, the officer will face no legal repercussions, while a private citizen would, even if the private citizen had "reasonable cause" to suspect the victim.  It is appropriate that this leeway be given, at least as far as legal repercussions are concerned; if a particular police officer shows a pattern of making serious mistakes, even if they are clearly well-intended, it is just common sense[1] that that officer should be directed to more suitable employment, but being an officer trying to carry out the job in good faith should be a legal defense to criminal charges.

That extra authority, though, comes — morally if not legally — with a special duty not to intentionally abuse it.  This is the case not least because the task of police work is much more feasible where citizens largely trust that an order appearing to come from a police officer is lawful than where they don't.  A police officer in Alabama was reported, not long ago, to have sexually assaulted someone he had detained, and in a situation like that the initial crime is compounded by the societal cost of eroding people's trust that the officer is at least trying to be on the side of the law.  This erosion of trust is also the primary reason that impersonating a police officer is a serious crime.[2]  I propose, then, upon a showing of mens rea in the commission of a serious crime by a police officer who used that office to facilitate the crime, that the officer be fired retroactively and additionally brought up on the impersonation charges.[3]




[1] I mean, it should be.  My impression is that it is too difficult to remove bad cops, but that's not an especially well-informed impression.

[2] Pressed to give secondary reasons, I think those, too, would line up pretty well between impersonating an officer and abusing the office.

[3] This policy would have an interesting relationship to the "no true Scotsman" fallacy; no true police officer would intentionally commit a heinous crime, and we'll redefine who was an officer when, if we have to, to make it true.