Wednesday, January 22, 2014

high-frequency trading and combinatorial auctions

I tend to support a lot of proposals that would somewhat rein in certain high-frequency trading behaviors; one proposal with some prominent proponents is a system of frequent batch auctions, though I prefer one of my professors' suggestions that orders not be eligible to be cancelled until one second after they are placed. I would also like to note in this context that discussions of this as a policy question usually seem to conflate regulators with the private institutions running exchanges; a first step by regulators would be to weaken Reg NMS so that privately-run exchanges are clearly allowed to implement these sorts of rules in a way that is not obviously self-defeating. If that proved "insufficient" (in someone's judgment), I hope that only then would the costs and benefits of requiring such rules in every market be weighed.

Some of the defenders of high-frequency trading argue that it provides liquidity, and indeed I think it is likely that high-frequency traders provide liquidity to somewhat less-high-frequency traders, who in turn provide liquidity to even-lower-frequency traders, and that perhaps that feeds through to a net advantage to savers looking for borrowers and borrowers looking for savers, as they convert, finally, real goods and services now into real goods and services in the future in an efficient, pro-social way. I think the final ends, though, are very little impeded by gumming up the works at a one-second time scale, and I do believe that a lot of the real investment of real resources into reducing latency and trying to out-strategize the other high-frequency players probably amounts more to wasteful rent-seeking than to the socially beneficial creation of infrastructure to improve capital allocation. Some of the opponents of high-frequency trading like to highlight the existence of flash crashes, which, at some scale, are fairly common, though usually smaller than the most famous such event. I'm not particularly concerned by that, though, and, if that seemed to me like the biggest problem with high-frequency trading, my prescription would probably start and end with a requirement that "market" orders for retail customers be submitted as IOC limit orders at a price that is some multiple (near 1) of the price 15 minutes before the order is entered, with caveat venditor beyond that.
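
To make that retail price-collar idea concrete, here is a minimal sketch, assuming a 5% band (the post only says "some multiple (near 1)") and a hypothetical reference-price feed; the function name and parameters are invented for illustration, not a description of any exchange's actual order handling.

```python
# Sketch of the proposed retail "market"-order collar: convert a market order
# into an immediate-or-cancel (IOC) limit order priced relative to the price
# observed 15 minutes earlier. All names and the 5% band are illustrative.

def collar_market_order(side, quantity, reference_price_15m, collar=0.05):
    """Turn a retail market order into an IOC limit order.

    side: "buy" or "sell"
    reference_price_15m: the security's price 15 minutes before order entry
    collar: fractional band around the reference price (assumed 5% here)
    """
    if side == "buy":
        limit_price = reference_price_15m * (1 + collar)  # won't pay more than this
    else:
        limit_price = reference_price_15m * (1 - collar)  # won't sell for less
    return {
        "side": side,
        "quantity": quantity,
        "limit_price": round(limit_price, 2),
        "time_in_force": "IOC",  # unfilled remainder is cancelled, not left resting
    }

# Example: a buy entered during a flash crash still executes, but only within the collar.
print(collar_market_order("buy", 100, reference_price_15m=50.00))
# {'side': 'buy', 'quantity': 100, 'limit_price': 52.5, 'time_in_force': 'IOC'}
```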

The summary of the set-up, then, is that I support moderate efforts to curtail high-frequency trading, though I don't think much of some of the arguments that one hears for and against such efforts. What follows, then, is ... well, it's not exactly Devil's Advocacy, but it's an argument that seems plausibly to be stronger than any of these other arguments, and, if someone could convince me that the details I can't work out would point in that direction, I would change my position, or at least modify what sorts of changes I would support.

What I want to note, first, is that markets are incomplete, and that the values of different traded securities are related to each other in ways that are occasionally complicated. What this means is that, given some (potential) saver's information and risk preferences, that agent could in principle exhibit a fairly complicated demand curve for securities: the price of one security might affect one's demand for another security in an important way, especially if that price is taken to represent the result of some quasi-equilibrium process involving other agents with other information. If markets were complete, this wouldn't be true in any important way, but markets (obviously) are not, and I suspect that this is somewhat important. In some ideal setting, then, we wouldn't run the exchanges for different securities independently of each other; we would run them all together as one big combinatorial auction.
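
To illustrate what such a cross-security demand curve could look like, here is a toy, purely hypothetical demand rule in which the quantity of one security an agent wants depends on the price of a second security used as a hedge; the functional form and all the numbers are invented for illustration and aren't drawn from any particular model.

```python
def demand(price_stock, price_hedge, budget=10_000.0, view=55.0):
    """Toy demand: quantities of a stock and a hedging security demanded
    at a given pair of prices. Purely illustrative form and numbers."""
    # Willingness to hold the stock rises as it trades below the agent's view...
    stock_qty = max(0.0, (view - price_stock) * 10)
    # ...but the hedge's price feeds back: if hedging is expensive, the agent
    # takes less of the stock position the hedge is meant to cover.
    affordable_hedge = budget / price_hedge
    stock_qty = min(stock_qty, affordable_hedge * 2)  # at most 2 shares per affordable hedge unit
    hedge_qty = stock_qty / 2                         # hedge half the position
    return stock_qty, hedge_qty

# Demand for the stock depends on the hedge's price, not just the stock's own price:
print(demand(price_stock=50.0, price_hedge=5.0))    # (50.0, 25.0)
print(demand(price_stock=50.0, price_hedge=500.0))  # (40.0, 20.0)
```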

There are at least two problems with big combinatorial auctions, at least of the sort a naive but brilliant mathematician might construct. One is that they are computationally very intensive; the complexity is exponential in the number of items being "auctioned", and with thousands of listed securities, we really don't have the computing power to pull this off. The other is that the agents being called upon to submit "demand curves" would themselves face a problem that is way beyond their own ken; they would be asked to specify some possibly complex rule for converting systems of prices into systems of quantities. Even to the extent that people have an intuitive sense of what they think they would like to buy, they tend to be bad at working with more than the simplest of hypotheticals, and it's hard to imagine a language that would allow them to express what they want to the central computer easily but flexibly.
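
For a rough sense of that exponential blow-up: a fully general package auction would, in effect, have to entertain bids on every nonempty bundle of items, and with n items there are 2^n - 1 such bundles. A two-line calculation (purely illustrative) makes the point.

```python
# Number of nonempty bundles of n items is 2**n - 1.
for n in (10, 50, 3000):  # 3000 is a stand-in for "thousands of listed securities"
    print(n, 2**n - 1)
# 10   -> 1023
# 50   -> 1125899906842623
# 3000 -> a number with roughly 904 digits; nobody enumerates that
```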

Modern combinatorial auctions address both of these problems, possibly giving up some of the pure perfection that might in principle be available from the impossible straightforward approach in exchange for a practical iterative process that, based on substantial evidence, works very well. One of the most popular modern combinatorial auctions is the combinatorial clock auction, sometimes (as at the link) with a finalizing stage, which presents each agent with a much more tractable problem at each step (a single set of hypothetical prices, basically, rather than all possible sets of hypothetical prices), and leaves the computer with a much more tractable problem at each step as well. Details aside, even though the overall algorithm is designed explicitly to handle the fact that agents' values for combinations of items will not simply be sums of their values for each item separately, the iterative process sort of looks as though it's treating each item separately; it's the dynamic nature of the procedure that ultimately takes account of the "complementarity".
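
To show the flavor of that iterative, price-then-demand dynamic, here is a minimal sketch of a clock phase under strong simplifying assumptions: each bidder, shown a single vector of posted prices, reports the one bundle it demands at those prices, and the price of any over-demanded item rises by a fixed increment. Activity rules, the supplementary (finalizing) round, and the actual pricing rules of real combinatorial clock auctions are all omitted; this is not the spectrum-auction design, just its skeleton.

```python
from collections import Counter

def clock_phase(bidders, supply, start_prices, increment=1.0, max_rounds=1000):
    """bidders: list of functions mapping {item: price} -> set of demanded items.
    supply: {item: units available}. Returns final prices and last-round demands."""
    prices = dict(start_prices)
    for _ in range(max_rounds):
        demands = [b(prices) for b in bidders]            # one tractable question per bidder
        demanded = Counter(item for d in demands for item in d)
        over = [i for i in supply if demanded[i] > supply[i]]
        if not over:                                       # no excess demand: clocks stop
            return prices, demands
        for i in over:                                     # raise prices only where over-demanded
            prices[i] += increment
    return prices, demands

# Two toy bidders; for bidder 1 the items are complements (wanted together or not at all):
b1 = lambda p: {"A", "B"} if p["A"] + p["B"] <= 10 else set()
b2 = lambda p: {"A"} if p["A"] <= 7 else set()
prices, demands = clock_phase([b1, b2], supply={"A": 1, "B": 1},
                              start_prices={"A": 1.0, "B": 1.0})
print(prices, demands)  # prices rise on A until bidder 2 drops out; bidder 1 keeps the package
```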

Practical combinatorial auctions usually go fewer than 100 rounds (I think), but I have seen results in the matching literature in which an attempt to decentralize the "matching" procedure can require a huge number of rounds, typically on the order of the square of the number of agents. What these models have in common with the situations in which combinatorial auctions have found use is that the preferences, while they (in principle) exist in a high-dimensional space, occupy a low-dimensional manifold in that space; they are reasonably tightly constrained, even if the constraint isn't "independent values". It's easy for me to imagine that, in a system with thousands of items and truly complex relationships among them, one might require billions or trillions of iterations to get good convergence to a value near "equilibrium", and that, short of that, doubling the number of iterations might improve the result by a substantial amount.
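
As a back-of-the-envelope way of framing that worry (the time scales below are assumptions for illustration, not measurements of any actual market), here is how many "rounds" a 6.5-hour trading day supplies if each price-update interval is treated as one round of the implicit auction.

```python
# Rough iteration counts for a 6.5-hour trading day, treating each
# price-update interval as one "round" of the implicit auction.
seconds_per_day = 6.5 * 60 * 60  # 23,400 seconds of regular trading

for label, interval_seconds in [("1 millisecond", 1e-3),
                                ("1 second (proposed batch)", 1.0),
                                ("1 minute", 60.0)]:
    rounds = seconds_per_day / interval_seconds
    print(f"{label}: {rounds:,.0f} rounds per day")

# 1 millisecond: 23,400,000 rounds per day
# 1 second (proposed batch): 23,400 rounds per day
# 1 minute: 390 rounds per day
```

If convergence really did demand billions of iterations, even millisecond-scale updating would fall short of that over a day; if it demands only tens of thousands, a one-second batch interval gives them up much more cheaply.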

If someone can convince me that the incompleteness of the financial markets is ameliorated by something like the current market structure, viewed as a combinatorial auction with many rounds per second, and that reducing the frequency of the "rounds" would significantly reduce the extent to which prices are "correct", then I would at the very least want to be careful about changing the rules.
