I've been a bit obsessed with neural networks in the past two months. "Neural networks" constitute a class of models in which one seeks to generate an output from inputs by way of several intermediate stages (which might be loosely thought of as analogous to "neurons"); each internal node takes some of the inputs and/or outputs of other internal nodes and produces an output between 0 and 1, which may then be fed on as input to other nodes or may become the output of the network. With an appropriately designed network, one can model very complicated functions, and if one has a lot of data that are believed to have some complex relationship, trying to tune a neural network to them is a reasonable approach to modeling them.
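Concretely, here's a minimal sketch of such a network; the logistic squashing function, the layer sizes, and the random initial weights are all illustrative choices rather than anything canonical:

```python
import math
import random

def sigmoid(x):
    # Squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs, weights, bias):
    # Each node emits a value between 0 and 1: a squashed weighted sum.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def forward(inputs, hidden_params, output_params):
    # Hidden nodes see the raw inputs; the output node sees the hidden
    # nodes' outputs, which is the "feeding on" described above.
    hidden = [node_output(inputs, w, b) for w, b in hidden_params]
    out_weights, out_bias = output_params
    return node_output(hidden, out_weights, out_bias)

random.seed(0)
# Two inputs, three hidden nodes, one output; random initial weights.
hidden_params = [([random.uniform(-1, 1), random.uniform(-1, 1)], 0.0)
                 for _ in range(3)]
output_params = ([random.uniform(-1, 1) for _ in range(3)], 0.0)

print(forward([0.5, -0.2], hidden_params, output_params))
```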
This "tuning" takes place, typically, by a learning process in which the data in the sample are taken, often one at a time (in which case one would typically iterate over the data set multiple times), with some candidate network weights (typically initially random) are used to calculate the modeled output from the given input; the given output (associated with the data point) is then compared to the modeled output. The network is designed such that it is relatively efficient to figure out how the modeled output would have differed for a slightly different value for each weight — one effectively has partial derivatives of the output with respect to each weight in the network — and a certain amount of crude nudging is done to the entire network, more gently if the modeled output was already close to the actual target output, more forcefully if it was farther away, but generically in the direction of making the network predict that point a bit better if it sees it again.
One of the problems with neural networks (and one I'm going to exacerbate rather than ameliorate or exploit) is that the resulting internal nodes typically have no intuitive meaning. You design your network with some flexibility, you go through the data set tuning the weights to the data, and you come back with perhaps a decent ability to predict out of sample; but if someone asks "why is it predicting 0.9 this time?" the answer is "well, that's what came out of the network," and it's typically hard to say much else. Such networks may still be useful, somewhat indirectly, as a tool for actually understanding your data-generating process, but even if they can perfectly model the sample data, they at best provide a descriptive dimensional reduction from which direct inference is essentially impossible.
Now, I've been interested in prediction markets for longer than in neural networks, though perhaps less intensively, especially in the recent past. Prediction markets are intended to aggregate different kinds of information into a sort of best consensus prediction, typically of the probability of an event. If different people have different kinds of information, then combinatorial markets are valuable: if I have very good reason to believe that the event in question will occur provided some second event occurs, and someone else has very good reason to believe that the second event will occur, then it may be that neither of us is willing to bet in the primary market; but if a second market is set up on the probability of the secondary event, the other guy bids up the probability in that market and I can bid up the primary market, hedging in the secondary market. (A true combinatorial market would work better than this, but this is the basic idea.) In principle, a large, perfectly functioning combinatorial market should be able to make pretty good predictions by aggregating all kinds of different information in the appropriate ways, such that the market itself makes predictions that elude any single participant. (cf. a test project for this sort of thing and some general background on that market in particular.) The more relevant "partial calculation" markets are available, the better the ability to aggregate to a particular event prediction is likely to be.
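To put toy numbers on that hedging story: suppose contracts pay 1 if their event occurs, and suppose (prices and beliefs here are made up for illustration) that I'm confident the primary event A occurs exactly when the secondary event B does. Buying A and selling B in equal size then locks in the price gap regardless of whether B actually happens:

```python
# Toy illustration of hedging a conditional belief across two markets.
# Contracts pay 1 if their event occurs, 0 otherwise; buying at price p
# risks p to win 1 - p. All numbers are hypothetical.

def pnl(n_long_A, n_short_B, price_A, price_B, A_occurs, B_occurs):
    pa = (1.0 if A_occurs else 0.0) - price_A   # P&L per long A contract
    pb = price_B - (1.0 if B_occurs else 0.0)   # P&L per short B contract
    return n_long_A * pa + n_short_B * pb

price_A, price_B = 0.40, 0.70   # hypothetical market prices

# My belief: A happens essentially if and only if B happens.
for B_occurs in (True, False):
    A_occurs = B_occurs
    unhedged = pnl(1, 0, price_A, price_B, A_occurs, B_occurs)
    hedged = pnl(1, 1, price_A, price_B, A_occurs, B_occurs)
    print(f"B occurs: {B_occurs}; unhedged: {unhedged:+.2f}; hedged: {hedged:+.2f}")

# The hedged position earns price_B - price_A = +0.30 either way, so my
# bet no longer depends on whether the secondary event itself occurs.
```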
There would be some details to work out, but it seems to me one might be able to create a neural network in which the node outputs are not fixed functions of the inputs per se, but are simply the results of betting markets. Participants would be paid a function of their bet, the node's output, and its inputs, as well as the final (aggregate network) realized ("target") result. If you make a bet on a particular node, the incentives should be similar to the "tuning" step in the neural network problem: you should be induced to push the output of that node in such a direction that the markets downstream from you are likely to lead to a better overall prediction.
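Purely to fix ideas, here's one toy way such a node market might work; the mean-of-bets price, the tiny two-node network, and the leave-one-out payment rule are placeholder assumptions rather than a worked-out mechanism. The point is just that each participant's payment depends on their bet, the node's output, and the final realized result, rewarding bets that pushed the node in a direction that improved the overall prediction:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def node_price(bets):
    # The node's "output" is just the market consensus: the mean bet.
    return sum(bets) / len(bets) if bets else 0.5

def network_output(node1_bets, node2_weight):
    # Downstream of the market node, a conventional node turns the
    # node-1 price into the network's final prediction.
    return sigmoid(node2_weight * (node_price(node1_bets) - 0.5))

def leave_one_out_payments(bets, node2_weight, target):
    # Pay each participant the reduction in final squared error that
    # their bet was responsible for: positive if their bet helped the
    # network's prediction, negative if it hurt.
    full_error = (network_output(bets, node2_weight) - target) ** 2
    payments = []
    for i in range(len(bets)):
        others = bets[:i] + bets[i + 1:]
        error_without = (network_output(others, node2_weight) - target) ** 2
        payments.append(error_without - full_error)
    return payments

bets = [0.9, 0.8, 0.3]   # three participants' bets at node 1
target = 1.0             # the realized network-level ("target") result
print(leave_one_out_payments(bets, node2_weight=4.0, target=target))
```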
It's quite possible there's actually no way to do this; I haven't worked it out in even approximate detail. It's also possible that it would work in principle, but that there would be no way to get intelligent participation, because of the conceptual problem with neural network nodes that I mentioned before. If it did work, in fact, I suspect it would do so by means of participants gradually attributing meanings to different nodes. If this could be made to work in something like the spirit in which neural networks are usually done, it would improve slightly on combinatorial markets in that
- The intermediate markets would be created endogenously by the market; that is, something the market designer didn't think of as a relevant partial calculation might end up being ascribed to a node because market participants had reason to believe it should be; and
- These intermediate markets may not need a clear (external) "fix"; that is, some events are hard to define, but as long as participants have a sense of what an event means, it doesn't have to be made precise.