Tuesday, July 12, 2016

risk and uncertainty

A century ago, the economist Frank Knight wrote a book, "Risk, Uncertainty and Profit", where by "risk" he meant what economists today generally alternate between calling "risk" and "uncertainty", and by "uncertainty" he meant something economists haven't given as much attention in the past seventy years, but have tended to call "ambiguity" when they do.[1]  The distinction is how well the relevant ignorance can be quantified; a coin toss is "risky" rather than "ambiguous" because we have pretty high confidence that the "right" probability is 50%, while the possibility of a civil war in a developed nation in the next ten years is perhaps better described as "ambiguous".  The Wikipedia page on the Ellsberg paradox gives the classic illustration of the distinction.  Weather in the next few days would have been "ambiguous" when Knight wrote, but was becoming risky, and is well quantified these days.

Perhaps one of the reasons the study of ambiguity fell out of favor, and has largely stayed there for more than half a century since,[2] is that a strong normative case for the assignment of probabilities to events was developed around World War II; in short, there is a set of appealing assumptions about how a person would behave that imply that they would act so as to maximize "expected utility", where "utility" is a real-valued function of the outcome of the individual's actions and "expected" means some kind of weighted average over possible outcomes.  In perhaps simpler terms, if a reasonably intelligent person who understands the theorem were shown a set of their own past decisions that were not all consistent with expected utility maximization, they would probably say, "Yeah, I must have made a mistake in one of those decisions," though it would probably still be a matter of taste as to which of the decisions was wrong.
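
As a minimal sketch of what that means in practice (the scenario names, probabilities, and utilities below are mine, purely for illustration), "maximizing expected utility" just says: weight the utility of each possible outcome by its probability and pick the action for which that weighted average is largest.

    # Toy expected-utility calculation; probabilities and utilities are
    # made-up illustrative numbers, not anything from the post.
    probs = {"good": 0.6, "bad": 0.4}
    utility = {
        "act":      {"good": 10.0, "bad": -8.0},
        "dont_act": {"good":  0.0, "bad":  0.0},
    }

    def expected_utility(action):
        # Probability-weighted average of the action's outcome utilities.
        return sum(p * utility[action][s] for s, p in probs.items())

    best = max(utility, key=expected_utility)
    print(best, {a: round(expected_utility(a), 2) for a in utility})
    # -> act {'act': 2.8, 'dont_act': 0.0}

The consistency requirement in the theorem is that a single set of probabilities and a single utility function have to rationalize all of a person's choices at once, not just each one separately.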

To be a bit more concrete, suppose an entrepreneur is deciding whether or not to build a factory.  The factory is likely to be profitable under some scenarios and unprofitable under others, and the entrepreneur will not know for sure which will obtain; if certain risks are likelier than some threshold, though, building the factory will have been a bad idea, and if they're less likely, then it will have been a good idea.  Whether or not the factory is built, then, implies at least a range of probabilities that the entrepreneur must impute to the risks; an entrepreneur making other decisions that are bad under every probability in that range is making a mistake somewhere, such that changing some set of the decisions would guarantee a better outcome, though which decision(s) should be changed may still be up for debate (or reasoned assessment).  The rejoinder, then, to the assertion that a probability can't be put on a particular event is that probabilities often are, at least implicitly, being put on unquantifiable events; it is certainly not necessarily the case that the best way to make those decisions is to start by trying to put probabilities on the risks, but it probably is worth trying to make sure that there is some probabilistic outlook that is consistent with the entire schedule of decisions, and, if there isn't, to consider which decisions are likely to be in error.[3]
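
To put hypothetical numbers on that (mine, not the post's): suppose building nets +10 if the risk doesn't materialize and -30 if it does, while not building nets 0 either way.  The decision to build then only makes sense if the entrepreneur's probability of the bad outcome is below a break-even threshold, which is the sense in which the decision implies a range of probabilities.

    # Hypothetical payoffs: building nets +10 in the good scenario and -30 in
    # the bad one; not building nets 0 either way.
    gain, loss = 10.0, 30.0

    # Building has positive expected value only if (1 - p) * gain - p * loss > 0,
    # i.e. only if p < gain / (gain + loss).
    break_even_p = gain / (gain + loss)

    def build_is_good(p_bad):
        return (1 - p_bad) * gain - p_bad * loss > 0

    print(break_even_p)          # 0.25 with these numbers
    print(build_is_good(0.10))   # True
    print(build_is_good(0.40))   # False

If the same entrepreneur takes some other action that only pays off when that probability is above 0.25, the two decisions can't both be right under any single probability, which is the "mistake somewhere" in the paragraph above.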

There is a class of situations, though, in which something that resembles "ambiguity aversion" makes a lot of sense, and that is being asked to (in some sense) quote a price for a good in the face of adverse selection.  If, half an hour after a horse race, you remark to someone "the favored horse probably won," and she says, "You want to bet?", then, no, you don't.  In general, I should suppose that other people have some information that I don't, and if I expect that they have a lot of information that I don't, then my assessment of the value of an item or the probability of an event may be very different if I condition on some of their information than if I don't; if I set a price at which I'm willing to sell, and can figure out from the fact that someone is willing to buy at that price that I shouldn't have sold at that price, I'm setting the price too low, even if it's higher than I initially think is "correct".
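
To sketch why in the starkest case (the uniform-value setup is my own illustration): suppose an item's value is, as far as I know, uniform between 0 and 100, but the potential buyer knows the realized value and only buys when it exceeds whatever price I quote.  Then the value conditional on a sale is always higher than my unconditional estimate, and quoting my unconditional estimate sells too cheaply.

    import random

    random.seed(0)

    # Illustrative assumption: value is uniform on [0, 100]; I only know the
    # distribution, the buyer knows the realized value and accepts any price
    # below it.
    def average_value_when_sold(price, trials=100_000):
        sold = [v for v in (random.uniform(0, 100) for _ in range(trials))
                if v > price]
        return sum(sold) / len(sold) if sold else None

    # Unconditionally the item is worth 50 on average, but conditional on a
    # buyer accepting a quote of 50 it's worth about 75.
    print(average_value_when_sold(50))   # roughly 75

In this extreme version, where the other party knows everything I know and more, there is no price at which I come out ahead by selling, which is the horse-race situation: once she offers the bet, you don't want it.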

In a lot of contexts in which people seem to be avoiding "ambiguity", this may well fit a model of a certain willingness to accept others' probability assessments; e.g. I'm not willing to bet at any price on a given proposition, because, conditional on others' assessments, my assessment is very close to theirs.


[1] There's a nonzero chance that I have his terms backward, but that nonzero chance is hard to quantify; in any case, the concepts here are what they are, and I'll try to keep my own terminology mostly consistent with itself.

[2] I'm pretty sure Pesendorfer and/or Gul, both of whom I believe are or were at Princeton, have done some work since the turn of the millennium attempting to model "ambiguity aversion", and I should probably read Stauber (2014), "A framework for robustness to ambiguity of higher-order beliefs," International Journal of Game Theory, 43(3): 525-550.  This isn't quite my field.

[3] In certain institutional settings, certain seemingly unquantifiable events may be very narrowly pinned down; I mostly have in mind events that are tied to options markets.  If a company has access to options markets on a particular event, it is likely that there is a probability above which not buying (more) options is a mistake, and another below which not selling (more) options is a mistake, and those probabilities may well be very close to each other.  If you think you should build a factory, and the options-implied probability suggests you shouldn't, buying the options instead might strictly dominate building the factory; if you think you shouldn't and the market thinks you should, your best plan might be to build the factory and sell the options.
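
A hypothetical version of the dominance argument (all numbers mine): suppose the factory costs 10 and is worth 25 if the relevant event goes well (net +15) and 2 if it doesn't (net -8), while a binary option paying 1 in the good case trades at 0.30, an implied probability of 30%.  If you think the good case is likely enough to justify building, buying 25 of the options instead does better in both scenarios.

    # Hypothetical numbers only.
    factory_net = {"good": 25 - 10, "bad": 2 - 10}     # +15 / -8

    option_price = 0.30          # market-implied probability of "good"
    n_options = 25
    options_net = {"good": n_options * (1 - option_price),   # +17.5
                   "bad": -n_options * option_price}          # -7.5

    # The options position beats building in both scenarios, regardless of
    # which probability you personally put on "good".
    for scenario in factory_net:
        assert options_net[scenario] > factory_net[scenario]
    print(options_net, factory_net)

The mirror-image case, where you're pessimistic and the market is optimistic, is the build-and-sell-the-options plan at the end of this footnote.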
