Thinking about basics again, and it seems like a framework for the basics of costly signalling, one slightly more general than the average textbook version, might be of some value.
The basics are that an agent has some piece of information that it would like to credibly communicate, and has available a set of possible actions, some of which would directly lower its payoff, and especially so if the piece of information were false. As long as my gain from being believed exceeds the cost when my message is true, but is less than the cost when my message is false, I can credibly and profitably use those actions to communicate my information, so that other agents will behave in a way that helps me recoup my signalling cost.[1]
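To make that condition concrete, here's a minimal sketch in Python; the function name and the numbers are my own illustrative assumptions, not part of any standard model:

    # Minimal sketch of the costly-signalling credibility condition:
    # the signal works when the gain from being believed exceeds the
    # cost for a truthful sender but falls short of it for a liar.

    def signal_is_credible_and_profitable(gain_if_believed: float,
                                          cost_if_true: float,
                                          cost_if_false: float) -> bool:
        """True when a truthful sender profits from signalling but a
        liar would not, so observers can take the signal at face value."""
        truthful_sender_profits = gain_if_believed > cost_if_true
        liar_would_lose = gain_if_believed < cost_if_false
        return truthful_sender_profits and liar_would_lose

    # Example: being believed is worth 10; signalling costs 4 if the
    # claim is true but 15 if it is false.
    print(signal_is_credible_and_profitable(10, 4, 15))  # True
    print(signal_is_credible_and_profitable(10, 4, 8))   # False: lying is cheap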
There are a variety of things I might like to incorporate into this, and what I'm particularly contemplating right now is something mechanism-designish: if a designer can change the set of actions available and/or their costs, which such changes will improve welfare? The most interesting observation, one that requires a moment's thought but not a deep analysis, is that while reducing the costs of signalling seems like a good idea, everything falls apart if it becomes cheap to signal the information when it's false; at least, unless the reduction in cost fully compensates you for losing the ability to communicate credibly. The clearest beneficial case, then, would be one in which you can make signalling cheaper when the information is true, but without reducing the cost of sending a false signal.
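Continuing the toy numbers from the sketch above (still entirely made up), the designer's trade-off looks like this:

    # Sketch of the mechanism-design point: lowering signalling costs
    # helps only if the signal stays credible, i.e. lying stays expensive.

    def credible(gain: float, cost_true: float, cost_false: float) -> bool:
        return cost_true < gain < cost_false

    gain = 10.0

    # Status quo: credible, but costly for the truthful sender.
    print(credible(gain, cost_true=6.0, cost_false=15.0))  # True

    # Intervention A: cheaper to signal only when the claim is true.
    # Credibility survives and the truthful sender keeps more surplus.
    print(credible(gain, cost_true=2.0, cost_false=15.0))  # True

    # Intervention B: cheaper to signal across the board.
    # Lying is now profitable too, so the signal stops being believed.
    print(credible(gain, cost_true=2.0, cost_false=8.0))   # False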
I might want the information to be continuous, or at least richer than binary. In that case, you're likely to get "partially pooling equilibria": if the agent wants it to be believed that a parameter is large, the agent behaves with some randomness, with some overlap in behavior between situations in which the parameter is small and those in which it's in-between. Observers therefore make a higher guess for the value of the parameter when they see a "higher value" sort of signal, but don't put full confidence in it. The mechanism designer then is likely to face a trade-off in which a lower cost of signalling in general makes the signals less informative, creating a knock-on inefficiency that has to be weighed against the direct cost saving.
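Here's a toy illustration of that partial pooling; the prior and the signalling rule are entirely made up, just to show how the observer's estimate moves up without reaching certainty:

    # Toy partial-pooling illustration: three types, and the middle type
    # randomizes, so the "high" signal raises the observer's estimate of
    # the parameter without pinning it down.

    # Made-up uniform prior over the parameter's value.
    prior = {0.0: 1/3, 1.0: 1/3, 2.0: 1/3}

    # Made-up signalling rule: probability each type sends the high signal.
    prob_high_signal = {0.0: 0.0, 1.0: 0.5, 2.0: 1.0}

    def posterior_mean_given_high_signal() -> float:
        """Bayesian posterior mean of the parameter after a high signal."""
        joint = {theta: p * prob_high_signal[theta] for theta, p in prior.items()}
        total = sum(joint.values())
        return sum(theta * w for theta, w in joint.items()) / total

    # Prints 5/3 ~ 1.67: higher than the prior mean of 1.0, but short of
    # the 2.0 an observer would infer from a fully separating signal.
    print(posterior_mean_given_high_signal())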
[1] You could also have the cost of signalling be the same regardless of truth, but the benefit of being believed be higher when the message is true; either way, the sign of the net benefit should be positive if the message is true and negative if it is false.
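The same check, rearranged for that variant (illustrative numbers again):

    # Footnote variant: one signalling cost, but being believed is worth
    # more when the claim is true; net benefit must flip sign with truth.

    def credible_via_benefits(gain_if_true: float, gain_if_false: float,
                              cost: float) -> bool:
        return gain_if_true - cost > 0 > gain_if_false - cost

    print(credible_via_benefits(gain_if_true=12, gain_if_false=5, cost=8))  # True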