Source: Conflict vs. mistake in non-zero-sum games | LessWrong 2.0, by Nisan
Summary: Whether you behave like a mistake theorist or a conflict theorist may depend more on your negotiating position in a non-zero-sum game than on your worldview.
Plot the payoffs in a non-zero-sum two-player game, and you’ll get a set with the Pareto frontier on the top and right. You can describe any outcome in this set with two parameters: the surplus tells you how close the outcome is to the Pareto frontier, and the allocation tells you how much the outcome favors player 1 versus player 2.
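To make the picture concrete, here is a minimal Python sketch; the toy game's payoff numbers and the particular surplus/allocation formulas are my own illustrative choices, not anything from the post. It samples the feasible payoff set of a small game, picks out the Pareto frontier, and coordinatizes an outcome by surplus and allocation.

```python
# A minimal sketch of the payoff-set picture, using an invented 2x2 game.
import random

# Pure-outcome payoffs (player 1, player 2) for a toy non-zero-sum game.
PURE_OUTCOMES = [(3.0, 3.0), (0.0, 5.0), (5.0, 0.0), (1.0, 1.0)]

def random_outcome():
    """A random correlated strategy: a convex combination of the pure outcomes."""
    weights = [random.random() for _ in PURE_OUTCOMES]
    total = sum(weights)
    u1 = sum(w * p1 for w, (p1, _) in zip(weights, PURE_OUTCOMES)) / total
    u2 = sum(w * p2 for w, (_, p2) in zip(weights, PURE_OUTCOMES)) / total
    return u1, u2

def pareto_frontier(points):
    """Keep the points that no other point improves on in both coordinates."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)]

# One crude way to coordinatize an outcome in the spirit of the post:
# surplus ~ how far up-and-right you are, allocation ~ which player it favors.
def surplus(u1, u2):
    return u1 + u2

def allocation(u1, u2):
    return u1 - u2   # positive favors player 1, negative favors player 2

if __name__ == "__main__":
    cloud = [random_outcome() for _ in range(500)]
    frontier = pareto_frontier(cloud)
    best = max(frontier, key=lambda p: surplus(*p))
    print(f"max surplus on sampled frontier: {surplus(*best):.2f}, "
          f"allocation there: {allocation(*best):+.2f}")
```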
…
It’s tempting to decompose the game into two phases: A cooperative phase, where the players coordinate to maximize surplus; and a competitive phase, where the players negotiate how the surplus is allocated.
Of course, in the usual formulation, both phases occur simultaneously. But this suggests a couple of negotiation strategies where you try to make one phase happen before the other:
- “Let’s agree to maximize surplus. Once we agree to that, we can talk about allocation.”
- “Let’s agree on an allocation. Once we do that, we can talk about maximizing surplus.”
I’m going to provocatively call the first strategy mistake theory, and the second conflict theory.
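Here is a toy sketch of how the two orderings can land in different places. The feasible set, the "split the frontier at its midpoint" rule, and the "equal payoffs" allocation are all invented for illustration; the post itself doesn't commit to any particular bargaining rule.

```python
# Two orderings of the same negotiation, under heavy simplifying assumptions.

# Feasible payoff set: u1 >= 0, u2 >= 0, u1 + 2*u2 <= 10.
# Its Pareto frontier is the segment from (10, 0) to (0, 5).
FRONTIER_ENDPOINTS = ((10.0, 0.0), (0.0, 5.0))

def mistake_theory_outcome():
    """First agree to reach the frontier, then split it (here: at its midpoint)."""
    (a1, a2), (b1, b2) = FRONTIER_ENDPOINTS
    return ((a1 + b1) / 2, (a2 + b2) / 2)          # (5.0, 2.5)

def conflict_theory_outcome():
    """First agree on an allocation (here: equal payoffs), then maximize surplus."""
    # On the frontier u1 + 2*u2 = 10 with u1 == u2, so 3*u == 10.
    u = 10.0 / 3.0
    return (u, u)                                   # (3.33, 3.33)

if __name__ == "__main__":
    print("maximize surplus first, then split:", mistake_theory_outcome())
    print("fix the split first, then maximize:", conflict_theory_outcome())
```

Under these toy conventions, player 1 does better when surplus is maximized first and player 2 does better when the allocation is fixed first, which is the sense in which each ordering suits a particular negotiating position.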
…
Now I don’t have a good model of negotiation. But intuitively, it seems that mistake theory is a good strategy if you think you’ll be in a better negotiating position once you move to the Pareto frontier. And conflict theory is a good strategy if you think you’ll be in a worse negotiating position at the Pareto frontier.
If you’re naturally a mistake theorist, this might make conflict theory seem more appealing. Imagine negotiating with a paperclip maximizer over the fate of billions of lives. Mutual cooperation is Pareto efficient, but unappealing. It’s more sensible to threaten defection in order to save a few more human lives, if you can get away with it.
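For concreteness, here is one way the payoffs in that scenario might look. The numbers are invented; the only features that matter are that mutual cooperation is Pareto efficient while the split overwhelmingly favors the maximizer, and that mutual defection is the human's threat point.

```python
# An illustrative payoff matrix for the paperclip scenario; entries are
# (human payoff, maximizer payoff), with made-up numbers.
PAYOFFS = {
    ("C", "C"): (1.0, 10.0),   # efficient, but the split heavily favors the maximizer
    ("C", "D"): (0.0, 11.0),
    ("D", "C"): (3.0, 1.0),
    ("D", "D"): (0.5, 0.5),    # mutual defection: the human's threat point
}

def pareto_efficient(outcome):
    """True if no other cell makes one player better off without hurting the other."""
    u = PAYOFFS[outcome]
    return not any(v[0] >= u[0] and v[1] >= u[1] and v != u for v in PAYOFFS.values())

if __name__ == "__main__":
    print("(C, C) Pareto efficient?", pareto_efficient(("C", "C")))
    human_gain = PAYOFFS[("C", "C")][0] - PAYOFFS[("D", "D")][0]
    clip_gain = PAYOFFS[("C", "C")][1] - PAYOFFS[("D", "D")][1]
    # The maximizer gains far more from cooperation than the human does, so a
    # credible threat to defect is the human's main lever over the allocation.
    print(f"gains over mutual defection: human {human_gain:.1f}, maximizer {clip_gain:.1f}")
```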
It also makes mistake theory seem unsavory: Apparently mistake theory is about postponing the allocation negotiation until you’re in a comfortable negotiating position. (Or, somewhat better: It’s about tricking the other players into cooperating before they can extract concessions from you.)
This is kind of unfair to mistake theory, which is supposed to be about educating decision-makers on efficient policies and building institutions to enable cooperation. None of that is present in this model.
But I think it describes something important about mistake theory, something that’s usually rounded off to “[mistake theorists have] become part of a class that’s more interested in protecting its own privileges than in helping the poor or working for the good of all”.