Conflict vs. mistake in non-zero-sum games | LessWrong 2.0

Source: Conflict vs. mistake in non-zero-sum games | LessWrong 2.0, by Nisan

Summary: Whether you behave like a mistake theorist or a conflict theorist may depend more on your negotiating position in a non-zero-sum game than on your worldview.

Plot the payoffs in a non-zero-sum two-player game, and you’ll get a set with the Pareto frontier on the top and right. You can describe this set with two parameters: The surplus is how close the outcome is to the Pareto frontier, and the allocation tells you how much the outcome favors player 1 versus player 2.
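The post doesn’t formalize these two parameters, but a minimal sketch makes them concrete. Below, assuming a finite set of payoff pairs, `pareto_frontier` returns the undominated outcomes, and `describe` parameterizes an outcome by a crude surplus proxy (total payoff, which tracks closeness to a linear frontier) and an allocation term (how the payoff is split). The function names and the specific parameterization are my own illustrative choices, not the author’s.

```python
def pareto_frontier(outcomes):
    """Return the outcomes not dominated by any other outcome.

    Assumes distinct payoff pairs; (a, b) dominates (c, d) when
    a >= c and b >= d.
    """
    return [
        p for p in outcomes
        if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in outcomes)
    ]

def describe(outcome):
    """One possible (surplus, allocation) parameterization of an outcome."""
    u1, u2 = outcome
    surplus = u1 + u2     # proxy for closeness to the Pareto frontier
    allocation = u1 - u2  # positive favors player 1, negative favors player 2
    return surplus, allocation

# Toy payoff set for a 2x2 game: (player 1 payoff, player 2 payoff).
outcomes = [(3, 3), (0, 5), (5, 0), (1, 1)]
print(pareto_frontier(outcomes))  # [(3, 3), (0, 5), (5, 0)]; (1, 1) is dominated
print(describe((3, 3)))           # (6, 0): high surplus, even allocation
print(describe((0, 5)))           # (5, -5): less surplus, favors player 2
```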

It’s tempting to decompose the game into two phases: A cooperative phase, where the players coordinate to maximize surplus; and a competitive phase, where the players negotiate how the surplus is allocated.

Of course, in the usual formulation, both phases occur simultaneously. But this suggests a couple of negotiation strategies where you try to make one phase happen before the other:

  1. “Let’s agree to maximize surplus. Once we agree to that, we can talk about allocation.”
  2. “Let’s agree on an allocation. Once we do that, we can talk about maximizing surplus.”

I’m going to provocatively call the first strategy mistake theory, and the second conflict theory.

Now I don’t have a good model of negotiation. But intuitively, it seems that mistake theory is a good strategy if you think you’ll be in a better negotiating position once you move to the Pareto frontier. And conflict theory is a good strategy if you think you’ll be in a worse negotiating position at the Pareto frontier.
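The post deliberately avoids committing to a model of negotiation, but one standard formalization, the Nash bargaining solution, illustrates the intuition: the allocation you receive depends on your threat point at the moment allocation is negotiated. A hedged sketch, assuming a linear frontier u1 + u2 = total (the closed form below is specific to that assumption and is not from the post):

```python
def nash_split(total, d1, d2):
    """Nash bargaining on the frontier u1 + u2 = total with threat point (d1, d2).

    Maximizing (u1 - d1) * (u2 - d2) subject to u1 + u2 = total gives
    u1 = (total + d1 - d2) / 2: each player gets their threat payoff
    plus half the remaining surplus.
    """
    u1 = (total + d1 - d2) / 2
    return u1, total - u1

print(nash_split(10, 0, 0))  # (5.0, 5.0): symmetric threat points, even split
print(nash_split(10, 4, 0))  # (7.0, 3.0): player 1's stronger threat point pays off
```

In this toy model, if reaching the frontier first would erode your threat point (say, because cooperating commits you and removes a credible threat to defect), you should prefer to settle allocation first, which is exactly the conflict-theory strategy above.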

If you’re naturally a mistake theorist, this might make conflict theory seem more appealing. Imagine negotiating with a paperclip maximizer over the fate of billions of lives. Mutual cooperation is Pareto efficient, but unappealing. It’s more sensible to threaten defection in order to save a few more human lives, if you can get away with it.

It also makes mistake theory seem unsavory: Apparently mistake theory is about postponing the allocation negotiation until you’re in a comfortable negotiating position. (Or, somewhat better: It’s about tricking the other players into cooperating before they can extract concessions from you.)

This is kind of unfair to mistake theory, which is supposed to be about educating decision-makers on efficient policies and building institutions to enable cooperation. None of that is present in this model.

But I think it describes something important about mistake theory which is usually rounded off to something like “[mistake theorists have] become part of a class that’s more interested in protecting its own privileges than in helping the poor or working for the good of all”.

It’s Not Enough to Be Right. You Also Have to Be Kind. | Forge (Medium)

Source: It’s Not Enough to Be Right. You Also Have to Be Kind. | Forge (Medium), by Ryan Holiday

the central conceit of a dangerous assumption we seem to have made as a culture these days: that being right is a license to be a total, unrepentant asshole.

I thought if I was just overwhelmingly right enough, I could get people to listen. … Yet, no amount of yelling or condescension or trolling is going to fix any of this. It never has and never will.

it’s so much easier to be certain and clever than it is to be nuanced and nice. … But putting yourself in their shoes, kindly nudging them to where they need to be, understanding that they have emotional and irrational beliefs just like you have emotional and irrational beliefs—that’s all much harder. So is not writing off other people.

Crony Beliefs | Melting Asphalt

Source: Crony Beliefs | Melting Asphalt, by Kevin Simler

One of my main goals for writing this essay has been to introduce two new concepts — merit beliefs and crony beliefs — that I hope make it easier to talk and reason about epistemic problems. … it’s important to remember that merit beliefs aren’t necessarily true, nor are crony beliefs necessarily false. What distinguishes the two concepts is how we’re rewarded for them: via effective actions or via social impressions.


I found Kevin’s introduction of his concepts of merit and crony beliefs to be interesting and potentially useful, and I recommend reading the rest of his post. However, my complaint is that his “Identifying Crony Beliefs” and “J’accuse” sections sometimes conflate the merit/crony distinction with the immediacy and severity of a belief’s potential consequences:

I disagree that “perhaps the biggest hallmark of epistemic cronyism is exhibiting strong emotions … These emotions have no business being within 1000ft of a meritocratic belief system”. For example, if I am in a vehicle with another person (as driver or passenger) approaching an intersection at speed, I probably have a strong opinion about whether or not my vehicle should be braking to stop at the intersection; I will also have strong emotions if the other person insists that I am wrong. Conversely, I may have no strong feelings about whether or not X, even though the only conceivable value of a belief about X would be social rather than practical, which would make it a crony belief.

Strong feelings are indicative of a belief’s high consequential value (positive or negative, social or practical), not of a belief’s social-ness.

Book Review: The Secret Of Our Success | Slate Star Codex

Source: Book Review: The Secret Of Our Success | Slate Star Codex, by Scott Alexander

RE: Tradition is Smarter Than You Are | The Scholar’s Stage (book review of The Secret Of Our Success), by Tanner Greer

RE: The Secret Of Our Success, by anthropologist Joseph Henrich

“Culture is the secret of humanity’s success” sounds like the most vapid possible thesis. The Secret Of Our Success by anthropologist Joseph Henrich manages to be an amazing book anyway.

Henrich wants to debunk (or at least clarify) a popular view where humans succeeded because of our raw intelligence. In this view, we are smart enough to invent neat tools that help us survive and adapt to unfamiliar environments.

Against such theories: we cannot actually do this. Henrich walks the reader through many stories about European explorers marooned in unfamiliar environments. These explorers usually starved to death. They starved to death in the middle of endless plenty. Some of them were in Arctic lands that the Inuit considered among their richest hunting grounds. Others were in jungles, surrounded by edible plants and animals. One particularly unfortunate group was in Alabama, and would have perished entirely if they hadn’t been captured and enslaved by local Indians first.

Hunting and gathering is actually really hard.

Rationalists always wonder: how come people aren’t more rational? How come you can prove a thousand times, using Facts and Logic, that something is stupid, and yet people will still keep doing it?

Henrich hints at an answer: for basically all of history, using reason would get you killed.

Humans evolved to transmit culture with high fidelity. And one of the biggest threats to transmitting culture with high fidelity was Reason. Our ancestors lived in Epistemic Hell, where they had to constantly rely on causally opaque processes with justifications that couldn’t possibly be true, and if they ever questioned them then they might die. Historically, Reason has been the villain of the human narrative, a corrosive force that tempts people away from adaptive behavior towards choices that “sounded good at the time”.

Why are people so bad at reasoning? For the same reason they’re so bad at letting poisonous spiders walk all over their face without freaking out. Both “skills” are really bad ideas, most of the people who tried them died in the process, so evolution removed those genes from the population, and successful cultures stigmatized them enough to give people an internalized fear of even trying.


More:
Epistemic Learned Helplessness | Slate Star Codex, by Scott Alexander
Asymmetric Weapons Gone Bad | Slate Star Codex, by Scott Alexander

The Sea Was Not a Mask — Real Life

Source: The Sea Was Not a Mask — Real Life, by Rob Horning

Does more “extreme” content compel the most compulsive viewing, or are we only concerned with compulsive viewing when the content has antisocial overtones? In other words, when YouTube fine-tunes its algorithms, is it trying to end compulsive viewing, or is it merely trying to make people compulsively watch nicer things? … The idea that YouTube shouldn’t force-feed users content at all is, of course, not considered.

The assumption built into YouTube (and Netflix and Spotify and TikTok and all the other streaming platforms that queue more content automatically) is that users want to consume flow, not particular items of content. Flow and not content secures an audience to broker to advertisers. … [The compulsivity of flow] is so pervasive as to almost seem inescapable — from “page-turners” to bingeable shows to endlessly refreshable scrolls to autoplaying music and autopopulating playlists. It is usually depicted as a selling point, a proof of quality — you can’t put it down! — but that shouldn’t disguise the fact that what’s being sold is surrender: Engage with this thing so you can stop worrying about what to engage with. That is flow. … Flow allows us to experience our agency without exactly exercising it. It blurs the lines between those things.

Flow, fundamentally, is a trap — as anthropologist Nick Seaver details, that means it is a “persuasive technology” that can condition prey “to play the role scripted for it in its design.” Traps work, he argues, by making coercion appear as persuasion: Animals aren’t forced into the trap; its design makes them choose it. Coercion and persuasion, then, can’t be cleanly distinguished. … We are neither forced to consume more nor choosing to consume more; we both want the particular units of content and are indifferent to them. We are both active agents and passive objects. … Flow works by disguising its compulsory mechanism in the details of its content, which is nothing more than bait from the system’s perspective.

[Are] certain kinds of content especially suited to this blurring? How do we become addicted to the spectacle of our consumption, as an emblem of our own singularity? Does it take particular kinds of content? Do certain kinds of antisocial content make that spectacle more potent and compulsive? Does pursuing information that other people reject or that seems hidden or secret intrinsically make the pursuer aware of their own agency, of their ability to redraw the epistemic frame?