Orthodox Privilege, by Paul Graham

There has been a lot of talk about privilege lately. Although the concept is overused, there is something to it, and in particular to the idea that privilege makes you blind — that you can’t see things that are visible to someone whose life is very different from yours.

But one of the most pervasive examples of this kind of blindness is one that I haven’t seen mentioned explicitly. I’m going to call it orthodox privilege: The more conventional-minded someone is, the more it seems to them that it’s safe for everyone to express their opinions.

It’s safe for them to express their opinions, because the source of their opinions is whatever it’s currently acceptable to believe. So it seems to them that it must be safe for everyone. They literally can’t imagine a true statement that would get them in trouble.

And yet at every point in history, there were true things that would get you in terrible trouble to say. Is ours the first where this isn’t so? What an amazing coincidence that would be.

It doesn’t seem to conventional-minded people that they’re conventional-minded. It just seems to them that they’re right. Indeed, they tend to be particularly sure of it.

If you believe there’s nothing true that you can’t say, then anyone who gets in trouble for something they say must deserve it.

How To Understand Things, by Nabeel Qureshi

What we call ‘intelligence’ is as much about virtues such as honesty, integrity, and bravery, as it is about ‘raw intellect.’

Intelligent people simply aren’t willing to accept answers that they don’t understand — no matter how many other people try to convince them of it, or how many other people believe it, if they aren’t able to convince themselves of it, they won’t accept it.

One component of it is energy: thinking hard takes effort, and it’s much easier to just stop at an answer that seems to make sense, than to pursue everything that you don’t quite get down an endless, and rapidly proliferating, series of rabbit holes. … But it’s not just energy. You have to be able to motivate yourself to spend large quantities of energy on a problem, which means on some level that not understanding something — or having a bug in your thinking — bothers you a lot. You have the drive, the will to know.

Related to this is honesty, or integrity: a sort of compulsive unwillingness, or inability, to lie to yourself.

Another quality I have noticed in very intelligent people is being unafraid to look stupid. … Most people are not willing to do this — looking stupid takes courage, and sometimes it’s easier to just let things slide.

The best thing I have read on really understanding things is the Sequences, especially the section on Noticing Confusion.

understanding is not a binary “yes/no”. It has layers of depth.

Conflict vs. mistake in non-zero-sum games | LessWrong 2.0

Source: Conflict vs. mistake in non-zero-sum games | LessWrong 2.0, by Nisan

Summary: Whether you behave like a mistake theorist or a conflict theorist may depend more on your negotiating position in a non-zero-sum game than on your worldview.

Plot the payoffs in a non-zero-sum two-player game, and you’ll get a set with the Pareto frontier on the top and right. You can describe this set with two parameters: The surplus is how close the outcome is to the Pareto frontier, and the allocation tells you how much the outcome favors player 1 versus player 2.
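The Pareto-frontier idea above can be made concrete with a small sketch. This is my own illustration, not code from the post: the payoff matrix is a hypothetical Prisoner's-Dilemma-shaped game, and the surplus/allocation parameterization at the end is one rough choice of coordinates (total payoff and payoff difference), since the post leaves them informal.

```python
# Hypothetical payoff matrix for a 2x2 non-zero-sum game:
# payoffs[(row, col)] = (payoff to player 1, payoff to player 2).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 4),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}

def dominates(a, b):
    """True if outcome a Pareto-dominates outcome b: at least as
    good for both players, and strictly better for at least one."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def pareto_frontier(payoffs):
    """The outcomes not Pareto-dominated by any other outcome."""
    return {
        cell: p for cell, p in payoffs.items()
        if not any(dominates(q, p) for q in payoffs.values())
    }

frontier = pareto_frontier(payoffs)
# (D, D) is dominated by (C, C), so the frontier is
# (C, C), (C, D), and (D, C).

# One rough parameterization (an assumption on my part): surplus as
# the total payoff, allocation as the difference in payoffs.
for cell, (u1, u2) in sorted(frontier.items()):
    print(cell, "surplus:", u1 + u2, "allocation:", u1 - u2)
```

Mutual cooperation here has the largest surplus with an even allocation, while the two asymmetric outcomes trade surplus for an allocation skewed toward one player.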

It’s tempting to decompose the game into two phases: A cooperative phase, where the players coordinate to maximize surplus; and a competitive phase, where the players negotiate how the surplus is allocated.

Of course, in the usual formulation, both phases occur simultaneously. But this suggests a couple of negotiation strategies where you try to make one phase happen before the other:

  1. “Let’s agree to maximize surplus. Once we agree to that, we can talk about allocation.”
  2. “Let’s agree on an allocation. Once we do that, we can talk about maximizing surplus.”

I’m going to provocatively call the first strategy mistake theory, and the second conflict theory.

Now I don’t have a good model of negotiation. But intuitively, it seems that mistake theory is a good strategy if you think you’ll be in a better negotiating position once you move to the Pareto frontier. And conflict theory is a good strategy if you think you’ll be in a worse negotiating position at the Pareto frontier.

If you’re naturally a mistake theorist, this might make conflict theory seem more appealing. Imagine negotiating with a paperclip maximizer over the fate of billions of lives. Mutual cooperation is Pareto efficient, but unappealing. It’s more sensible to threaten defection in order to save a few more human lives, if you can get away with it.

It also makes mistake theory seem unsavory: Apparently mistake theory is about postponing the allocation negotiation until you’re in a comfortable negotiating position. (Or, somewhat better: It’s about tricking the other players into cooperating before they can extract concessions from you.)

This is kind of unfair to mistake theory, which is supposed to be about educating decision-makers on efficient policies and building institutions to enable cooperation. None of that is present in this model.

But I think it describes something important about mistake theory which is usually rounded off to something like “[mistake theorists have] become part of a class that’s more interested in protecting its own privileges than in helping the poor or working for the good of all”.

It’s Not Enough to Be Right. You Also Have to Be Kind. | Forge (Medium)

the central conceit of a dangerous assumption we seem to have made as a culture these days: that being right is a license to be a total, unrepentant asshole.

I thought if I was just overwhelmingly right enough, I could get people to listen. … Yet, no amount of yelling or condescension or trolling is going to fix any of this. It never has and never will.

it’s so much easier to be certain and clever than it is to be nuanced and nice. … But putting yourself in their shoes, kindly nudging them to where they need to be, understanding that they have emotional and irrational beliefs just like you have emotional and irrational beliefs—that’s all much harder. So is not writing off other people.

Crony Beliefs | Melting Asphalt, by Kevin Simler

One of my main goals for writing this essay has been to introduce two new concepts — merit beliefs and crony beliefs — that I hope make it easier to talk and reason about epistemic problems. … it’s important to remember that merit beliefs aren’t necessarily true, nor are crony beliefs necessarily false. What distinguishes the two concepts is how we’re rewarded for them: via effective actions or via social impressions.


I found Kevin’s introduction of his concepts of merit and crony beliefs to be interesting and potentially useful, and I recommend reading the rest of his post. However, my complaint is that his “Identifying Crony Beliefs” and “J’accuse” sections sometimes confuse the merit/crony distinction with the immediacy and severity of a belief’s potential consequences:

I disagree that “perhaps the biggest hallmark of epistemic cronyism is exhibiting strong emotions … These emotions have no business being within 1000ft of a meritocratic belief system”. For example, if I am in a vehicle (as driver or passenger) approaching an intersection at speed, I probably have a strong opinion about whether or not the vehicle should be braking to stop at the intersection; I will also have strong emotions if the other person insists that I am wrong. Conversely, I may have no strong feelings about whether or not X, even though the only conceivable value of believing X would be social rather than practical — which, by his criterion, should make it a crony belief.

Strong feelings indicate that a belief has high consequential value (positive or negative, social or practical), not that the belief is social in nature.