Source: Inadequate Equilibria by Eliezer Yudkowsky
A new exploration of systemic failure and uncommon success.
If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lesser and greater competency. My view is that this is best done from a framework of incentives and the equilibria of those incentives—which is to say, from the standpoint of microeconomics.
I distinguish between the standard economic concept of efficiency (as in efficient pricing) and the related but distinct concepts of inexploitability and adequacy, which are what usually matter in real life.
…
an efficient market, from an economist’s perspective, is just one whose average price movement can’t be predicted by you. … it’s possible that the concept should be called “relative efficiency.” … Today’s markets may not be efficient relative to the smartest hedge fund managers, or efficient relative to corporate insiders with secret knowledge that hasn’t yet leaked. But the stock markets are efficient relative to you, and to me.
…So the distinction is:
- Efficiency: “Microsoft’s stock price is neither too low nor too high, relative to anything you can possibly know about Microsoft’s stock price.”
- Inexploitability: “Some houses and housing markets are overpriced, but you can’t make a profit by short-selling them, and you’re unlikely to find any substantially underpriced houses—the market as a whole isn’t rational, but it contains participants who have money and understand housing markets as well as you do.”
- Adequacy: “Okay, the medical sector is a wildly crazy place where different interventions have orders-of-magnitude differences in cost-effectiveness, but at least there’s no well-known but unused way to save ten thousand lives for just ten dollars each, right? Somebody would have picked up on it! Right?!”
One systemic problem can often be overcome by one altruist in the right place. Two systemic problems are another matter entirely.
Usually when we find trillion-dollar bills lying on the ground in real life, it’s a symptom of (1) a central-command bottleneck that nobody else is allowed to fix, as with the European Central Bank wrecking Europe, or (2) a system with enough moving parts that at least two parts are simultaneously broken, meaning that single actors cannot defy the system. To modify an old aphorism: usually, when things suck, it’s because they suck in a way that’s a Nash equilibrium.
In the same way that inefficient markets tend systematically to be inexploitable, grossly inadequate systems tend systematically to be unfixable by individual non-billionaires.
A critical analogy between an inadequate system and an efficient market is this: even systems that are horribly inadequate from our own perspective are still in a competitive equilibrium. There’s still an equilibrium of incentives, an equilibrium of supply and demand … There’s no free energy anywhere in the system.
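As a toy illustration of the “stuck in a Nash equilibrium” point (my own example, with invented payoffs, not the book’s): suppose two actors must each independently fix their own broken part of a system, fixing is costly, and fixing only one part yields no benefit. Then neither fixes alone, and the broken state is a Nash equilibrium even though both fixing would leave everyone better off.

```python
# Hypothetical toy game (not from the book): each actor chooses to "fix"
# their own broken part or "ignore" it. Fixing costs 2; the system only
# works (worth 10 to each) if BOTH parts get fixed.
#
#                  actor B: fix   actor B: ignore
# actor A: fix       (8, 8)          (-2, 0)
# actor A: ignore    (0, -2)         (0, 0)

PAYOFFS = {
    ("fix", "fix"): (8, 8),
    ("fix", "ignore"): (-2, 0),
    ("ignore", "fix"): (0, -2),
    ("ignore", "ignore"): (0, 0),
}
ACTIONS = ["fix", "ignore"]

def pure_nash_equilibria(payoffs, actions):
    """Return action profiles where neither actor gains by deviating alone."""
    equilibria = []
    for a in actions:
        for b in actions:
            u_a, u_b = payoffs[(a, b)]
            a_gains = any(payoffs[(a2, b)][0] > u_a for a2 in actions)
            b_gains = any(payoffs[(a, b2)][1] > u_b for b2 in actions)
            if not (a_gains or b_gains):
                equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(PAYOFFS, ACTIONS))
# -> [('fix', 'fix'), ('ignore', 'ignore')]
# The broken state ('ignore', 'ignore') is also an equilibrium: with the
# other part still broken, fixing your own part alone just costs you 2.
```

This is the “at least two parts are simultaneously broken” shape: no single actor can profitably defy the system, even though a coordinated move to the better equilibrium would pay everyone.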
Where reward doesn’t follow success, or where not everyone can individually pick up the reward, institutions and countries and whole civilizations can fail at what is usually imagined to be their tasks.
Suppose there is some space of strategies that you’re competent enough to think up and execute on. Inexploitability has a single unit attached, like “$”, and says that you can’t find a strategy in this space that knowably gets you much more of the resource in question than other agents in a large ecosystem of competing agents genuinely trying to get the resource in question, with access to strategies at least as good (for acquiring that resource) as the best options in your strategy space.
…
Inadequacy with respect to a strategy space has two units attached, like “QALYs / $,” and says that there is some set of strategies a large ecosystem of agents could pursue that would convert the denominator unit into the numerator unit at some desired rate, but the agents are pursuing strategies that in fact result in a lower conversion rate despite many of the agents in the ecosystem preferring that the conversion occur at the rate in question, because there’s some systemic blockage preventing this from happening.
In other words:
- Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;
- Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and
- Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.
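To make the units in the two definitions above concrete, here is a minimal sketch with made-up numbers (mine, not the book’s): inexploitability is a claim about a single quantity you can’t knowably grab more of than the competing agents, while inadequacy compares an achievable conversion rate against the rate the system actually delivers.

```python
# Hypothetical numbers for illustration only.

# Inexploitability: one unit ("$"). No strategy in my strategy space has a
# knowably better expected return than the market baseline, so there is no
# excess $ left for me to pick up.
market_baseline_return = 0.07
my_strategies = {"index_fund": 0.07, "hot_tip": 0.02, "day_trading": -0.01}
exploitable_by_me = any(r > market_baseline_return for r in my_strategies.values())
print("exploitable by me:", exploitable_by_me)  # False

# Inadequacy: two units ("QALYs / $"). Some achievable strategy would convert
# dollars into health at a better rate than the system actually delivers, but
# a systemic blockage keeps it from being adopted.
achievable_rate = 1 / 100      # 1 QALY per $100 with the blocked strategy
delivered_rate = 1 / 10_000    # 1 QALY per $10,000 as things actually run
print("adequacy gap (x):", achievable_rate / delivered_rate)  # 100.0
```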
another way of thinking about the central question of civilizational inadequacy is that we’re trying to assess the quantity of effort required to achieve a given level of outperformance. Not “Can it be done?” but “How much work?”
This brings me to the single most obvious notion that correct contrarians grasp, and that people who have vastly overestimated their own competence don’t realize: It takes far less work to identify the correct expert in a pre-existing dispute between experts, than to make an original contribution to any field that is remotely healthy. … in real life, inside a civilization that is often tremendously broken on a systemic level, finding a contrarian expert seeming to shine against an untrustworthy background is nowhere remotely near as difficult as becoming that expert yourself. It’s the difference between picking which of four runners is most likely to win a fifty-kilometer race, and winning a fifty-kilometer race yourself.
Going beyond picking the right horse in the race and becoming a horse yourself, inventing your own new personal solution to a civilizational problem, requires a much greater investment of effort.
…
reaching the true frontier requires picking your battles. To win, choose winnable battles; await the rare anomalous case of, “Oh wait, that could work.”
The thesis that needs to be contrasted with modesty is not the assertion that everyone can beat their civilization all the time. It’s not that we should be the sort of person who sees the world as mad and pursues the strategy of believing a hot stock tip and investing everything.
It’s just that it’s okay to reason about the particulars of where civilization might be inadequate … It’s okay to act on a model of what you think the rest of the world is good at, and for this model to be sensitive to the specifics of different cases.
Why might this not be okay? It could be that “acting on a model” is suspect, at least when it comes to complicated macrophenomena.
…
It really looks to me like the modest reactions to certain types of overconfidence or error are taken by many believers in modesty to mean, in practice, that theories just get you into trouble; that you can either make predictions or look at reality, but not both.
In situations that are drawn from a barrel of causally similar situations, where human optimism runs rampant and unforeseen troubles are common, the outside view beats the inside view. But in novel situations where causal mechanisms differ, the outside view fails—there may not be relevantly similar cases, or it may be ambiguous which similar-looking cases are the right ones to look at.
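A small illustration of the contrast, with invented numbers (not from the book): for a project drawn from a barrel of causally similar past projects, the empirical record of how those projects actually went tends to beat your bottom-up plan; for a genuinely novel project, there may be no relevant barrel to draw from.

```python
import statistics

# Invented data: how long "similar" past projects actually took, in weeks.
reference_class = [9, 14, 11, 20, 13, 17, 12, 25]

# Inside view: sum the steps of your own plan (optimism creeps in here).
my_plan_steps = {"design": 1, "build": 3, "test": 1, "ship": 1}
inside_view = sum(my_plan_steps.values())          # 6 weeks

# Outside view: ignore the plan's internals; use the reference class instead.
outside_view = statistics.median(reference_class)  # 13.5 weeks

print(f"inside view:  {inside_view} weeks")
print(f"outside view: {outside_view} weeks")
# The outside view wins when the reference class really is causally similar;
# when the causal mechanisms differ, there may be no relevant class to use.
```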
Intellectual progress on the whole has usually been the process of moving from surface-level resemblances to more technical understandings of particulars. Extreme examples of this are common in science and engineering: the deep causal models of the world that allowed humans to plot the trajectory of the first moon rocket before launch, for example, or that allow us to verify that a computer chip will work before it’s ever manufactured.
Developing accurate beliefs requires both observation of the data and the development of models and theories that can be tested by the data.
“If you never fail, you’re only trying things that are too easy and playing far below your level. If you can’t remember any time in the last six months when you failed, you aren’t trying to do difficult enough things.”
I call this “anxious underconfidence,” … many people’s emotional makeup is such that they experience what I would consider an excess fear—a fear disproportionate to the non-emotional consequences—of trying something and failing.
…
If you only try the things that are allowed for your “reference class,” you’re supposed to be safe—in a certain social sense. You may fail, but you can justify the attempt to others by noting that many others have succeeded on similar tasks. On the other hand, if you try something more ambitious, you could fail and have everyone think you were stupid to try.
Modesty and immodesty are bad heuristics because even where they’re correcting for a real problem, you’re liable to overcorrect.
Better, I think, to not worry quite so much about how lowly or impressive you are. Better to meditate on the details of what you can do, what there is to be done, and how one might do it.