Discontinuous progress in history: an update | LessWrong 2.0

Source: Discontinuous progress in history: an update | LessWrong 2.0, by Katja Grace

We recently finished expanding this investigation to 38 technological trends. This blog post is a quick update on our findings. See the main page on the research and its outgoing links for more details.

We found ten events in history that abruptly and clearly contributed more to progress on some technological metric than another century would have seen on the previous trend. Or as we say, we found ten events that produced ‘large’, ‘robust’ ‘discontinuities’.
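
To make the ‘more than a century of progress’ criterion concrete, here is a minimal sketch of the arithmetic (my own hypothetical illustration, not AI Impacts’ code or data), assuming a simple linear previous trend:

    # Toy illustration with made-up numbers: measure how far ahead of the
    # previous trend a new data point lands, in years of progress at the
    # previous rate. A jump of more than 100 years counts as 'large' here.

    def discontinuity_in_years(past_years, past_values, new_year, new_value):
        """Years of progress at the previous rate that the new point jumps ahead."""
        n = len(past_years)
        mean_t = sum(past_years) / n
        mean_v = sum(past_values) / n
        # Least-squares slope of the previous trend (units of progress per year).
        rate = (sum((t - mean_t) * (v - mean_v) for t, v in zip(past_years, past_values))
                / sum((t - mean_t) ** 2 for t in past_years))
        predicted = mean_v + rate * (new_year - mean_t)
        return (new_value - predicted) / rate

    # A metric creeping up by ~1 unit/year, then a sudden jump in 1931.
    years = [1900, 1910, 1920, 1930]
    values = [10, 20, 30, 40]
    print(discontinuity_in_years(years, values, 1931, 180))  # ~139 years: a large discontinuity

The actual analysis involves more careful trend-fitting and robustness checks; this only shows the shape of the ‘years of progress at the previous rate’ calculation.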

Here is a quick list of the robust 100-year discontinuous events, which I’ll describe in more detail beneath:

  • The Pyramid of Djoser, 2650 BC (discontinuity in structure height trends)
  • The SS Great Eastern, 1858 (discontinuity in ship size trends)
  • The first transatlantic telegraph, 1858 (discontinuity in speed of sending a 140 character message across the Atlantic Ocean)
  • The second transatlantic telegraph, 1866 (discontinuity in speed of sending a 140 character message across the Atlantic Ocean)
  • The Paris Gun, 1918 (discontinuity in altitude reached by man-made means)
  • The first non-stop transatlantic flight, in a modified WWI bomber, 1919 (discontinuity in both speed of passenger travel across the Atlantic Ocean and speed of military payload travel across the Atlantic Ocean)
  • The George Washington Bridge, 1931 (discontinuity in longest bridge span)
  • The first nuclear weapons, 1945 (discontinuity in relative effectiveness of explosives)
  • The first ICBM, 1958 (discontinuity in average speed of military payload crossing the Atlantic Ocean)
  • YBa2Cu3O7 as a superconductor, 1987 (discontinuity in warmest temperature of superconduction)


It looks like discontinuities are often associated with changes in the growth rate. At a glance, 15 of the 38 trends had a relatively sharp change in their rate of progress at least once in their history. These changes in the growth rate very often coincided with discontinuities—in fourteen of the fifteen trends, at least one sharp change coincided with one of the discontinuities. If this is a real relationship, it means that if you see a discontinuity, there is a much heightened chance of further fast progress coming up. This seems important, but is a quick observation and should probably be checked and investigated further if we wanted to rely on it.

Discontinuities were not randomly distributed: some classes of metric, some times, and some types of event seem to make them more likely or more numerous. We mostly haven’t investigated these in depth.

Conflict vs. mistake in non-zero-sum games | LessWrong 2.0

Source: Conflict vs. mistake in non-zero-sum games | LessWrong 2.0, by Nisan

Summary: Whether you behave like a mistake theorist or a conflict theorist may depend more on your negotiating position in a non-zero-sum game than on your worldview.

Plot the payoffs in a non-zero-sum two-player game, and you’ll get a set with the Pareto frontier on the top and right. You can describe any outcome in this set with two parameters: The surplus is how close the outcome is to the Pareto frontier, and the allocation tells you how much the outcome favors player 1 versus player 2.
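
As a toy illustration of that parameterization (my own example, not from the post), the sketch below takes a small hypothetical non-zero-sum game, finds its Pareto frontier, and summarizes each outcome with a stand-in ‘surplus’ (total payoff) and ‘allocation’ (player 1’s share); both stand-ins are assumptions made for illustration:

    # Hypothetical 2x2 non-zero-sum game: payoffs[(row_action, col_action)] = (u1, u2).
    payoffs = {
        ("cooperate", "cooperate"): (4, 4),
        ("cooperate", "defect"):    (1, 6),
        ("defect",    "cooperate"): (6, 1),
        ("defect",    "defect"):    (2, 2),
    }

    def pareto_frontier(outcomes):
        """Outcomes that no other outcome weakly dominates (the top-right edge of the set)."""
        def dominated(p, q):
            return q != p and q[0] >= p[0] and q[1] >= p[1]
        return [p for p in outcomes if not any(dominated(p, q) for q in outcomes)]

    frontier = pareto_frontier(list(payoffs.values()))

    for actions, (u1, u2) in payoffs.items():
        surplus = u1 + u2            # crude stand-in for closeness to the frontier
        allocation = u1 / (u1 + u2)  # how much the outcome favors player 1
        status = "on frontier" if (u1, u2) in frontier else "inside the set"
        print(actions, f"surplus={surplus}", f"allocation={allocation:.2f}", status)

In these terms, the first strategy below agrees to land somewhere on the frontier and then haggles over which frontier point; the second fixes the allocation (say, an equal split) and then pushes the surplus as high as possible under that constraint.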

It’s tempting to decompose the game into two phases: A cooperative phase, where the players coordinate to maximize surplus; and a competitive phase, where the players negotiate how the surplus is allocated.

Of course, in the usual formulation, both phases occur simultaneously. But this suggests a couple of negotiation strategies where you try to make one phase happen before the other:

  1. “Let’s agree to maximize surplus. Once we agree to that, we can talk about allocation.”
  2. “Let’s agree on an allocation. Once we do that, we can talk about maximizing surplus.”

I’m going to provocatively call the first strategy mistake theory, and the second conflict theory.

Now I don’t have a good model of negotiation. But intuitively, it seems that mistake theory is a good strategy if you think you’ll be in a better negotiating position once you move to the Pareto frontier. And conflict theory is a good strategy if you think you’ll be in a worse negotiating position at the Pareto frontier.

If you’re naturally a mistake theorist, this might make conflict theory seem more appealing. Imagine negotiating with a paperclip maximizer over the fate of billions of lives. Mutual cooperation is Pareto efficient, but unappealing. It’s more sensible to threaten defection in order to save a few more human lives, if you can get away with it.

It also makes mistake theory seem unsavory: Apparently mistake theory is about postponing the allocation negotiation until you’re in a comfortable negotiating position. (Or, somewhat better: It’s about tricking the other players into cooperating before they can extract concessions from you.)

This is kind of unfair to mistake theory, which is supposed to be about educating decision-makers on efficient policies and building institutions to enable cooperation. None of that is present in this model.

But I think it describes something important about mistake theory which is usually rounded off to something like “[mistake theorists have] become part of a class that’s more interested in protecting its own privileges than in helping the poor or working for the good of all”.

The Framing of the Developer | svese

Source: The Framing of the Developer | svese, by Stephan Schmidt

“In the social sciences, framing comprises a set of concepts and theoretical perspectives on how individuals, groups, and societies, organize, perceive, and communicate about reality.” Wikipedia

Framing is used, intentionally and unintentionally, in discussions and environments. A frame shapes the discussion and makes some things thinkable and others unthinkable. Frames come with ‘friends’: associated ideas that travel with them.

We have a dominant frame in development: software development as a “backlog”. Features are put into the backlog by a product manager – the product owner – and by different departments. According to the Cambridge Dictionary, a backlog is “a large number of things that you should have done before and must do now”. The word makes you feel you are always behind on finishing things. The frame says: Finishing means success. If we work through the backlog, we’ll have success. If we complete all the things I, as a product owner, have in my vision, we will have success.

Companies today need a frame of impact. In this worldview, success is defined by impact. Do product and technology teams develop products and features that have impact? Impact means impact for the company and impact for the customers. For the company, the feature or product moves the needle. For customers, it changes their behavior.

The impact frame helps us focus on the important things. If success is defined by throughput and finishing a backlog, then the more things you do, the more successful you are – which leads to many features being developed that are not important to customers. Backlog success is input-driven product development: it focuses on the input. Impact development is outcome-driven: it focuses on the outcome. … This means we need to do things that have impact and no longer focus on things that have none. … Failure is when we don’t have impact. In this frame, it becomes crucial to choose things that have impact and not to work on things that don’t. It is key to throw away most of the ideas you have and stick only with the very few that will change your customers or the market. … Also throw out ideas that are already in development. You’ve put time, energy and money into something and learned that it will not have impact? Stop! Throw it out.

The illusion of certainty, by Rory Sutherland

Source: The illusion of certainty, by Rory Sutherland

at stake is the difference between deterministic and probabilistic improvement. If you engage engineers, you don’t know what you are going to get. You may be unlucky and get nothing. Or their solution may be so outlandish that it is hard to compare with other competing solutions. On average, though, what you get will be more valuable than the gains produced by some tedious restructuring enshrined in a fat PowerPoint deck.

But in business, let alone in government, it is only in crises that people find a budget for probabilistic interventions of this kind (in peacetime, nobody would have given Barnes Wallis the time of day). The reason is that both bureaucrats and business people are heavily attracted to the illusion of certainty. Standard cost-cutting ‘efficiencies’ can usually be ‘proven’ to work in advance; more interesting lines of enquiry come with career-threatening unknowability.

One problem with this pretense of certainty is that cost-savings are more easily quantified than potential gains

for a long time, the ratio between ‘explore’ and ‘exploit’ has been badly out of whack.

use [an] ‘evidence-based’ data-model up to a point, but correct for the fact that it is incomplete, temporary and weighted to the past. Institutionalised humans obtain a false sense of certainty by assuming … that what is optimal in a one-off transaction in a certain present is also optimal at scale, in an uncertain, long-term future.