Is State Protection a Threat to Liberal Democracy? | Quillette

Source: Is State Protection a Threat to Liberal Democracy? | Quillette, by Ross Stitt

If the development of liberal democracy over the centuries has been a story of citizens making demands on the state—for personal safety, freedom, political power, welfare—what does our new age of insecurity mean for the next chapter? What will citizens want more of from their governments going forward? The obvious answer is protection—protection from terrorists, pandemics, extreme climatic events, economic hardship, and war. And today’s high-maintenance citizens, products of a culture of market consumerism, will not be backward in demanding that protection.

In order to provide physical and economic protection to its citizens, a liberal democratic state must transgress core liberal tenets like privacy, freedom, and respect for private property. … Democratic politics is the process by which these trade-offs are negotiated. They have evolved over time.

The crucial question is whether the dramatic change in the nature and level of threats over the last 20 years, capped off by the shock of the dual COVID-19 crises, will trigger a revolutionary shift in those trade-offs and a transformation of the citizen/state relationship.

Why Tacit Knowledge is More Important Than Deliberate Practice | Commonplace Blog

Source: Why Tacit Knowledge is More Important Than Deliberate Practice | Commonplace Blog, by Cedric Chin

Tacit knowledge is knowledge that cannot be captured through words alone. … tacit knowledge instruction happens through things like imitation, emulation, and apprenticeship. You learn by copying what the master does, blindly, until you internalise the principles behind the actions.

If you are a knowledge worker, tacit knowledge is a lot more important to the development of your field of expertise than you might think.

I don’t mean to say that Hieu or the senior software engineer couldn’t explain their judgment, or that they couldn’t make explicit the principles they used to evaluate the tradeoffs between a dozen or so variables: they could. My point is that their explanations would not lead me to the same ability that they had.

Why is this the case? Well, take a look at the conversation again. When I pushed these people on their judgments, they would try to explain in terms of principles or heuristics. But the more I pushed, the more exceptions and caveats and potential gotchas I unearthed.

Could it — in principle — be possible to externalise tacit knowledge into a list of instructions? … The consensus answer to that question seems to be: “Yes, in principle it is possible to do so. In practice it is very difficult.” My take on this is that it is so difficult that we shouldn’t even bother.

Discontinuous progress in history: an update | LessWrong 2.0

Source: Discontinuous progress in history: an update | LessWrong 2.0, by Katja Grace

We recently finished expanding this investigation to 37 technological trends. This blog post is a quick update on our findings. See the main page on the research and its outgoing links for more details.

We found ten events in history that abruptly and clearly contributed more to progress on some technological metric than another century would have seen on the previous trend. Or, as we say, ten events that produced ‘large’, ‘robust’ ‘discontinuities’.

Here is a quick list of the robust 100-year discontinuous events, which I’ll describe in more detail beneath:

  • The Pyramid of Djoser, 2650 BC (discontinuity in structure height trends)
  • The SS Great Eastern, 1858 (discontinuity in ship size trends)
  • The first transatlantic telegraph, 1858 (discontinuity in speed of sending a 140-character message across the Atlantic Ocean)
  • The second transatlantic telegraph, 1866 (discontinuity in speed of sending a 140-character message across the Atlantic Ocean)
  • The Paris Gun, 1918 (discontinuity in altitude reached by man-made means)
  • The first non-stop transatlantic flight, in a modified WWI bomber, 1919 (discontinuity in both speed of passenger travel across the Atlantic Ocean and speed of military payload travel across the Atlantic Ocean)
  • The George Washington Bridge, 1931 (discontinuity in longest bridge span)
  • The first nuclear weapons, 1945 (discontinuity in relative effectiveness of explosives)
  • The first ICBM, 1958 (discontinuity in average speed of military payload crossing the Atlantic Ocean)
  • YBa2Cu3O7 as a superconductor, 1987 (discontinuity in warmest temperature of superconduction)
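
The 100-year threshold in this definition has a simple mechanical reading: project the previous trend forward and ask how many years of progress at the old rate a new data point packs in; anything over 100 years counts as large. Below is a minimal sketch of that calculation in Python, with made-up numbers and a plain linear fit; the study's actual methodology fits trends more carefully (often on a log scale), so treat the helper as an illustrative toy rather than its pipeline.

    # Toy sketch (not AI Impacts' code): size a jump in 'years of
    # progress at the previous rate', the unit used in the post.
    def discontinuity_years(history, new_year, new_value):
        """history: [(year, value), ...] on the previous trend, oldest
        first. Returns how far the new point exceeds the extrapolated
        trend, measured in years of progress at the old linear rate."""
        (y0, v0), (y1, v1) = history[0], history[-1]
        rate = (v1 - v0) / (y1 - y0)             # old progress per year
        predicted = v1 + rate * (new_year - y1)  # trend extrapolation
        return (new_value - predicted) / rate    # excess, in years

    # A metric creeping up by 2 units/year suddenly jumps far above trend.
    trend = [(1900, 10), (1950, 110)]
    print(discontinuity_years(trend, 1951, 400))  # 144.0 -> a 'large'
                                                  # (>100-year) discontinuity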


It looks like discontinuities are often associated with changes in the growth rate. At a glance, 15 of the 38 trends had a relatively sharp change in their rate of progress at least once in their history. These changes in the growth rate very often coincided with discontinuities: in 14 of the 15 trends, at least one sharp change coincided with one of the discontinuities. If this is a real relationship, it means that seeing a discontinuity substantially raises the chance of further fast progress to come. This seems important, but it is a quick observation and should be checked and investigated further before we rely on it.

Discontinuities were not randomly distributed: some classes of metric, some times, and some types of event seem to make them more likely or more numerous. We mostly haven’t investigated these in depth.

Conflict vs. mistake in non-zero-sum games | LessWrong 2.0

Source: Conflict vs. mistake in non-zero-sum games | LessWrong 2.0, by Nisan

Summary: Whether you behave like a mistake theorist or a conflict theorist may depend more on your negotiating position in a non-zero-sum game than on your worldview.

Plot the payoffs in a non-zero-sum two-player game, and you’ll get a set with the Pareto frontier on the top and right. You can describe any outcome in this set with two parameters: the surplus is how close the outcome is to the Pareto frontier, and the allocation tells you how much the outcome favors player 1 versus player 2.
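
As a concrete reading of that decomposition, here is a minimal sketch in Python (mine, not the post's; the post doesn't fix exact coordinates). It finds the Pareto frontier of a finite set of payoff pairs and uses the simplest parameterization consistent with the description: surplus as the sum of the payoffs and allocation as their difference.

    # Minimal sketch, not from the post: Pareto frontier of a finite set
    # of outcomes, plus one simple choice of the two parameters above.
    def pareto_frontier(outcomes):
        """Keep the outcomes (u1, u2) that no other outcome dominates,
        i.e. none is at least as good for both players and different."""
        def dominated(p, q):
            return q != p and q[0] >= p[0] and q[1] >= p[1]
        return [p for p in outcomes
                if not any(dominated(p, q) for q in outcomes)]

    outcomes = [(0, 0), (3, 1), (1, 3), (2, 2), (1, 1)]
    for u1, u2 in pareto_frontier(outcomes):  # keeps (3,1), (1,3), (2,2)
        surplus = u1 + u2      # how far toward the frontier the outcome is
        allocation = u1 - u2   # how much the outcome favors player 1
        print((u1, u2), surplus, allocation)

Everything strictly inside the frontier leaves surplus on the table; the two negotiation strategies below differ only in which of the two parameters gets pinned down first.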

It’s tempting to decompose the game into two phases: A cooperative phase, where the players coordinate to maximize surplus; and a competitive phase, where the players negotiate how the surplus is allocated.

Of course, in the usual formulation, both phases occur simultaneously. But this suggests a couple of negotiation strategies where you try to make one phase happen before the other:

  1. “Let’s agree to maximize surplus. Once we agree to that, we can talk about allocation.”
  2. “Let’s agree on an allocation. Once we do that, we can talk about maximizing surplus.”

I’m going to provocatively call the first strategy mistake theory, and the second conflict theory.

Now I don’t have a good model of negotiation. But intuitively, it seems that mistake theory is a good strategy if you think you’ll be in a better negotiating position once you move to the Pareto frontier. And conflict theory is a good strategy if you think you’ll be in a worse negotiating position at the Pareto frontier.

If you’re naturally a mistake theorist, this might make conflict theory seem more appealing. Imagine negotiating with a paperclip maximizer over the fate of billions of lives. Mutual cooperation is Pareto efficient, but unappealing. It’s more sensible to threaten defection in order to save a few more human lives, if you can get away with it.

It also makes mistake theory seem unsavory: Apparently mistake theory is about postponing the allocation negotiation until you’re in a comfortable negotiating position. (Or, somewhat better: It’s about tricking the other players into cooperating before they can extract concessions from you.)

This is kind of unfair to mistake theory, which is supposed to be about educating decision-makers on efficient policies and building institutions to enable cooperation. None of that is present in this model.

But I think it describes something important about mistake theory which is usually rounded off to something like “[mistake theorists have] become part of a class that’s more interested in protecting its own privileges than in helping the poor or working for the good of all”.

The Framing of the Developer | svese

Source: The Framing of the Developer | svese, by Stephan Schmidt

“In the social sciences, framing comprises a set of concepts and theoretical perspectives on how individuals, groups, and societies organize, perceive, and communicate about reality.” Wikipedia

Framing is used, intentionally and unintentionally, in discussions and environments. A frame shapes the discussion, making some things thinkable and others unthinkable. Frames have friends that go with them.

We have a dominant frame in development: software development as a “backlog”. Features are put into a backlog by a product manager – the product owner – and by different departments. According to the Cambridge Dictionary, a backlog is “a large number of things that you should have done before and must do now”. The word backlog makes you think you are always behind on finishing things. The frame says: Finishing means success. If we work from the backlog, we’ll have success. If we complete all the things that I, as a product owner, have in my vision, we will have success.

Companies today need a frame of impact. In this world view, success is defined by impact. Do product and technology teams develop products and features that have impact? Impact means impact for the company and impact for the customers. For the company, the feature or product moves the needle. For customers, it changes their behavior.

The impact frame helps to focus on the important things. If success is defined by throughput and finishing a backlog, then the more things you do, the more successful you are – which leads to many features being developed that are not important to customers. Backlog success is input-driven product development: it focuses on the input. Impact development is outcome-driven: it focuses on the outcome. … This means we need to do things that have impact and no longer focus on things that have none. … Failure is when we don’t have impact. In this frame it becomes crucial to choose things that have impact and not to work on things that do not. It is key to throw away most of the ideas you have and stick with only the very few that will change your customer or the market. … Also throw out ideas that are already in development. You’ve put time, energy, and money into something and learned that it will not have impact? Stop! Throw it out.