Our enemies are human: that’s why we want to kill them

Source: Our enemies are human: that’s why we want to kill them | Aeon Ideas

the failure to recognise someone’s humanity predicts indifference toward their welfare, not an active desire and delight in bringing about their suffering. To understand the active desire to cause pain and suffering in another person, we have to look to a counterintuitive source: human morality.

dehumanisation allows us to commit instrumental violence, wherein people do not desire to harm victims, but knowingly harm them anyway in order to achieve some other objective (imagine shooting a stranger in order to steal his wallet). However, dehumanisation does not cause us to commit moral violence, where people actively desire to harm victims who deserve it (imagine shooting your cheating spouse). We find that moral violence emerges only when perpetrators see victims as capable of thinking, experiencing sensations and having moral emotions. In other words, when perpetrators perceive their victims as human.

How Orwell used wartime rationing to argue for global justice

Source: How Orwell used wartime rationing to argue for global justice | Aeon Ideas

At the level of the planet as a whole, Londoners and New Yorkers and Sydneysiders who proclaim ‘We are the 99 per cent’ are in fact much more likely to belong if not to the 1 per cent, then certainly to the top 10 per cent. … As the economist Branko Milanovic has been insisting for decades, inequality within nations, bad as it is, pales in comparison with inequality between nations. Yet even those of us who find global inequality troubling and ultimately indefensible hesitate to raise the subject. … George Orwell did … Orwell recognised that, at a global scale, underpaid and downtrodden English workers were exploiters.

His job was to mobilise support for Britain’s anti-Nazi war effort, and to get that support from the victims of British colonialism. … he talked about rationing: in particular, about the popularity of rationing among the English. … it seems most likely that he did so because he knew it was something India needed to hear. There could be no anti-fascist solidarity unless the exploited Indians could believe that a more just distribution of the world’s resources was possible

The impossibility of intelligence explosion – François Chollet – Medium

Source: The impossibility of intelligence explosion – François Chollet – Medium

We are, after all, on a planet that is literally packed with intelligent systems (including us) and self-improving systems, so we can simply observe them and learn from them to answer the questions at hand

recognize that intelligence is necessarily part of a broader system … A brain is just a piece of biological tissue, there is nothing intrinsically intelligent about it. Beyond your brain, your body and senses — your sensorimotor affordances — are a fundamental part of your mind. Your environment is a fundamental part of your mind. Human culture is a fundamental part of your mind. These are, after all, where all of your thoughts come from. You cannot dissociate intelligence from the context in which it expresses itself.

Why would the real-world utility of raw cognitive ability stall past a certain threshold? This points to a very intuitive fact: that high attainment requires sufficient cognitive ability, but that the current bottleneck to problem-solving, to expressed intelligence, is not latent cognitive ability itself. The bottleneck is our circumstances.

“I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.”

– Stephen Jay Gould

our biological brains are just a small part of our whole intelligence. Cognitive prosthetics surround us, plugging into our brain and extending its problem-solving capabilities. Your smartphone. Your laptop. Google search. The cognitive tools you were gifted in school. Books. Other people. Mathematical notation. Programming. … These things are not merely knowledge to be fed to the brain and used by it, they are literally external cognitive processes, non-biological ways to run threads of thought and problem-solving algorithms — across time, space, and importantly, across individuality.

When a scientist makes a breakthrough, the thought processes they are running in their brain are just a small part of the equation … they are only able to succeed because they are standing on the shoulders of giants — their own work is but one last subroutine in a problem-solving process that spans decades and thousands of individuals.

It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. … In this case, you may ask, isn’t civilization itself the runaway self-improving brain? Is our civilizational intelligence exploding? No.

even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it … Exponential progress, meet exponential friction.

science, as a problem-solving system, is very close to being a runaway superhuman AI. Science is, of course, a recursively self-improving system, because scientific progress results in the development of tools that empower science … Yet, modern scientific progress is measurably linear. … What bottlenecks and adversarial counter-reactions are slowing down recursive self-improvement in science? So many, I can’t even count them. …

  • Doing science in a given field gets exponentially harder over time …
  • Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger. …
  • As scientific knowledge expands, the time and effort that have to be invested in education and training grow, and the field of inquiry of individual researchers gets increasingly narrow.

In practice, system bottlenecks, diminishing returns, and adversarial reactions end up squashing recursive self-improvement in all of the recursive processes that surround us. Self-improvement does indeed lead to progress, but that progress tends to be linear, or at best, sigmoidal.
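The "exponential progress, meet exponential friction" claim can be sketched numerically. This is a toy model of my own, not Chollet's: capability grows in proportion to itself (recursive self-improvement), but a friction term grows with capability too, so the trajectory that starts out exponential flattens into a sigmoid (here, a logistic curve with an assumed growth rate and ceiling).

```python
def simulate(steps=60, r=0.3, ceiling=100.0):
    """Logistic growth: each step, dC = r * C * (1 - C/ceiling).

    The (1 - C/ceiling) factor stands in for bottlenecks and
    adversarial counter-reactions that strengthen as C grows.
    """
    c = 1.0
    history = [c]
    for _ in range(steps):
        c += r * c * (1 - c / ceiling)
        history.append(c)
    return history

trajectory = simulate()
# Early on, growth looks exponential; later, each step adds less
# as the friction term catches up with the self-improvement term.
early_ratio = trajectory[5] / trajectory[4]
late_ratio = trajectory[-1] / trajectory[-2]
print(early_ratio > late_ratio)  # the growth rate shrinks over time
```

The specific rate and ceiling are arbitrary; the shape is the point — self-improvement plus proportional friction yields sigmoidal, not explosive, progress.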

LARGE x RARE == DIFFERENT: Why scaling companies is harder than it looks

Source: LARGE x RARE == DIFFERENT: Why scaling companies is harder than it looks

The insight is that scale causes rare events to become common.
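The arithmetic behind that insight is simple to sketch (the probabilities and request counts below are my own illustrative numbers, not the article's):

```python
def expected_failures(p_failure: float, requests: int) -> float:
    """Expected count of a rare event across independent requests."""
    return p_failure * requests

def prob_at_least_one(p_failure: float, requests: int) -> float:
    """P(at least one occurrence) = 1 - (1 - p)^n."""
    return 1 - (1 - p_failure) ** requests

p = 1e-6  # a one-in-a-million event per request

# Small service: ~0.01 expected per day — "that never happens".
print(expected_failures(p, 10_000))

# At scale: ~500 expected per day — a routine operational fact.
print(expected_failures(p, 500_000_000))

# Even the small service sees it eventually: ~1% chance per day.
print(prob_at_least_one(p, 10_000))
```

Nothing about the event changed; only the number of trials did. That is why the rare-but-catastrophic cases dominate operations at scale.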

Sure, without automated monitoring we’d be blind, and without automated problem-solving we’d be overwhelmed. So yes, “automate everything.”

But some things you can’t automate. … You can’t “automate” the recruiting, training, rapport, culture, and downright caring of teams of human beings who are awake 24/7/365, with skills ranging from multi-tasking on support chat to communicating clearly and professionally over the phone to logging into servers and identifying and fixing issues as fast as (humanly?) possible.

And you can’t “automate” away the rare things, even the technical ones. By their nature they’re difficult to define, hence difficult to monitor, and difficult to repair without the forensic skills of a human engineer.

with high growth, the surprise appears quickly

Brittleness comes from “One Thing”

Source: Brittleness comes from “One Thing”, by Jason Cohen

If there is only one of something in a system, and the loss of that one thing would break the system, then that “one thing” is a source of brittleness for the system.

The obvious solution, although expensive, is to duplicate One Things in order to acquire robustness.
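A back-of-envelope sketch of why duplication works (assumed uptime numbers, and assuming the copies fail independently — which real systems often violate): the system is down only if every copy is down, so failure probabilities multiply while cost merely doubles.

```python
def system_availability(component_uptime: float, copies: int) -> float:
    """Uptime of N redundant copies of a component.

    Assumes independent failures: the system fails only when
    all copies are down simultaneously.
    """
    p_all_down = (1 - component_uptime) ** copies
    return 1 - p_all_down

print(system_availability(0.99, 1))  # ≈ 0.99     — a single One Thing
print(system_availability(0.99, 2))  # ≈ 0.9999   — one duplicate
print(system_availability(0.99, 3))  # ≈ 0.999999 — two duplicates
```

Each added copy buys roughly two more "nines", which is why duplication is the standard antidote to brittleness despite the expense.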

As you scale, the size of the “chunks” that create brittleness also scales.