Dude, you broke the future! – Charlie’s Diary

Source: Dude, you broke the future! – Charlie’s Diary, by Charlie Stross

This is the text of my keynote speech at the 34th Chaos Communication Congress in Leipzig, December 2017.

(You can also watch it on YouTube, but it runs to about 45 minutes.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we’re now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Old, slow AI … Corporations

The problem with corporations is that despite their overt goals—whether they make electric vehicles or beer or sell life insurance policies—they are all subject to instrumental convergence insofar as they all have a common implicit paperclip-maximizer goal: to generate revenue. If they don’t make money, they are eaten by a bigger predator or they go bust. Making money is an instrumental goal—it’s as vital to them as breathing is for us mammals, and without pursuing it they will fail to achieve their final goal, whatever it may be.

It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs. … Our major political parties are led by people who are compatible with the system as it exists—a system that has been shaped over decades by corporations distorting our government and regulatory environments. We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.

If we look at our historical very slow AIs, what lessons can we learn from them about modern AI—the flash flood of unprecedented deep learning and big data technologies that have overtaken us in the past decade?

plenty of technologies have, historically, been heavily regulated or even criminalized for good reason … Let me give you four examples—of new types of AI applications—that are going to warp our societies even worse than the old slow AIs of yore have done. This isn’t an exhaustive list: these are just examples. We need to work out a general strategy for getting on top of this sort of AI before they get on top of us.

Political hacking tools: social graph-directed propaganda … They identified individuals vulnerable to persuasion who lived in electorally sensitive districts, and canvassed them with propaganda that targeted their personal hot-button issues.
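
The targeting mechanism Stross describes is simple enough to sketch. Below is a toy illustration (every name, score, and field is invented; this is not any real firm's pipeline): rank users by persuadability weighted by how electorally sensitive their district is, then match each target to their own most salient issue.

```python
# Toy sketch of social-graph-directed propaganda targeting.
# All data below is invented for illustration only.

users = [
    # (user_id, district, persuadability 0-1, {issue: salience 0-1})
    ("u1", "D-01", 0.8, {"immigration": 0.9, "taxes": 0.2}),
    ("u2", "D-01", 0.3, {"healthcare": 0.7}),
    ("u3", "D-02", 0.9, {"guns": 0.6, "taxes": 0.8}),
]

# How close the last election was (0 = safe seat, 1 = knife-edge).
marginality = {"D-01": 0.95, "D-02": 0.10}

def target_score(user):
    _, district, persuadability, _ = user
    # Persuadable voters only matter where the result is in play.
    return persuadability * marginality[district]

# Rank: persuadable people in electorally sensitive districts first,
# each paired with their most salient hot-button issue.
for user in sorted(users, key=target_score, reverse=True):
    uid, district, _, issues = user
    hot_button = max(issues, key=issues.get)
    print(f"{uid} ({district}): push '{hot_button}' "
          f"(score {target_score(user):.2f})")
```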

the use of neural network generated false video media … This stuff is still geek-intensive and requires relatively expensive GPUs. But in less than a decade it’ll be out in the wild, and just about anyone will be able to fake up a realistic-looking video of someone they don’t like doing something horrible. … The smart money says that by 2027 you won’t be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the device that shot the raw feed—and you know how good most people are at using encryption? The dumb money is on total chaos.
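
What those cryptographic signatures might look like, in the simplest case, is per-frame signing with a key baked into the camera. A minimal sketch using the Python cryptography package and Ed25519 (the genuinely hard part, keeping the private key inside tamper-resistant hardware on the device, is waved away here):

```python
# Minimal sketch of device-level signing of a raw video feed.
# Assumes the private key lives in tamper-resistant hardware inside
# the camera; here it is just generated in memory for illustration.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()   # stand-in for a per-device key
public_key = device_key.public_key()        # published so anyone can verify

def sign_frame(frame_bytes: bytes) -> bytes:
    # Sign a hash of the raw frame; the signature travels with the video.
    digest = hashlib.sha256(frame_bytes).digest()
    return device_key.sign(digest)

def verify_frame(frame_bytes: bytes, signature: bytes) -> bool:
    digest = hashlib.sha256(frame_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

frame = b"\x00\x01raw sensor data..."
sig = sign_frame(frame)
print(verify_frame(frame, sig))                # True: untampered
print(verify_frame(frame + b"edited", sig))    # False: modified after capture
```

The obvious weak point: a valid signature proves the bytes left a particular device unmodified, not that the scene in front of the lens was real, so re-filming a screen defeats it.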

Thanks to deep learning, neuroscientists have mechanised the process of making apps more addictive. … true deep learning driven addictiveness maximizers can optimize for multiple attractors simultaneously. Now, Dopamine Labs seem, going by their public face, to have ethical qualms about the misuse of addiction maximizers in software. But neuroscience isn’t a secret, and sooner or later some really unscrupulous people will try to see how far they can push it.
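
"Optimizing for multiple attractors simultaneously" just means the objective being maximized is a blend of engagement signals rather than a single metric. A toy epsilon-greedy bandit (not deep learning, and with entirely invented signals, weights, and arms) shows the shape of the loop:

```python
# Toy "addictiveness maximizer": an epsilon-greedy bandit that learns
# which notification strategy maximizes a blended engagement reward.
# All numbers are invented; this illustrates the optimization loop only.
import random

ARMS = ["notify_morning", "notify_evening", "variable_reward", "streak_nudge"]
WEIGHTS = {"session_minutes": 0.5, "return_next_day": 0.3, "shares": 0.2}

def observe_engagement(arm: str) -> dict:
    # Stand-in for real telemetry: each arm has a hidden effect size.
    effect = {"notify_morning": 0.3, "notify_evening": 0.5,
              "variable_reward": 0.9, "streak_nudge": 0.7}[arm]
    return {k: random.random() * effect for k in WEIGHTS}

def blended_reward(signals: dict) -> float:
    # Multiple attractors collapsed into one scalar objective.
    return sum(WEIGHTS[k] * v for k, v in signals.items())

totals = {a: 0.0 for a in ARMS}
counts = {a: 1e-9 for a in ARMS}

for step in range(10_000):
    if random.random() < 0.1:                     # explore
        arm = random.choice(ARMS)
    else:                                         # exploit best estimate
        arm = max(ARMS, key=lambda a: totals[a] / counts[a])
    totals[arm] += blended_reward(observe_engagement(arm))
    counts[arm] += 1

# Converges on 'variable_reward', the arm with the largest hidden effect.
print(max(ARMS, key=lambda a: totals[a] / counts[a]))
```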

Unfortunately there are even nastier uses than scraping social media to find potential victims for serial rapists. Does your social media profile indicate your political or religious affiliation? Nope? Don’t worry, Cambridge Analytica can work them out with 99.9% precision just by scanning the tweets and Facebook comments you liked. Add a service that can identify people’s affiliation and location, and you have the beginning of a flash mob app: one that will show you people like Us and people like Them on a hyper-local map.
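
Inferring private traits from likes is a published technique (Kosinski, Stillwell and Graepel demonstrated it with Facebook likes in PNAS, 2013); the 99.9% figure is Stross's. A toy version with scikit-learn on synthetic data, to show the mechanism rather than any real-world accuracy:

```python
# Toy: predict a hidden binary affiliation from a user x liked-item matrix.
# Synthetic data; demonstrates the mechanism, not real-world accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_items = 2000, 300
affiliation = rng.integers(0, 2, n_users)        # hidden binary trait

# 40 "signal" items that one group likes noticeably more often.
signal = np.zeros(n_items)
signal[:40] = 0.25
p_like = 0.08 + affiliation[:, None] * signal    # base rate + group skew
likes = (rng.random((n_users, n_items)) < p_like).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, affiliation, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.2f}")  # high, from likes alone
```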

The High Frontier, Redux – Feasibility or Futility of Space Colonization

Source: The High Frontier, Redux – Charlie’s Diary, by Charlie Stross

This is not to say that interstellar travel is impossible; quite the contrary. But to do so effectively you need either (a) outrageous amounts of cheap energy, or (b) highly efficient robot probes, or (c) a magic wand.
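
"Outrageous amounts of cheap energy" is easy to make concrete. A back-of-the-envelope check, assuming (my figure, for illustration) a 2,000 kg probe at 10% of lightspeed:

```latex
E_k = (\gamma - 1)\,m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

At v = 0.1c, γ ≈ 1.005, so the Newtonian approximation is within about one percent:

```latex
E_k \approx \tfrac{1}{2} m v^2
    = \tfrac{1}{2}\,(2000\,\mathrm{kg})\,(3\times 10^{7}\,\mathrm{m\,s^{-1}})^2
    \approx 9\times 10^{17}\,\mathrm{J}
```

That is on the order of half a day of humanity's entire present-day primary energy use (roughly 18 TW), for a two-tonne payload, before paying the same bill again to decelerate at the far end.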

What about our own solar system? … Colonise the Gobi desert, colonise the North Atlantic in winter — then get back to me about the rest of the solar system!

Post-apocalyptic life in American health care

Source: Post-apocalyptic life in American health care, by David Chapman

There is, in fact, no system. There are systems, but mostly they don’t talk to each other. I have to do that.

The hospital doctor on rounds said “Well, this is typical, especially with Anthem. It’s costing them several thousand dollars a day to keep her here, versus a few hundred dollars a day in a SNF (skilled nursing facility), but it might take a week for them to figure out which local SNF they cover. Don’t worry, they’ll sort it out eventually.”

Hospitals can still operate modern material technologies (like an MRI) just fine. It’s social technologies that have broken down and reverted to a medieval level.

Systematic social relationships involve formally-defined roles and responsibilities. That is, “professionalism.” But across medical organizations, there are none. Who do you call at Anthem to find out if they’ll cover an out-of-state SNF stay? No one knows.

A central research topic in ethnomethodology is the relationship between formal rationality (such as an insurance company’s 1600 pages of unworkable rules) and “mere reasonableness,” which is what people mostly use to get a job done. The disjunction between electronic patient records and calling around town to try to find out who wrote a biopsy report that arrived by fax seems sufficiently extreme that it may produce a qualitatively new way of being.

Our enemies are human: that’s why we want to kill them

Source: Our enemies are human: that’s why we want to kill them | Aeon Ideas

the failure to recognise someone’s humanity predicts indifference toward their welfare, not an active desire and delight in bringing about their suffering. To understand the active desire to cause pain and suffering in another person, we have to look to a counterintuitive source: human morality.

dehumanisation allows us to commit instrumental violence, wherein people do not desire to harm victims, but knowingly harm them anyway in order to achieve some other objective (imagine shooting a stranger in order to steal his wallet). However, dehumanisation does not cause us to commit moral violence, where people actively desire to harm victims who deserve it (imagine shooting your cheating spouse). We find that moral violence emerges only when perpetrators see victims as capable of thinking, experiencing sensations and having moral emotions. In other words, when perpetrators perceive their victims as human.

How Orwell used wartime rationing to argue for global justice

Source: How Orwell used wartime rationing to argue for global justice | Aeon Ideas

At the level of the planet as a whole, Londoners and New Yorkers and Sydneysiders who proclaim ‘We are the 99 per cent’ are in fact much more likely to belong if not to the 1 per cent, then certainly to the top 10 per cent. … As the economist Branko Milanovic has been insisting for decades, inequality within nations, bad as it is, pales in comparison with inequality between nations. Yet even those of us who find global inequality troubling and ultimately indefensible hesitate to raise the subject. … George Orwell did … Orwell recognised that, at a global scale, underpaid and downtrodden English workers were exploiters.

His job was to mobilise support for Britain’s anti-Nazi war effort, and to get that support from the victims of British colonialism. … he talked about rationing: in particular, about the popularity of rationing among the English. … it seems most likely that he did so because he knew it was something India needed to hear. There could be no anti-fascist solidarity unless the exploited Indians could believe that a more just distribution of the world’s resources was possible.