Dude, you broke the future! – Charlie’s Diary

Source: Dude, you broke the future! – Charlie’s Diary, by Charlie Stross

This is the text of my keynote speech at the 34th Chaos Communication Congress in Leipzig, December 2017.

(You can also watch it on YouTube, but it runs to about 45 minutes.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we’re now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Old, slow AI … Corporations

The problem with corporations is that despite their overt goals—whether they make electric vehicles or beer or sell life insurance policies—they are all subject to instrumental convergence insofar as they all have a common implicit paperclip-maximizer goal: to generate revenue. If they don’t make money, they are eaten by a bigger predator or they go bust. Making money is an instrumental goal—it’s as vital to them as breathing is for us mammals, and without pursuing it they will fail to achieve their final goal, whatever it may be.

It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs. … Our major political parties are led by people who are compatible with the system as it exists—a system that has been shaped over decades by corporations distorting our government and regulatory environments. We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.

If we look at our historical very slow AIs, what lessons can we learn from them about modern AI—the flash flood of unprecedented deep learning and big data technologies that have overtaken us in the past decade?

plenty of technologies have, historically, been heavily regulated or even criminalized for good reason … Let me give you four examples—of new types of AI applications—that are going to warp our societies even worse than the old slow AIs of yore have done. This isn’t an exhaustive list: these are just examples. We need to work out a general strategy for getting on top of this sort of AI before they get on top of us.

Political hacking tools: social graph-directed propaganda … They identified individuals vulnerable to persuasion who lived in electorally sensitive districts, and canvassed them with propaganda that targeted their personal hot-button issues.

the use of neural network generated false video media … This stuff is still geek-intensive and requires relatively expensive GPUs. But in less than a decade it’ll be out in the wild, and just about anyone will be able to fake up a realistic-looking video of someone they don’t like doing something horrible. … The smart money says that by 2027 you won’t be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the device that shot the raw feed—and you know how good most people are at using encryption? The dumb money is on total chaos.
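The signing scheme gestured at here can be sketched in a few lines. This is a toy illustration, not any real camera vendor's protocol: the shared `DEVICE_KEY` and the HMAC stand in for an asymmetric key pair that real hardware would keep in a secure element, and the frame bytes are invented placeholders.

```python
import hashlib
import hmac

# Hypothetical device secret; a real camera would sign with a
# private key held in secure hardware, not a shared secret.
DEVICE_KEY = b"example-device-secret"

def sign_frames(frames):
    """Hash-chain each raw frame, then sign the final digest.

    Altering any frame changes every subsequent chained digest,
    so the signature over the final digest no longer verifies.
    """
    digest = b"\x00" * 32
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
    tag = hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()
    return digest, tag

def verify_frames(frames, tag):
    """Recompute the chain and check the signature in constant time."""
    _, expected = sign_frames(frames)
    return hmac.compare_digest(tag, expected)

raw = [b"frame-0", b"frame-1", b"frame-2"]
_, tag = sign_frames(raw)
print(verify_frames(raw, tag))                                 # True
print(verify_frames([b"frame-0", b"FAKED", b"frame-2"], tag))  # False
```

The point of the chain is that a forger who splices in a synthetic frame cannot produce a valid tag without the device's key, which is exactly the "linking it back to the device that shot the raw feed" property, whatever the usability problems of key management turn out to be.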

Thanks to deep learning, neuroscientists have mechanised the process of making apps more addictive. … true deep learning driven addictiveness maximizers can optimize for multiple attractors simultaneously. Now, Dopamine Labs seem, going by their public face, to have ethical qualms about the misuse of addiction maximizers in software. But neuroscience isn’t a secret, and sooner or later some really unscrupulous people will try to see how far they can push it.

Unfortunately there are even nastier uses than scraping social media to find potential victims for serial rapists. Does your social media profile indicate your political or religious affiliation? Nope? Don’t worry, Cambridge Analytica can work them out with 99.9% precision just by scanning the tweets and Facebook comments you liked. Add a service that can identify people’s affiliations and locations, and you have the beginning of a flash mob app: one that will show you people like Us and people like Them on a hyper-local map.
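The inference step behind this is unremarkable machine learning. A minimal sketch, assuming nothing about Cambridge Analytica's actual models: a naive Bayes classifier over liked pages, with an invented four-profile training set standing in for the millions of real profiles such a service would harvest.

```python
import math
from collections import Counter, defaultdict

# Toy training data: (set of liked pages, affiliation label).
# Entirely invented for illustration.
profiles = [
    ({"gun_club", "trucks"}, "red"),
    ({"gun_club", "church_group"}, "red"),
    ({"vegan_recipes", "climate_march"}, "blue"),
    ({"climate_march", "indie_films"}, "blue"),
]

def train(data):
    """Count labels and per-label page likes."""
    label_counts = Counter(label for _, label in data)
    like_counts = defaultdict(Counter)
    for likes, label in data:
        for page in likes:
            like_counts[label][page] += 1
    return label_counts, like_counts

def predict(likes, label_counts, like_counts):
    """Naive Bayes with add-one smoothing over liked pages."""
    total = sum(label_counts.values())
    vocab = {p for c in like_counts.values() for p in c}
    best, best_score = None, -math.inf
    for label, count in label_counts.items():
        score = math.log(count / total)
        denom = sum(like_counts[label].values()) + len(vocab)
        for page in likes:
            score += math.log((like_counts[label][page] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

model = train(profiles)
print(predict({"trucks", "church_group"}, *model))  # red
print(predict({"vegan_recipes"}, *model))           # blue
```

Nothing here is exotic: the user never stated an affiliation, yet a few dozen lines of counting recover it from likes alone, which is why "my profile doesn't say" offers no protection.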

The High Frontier, Redux – Feasibility or Futility of Space Colonization

Source: The High Frontier, Redux – Charlie’s Diary, by Charlie Stross

This is not to say that interstellar travel is impossible; quite the contrary. But to do so effectively you need either (a) outrageous amounts of cheap energy, or (b) highly efficient robot probes, or (c) a magic wand.

What about our own solar system? … Colonise the Gobi desert, colonise the North Atlantic in winter — then get back to me about the rest of the solar system!

What happens next: Explore the future of money, food, facts, home, and work — Quartz

Source: What happens next: Explore the future of money, food, facts, home, and work — Quartz

In our new series, What Happens Next, we talked to the people living the future to see what it might look like.

The future has a history. And the stories we tell about incoming change—the stories we’ve always told about such changes—fall into consistent patterns. The futurist Jim Dator gained some of his stature in future studies with his famous observation that predictions about the future—whether they’re coming from a corporate spreadsheet, a church pulpit or Hollywood—all boil down to roughly four scenarios. Growth that keeps going. Transformation upending the past. Collapse of the present order. And discipline imposed, in some cases, to hold such collapse at bay.

Understanding these patterns helps drive home the idea that the future is multiple. Living as if there’s only one way things are going to turn out isn’t terribly resilient when events take off in a shocking direction.

If our only images for the future are victory or doom, the underlying message for regular people seems to be, “There’s nothing you can do.”

We need more useful ways to consider and prepare for what happens next.

In Favor Of Futurism Being About The Future | Slate Star Codex

Source: In Favor Of Futurism Being About The Future | Slate Star Codex

The Singularity is already here, it’s just unevenly distributed across various scales of x-axis

This is what everyone in whatever school or quadrant of futurism you care to name is thinking about.

I don’t know whether the future will be better or worse than the past, but I feel pretty sure it will be grander. Either we will perish in nuclear apocalypse or manage to avert nuclear apocalypse; either one will be history’s greatest story. Either we will discover intelligent alien life or find ourselves alone in the universe; either way would be terrifying. Either we will suppress AI research with a ferocity that puts the Inquisition to shame, or we will turn into gods creating life in our own image; either way the future will be not quite human.