Reality has a surprising amount of detail – John Salvatier

Source: Reality has a surprising amount of detail, by John Salvatier

This turns out to explain why it’s so easy for people to end up intellectually stuck. Even when they’re literally the best in the world in their field. … At every step and every level there’s an abundance of detail with material consequences.

You can see this everywhere if you look. For example, you’ve probably had the experience of doing something for the first time, maybe growing vegetables or using a Haskell package, and being frustrated by how many annoying snags there were. Then you got more practice and told yourself ‘man, it was so simple all along, I don’t know why I had so much trouble’. We run into a fundamental property of the universe and mistake it for a personal failing.

You might think the fiddly detailiness of things is limited to human-centric domains, and that physics itself is simple and elegant. That’s true in some sense – the physical laws themselves tend to be quite simple – but the manifestation of those laws is often complex and counterintuitive.

This surprising amount of detail is not limited to “human” or “complicated” domains; it is a near-universal property of everything from space travel to sewing, to your internal experience of your own mind.

You might think ‘So what? I guess things are complicated but I can just notice the details as I run into them; no need to think specifically about this’. And if you are doing things that are relatively simple, things that humanity has been doing for a long time, this is often true. But if you’re trying to do difficult things, things which are not known to be possible, it is not true.

The more difficult your mission, the more details there will be that are critical to understand for success. You might hope that these surprising details are irrelevant to your mission, but not so. Some of them will end up being key.

You might also hope that the important details will be obvious when you run into them, but not so. Such details aren’t automatically visible, even when you’re directly running up against them. Things can just seem messy and noisy instead. … Another way to see that noticing the right details is hard is that different people end up noticing different details.

Before you’ve noticed important details they are, of course, basically invisible. It’s hard to put your attention on them because you don’t even know what you’re looking for. But after you see them they quickly become so integrated into your intuitive models of the world that they become essentially transparent. Do you remember the insights that were crucial in learning to ride a bike or drive? How about the details and insights you have that led you to be good at the things you’re good at?

This means it’s really easy to get stuck. Stuck in your current way of seeing and thinking about things. Frames are made out of the details that seem important to you. The important details you haven’t noticed are invisible to you, and the details you have noticed seem completely obvious and you see right through them. This all makes it difficult to imagine how you could be missing something important.

If you wish to not get stuck, seek to perceive what you have not yet perceived.

Dude, you broke the future! – Charlie’s Diary

Source: Dude, you broke the future! – Charlie’s Diary, by Charlie Stross

This is the text of my keynote speech at the 34th Chaos Communication Congress in Leipzig, December 2017.

(You can also watch it on YouTube, but it runs to about 45 minutes.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we’re now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Old, slow AI … Corporations

The problem with corporations is that despite their overt goals—whether they make electric vehicles or beer or sell life insurance policies—they are all subject to instrumental convergence insofar as they all have a common implicit paperclip-maximizer goal: to generate revenue. If they don’t make money, they are eaten by a bigger predator or they go bust. Making money is an instrumental goal—it’s as vital to them as breathing is for us mammals, and without pursuing it they will fail to achieve their final goal, whatever it may be.

It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs. … Our major political parties are led by people who are compatible with the system as it exists—a system that has been shaped over decades by corporations distorting our government and regulatory environments. We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.

If we look at our historical very slow AIs, what lessons can we learn from them about modern AI—the flash flood of unprecedented deep learning and big data technologies that have overtaken us in the past decade?

Plenty of technologies have, historically, been heavily regulated or even criminalized for good reason … Let me give you four examples—of new types of AI applications—that are going to warp our societies even worse than the old slow AIs of yore have done. This isn’t an exhaustive list: these are just examples. We need to work out a general strategy for getting on top of this sort of AI before they get on top of us.

Political hacking tools: social graph-directed propaganda … They identified individuals vulnerable to persuasion who lived in electorally sensitive districts, and canvassed them with propaganda that targeted their personal hot-button issues.

The use of neural-network-generated false video media … This stuff is still geek-intensive and requires relatively expensive GPUs. But in less than a decade it’ll be out in the wild, and just about anyone will be able to fake up a realistic-looking video of someone they don’t like doing something horrible. … The smart money says that by 2027 you won’t be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the device that shot the raw feed—and you know how good most people are at using encryption? The dumb money is on total chaos.
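To make the signature idea concrete, here is a minimal sketch (an editor's illustration, not Stross's design) using Python's `cryptography` package: a per-device key signs the raw bytes of a clip, and anyone holding the device's public key can detect later tampering. The key handling and the stand-in clip bytes are assumptions for the example.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera this key would live in tamper-resistant hardware,
# provisioned at manufacture; here we just generate one (illustrative only).
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

raw_feed = b"stand-in bytes for one raw video clip"  # hypothetical capture
signature = device_key.sign(raw_feed)  # shipped alongside the clip as provenance

# Anyone with the device's public key can check the clip was not altered.
try:
    device_pub.verify(signature, raw_feed)
    print("signature valid: clip matches what this device captured")
except InvalidSignature:
    print("signature invalid: clip was modified after capture")
```

Note the limits this sketch shares with the real proposal: a valid signature only ties bytes to a device key; it says nothing about whether the scene in front of the lens was staged.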

Thanks to deep learning, neuroscientists have mechanised the process of making apps more addictive. … true deep learning driven addictiveness maximizers can optimize for multiple attractors simultaneously. Now, Dopamine Labs seem, going by their public face, to have ethical qualms about the misuse of addiction maximizers in software. But neuroscience isn’t a secret, and sooner or later some really unscrupulous people will try to see how far they can push it.
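As a toy illustration of optimizing for multiple attractors simultaneously (an editor's sketch, not a description of Dopamine Labs or any real product), a simple bandit can fold several engagement metrics into one weighted reward and tune a notification strategy against all of them at once. The strategy names, metrics, and weights are invented.

```python
import random

ARMS = ["instant_ping", "delayed_ping", "streak_reminder"]      # hypothetical strategies
WEIGHTS = {"session_minutes": 0.5, "next_day_return": 0.3, "shares": 0.2}

totals = {arm: 0.0 for arm in ARMS}
pulls = {arm: 0 for arm in ARMS}

def observe_metrics(arm):
    """Stand-in for logging a real user's response to strategy `arm`."""
    base = {"instant_ping": 1.0, "delayed_ping": 1.2, "streak_reminder": 1.5}[arm]
    return {m: random.random() * base for m in WEIGHTS}

for _ in range(10_000):
    if random.random() < 0.1:        # explore a random strategy
        arm = random.choice(ARMS)
    else:                            # exploit the best average reward so far
        arm = max(ARMS, key=lambda a: totals[a] / pulls[a] if pulls[a] else 0.0)
    metrics = observe_metrics(arm)
    # Several "attractors" collapse into a single scalar objective.
    reward = sum(WEIGHTS[m] * metrics[m] for m in WEIGHTS)
    totals[arm] += reward
    pulls[arm] += 1

print("most engaging strategy:", max(ARMS, key=lambda a: totals[a] / pulls[a]))
```

The unsettling part is how little machinery this takes: swap the simulated metrics for real telemetry and the same loop optimizes live users.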

Unfortunately there are even nastier uses than scraping social media to find potential victims for serial rapists. Does your social media profile indicate your political or religious affiliation? Nope? Don’t worry, Cambridge Analytica can work them out with 99.9% precision just by scanning the tweets and Facebook comments you liked. Add a service that can identify people’s affiliation and location, and you have the beginning of a flash mob app: one that will show you people like Us and people like Them on a hyper-local map.
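For a sense of how affiliation can fall out of nothing but likes, here is a toy sketch in the spirit of published likes-based trait prediction. Everything in it (data, feature structure, accuracy) is invented for illustration; the 99.9% figure above is the source's claim, not something this sketch reproduces.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_items = 1_000, 50

# Hypothetical binary matrix: row = user, column = "liked this page/post?"
likes = rng.integers(0, 2, size=(n_users, n_items))
# Pretend a handful of pages correlate strongly with the hidden affiliation.
signal = likes[:, :5].sum(axis=1)
affiliation = (signal + rng.normal(0, 0.5, n_users) > 2.5).astype(int)

# A plain logistic regression recovers the latent trait from public likes.
model = LogisticRegression(max_iter=1_000).fit(likes[:800], affiliation[:800])
print("held-out accuracy:", model.score(likes[800:], affiliation[800:]))
```

The point is not the model, which is as basic as they come, but that the training labels and features are all freely observable.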

The High Frontier, Redux – Feasibility or Futility of Space Colonization

Source: The High Frontier, Redux – Charlie’s Diary, by Charlie Stross

This is not to say that interstellar travel is impossible; quite the contrary. But to do so effectively you need either (a) outrageous amounts of cheap energy, or (b) highly efficient robot probes, or (c) a magic wand.
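A rough non-relativistic estimate (an editor's back-of-the-envelope, not Stross's exact figures) shows why option (a) says “outrageous”: each kilogram cruising at a tenth of lightspeed carries

\[
E = \tfrac{1}{2} m v^2 = \tfrac{1}{2} \times 1\,\text{kg} \times (3 \times 10^{7}\,\text{m/s})^2 \approx 4.5 \times 10^{14}\,\text{J},
\]

roughly a hundred kilotons of TNT per kilogram of ship, energy that must be supplied on departure and shed again on arrival, before counting drive inefficiency or the mass of the propellant itself.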

What about our own solar system? … Colonise the Gobi desert, colonise the North Atlantic in winter — then get back to me about the rest of the solar system!

Post-apocalyptic life in American health care

Source: Post-apocalyptic life in American health care, by David Chapman

There is, in fact, no system. There are systems, but mostly they don’t talk to each other. I have to do that.

The hospital doctor on rounds said “Well, this is typical, especially with Anthem. It’s costing them several thousand dollars a day to keep her here, versus a few hundred dollars a day in a SNF [skilled nursing facility], but it might take a week for them to figure out which local SNF they cover. Don’t worry, they’ll sort it out eventually.”

Hospitals can still operate modern material technologies (like an MRI) just fine. It’s social technologies that have broken down and reverted to a medieval level.

Systematic social relationships involve formally-defined roles and responsibilities. That is, “professionalism.” But across medical organizations, there are none. Who do you call at Anthem to find out if they’ll cover an out-of-state SNF stay? No one knows.

A central research topic in ethnomethodology is the relationship between formal rationality (such as an insurance company’s 1600 pages of unworkable rules) and “mere reasonableness,” which is what people mostly use to get a job done. The disjunction between electronic patient records and calling around town to try to find out who wrote a biopsy report that arrived by fax seems sufficiently extreme that it may produce a qualitatively new way of being.