A Viral Game About Paperclips Teaches You to Be a World-Killing AI | WIRED

Source: A Viral Game About Paperclips Teaches You to Be a World-Killing AI | WIRED, by Adam Rogers

RE: Universal Paperclips, by Frank Lantz (director of the New York University Game Center), Everybody House Games

RE: Paperclip maximizer, described by Nick Bostrom

Paperclips is a simple clicker game that manages to turn you into an artificial intelligence run amok.

“The idea isn’t that a paperclip factory is likely to have the most advanced research AI in the world. The idea is to express the orthogonality thesis, which is that you can have arbitrarily great intelligence hooked up to any goal,” Yudkowsky says.

in a more literary sense, you play the AI because you must. Gaming, Lantz had realized, embodies the orthogonality thesis. When you enter a gameworld, you are a superintelligence aimed at a goal that is, by definition, kind of prosaic.

“When you play a game—really any game, but especially a game that is addictive and that you find yourself pulled into—it really does give you direct, first-hand experience of what it means to be fully compelled by an arbitrary goal,” Lantz says. Games don’t have a why, really. Why do you catch the ball? Why do you want to surround the king, or box in your opponent’s counters? What’s so great about Candyland that you have to get there first? Nothing. It’s just the rules.

Where The Falling Einstein Meets The Rising Mouse | Slate Star Codex

Source: Where The Falling Einstein Meets The Rising Mouse | Slate Star Codex

Eliezer Yudkowsky argues:

we naturally think there’s a pretty big intellectual difference between mice and chimps, and a pretty big intellectual difference between normal people and Einstein, and implicitly treat these as about equal in degree. But in any objective terms we choose – amount of evolutionary work it took to generate the difference, number of neurons, measurable difference in brain structure, performance on various tasks, etc – the gap between mice and chimps is immense, and the difference between an average Joe and Einstein trivial in comparison.

But Katja Grace takes a broader perspective and finds the opposite.
… So how can one reconcile the common-sense force of Eliezer’s argument with the empirical force of Katja’s contrary data?

How does this relate to our original concern – how fast we expect AI to progress?

There are no free lunches, but organic lunches are super expensive: Why the tradeoffs constraining human cognition do not limit artificial superintelligences | Hypermagical Ultraomnipotence

Source: There are no free lunches, but organic lunches are super expensive: Why the tradeoffs constraining human cognition do not limit artificial superintelligences | Hypermagical Ultraomnipotence, by Anni Leskela

In this post, I argue against the brand of AI risk skepticism that is based on what we know about organic, biologically evolved intelligence and its constraints, recently promoted by Kevin Kelly on Wired and expanded by Erik Hoel in his blog.

below, “cognition” usually just refers to the skillsets related to predicting and influencing our actual world

If value alignment fails, we don’t know how competent an inhuman AI needs to be to reach existentially threatening powers

the [intelligence] growth rate doesn’t need to be literally exponential to pose an existential risk – with or without intentional treachery, we will still not be able to comprehend what’s going on after a while of recursive improvement, and roughly linear or irregular growth could still get faster than what we can keep track of. And … the eventual results could look rather explosive

a superintelligence doesn’t need to do human-style thinking to be dangerous

There are eventual constraints for intelligences implemented in silicon too, but it seems to me that these are unlikely to apply before they’re way ahead of us, because the materials and especially the algorithms and directions of a developing superintelligence are intentionally chosen and optimized for useful cognition, not for replicating in the primordial soup and proliferating in the organic world with weird restrictions such as metabolism and pathogens and communities of similar brains you need to cooperate with to get anything done.

The limitations of deep learning

Source: The limitations of deep learning

In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them.
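One way to make this claim concrete is a small empirical probe, sketched below (my own illustration, not from the article; the model, the sorting task, and the value ranges are arbitrary choices). A real sorting algorithm is indifferent to the range of its inputs; a network fitted to sorted examples from a narrow range has no such guarantee once the inputs move outside it.

```python
# Hedged sketch: fit a small dense network to "sort" 5-element vectors drawn
# from [0, 1], then feed it vectors from a wider range and compare errors.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=(50_000, 5))
y_train = np.sort(x_train, axis=1)              # target: the input, sorted

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=10, batch_size=256, verbose=0)

def mean_abs_error(x):
    # error against what an actual sorting procedure would return
    return float(np.mean(np.abs(model.predict(x, verbose=0) - np.sort(x, axis=1))))

x_in  = rng.uniform(0.0, 1.0, size=(1000, 5))   # same range as the training data
x_out = rng.uniform(0.0, 10.0, size=(1000, 5))  # outside the training range

print("in-distribution error :", mean_abs_error(x_in))
print("out-of-range error    :", mean_abs_error(x_out))
```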

This is because a deep learning model is “just” a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models—for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task, or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex, or there may not be appropriate data available to learn it.
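To make the “chain of simple, continuous geometric transformations” framing tangible, here is a minimal numpy sketch (my own illustration; the random weights stand in for trained ones). Once trained, a plain feedforward model is nothing more than the fixed composition below.

```python
# Minimal sketch: a feedforward model as a composition of continuous
# geometric transformations of the input space.
import numpy as np

rng = np.random.default_rng(0)

def affine_then_relu(x, W, b):
    # one layer = one continuous geometric transformation:
    # an affine map (rotate/scale/shift) followed by a pointwise ReLU (fold)
    return np.maximum(0.0, x @ W + b)

# stand-ins for trained weights; in a real model these come from fitting X -> Y
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

x = rng.normal(size=(1, 4))          # a point on the input manifold X
h = affine_then_relu(x, W1, b1)      # first transformation
y = h @ W2 + b2                      # second (output) transformation: a point on Y

# The composition can bend, stretch, and fold the input space, but it cannot
# loop, branch on intermediate results, or allocate memory the way an ordinary
# program can, which is the article's point about its limits.
print(y)
```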