The impossibility of intelligence explosion – François Chollet – Medium

Source: The impossibility of intelligence explosion – François Chollet – Medium

We are, after all, on a planet that is literally packed with intelligent systems (including us) and self-improving systems, so we can simply observe them and learn from them to answer the questions at hand

recognize that intelligence is necessarily part of a broader system … A brain is just a piece of biological tissue, there is nothing intrinsically intelligent about it. Beyond your brain, your body and senses — your sensorimotor affordances — are a fundamental part of your mind. Your environment is a fundamental part of your mind. Human culture is a fundamental part of your mind. These are, after all, where all of your thoughts come from. You cannot dissociate intelligence from the context in which it expresses itself.

Why would the real-world utility of raw cognitive ability stall past a certain threshold? This points to a very intuitive fact: that high attainment requires sufficient cognitive ability, but that the current bottleneck to problem-solving, to expressed intelligence, is not latent cognitive ability itself. The bottleneck is our circumstances.

“I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.”

– Stephen Jay Gould

our biological brains are just a small part of our whole intelligence. Cognitive prosthetics surround us, plugging into our brain and extending its problem-solving capabilities. Your smartphone. Your laptop. Google search. The cognitive tools you were gifted in school. Books. Other people. Mathematical notation. Programming. … These things are not merely knowledge to be fed to the brain and used by it, they are literally external cognitive processes, non-biological ways to run threads of thought and problem-solving algorithms — across time, space, and importantly, across individuality.

When a scientist makes a breakthrough, the thought processes they are running in their brain are just a small part of the equation … they are only able to succeed because they are standing on the shoulders of giants — their own work is but one last subroutine in a problem-solving process that spans decades and thousands of individuals.

It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. … In this case, you may ask, isn’t civilization itself the runaway self-improving brain? Is our civilizational intelligence exploding? No.

even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it … Exponential progress, meet exponential friction.

science, as a problem-solving system, is very close to being a runaway superhuman AI. Science is, of course, a recursively self-improving system, because scientific progress results in the development of tools that empower science … Yet, modern scientific progress is measurably linear. … What bottlenecks and adversarial counter-reactions are slowing down recursive self-improvement in science? So many, I can’t even count them. …

  • Doing science in a given field gets exponentially harder over time …
  • Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger. …
  • As scientific knowledge expands, the time and effort that have to be invested in education and training grow, and the field of inquiry of individual researchers gets increasingly narrow.

In practice, system bottlenecks, diminishing returns, and adversarial reactions end up squashing recursive self-improvement in all of the recursive processes that surround us. Self-improvement does indeed lead to progress, but that progress tends to be linear, or at best, sigmoidal.
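
The "exponential progress, meet exponential friction" dynamic is easy to see in a toy model (mine, not Chollet's): if the rate of improvement scales with current capability but the friction it provokes scales with capability as well, the trajectory is logistic, fast at first and then flattening, rather than explosive. All parameter values below are arbitrary.

```python
# Toy simulation: recursive self-improvement damped by friction that
# grows with capability. Parameter values are arbitrary illustrations.

def simulate(steps=100, growth=0.10, friction=0.001):
    capability = 1.0
    history = []
    for _ in range(steps):
        gain = growth * capability          # self-improvement: more capability, faster gains
        drag = friction * capability ** 2   # bottlenecks and adversarial reactions scale too
        capability += gain - drag
        history.append(capability)
    return history

trajectory = simulate()
# Early steps look exponential; the curve then bends over and saturates
# near growth/friction (= 100 here): a sigmoid, not an explosion.
print(round(trajectory[9], 1), round(trajectory[49], 1), round(trajectory[-1], 1))
```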

OpenAI makes humanity less safe

Source: OpenAI makes humanity less safe | Compass Rose, by Benjamin R. Hoffman

If OpenAI is working on things that are directly relevant to the creation of a superintelligence, then its very existence makes an arms race with DeepMind more likely. This is really bad! Moreover, sharing results openly makes it easier for other institutions or individuals, who may care less about safety, to make progress on building a superintelligence.

Suppose OpenAI and DeepMind are largely not working on problems highly relevant to superintelligence. (Personally I consider this the more likely scenario.) By portraying short-run AI capacity work as a way to get to safe superintelligence, OpenAI’s existence diverts attention and resources from things actually focused on the problem of superintelligence value alignment, such as MIRI or FHI.

A state Supreme Court justice’s open letter to AI

Source: A state Supreme Court justice’s open letter to AI

Once we overcome some technical problems…we’re in for more than just a world of change and evolution. We’re in for some discussion of what it means to be human.

Consider a world of relatively sophisticated AI. Human cohesion will depend in no small part on how well society will fare when those who worship emerging AI share the planet with those who feel some AI applications making claims on us deserve recognition, those who feel this is essentially an animal-welfare issue, those who think any concern for the “welfare” of an inanimate object is insane, and those who could care less.

There’s No Fire Alarm for Artificial General Intelligence – Machine Intelligence Research Institute

Source: There’s No Fire Alarm for Artificial General Intelligence – Machine Intelligence Research Institute

  What is the function of a fire alarm? One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building. … [but] We don’t want to look panicky by being afraid of what isn’t an emergency, so we try to look calm while glancing out of the corners of our eyes to see how others are reacting, but of course they are also trying to look calm.

A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won’t lose face if you proceed to exit the building.

It’s now and then proposed that we ought to start reacting later to the issues of Artificial General Intelligence, because, it is said, we are so far away from it that it just isn’t possible to do productive work on it today. … the implicit alternative strategy on offer is: Wait for some unspecified future event that tells us AGI is coming near; and then we’ll all know that it’s okay to start working on AGI alignment.

This seems to me to be wrong on a number of grounds.

History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up. … Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima.

Progress is driven by peak knowledge, not average knowledge.

The future uses different tools, and can therefore easily do things that are very hard now, or do with difficulty things that are impossible now.

When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.

What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.

There is never going to be a time before the end when you can look around nervously, and see that it is now clearly common knowledge that you can talk about AGI being imminent, and take action and exit the building in an orderly fashion, without fear of looking stupid or frightened.

By saying we’re probably going to be in roughly this epistemic state until almost the end, I don’t mean to say we know that AGI is imminent, or that there won’t be important new breakthroughs in AI in the intervening time. I mean that it’s hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky.

AlphaGo Zero and the Hanson-Yudkowsky AI-Foom Debate

Source: AlphaGo Zero and the Foom Debate, by Eliezer Yudkowsky

AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn’t pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet.

The architecture has been simplified. Previous AlphaGo had a policy net that predicted good plays, and a value net that evaluated positions, both feeding into lookahead using MCTS (random probability-weighted plays out to the end of a game). AlphaGo Zero has one neural net that selects moves and this net is trained by Paul Christiano-style capability amplification, playing out games against itself to learn new probabilities for winning moves.

the mighty human edifice of Go knowledge, the joseki and tactics developed over centuries of play, the experts teaching children from an early age, was entirely discarded by AlphaGo Zero with a subsequent performance improvement.
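
For reference, the single network described above has two output heads in the published AlphaGo Zero system: a move-probability (policy) head and a scalar value head, trained jointly on MCTS visit counts and self-play game outcomes. The sketch below is a minimal PyTorch rendering of that structure and the paper's combined loss, (z − v)² − πᵀ log p plus L2 regularization; the tiny fully-connected trunk and the dummy batch in the usage example are placeholders, not the real residual CNN pipeline.

```python
# Minimal sketch of AlphaGo Zero's single two-headed policy/value net.
# The trunk here is a toy fully-connected layer standing in for the
# deep residual CNN used in the actual system.
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19 * 19  # flattened Go board (illustrative input encoding)

class PolicyValueNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(BOARD, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, BOARD + 1)   # all moves + pass
        self.value_head = nn.Linear(hidden, 1)            # expected outcome

    def forward(self, x):
        h = self.trunk(x)
        log_p = F.log_softmax(self.policy_head(h), dim=-1)
        v = torch.tanh(self.value_head(h))
        return log_p, v

def azero_loss(log_p, v, mcts_pi, z, params, c=1e-4):
    """(z - v)^2 - pi^T log p + c*||theta||^2, where pi are the MCTS
    visit-count targets from self-play and z is the final game result."""
    value_loss = F.mse_loss(v.squeeze(-1), z)
    policy_loss = -(mcts_pi * log_p).sum(dim=-1).mean()
    l2 = c * sum((w ** 2).sum() for w in params)
    return value_loss + policy_loss + l2

# Illustrative usage with dummy self-play targets:
net = PolicyValueNet()
x = torch.zeros(8, BOARD)
pi = torch.full((8, BOARD + 1), 1.0 / (BOARD + 1))
z = torch.ones(8)
log_p, v = net(x)
print(azero_loss(log_p, v, pi, z, net.parameters()))
```

Self-play supplies both targets: each MCTS search yields sharpened move probabilities π, and the finished game supplies z, so the net is repeatedly trained toward the output of search applied to its own predictions, which is the amplification loop Yudkowsky is pointing at.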


Response: What Evidence Is AlphaGo Zero Re AGI Complexity?, by Robin Hanson

Over the history of computer science, we have developed many general tools with simple architectures and built from other general tools, tools that allow super human performance on many specific tasks scattered across a wide range of problem domains. For example, we have superhuman ways to sort lists, and linear regression allows superhuman prediction from simple general tools like matrix inversion. Yet the existence of a limited number of such tools has so far been far from sufficient to enable anything remotely close to human level AGI.

I’m treating it as the difference of learning N simple general tools to learning N+1 such tools. … I disagree with the claim that “this single simple tool gives a bigger advantage on a wider range of tasks than we have seen with previous tools.”
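
To make Hanson's example above concrete: the snippet below (synthetic data, illustrative only) fits a linear regression in closed form via the normal equations, i.e. prediction built out of the "simple general tool" of matrix inversion. It is superhuman at its narrow task and obviously nowhere near general intelligence.

```python
# Linear regression via the normal equations: w = (X^T X)^{-1} X^T y.
# A simple general tool (matrix inversion) yielding superhuman
# performance on a narrow prediction task. Data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.linalg.inv(X.T @ X) @ X.T @ y
print(w)  # recovers something close to [2.0, -1.0, 0.5]
```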


RE: The Hanson-Yudkowsky AI-Foom Debate

In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”).