AlphaGo Zero and the Hanson-Yudkowsky AI-Foom Debate

Source: AlphaGo Zero and the Foom Debate, by Eliezer Yudkowsky

AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn’t pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet.

The architecture has been simplified. Previous AlphaGo had a policy net that predicted good plays, and a value net that evaluated positions, both feeding into lookahead using Monte Carlo tree search (MCTS: random, probability-weighted playouts to the end of a game). AlphaGo Zero has one neural net that selects moves, and this net is trained by Paul Christiano-style capability amplification, playing out games against itself to learn new probabilities for winning moves.

The mighty human edifice of Go knowledge, the joseki and tactics developed over centuries of play, the experts teaching children from an early age, was entirely discarded by AlphaGo Zero with a subsequent performance improvement.
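To make the architectural contrast above concrete, here is a minimal sketch of that self-play training loop in Python. It is written under loudly stated assumptions: a toy take-the-last-stone game stands in for Go, a tiny linear model stands in for the deep network, the search step is a simplified rollout-based policy-improvement operator rather than the full MCTS used in AlphaGo Zero, and every name in it is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Go: a pile of stones, each player removes 1 or 2 per turn,
# and whoever takes the last stone wins.
PILE = 10
ACTIONS = (1, 2)

def features(state):
    x = np.zeros(PILE + 1)
    x[state] = 1.0                    # one-hot encoding of the pile size
    return x

class TinyNet:
    """One model with both a policy head (move probabilities) and a value head."""
    def __init__(self):
        self.Wp = np.zeros((PILE + 1, len(ACTIONS)))
        self.wv = np.zeros(PILE + 1)

    def predict(self, state):
        x = features(state)
        logits = x @ self.Wp
        p = np.exp(logits - logits.max())
        p /= p.sum()
        v = np.tanh(x @ self.wv)      # value for the player to move, in [-1, 1]
        return p, v

    def train(self, examples, lr=0.1):
        # Targets are the search-improved policy pi and the final game outcome z.
        for state, pi, z in examples:
            x = features(state)
            p, v = self.predict(state)
            self.Wp += lr * np.outer(x, pi - p)          # softmax cross-entropy step
            self.wv += lr * (z - v) * (1 - v ** 2) * x   # squared-error step through tanh

def rollout(net, state):
    """Play to the end using the current policy; +1 if the player to move here wins."""
    sign = 1
    while True:
        p, _ = net.predict(state)
        acts = [a for a in ACTIONS if a <= state]
        probs = np.array([p[ACTIONS.index(a)] for a in acts])
        probs /= probs.sum()
        state -= acts[rng.choice(len(acts), p=probs)]
        if state == 0:
            return sign               # +1 iff the original player to move just won
        sign = -sign

def improved_policy(net, state, n_rollouts=30):
    """Search step: estimate each legal move's win rate by self-play rollouts."""
    scores = np.full(len(ACTIONS), -np.inf)
    for i, a in enumerate(ACTIONS):
        if a > state:
            continue                  # illegal move, leave its score at -inf
        results = [1 if state - a == 0 else -rollout(net, state - a)
                   for _ in range(n_rollouts)]
        scores[i] = np.mean(results)
    pi = np.exp(scores - scores[np.isfinite(scores)].max())
    pi[~np.isfinite(scores)] = 0.0    # illegal moves get zero probability
    return pi / pi.sum()

def self_play_game(net):
    """One self-play game; returns (state, improved policy, outcome) training examples."""
    history, state, player = [], PILE, +1
    while state > 0:
        pi = improved_policy(net, state)
        history.append((state, pi, player))
        state -= ACTIONS[rng.choice(len(ACTIONS), p=pi)]
        if state == 0:
            winner = player
        player = -player
    return [(s, pi, 1.0 if pl == winner else -1.0) for s, pi, pl in history]

# Training is pure self-play: no human games, no handcrafted features.
net = TinyNet()
for _ in range(20):
    examples = []
    for _ in range(5):
        examples += self_play_game(net)
    net.train(examples)
```

The shape is the point: a single model with policy and value heads, a search procedure that plays games out against the current policy to produce improved move probabilities, and a training step that pushes the model toward those search results and the eventual game outcomes, with no human data anywhere in the loop.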

 

Response: What Evidence Is AlphaGo Zero Re AGI Complexity?, by Robin Hanson

Over the history of computer science, we have developed many general tools with simple architectures and built from other general tools, tools that allow superhuman performance on many specific tasks scattered across a wide range of problem domains. For example, we have superhuman ways to sort lists, and linear regression allows superhuman prediction from simple general tools like matrix inversion. Yet the existence of a limited number of such tools has so far been far from sufficient to enable anything remotely close to human-level AGI.

I’m treating it as the difference between learning N simple general tools and learning N+1 such tools. … I disagree with the claim that “this single simple tool gives a bigger advantage on a wider range of tasks than we have seen with previous tools.”
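As a concrete illustration of the linear-regression example in the excerpt above, here is a short sketch with synthetic data; the dataset and coefficient values are invented for illustration, and ordinary least squares is reduced to the “simple general tool” of matrix inversion via the normal equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1,000 observations, 3 predictors, known coefficients plus noise.
X = rng.normal(size=(1000, 3))
true_beta = np.array([2.0, -1.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=1000)

# Ordinary least squares via the normal equations: beta = (X'X)^(-1) X'y.
Xb = np.column_stack([np.ones(len(X)), X])       # prepend an intercept column
beta_hat = np.linalg.inv(Xb.T @ Xb) @ Xb.T @ y   # matrix inversion does the work

print(beta_hat)   # approximately [0.0, 2.0, -1.0, 0.5]
```

In practice one would call np.linalg.lstsq or np.linalg.solve rather than forming an explicit inverse, for numerical stability, but the point stands: a few lines built from one general tool outperform unaided human prediction on this task while coming nowhere near general intelligence.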

 

RE: The Hanson-Yudkowsky AI-Foom Debate

In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”).