Future AI R&D

Source: A Meta Lesson, by Andy Kitchen

I’m going to try to summarise both their positions in a few sentences, but you should definitely read both essays, especially as they are so short.

Rich Sutton (approx.): learning and search always outperform hand-crafted solutions given enough compute.

Rodney Brooks (approx.): No, human ingenuity is actually responsible for progress in AI. We can’t just solve problems by throwing more compute at them.

I think both positions are interesting, important and well supported by evidence. But if you read both essays, you’ll see that these positions are not mutually exclusive; in fact, they can be synthesised. To accept this synthesis, though, you need to take your view one level ‘up’, so to speak.

Rich Sutton isn’t arguing for wasteful learning and search; he’s calling on us to improve them. He is saying we’ll never be able to go back to hand-written StarCraft bots.

The meta lesson is that the most important thing to improve with search and learning is learning itself.
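To make that meta lesson concrete, here is a minimal sketch of applying search to the learning process itself. It is my own toy illustration, not anything from Kitchen’s essay, and every name and number in it is an arbitrary assumption: an outer loop randomly searches over learning rates, the crudest possible form of ‘learning to learn’, and keeps whichever way of learning generalises best.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = 3x - 1 plus noise (all sizes are
# illustrative assumptions).
X = rng.uniform(-1, 1, size=200)
y = 3 * X - 1 + rng.normal(0, 0.1, size=200)
X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

def train(lr, steps=100):
    """Inner loop: plain gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        err = w * X_train + b - y_train
        w -= lr * 2 * np.mean(err * X_train)
        b -= lr * 2 * np.mean(err)
    return w, b

def val_loss(w, b):
    """Score a trained model on held-out data."""
    return float(np.mean((w * X_val + b - y_val) ** 2))

# Outer loop: search over the learning procedure itself. Each
# candidate learning rate is a different *way of learning*,
# scored by how well the model it produces generalises.
candidate_lrs = 10 ** rng.uniform(-4, 0, size=20)
best_lr = min(candidate_lrs, key=lambda lr: val_loss(*train(lr)))

print(f"learning rate found by search: {best_lr:.4g}")
print(f"its validation loss: {val_loss(*train(best_lr)):.4f}")
```

Random search over one hyperparameter is a long way from meta-learning proper, but the shape is right: the outer loop spends compute improving the learner rather than hand-crafting the solution.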

 

RE: A Better Lesson, by Rodney Brooks

I think a better lesson to be learned is that we have to take into account the total cost of any solution, and that so far they have all required substantial amounts of human ingenuity.

 

RE: The Bitter Lesson, by Rich Sutton

We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

Ways to think about machine learning

Source: Ways to think about machine learning, by Benedict Evans

[Thinking about the invention of relational databases] is a good grounding way to think about machine learning today – it’s a step change in what we can do with computers, and that will be part of many different products for many different companies. Eventually, pretty much everything will have ML somewhere inside and no-one will care.

An important parallel here is that though relational databases had economy of scale effects, there were limited network or ‘winner takes all’ effects.

[W]ith each wave of automation, we imagine we’re creating something anthropomorphic or something with general intelligence. In the 1920s and 30s we imagined steel men walking around factories holding hammers, and in the 1950s we imagined humanoid robots walking around the kitchen doing the housework. We didn’t get robot servants – we got washing machines.

Washing machines are robots, but they’re not ‘intelligent’. They don’t know what water or clothes are. Moreover, they’re not general purpose even in the narrow domain of washing … Equally, machine learning lets us solve classes of problem that computers could not usefully address before, but each of those problems will require a different implementation, and different data, a different route to market, and often a different company. Each of them is a piece of automation. Each of them is a washing machine.

[O]ne of my colleagues suggested that machine learning will be able to do anything you could train a dog to do, which is also a useful way to think about AI bias (What exactly has the dog learnt? What was in the training data? Are you sure? How do you ask?), but also limited because dogs do have general intelligence and common sense, unlike any neural network we know how to build. Andrew Ng has suggested that ML will be able to do anything you could do in less than one second. Talking about ML does tend to be a hunt for metaphors, but I prefer the metaphor that this gives you infinite interns, or, perhaps, infinite ten year olds.

In a sense, this is what automation always does; Excel didn’t give us artificial accountants, Photoshop and InDesign didn’t give us artificial graphic designers and indeed steam engines didn’t give us artificial horses. (In an earlier wave of ‘AI’, chess computers didn’t give us a grumpy middle-aged Russian in a box.) Rather, we automated one discrete task, at massive scale.

AI winter is well on its way

Source: AI winter is well on its way, by Filip Piekniewski

OK, so we can now train AlexNet in minutes rather than days, but can we train a 1000x bigger AlexNet in days and get qualitatively better results? Apparently not…

So in fact, this graph, which was meant to show how well deep learning scales, indicates the exact opposite. We can’t just scale up AlexNet and get correspondingly better results – we have to fiddle with specific architectures, and additional compute effectively does not buy much without an order of magnitude more data samples, which are in practice only available in simulated game environments.
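Piekniewski’s compute-versus-data point can be reproduced in miniature. The sketch below is a toy experiment of my own, with assumed sizes and nothing from his actual benchmarks: it keeps scaling up model capacity while the dataset stays fixed, and training error keeps falling while held-out error stops improving.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# A fixed, small dataset: unlike the model, the data never grows.
x = rng.uniform(-1, 1, size=40)
y = np.sin(3 * x) + rng.normal(0, 0.2, size=40)
x_train, y_train = x[:30], y[:30]
x_test, y_test = x[30:], y[30:]

# 'Scaling up the model' here just means raising the polynomial
# degree, i.e. adding capacity without adding data.
for degree in (1, 3, 5, 9, 15):
    model = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

AlexNet is obviously not a polynomial, but the mechanism is the one he describes: past some point, extra capacity without a comparable growth in data buys memorisation of the training set, not qualitatively better results.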

How the Enlightenment Ends

Source: How the Enlightenment Ends – The Atlantic, by Henry A. Kissinger

Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.

What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? … How would choices be made among emerging options?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition.