AI winter is well on its way – Piekniewski’s blog

Source: AI winter is well on its way, by Filip Piekniewski

OK, so we can now train AlexNet in minutes rather than days, but can we train a 1000x bigger AlexNet in days and get qualitatively better results? Apparently not…

So in fact, this graph, which was meant to show how well deep learning scales, indicates the exact opposite. We can’t just scale up AlexNet and get correspondingly better results – we have to fiddle with specific architectures, and additional compute effectively does not buy much without an order of magnitude more data samples, which are in practice only available in simulated game environments.

How the Enlightenment Ends – The Atlantic, by Henry A. Kissinger

Source: How the Enlightenment Ends – The Atlantic, by Henry A. Kissinger

Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.

What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? … How would choices be made among emerging options?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition.

Deep Reinforcement Learning Doesn’t Work Yet, by Alex Irpan

Source: Deep Reinforcement Learning Doesn’t Work Yet, by Alex Irpan

Here are some of the failure cases of deep RL.

Deep Reinforcement Learning Can Be Horribly Sample Inefficient

If You Just Care About Final Performance, Many Problems are Better Solved by Other Methods

Reinforcement Learning Usually Requires a Reward Function

Reward Function Design is Difficult (see the sketch after this list)

Even Given a Good Reward, Local Optima Can Be Hard To Escape

Even When Deep RL Works, It May Just Be Overfitting to Weird Patterns In the Environment

Even Ignoring Generalization Issues, The Final Results Can be Unstable and Hard to Reproduce
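
Irpan’s reward-design point is perhaps the easiest of these to see in miniature. Below is a minimal, hypothetical sketch (mine, not from the article): on a toy one-dimensional track, a proxy reward that pays the agent for every step it spends near the goal is “hacked” by a policy that loiters just short of the goal, because actually reaching it ends the episode and cuts off the reward stream. All names and numbers are invented for illustration.

```python
# Hypothetical toy example (not from Irpan's post): a proxy reward that pays the
# agent for being near the goal is exploited by a policy that loiters nearby,
# because actually reaching the goal terminates the episode and the reward stream.

GOAL, MAX_STEPS = 10, 50

def run_episode(policy):
    """Roll out a deterministic policy on a 1-D track and return its total reward."""
    pos, total = 0, 0.0
    for _ in range(MAX_STEPS):
        pos += policy(pos)                        # action is -1, 0, or +1
        total += 1.0 if pos >= GOAL - 2 else 0.0  # proxy reward: "be near the goal"
        if pos >= GOAL:                           # reaching the goal ends the episode
            break
    return total

def go_to_goal(pos):
    return 1                                      # head straight for the goal

def loiter(pos):
    return 1 if pos < GOAL - 1 else -1            # hover just short of the goal

print("reach the goal:", run_episode(go_to_goal))  # 3.0  -- the intended behaviour
print("loiter nearby :", run_episode(loiter))      # 43.0 -- the proxy prefers this
```

Nothing here is deep RL itself; the point is only that even a reward that sounds reasonable can rank the degenerate policy above the intended one.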

The way I see it, either deep RL is still a research topic that isn’t robust enough for widespread use, or it’s usable and the people who’ve gotten it to work aren’t publicizing it. I think the former is more likely.

My feelings are best summarized by a mindset Andrew Ng mentioned in his Nuts and Bolts of Applying Deep Learning talk – a lot of short-term pessimism, balanced by even more long-term optimism.

Optimization over Explanation – Berkman Klein Center Collection – Medium

Source: Optimization over Explanation – Berkman Klein Center Collection – Medium, by David Weinberger

Govern the optimizations. Patrol the results.

Human-constructed models aim at reducing the variables to a set small enough for our intellects to understand. Machine learning can construct models that work — for example, they accurately predict the probability of medical conditions — but that cannot be reduced enough for humans to understand or to explain.

AI systems ought to be required to declare what they are optimized for.
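
One hypothetical way to make that requirement concrete (my own sketch, not anything Weinberger proposes) is a machine-readable declaration that ships with a deployed system: what it is optimized for, what it deliberately is not optimized for, and which results should be patrolled. Every field name and example value below is invented for illustration.

```python
# Hypothetical sketch of a machine-readable "optimization declaration".
# The field names and example values are invented, not an existing standard.
from dataclasses import dataclass

@dataclass
class OptimizationDeclaration:
    system: str                     # which deployed system this covers
    optimized_for: str              # the objective the system actually maximizes
    not_optimized_for: list[str]    # goals the operator explicitly disclaims
    patrolled_results: list[str]    # outcomes that auditors should monitor

triage_ranker = OptimizationDeclaration(
    system="hospital-triage-ranker",          # invented example system
    optimized_for="predicted 30-day survival",
    not_optimized_for=["cost per patient", "bed utilization"],
    patrolled_results=["error rates per demographic group", "waiting-time disparity"],
)

print(triage_ranker.optimized_for)
```

The particular fields do not matter; the point is that “govern the optimizations, patrol the results” presumes the optimization target is stated somewhere one can point to.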

We don’t have to treat AI as a new form of life that somehow escapes human moral questions. We can treat it as what it is: a tool that should be measured by how much better it is at doing something compared to our old way of doing it: Does it save more lives? Does it improve the environment? Does it give us more leisure? Does it create jobs? Does it make us more social, responsible, caring? Does it accomplish these goals while supporting crucial social values such as fairness?

By treating the governance of AI as a question of optimizations, we can focus the necessary argument about them on what truly matters: What is it that we as a society want from a system, and what are we willing to give up to get it?