How the Enlightenment Ends – The Atlantic, by Henry A. Kissinger

Source: How the Enlightenment Ends – The Atlantic, by Henry A. Kissinger

Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.

What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? … How would choices be made among emerging options?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition.

Deep Reinforcement Learning Doesn’t Work Yet, by Alex Irpan

Source: Deep Reinforcement Learning Doesn’t Work Yet, by Alex Irpan

Here are some of the failure cases of deep RL.

Deep Reinforcement Learning Can Be Horribly Sample Inefficient

If You Just Care About Final Performance, Many Problems are Better Solved by Other Methods

Reinforcement Learning Usually Requires a Reward Function

Reward Function Design is Difficult

Even Given a Good Reward, Local Optima Can Be Hard To Escape

Even When Deep RL Works, It May Just Be Overfitting to Weird Patterns In the Environment

Even Ignoring Generalization Issues, The Final Results Can be Unstable and Hard to Reproduce
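
The reward-design failure is easy to reproduce even with no learning in the loop. The sketch below is a hypothetical illustration of the "Reward Function Design is Difficult" point, not an example from the post: a five-cell corridor, a shaping reward of +1 for moving toward the goal, and a hand-written policy that farms that reward by oscillating instead of ever reaching the goal.

```python
# Toy, invented gridworld (not from Irpan's post) showing how a plausible
# shaping reward can be farmed instead of solving the task.

GOAL = 4          # the agent walks on cells 0..4 and should reach cell 4
EPISODE_LEN = 50  # fixed horizon

def shaped_reward(pos, new_pos):
    """Naive shaping: +1 whenever the agent moves closer to the goal."""
    return 1.0 if abs(GOAL - new_pos) < abs(GOAL - pos) else 0.0

def run(policy):
    pos, total = 0, 0.0
    for t in range(EPISODE_LEN):
        new_pos = max(0, min(GOAL, pos + policy(pos, t)))
        total += shaped_reward(pos, new_pos)
        if new_pos == GOAL:
            total += 10.0  # bonus for actually finishing
            break
        pos = new_pos
    return total

go_to_goal = lambda pos, t: +1                        # intended behaviour
oscillate  = lambda pos, t: +1 if t % 2 == 0 else -1  # farms the shaping term

print("go to goal:", run(go_to_goal))  # 4 shaping steps + 10.0 bonus = 14.0
print("oscillate: ", run(oscillate))   # 25 shaping steps = 25.0, task never solved
```

Potential-based shaping, which also subtracts reward for moving away, would close this particular loophole; the broader point is that such loopholes are very easy to leave open.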

The way I see it, either deep RL is still a research topic that isn’t robust enough for widespread use, or it’s usable and the people who’ve gotten it to work aren’t publicizing it. I think the former is more likely.

My feelings are best summarized by a mindset Andrew Ng mentioned in his Nuts and Bolts of Applying Deep Learning talk – a lot of short-term pessimism, balanced by even more long-term optimism.

Optimization over Explanation – Berkman Klein Center Collection – Medium

Source: Optimization over Explanation – Berkman Klein Center Collection – Medium, by David Weinberger

Govern the optimizations. Patrol the results.

Human-constructed models aim at reducing the variables to a set small enough for our intellects to understand. Machine learning can construct models that work — for example, they accurately predict the probability of medical conditions — but that cannot be reduced enough for humans to understand or explain.

AI systems ought to be required to declare what they are optimized for.
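
What declaring an optimization would look like in practice is left open in the piece; one reading is a machine-readable declaration that ships with the system and that auditors can check outcomes against. The sketch below is an invented illustration of that reading: the schema, field names, and the triage example are all assumptions, not anything proposed in the article.

```python
# Invented sketch of an explicit "what this system is optimized for"
# declaration; every field name and value here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class OptimizationDeclaration:
    system: str     # the deployed system this covers
    objective: str  # what it is optimized for
    metric: str     # how that objective is measured
    constraints: list[str] = field(default_factory=list)         # values it must not trade away
    accepted_tradeoffs: list[str] = field(default_factory=list)  # what we agree to give up

triage = OptimizationDeclaration(
    system="ER triage assistant",
    objective="minimize expected 30-day mortality across admitted patients",
    metric="30-day mortality rate",
    constraints=["comparable false-negative rates across demographic groups"],
    accepted_tradeoffs=["longer average waits for low-risk patients"],
)

# "Govern the optimizations. Patrol the results." then becomes: audit the
# system's measured outcomes against this declared objective and constraints.
print(triage)
```

The point is not the schema but that the argument about what to optimize, and what to give up, happens in the open.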

We don’t have to treat AI as a new form of life that somehow escapes human moral questions. We can treat it as what it is: a tool that should be measured by how much better it is at doing something compared to our old way of doing it: Does it save more lives? Does it improve the environment? Does it give us more leisure? Does it create jobs? Does it make us more social, responsible, caring? Does it accomplish these goals while supporting crucial social values such as fairness?

By treating the governance of AI as a question of optimizations, we can focus the necessary argument about them on what truly matters: What is it that we as a society want from a system, and what are we willing to give up to get it?

Raise AIs like parents, not programmers—or they’ll turn into terrible toddlers

Source: Raise AIs like parents, not programmers—or they’ll turn into terrible toddlers

Creating a safe AI is not that different from raising a decent human. … We can apply some of the important lessons we teach young humans to how we govern AI:

  1. Keep an open mind
  2. Be fair
  3. Be kind


Like human brains, machine-learning algorithms assess how to act based on past experiences: They create decision pathways based on the data they have seen. If the data they’re exposed to is limited, their understanding of the real-time information they process will be, too.
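
A deliberately tiny, invented example of that limitation: a nearest-neighbour "classifier" whose only experience is two breeds of dog. Inside that experience its decision pathways look sensible; outside it, it still answers confidently with the nearest thing it has seen.

```python
# Invented example: a 1-nearest-neighbour classifier whose entire
# "experience" is four labelled dog weights (in kg).

def nearest_neighbor(train, x):
    """Label of the closest training example."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

train = [(4, "chihuahua"), (6, "chihuahua"), (30, "labrador"), (35, "labrador")]

print(nearest_neighbor(train, 5))    # chihuahua: sensible, inside its experience
print(nearest_neighbor(train, 32))   # labrador: also sensible
print(nearest_neighbor(train, 500))  # labrador: it has never seen a horse,
                                     # so it maps the input onto what it knows
```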

Our ability to trust is underpinned by fairness. It is so essential that children as young as four will detect and react to unfairness. But in order to verify fairness, one must have access to the decisions that are being made. … The answer was produced in what’s referred to as a “black box”: an electronic system completely closed to analysis or inspection. This dismissive “because I said so” approach does not build a sense of fairness or trust in either AI or children. … AI needs to not only produce, but also explain the answer it creates.
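
A minimal, invented contrast to the black box: a decision rule simple enough to report the factors behind its answer. The features, weights, and threshold are assumptions made up for this sketch; the point is the shape of the output, a decision plus its reasons, rather than any particular model.

```python
# Invented additive scoring rule that returns its decision together with the
# per-factor contributions that produced it (weights and threshold are made up).

WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}
THRESHOLD = 1.0

def decide_with_explanation(applicant):
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    # The explanation is simply the contributions, largest influence first:
    # "because I said so" becomes "because of these factors, by this much".
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, explanation

decision, why = decide_with_explanation({"income": 4.0, "debt": 1.0, "years_employed": 2.0})
print(decision)  # approve (score = 1.6 - 0.6 + 0.6 = 1.6)
print(why)       # [('income', 1.6), ('debt', -0.6), ('years_employed', 0.6)]
```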

Understanding the process of decision-making is necessary but not alone sufficient—sometimes we need to improve the rules that we follow in the first place. This requires two key traits: empathy and imagination. … AI needs to learn the same skills.

We need a super-Turing test that reflects humanity as we want it to be when it grows up: not just human, but one that is kind, fair, and has an open mind.