As Artificial Intelligence Evolves, So Does Its Criminal Potential – The New York Times

The next generation of online attack tools used by criminals will add machine learning capabilities pioneered by A.I. researchers.

The alarm about malevolent use of advanced artificial intelligence technologies was sounded earlier this year by James R. Clapper, the Director of National Intelligence. In his annual review of security, Mr. Clapper underscored the point that while A.I. systems would make some things easier, they would also expand the vulnerabilities of the online world.

“I would argue that companies that offer customer support via chatbots are unwittingly making themselves liable to social engineering,” said Brian Krebs, an investigative reporter who publishes at krebsonsecurity.com.

Source: As Artificial Intelligence Evolves, So Does Its Criminal Potential – The New York Times

Barack Obama on Artificial Intelligence, Autonomous Cars, and the Future of Humanity | WIRED

The president in conversation with MIT’s Joi Ito and WIRED editor-in-chief Scott Dadich.

[Obama:] Joi made a very elegant point, which is, what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?

[Obama:] Part of what makes us human are the kinks. They’re the mutations, the outliers, the flaws that create art or the new invention, right? We have to assume that if a system is perfect, then it’s static. And part of what makes us who we are, and part of what makes us alive, is that we’re dynamic and we’re surprised. One of the challenges that we’ll have to think about is, where and when is it appropriate for us to have things work exactly the way they’re supposed to, without surprises?

DADICH: But there are certainly some risks. We’ve heard from folks like Elon Musk and Nick Bostrom who are concerned about AI’s potential to outpace our ability to understand it. As we move forward, how do we think about those concerns as we try to protect not only ourselves but humanity at scale?

OBAMA: Let me start with what I think is the more immediate concern—it’s a solvable problem in this category of specialized AI, and we have to be mindful of it. If you’ve got a computer that can play Go, a pretty complicated game with a lot of variations, then developing an algorithm that lets you maximize profits on the New York Stock Exchange is probably within sight. And if one person or organization got there first, they could bring down the stock market pretty quickly, or at least they could raise questions about the integrity of the financial markets.

[Obama:] most people aren’t spending a lot of time right now worrying about singularity—they are worrying about “Well, is my job going to be replaced by a machine?” … if we are going to successfully manage this transition, we are going to have to have a societal conversation about how we manage this. … The social compact has to accommodate these new technologies, and our economic models have to accommodate them.

[Obama:] As a consequence, we have to make some tougher decisions. We underpay teachers, despite the fact that it’s a really hard job and a really hard thing for a computer to do well. So for us to reexamine what we value, what we are collectively willing to pay for—whether it’s teachers, nurses, caregivers, moms or dads who stay at home, artists, all the things that are incredibly valuable to us right now but don’t rank high on the pay totem pole—that’s a conversation we need to begin to have.

Source: Barack Obama on Artificial Intelligence, Autonomous Cars, and the Future of Humanity | WIRED

Not OK, Google | TechCrunch

the actual price for building a “personal Google for everyone, everywhere” would in fact be zero privacy for everyone, everywhere.

in order to offer its promise of ‘custom convenience’ — with predictions about restaurants you might like to eat at, say, or suggestions for how bad the traffic might be on your commute to work — it is continuously harvesting and data-mining your personal information, preferences, predilections, peccadilloes, prejudices…

The adtech giant is trying to control the narrative, just as it controls the product experience. So while Google’s CEO talks only about the “amazing things” coming down the pipe in a world where everyone trusts Google with all their data — failing entirely to concede the Big Brother aspect of surveillance-powered AIs — Google’s products are similarly disingenuous, in that they are designed to nudge users to share more and think less.

And that’s truly the opposite of responsible.

Source: Not OK, Google | TechCrunch

Culture – Case Against AI

I was ‘inspired’ to write this article because I read the botifesto “How To Think About Bots”. As I thought the ‘botifesto’ was too pro-bot, I wanted to write an article that takes the anti-bot approach. However, halfway through writing this blog post, I realized that the botifesto…wasn’t written by a bot. In fact, most pro-bot articles have been hand-written by human beings. This is not at all a demonstration of the power of AI; after all, humans have written optimistic proclamations about the future since the dawn of time.

If I am to demonstrate that AI is a threat, I also have to demonstrate that AI can be a threat, and to do that, I have to show what machines are currently capable of doing (in the hopes of provoking a hostile reaction).

So this blog post has been generated by a robot. I have provided all the content, but an algorithm (“Prolefeed”) is responsible for arranging the content in a manner that will please the reader. Here is the source code. And as you browse through it, think of what else can be automated away with a little human creativity. And think whether said automation would be a good thing.
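Prolefeed’s actual source is the code linked above; purely as a hypothetical illustration of how little code “arranging content in a manner that will please the reader” can take, here is a toy arranger that orders supplied snippets by a crude readability heuristic. Every name and the scoring rule below are invented for this sketch, not taken from Prolefeed:

```python
def readability_score(snippet: str) -> float:
    """Crude heuristic: a shorter average sentence length reads 'easier'."""
    flat = snippet.replace("!", ".").replace("?", ".")
    sentences = [s for s in flat.split(".") if s.strip()]
    words = snippet.split()
    return len(words) / max(len(sentences), 1)

def arrange(snippets: list[str]) -> list[str]:
    """Order content easiest-first, the way a pandering editor might."""
    return sorted(snippets, key=readability_score)

post = arrange([
    "A long, winding sentence that meanders on and on without ever quite stopping.",
    "Short. Punchy. Clear.",
    "Medium-length thoughts sit in the middle of the feed.",
])
```

The unsettling part is not the heuristic, which is trivial, but that swapping in a slightly better one is exactly the kind of “little human creativity” the paragraph above invites you to imagine.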

For example, robots are very good at writing 9-page textbooks. Now, I understand that some textbooks can be dry and boring. But it is hard to say that they are not “creative enterprises”.

Here’s a dystopian idea. The term “creative enterprise” is a euphemism to refer to “any activity that cannot be routinely automated away yet”. Any task that we declare ‘a creative expression of the human experience’ will be seen as ‘dull busywork’ as soon as we invent a bot.

Now, some people may argue that these algorithms are not examples of “intelligence”. The obvious conclusion, then, must be that hiring people, beating people at Go, and playing Super Mario are also not tasks that require intelligence.

Source: Culture – Case Against AI

Human and Artificial Intelligence May Be Equally Impossible to Understand

Despite new biology-like tools, some insist interpretation is impossible.

Even if it were possible to impose this kind of interpretability, it may not always be desirable. The requirement for interpretability can be seen as another set of constraints, preventing a model from reaching a “pure” solution that pays attention only to the input and output data it is given, and potentially reducing accuracy.

“What machines are picking up on are not facts about the world,” Batra says. “They’re facts about the dataset.” That the machines are so tightly tuned to the data they are fed makes it difficult to extract general rules about how they work. More importantly, he cautions, if you don’t know how it works, you don’t know how it will fail. And when they do fail, in Batra’s experience, “they fail spectacularly disgracefully.”

They pick up on patterns invisible to their engineers, but they can’t know which of those patterns exist nowhere else. Machine learning researchers go to great lengths to avoid this phenomenon, called “overfitting,” but as these algorithms are used in more and more dynamic situations, their brittleness will inevitably be exposed.
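The overfitting the passage names can be shown in a few lines of pure Python (a minimal sketch of the general phenomenon, not an example from the article): thread a degree-9 polynomial exactly through ten slightly noisy samples of a straight line. The training error is zero, yet at a held-out point the model’s “pattern” diverges wildly from the simple rule that actually generated the data — facts about the dataset, not facts about the world:

```python
def interpolate(xs, ys, x):
    """Evaluate the unique degree-(n-1) Lagrange polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

true_rule = lambda x: 2 * x + 1          # the simple fact about the world
xs = [i / 9 for i in range(10)]          # ten training inputs on [0, 1]
ys = [true_rule(x) + 0.1 * (-1) ** i     # ...with small alternating "noise"
      for i, x in enumerate(xs)]

# The flexible model memorises every training point perfectly...
train_error = max(abs(interpolate(xs, ys, x) - y) for x, y in zip(xs, ys))

# ...but between the points it has invented a pattern that exists nowhere else.
test_error = abs(interpolate(xs, ys, 0.95) - true_rule(0.95))
```

A model constrained to a straight line would carry a small training error but generalize far better — the same accuracy-versus-constraint tension the passage describes, seen from the other side.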

Source: Human and Artificial Intelligence May Be Equally Impossible to Understand

2016 Report | One Hundred Year Study on Artificial Intelligence (AI100)

The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society.

Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy, are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly.

Innovations relying on computer-based vision, speech recognition, and Natural Language Processing have driven these changes, as have concurrent scientific and technological advances in related fields.

In each domain, even as AI continues to deliver important benefits, it also raises important ethical and social issues, including privacy concerns. Robots and other AI technologies have already begun to displace jobs in some sectors. As a society, we are now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote, not hinder, democratic values such as freedom, equality, and transparency. For individuals, the quality of the lives we lead and how our contributions are valued are likely to shift gradually, but markedly.

Source: 2016 Report | One Hundred Year Study on Artificial Intelligence (AI100)
