We’re releasing Universe, a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications.
Source: Universe
The next generation of online attack tools used by criminals will add machine learning capabilities pioneered by A.I. researchers.
The alarm about malevolent use of advanced artificial intelligence technologies was sounded earlier this year by James R. Clapper, the director of National Intelligence. In his annual review of security, Mr. Clapper underscored the point that while A.I. systems would make some things easier, they would also expand the vulnerabilities of the online world.
“I would argue that companies that offer customer support via chatbots are unwittingly making themselves liable to social engineering,” said Brian Krebs, an investigative reporter who publishes at krebsonsecurity.com.
Source: As Artificial Intelligence Evolves, So Does Its Criminal Potential – The New York Times
The president in conversation with MIT’s Joi Ito and WIRED editor-in-chief Scott Dadich.
[Obama:] Joi made a very elegant point, which is, what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?
[Obama:] Part of what makes us human are the kinks. They’re the mutations, the outliers, the flaws that create art or the new invention, right? We have to assume that if a system is perfect, then it’s static. And part of what makes us who we are, and part of what makes us alive, is that we’re dynamic and we’re surprised. One of the challenges that we’ll have to think about is, where and when is it appropriate for us to have things work exactly the way they’re supposed to, without surprises?
DADICH: But there are certainly some risks. We’ve heard from folks like Elon Musk and Nick Bostrom who are concerned about AI’s potential to outpace our ability to understand it. As we move forward, how do we think about those concerns as we try to protect not only ourselves but humanity at scale?
OBAMA: Let me start with what I think is the more immediate concern—it’s a solvable problem in this category of specialized AI, and we have to be mindful of it. If you’ve got a computer that can play Go, a pretty complicated game with a lot of variations, then developing an algorithm that lets you maximize profits on the New York Stock Exchange is probably within sight. And if one person or organization got there first, they could bring down the stock market pretty quickly, or at least they could raise questions about the integrity of the financial markets.
[Obama:] most people aren’t spending a lot of time right now worrying about singularity—they are worrying about “Well, is my job going to be replaced by a machine?” … if we are going to successfully manage this transition, we are going to have to have a societal conversation about how we manage this. … The social compact has to accommodate these new technologies, and our economic models have to accommodate them.
[Obama:] As a consequence, we have to make some tougher decisions. We underpay teachers, despite the fact that it’s a really hard job and a really hard thing for a computer to do well. So for us to reexamine what we value, what we are collectively willing to pay for—whether it’s teachers, nurses, caregivers, moms or dads who stay at home, artists, all the things that are incredibly valuable to us right now but don’t rank high on the pay totem pole—that’s a conversation we need to begin to have.
Source: Barack Obama on Artificial Intelligence, Autonomous Cars, and the Future of Humanity | WIRED
the actual price for building a “personal Google for everyone, everywhere” would in fact be zero privacy for everyone, everywhere.
in order to offer its promise of ‘custom convenience’ — with predictions about restaurants you might like to eat at, say, or suggestions for how bad the traffic might be on your commute to work — it is continuously harvesting and data-mining your personal information, preferences, predilections, peccadilloes, prejudices…
The adtech giant is trying to control the narrative, just as it controls the product experience. So while Google’s CEO talks only about the “amazing things” coming down the pipe in a world where everyone trusts Google with all their data — failing entirely to concede the Big Brother aspect of surveillance-powered AIs — Google’s products are similarly disingenuous, in that they are designed to nudge users to share more and think less.
And that’s truly the opposite of responsible.
Source: Not OK, Google | TechCrunch
I was ‘inspired’ to write this article because I read the botifesto “How To Think About Bots”. As I thought the ‘botifesto’ was too pro-bot, I wanted to write an article that takes the anti-bot approach. However, halfway through writing this blog post, I realized that the botifesto…wasn’t written by a bot. In fact, most pro-bot articles have been hand-written by human beings. This is not at all a demonstration of the power of AI; after all, humans have written optimistic proclamations about the future since the dawn of time.
If I am to argue that AI is a threat, I first have to demonstrate that AI can be a threat, and to do that, I have to show what machines are currently capable of doing (in the hopes of provoking a hostile reaction).
So this blog post has been generated by a robot. I have provided all the content, but an algorithm (“Prolefeed”) is responsible for arranging the content in a manner that will please the reader. Here is the source code. And as you browse through it, think of what else can be automated away with a little human creativity. And think whether said automation would be a good thing.
For example, robots are very good at writing 9-page textbooks. Now, I understand that some textbooks can be dry and boring. But it is hard to say that they are not “creative enterprises”.
Here’s a dystopian idea. The term “creative enterprise” is a euphemism for “any activity that cannot be routinely automated away yet”. Any task that we declare ‘a creative expression of the human experience’ will be seen as ‘dull busywork’ as soon as we invent a bot that can do it.
Now, some people may argue that these algorithms are not examples of “intelligence”. The obvious conclusion, then, is that hiring people, beating people at Go, and playing Super Mario must also not be tasks that require intelligence.
Source: Culture – Case Against AI