Machine intelligence – Sam Altman

WHY YOU SHOULD FEAR MACHINE INTELLIGENCE

SMI (superhuman machine intelligence) does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) it wipes us out. Certain goals, like self-preservation, could clearly benefit from there being no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.

It’s very hard to know how close we are to machine intelligence surpassing human intelligence. The progression of machine intelligence is a double exponential: human-written programs and computing power are improving at an exponential rate, and self-learning/self-improving software will improve itself at an exponential rate on top of that. Development may look relatively slow and then suddenly go vertical; things could get out of control very quickly (it may also be more gradual, and we may barely perceive it happening).
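To make the “double exponential” claim concrete, here is a toy model (an illustration added for clarity, with assumed symbols, not something from the original post). If a system’s capability C grows in proportion to itself, dC/dt = r · C, the result is ordinary exponential growth. If the improvement rate r itself also grows exponentially, say r(t) = r0 · e^(kt), because the system keeps getting better at improving itself, then

C(t) = C0 · exp((r0 / k) · (e^(kt) − 1)),

which is a double exponential: nearly flat for a long time, then effectively vertical.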

Because we don’t understand how human intelligence works in any meaningful way, it’s difficult to make strong statements about how close or far away from emulating it we really are. We could be completely off track, or we could be one algorithm away.

I prefer calling it “machine intelligence” and not “artificial intelligence” because artificial seems to imply it’s not real or not very good. When it gets developed, there will be nothing artificial about it.

Source: Machine intelligence, part 1 – Sam Altman


THE NEED FOR REGULATION

We will face this threat at some point, and we have a lot of work to do before it gets here.

It seems like what happens with the first SMI to be developed will be very important.

I mean for this to be the beginning of a conversation, not the end of one.

Provide a framework to observe progress. … require development safeguards to reduce the risk of the accident case. … humans will always be the weak link in the strategy (see the AI-in-a-box thought experiments) … Require that the first SMI developed have as part of its operating rules that a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s zeroth law), b) it should detect other SMI being developed but take no action beyond detection, and c) other than as required for part b, it should have no effect on the world.

In politics, we usually fight over small differences. These differences pale in comparison to the difference between humans and an alien intelligence, which is effectively what SMI will be. We should be able to come together and figure out a regulatory strategy quickly.

Source: Machine intelligence, part 2 – Sam Altman


FURTHER READING

The AI-Box Experiment – Eliezer S. Yudkowsky
AI-box experiment – RationalWiki
AI box – Wikipedia