Just A New Fractal Detail In The Big Picture – Edge.org

I have a huge amount of experience in being ignorant and not worrying about it. In fact, what I call “understanding” turns out to be “managing my ignorance more effectively.”

My untroubled attitude results from my almost absolute faith in the reliability of the vast supercomputer I’m permanently plugged into. It was built with the intelligence of thousands of generations of human minds, and they’re still working at it now. All that human intelligence remains alive in the form of the supercomputer of tools, theories, technologies, crafts, sciences, disciplines, customs, rituals, rules-of-thumb, arts, systems of belief, superstitions, work-arounds, and observations that we call Global Civilisation.

Isn’t the vast structure of competences and potentialities thus created indistinguishable from “artificial intelligence”? The type that digital computers make is just a new fractal detail in the big picture, just the latest step. We’ve been living happily with artificial intelligence for thousands of years.

Source: Just A New Fractal Detail In The Big Picture – Edge.org

English robots will miss their big shot for a “bill of rights” when Brexit takes hold

John Danaher, a law lecturer at NUI Galway in Ireland who focuses on emerging technologies, says that the proposed robot rights are similar to the legal personhood awarded to corporations.

As Britain makes plans to withdraw from the EU, MEPs will vote on the robot proposals within the next year. If the proposals pass, it will take further time for them to be drawn up as laws and implemented. By then, the UK may well have left the union.

Source: English robots will miss their big shot for a “bill of rights” when Brexit takes hold

RE: Draft Report: with recommendations to the Commission on Civil Law Rules on Robotics

The Machines Are Coming – The New York Times

Low-wage jobs are no longer the only ones at risk.

This cannot just be about machines’ capabilities or human skills, since the true solution lies in neither. Confronting the threat posed by machines, and the way in which the great data harvest has made them ever more able to compete with human workers, must be about our priorities.

It’s easy to imagine an alternate future where advanced machine capabilities are used to empower more of us, rather than control most of us. There will potentially be more time, resources and freedom to share, but only if we change how we do things. We don’t need to reject or blame technology. This problem is not us versus the machines, but between us, as humans, and how we value one another.

Source: The Machines Are Coming – The New York Times


Also, see: SMBC 3711

Machine intelligence – Sam Altman

WHY YOU SHOULD FEAR MACHINE INTELLIGENCE

SMI does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. Certain goals, like self-preservation, could clearly benefit from no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.

It’s very hard to know how close we are to machine intelligence surpassing human intelligence. Progression of machine intelligence is a double exponential function; human-written programs and computing power are getting better at an exponential rate, and self-learning/self-improving software will improve itself at an exponential rate. Development progress may look relatively slow and then all of a sudden go vertical—things could get out of control very quickly (it also may be more gradual and we may barely perceive it happening).
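The "double exponential" claim can be made concrete with a toy model. This is purely illustrative: the growth rate r is a made-up parameter, not anything measured, and the two functions just contrast capability compounding at a fixed exponential rate against capability whose exponent is itself growing exponentially.

```python
def exponential(t, r=1.1):
    """Capability improving at a fixed exponential rate: r**t."""
    return r ** t

def double_exponential(t, r=1.1):
    """Self-improving capability: the exponent itself grows, giving r**(r**t)."""
    return r ** (r ** t)

for t in (10, 30, 50, 70, 100):
    try:
        d = f"{double_exponential(t):.3g}"
    except OverflowError:
        # Exceeds the float range entirely: the curve has "gone vertical".
        d = "off the chart"
    print(f"t={t:3d}  exponential={exponential(t):.3g}  double={d}")
```

Note that for small t the double-exponential curve actually lags behind the plain exponential, then overtakes it and blows past the representable float range, which matches the "may look relatively slow and then all of a sudden go vertical" description.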

Because we don’t understand how human intelligence works in any meaningful way, it’s difficult to make strong statements about how close or far away from emulating it we really are. We could be completely off track, or we could be one algorithm away.

I prefer calling it “machine intelligence” and not “artificial intelligence” because artificial seems to imply it’s not real or not very good. When it gets developed, there will be nothing artificial about it.

Source: Machine intelligence, part 1 – Sam Altman


THE NEED FOR REGULATION

We will face this threat at some point, and we have a lot of work to do before it gets here.

It seems like what happens with the first SMI to be developed will be very important.

I mean for this to be the beginning of a conversation, not the end of one.

Provide a framework to observe progress. … Require development safeguards to reduce the risk of the accident case. … Humans will always be the weak link in the strategy (see the AI-in-a-box thought experiments). …

Require that the first SMI developed have as part of its operating rules that a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s zeroth law), b) it should detect other SMI being developed but take no action beyond detection, and c) other than as required for part b, have no effect on the world.

In politics, we usually fight over small differences. These differences pale in comparison to the difference between humans and aliens, which is what SMI will effectively be like. We should be able to come together and figure out a regulatory strategy quickly.

Source: Machine intelligence, part 2 – Sam Altman


The AI-Box Experiment by Eliezer S. Yudkowsky
AI-box experiment on RationalWiki
AI box on Wikipedia