You Are the Product, by John Lanchester · LRB 17 August 2017

Source: John Lanchester reviews ‘The Attention Merchants’ by Tim Wu, ‘Chaos Monkeys’ by Antonio García Martínez and ‘Move Fast and Break Things’ by Jonathan Taplin · LRB 17 August 2017

I am scared of Facebook. The company’s ambition, its ruthlessness, and its lack of a moral compass scare me.

It’s worth saying ‘Don’t be evil,’ because lots of businesses are. This is especially an issue in the world of the internet. Internet companies are working in a field that is poorly understood (if understood at all) by customers and regulators. The stuff they’re doing, if they’re any good at all, is by definition new. In that overlapping area of novelty and ignorance and unregulation, it’s well worth reminding employees not to be evil, because if the company succeeds and grows, plenty of chances to be evil are going to come along.

In the open air, fake news can be debated and exposed; on Facebook, if you aren’t a member of the community being served the lies, you’re quite likely never to know that they are in circulation. It’s crucial to this that Facebook has no financial interest in telling the truth.

misinformation is in fact spread in a variety of ways:

Information (or Influence) Operations – Actions taken by governments or organised non-state actors to distort domestic or foreign political sentiment.

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Co-ordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g. by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information.

For all the talk about connecting people, building community, and believing in people, Facebook is an advertising company. … Facebook is in the surveillance business. … What Facebook does is watch you, and then use what it knows about you and your behaviour to sell ads.

Since there is so much content posted on the site, the algorithms used to filter and direct that content are the thing that determines what you see: people think their news feed is largely to do with their friends and interests, and it sort of is, with the crucial proviso that it is their friends and interests as mediated by the commercial interests of Facebook. Your eyes are directed towards the place where they are most valuable for Facebook.

It’s sort of funny, and also sort of grotesque, that an unprecedentedly huge apparatus of consumer surveillance is fine, apparently, but an unprecedentedly huge apparatus of consumer surveillance which results in some people paying higher prices may well be illegal.

In developed countries where Facebook has been present for years, use of the site peaks at about 75 per cent of the population (that’s in the US). That would imply a total potential audience for Facebook of 1.95 billion. At two billion monthly active users, Facebook has already gone past that number, and is running out of connected humans.
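The arithmetic behind that claim can be checked directly. Note the 2.6 billion base of connected users is not stated in the excerpt; it is inferred from the article's own figures (1.95 ÷ 0.75):

```python
# Back-of-the-envelope check of Lanchester's saturation figures.
# ASSUMPTION: the connected-population base (2.6 billion) is inferred
# from the article's numbers (1.95e9 / 0.75), not stated directly.
peak_adoption = 0.75             # peak share of population using Facebook (US figure)
connected_population = 2.6e9     # implied base of internet-connected humans
potential_audience = peak_adoption * connected_population
monthly_active_users = 2.0e9     # Facebook MAU cited in the article

print(f"Potential audience: {potential_audience / 1e9:.2f} billion")
print(f"MAU already exceeds potential audience: {monthly_active_users > potential_audience}")
```

At a 75 per cent ceiling, two billion monthly users means Facebook has already passed the implied saturation point for the connected population.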

Whatever comes next will take us back to those two pillars of the company, growth and monetisation. Growth can only come from connecting new areas of the planet. … Here in the rich world, the focus is more on monetisation.

Automation and artificial intelligence are going to have a big impact in all kinds of worlds. These technologies are new and real and they are coming soon. Facebook is deeply interested in these trends. We don’t know where this is going, we don’t know what the social costs and consequences will be, we don’t know what will be the next area of life to be hollowed out, the next business model to be destroyed, the next company to go the way of Polaroid or the next business to go the way of journalism or the next set of tools and techniques to become available to the people who used Facebook to manipulate the elections of 2016. We just don’t know what’s next, but we know it’s likely to be consequential, and that a big part will be played by the world’s biggest social network. On the evidence of Facebook’s actions so far, it’s impossible to face this prospect without unease.

“Don’t Be a Sucker”, U.S. War Department, 1947

Source: Don’t Be a Sucker : U.S. War Department : Free Download & Streaming : Internet Archive

Your right to belong to minorities is a precious thing. You have a right to be what you are and to say what you think, because here we have personal freedom. We have liberty. And these are not just fancy words. This is a practical and priceless way of living. But we must work it. We must guard everyone’s liberties or we can lose our own. If we allow any minority to lose its freedom by persecution or by prejudice, we are threatening our own freedom. And this is not simply an idea. This is good, hard, common sense. You see, here in America, it’s not a question of whether we tolerate minorities. America is minorities. And that means you, and me.

Is the World Slouching Toward a Grave Systemic Crisis? – The Atlantic

Source: Is the World Slouching Toward a Grave Systemic Crisis? – The Atlantic, by Philip Zelikow

A keynote address at the annual meeting of the Aspen Strategy Group … In this speech Zelikow reflects on the much-discussed concept of “world order,” interrogates the claim that a “more open” world is really better for Americans, and issues a warning about America’s world leadership.

History is punctuated by catalytic episodes—events that can become guideposts toward a more open and civilized world.

Power worship blurs political judgment because it leads, almost unavoidably, to the belief that present trends will continue.

The so-called “world order” is really the accumulation of local problem-solving.

James Burnham’s “managerial state” (The Managerial Revolution) vs. George Orwell’s “open and civilized societies”.

State the questions another way: Do open societies really work better than closed ones? Is a more open and civilized world really safer and better for Americans? If we think yes, then what is the best way to prove that point?

Where The Falling Einstein Meets The Rising Mouse | Slate Star Codex

Source: Where The Falling Einstein Meets The Rising Mouse | Slate Star Codex

we naturally think there’s a pretty big intellectual difference between mice and chimps, and a pretty big intellectual difference between normal people and Einstein, and implicitly treat these as about equal in degree. But in any objective terms we choose – amount of evolutionary work it took to generate the difference, number of neurons, measurable difference in brain structure, performance on various tasks, etc – the gap between mice and chimps is immense, and the difference between an average Joe and Einstein trivial in comparison.

But Katja Grace takes a broader perspective and finds the opposite.
… So how can one reconcile the common-sense force of Eliezer Yudkowsky’s argument with the empirical force of Katja Grace’s contrary data?

How does this relate to our original concern – how fast we expect AI to progress?

There are no free lunches, but organic lunches are super expensive: Why the tradeoffs constraining human cognition do not limit artificial superintelligences | Hypermagical Ultraomnipotence

Source: There are no free lunches, but organic lunches are super expensive: Why the tradeoffs constraining human cognition do not limit artificial superintelligences | Hypermagical Ultraomnipotence, by Anni Leskela

In this post, I argue against the brand of AI risk skepticism that is based on what we know about organic, biologically evolved intelligence and its constraints, recently promoted by Kevin Kelly on Wired and expanded by Erik Hoel in his blog.

below, “cognition” usually just refers to the skillsets related to predicting and influencing our actual world

If value alignment fails, we don’t know how competent an inhuman AI needs to be to reach existentially threatening powers

the [intelligence] growth rate doesn’t need to be literally exponential to pose an existential risk – with or without intentional treachery, we will still not be able to comprehend what’s going on after a while of recursive improvement, and roughly linear or irregular growth could still get faster than what we can keep track of. And … the eventual results could look rather explosive

a superintelligence doesn’t need to do human-style thinking to be dangerous

There are eventual constraints for intelligences implemented in silicon too, but it seems to me that these are unlikely to apply before they’re way ahead of us, because the materials and especially the algorithms and directions of a developing superintelligence are intentionally chosen and optimized for useful cognition, not for replicating in the primordial soup and proliferating in the organic world with weird restrictions such as metabolism and pathogens and communities of similar brains you need to cooperate with to get anything done.