Against an Increasingly User-Hostile Web – Neustadt.fr

Source: Against an Increasingly User-Hostile Web – Neustadt.fr, by Parimal Satyal

We’re very good at talking about immersive experiences, personalized content, growth hacking, responsive strategy, user-centered design, social media activation, retargeting, CMS and user experience. But behind all this jargon lurks the uncomfortable idea that we might be accomplices in the destruction of a platform that was meant to empower and bring people together; the possibility that we are instead building a machine that surveils, subverts, manipulates, overwhelms and exploits people.

It all comes down to a simple but very dangerous shift: the major websites of today’s web are not built for the visitor, but as a means of using her. Our visitor has become a data point, a customer profile, a potential lead — a proverbial fly in the spider’s web. In the guise of user-centered design, we’re building an increasingly user-hostile web.

[In the beginning], anyone could put a document on the web and any document could link to any other. It created a completely open platform where a writer in Nepal could freely share her ideas with a dancer in Denmark. A climate science student in Nairobi could access data from the McMurdo weather station in Antarctica. You could start reading about logical fallacies and end up on a website about optical illusions. Read about the history of time-keeping and end up learning about Einstein’s special theory of relativity. All interests were catered to. Information could truly be free, traversing borders, cultures and politics.

The modern web is different.

It’s naturally different from a technological standpoint: we have faster connections, better browser standards, tighter security and new media formats. But it is also different in the values it espouses. Today, we are so far from that initial vision of linking documents to share knowledge that it’s hard to simply browse the web for information without constantly being asked to buy something, like something, follow someone, share the page on Facebook or sign up to some newsletter. All the while being tracked and profiled.

In the guise of being user-centered, the modern web has become user-hostile.

Almost every website you go to today reports your activities to third parties that you most likely neither know nor trust.

The goal? Craft hyper-personalized messages to change voting behavior based on your individual personalities, and by extension, your attitudes, opinions and fears. … You become a manipulable data point at the mercy of big corporations who sell their ability to manipulate you based on the data you volunteer. … you volunteer yourself on social platforms like Facebook, Twitter and Instagram. The little share buttons you see on websites aren’t just there to make it easy for you to post a link to Facebook; they also allow Facebook to be present and gather information about you from pretty much any website.
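A minimal sketch of that mechanism, using only Python’s standard library and a hypothetical “widget provider” (the server, cookie name and port are made up for illustration): any page that embeds the provider’s button script makes each visitor’s browser request that script from the provider, and the request typically carries the embedding page’s URL in the Referer header along with the provider’s own identifying cookie, browser privacy settings permitting.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "share button" provider. Every page that embeds this script
# makes each visitor's browser fetch it, so the provider learns which pages
# the visitor reads, even if the visitor never clicks the button.
class WidgetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        page_visited = self.headers.get("Referer", "(referrer withheld)")
        visitor = self.headers.get("Cookie", "(new visitor)")
        # A real provider could append (visitor, page_visited, timestamp) to a
        # browsing profile here; printing stands in for that.
        print(f"{visitor} is reading {page_visited}")

        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        # Set an identifier so the same visitor is recognizable on other sites.
        self.send_header("Set-Cookie", "uid=visitor-123; SameSite=None; Secure")
        self.end_headers()
        self.wfile.write(b"/* share-button rendering code would go here */")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), WidgetHandler).serve_forever()
```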

If you run a website and you put official share buttons on your website, use intrusive analytics platforms, serve ads through a third-party ad network or use pervasive cookies to share and sell data on your users, you’re contributing to a user-hostile web. You’re using free and open-source tools created by thousands of collaborators around the world, over an open web and in the spirit of sharing, to subvert users.

most of the time we spend on the web today is no longer on the open Internet – it’s on private services like Facebook, Twitter and LinkedIn. While Facebook provides a valuable service, it is also a for-profit company. … To use their platform, you have to agree to whatever conditions they set, however absurd. If you replace the open web with Facebook, you’re giving up your right to publish and share on your terms. The data that you post there does not belong to you; you’re putting it in a closed system. If one day Facebook decides to shut down — unlikely as that might seem today — your data goes with it. Sure, you might be able to download parts of it, but then what?

This works because they know you’ll agree to it. You’ll say you don’t have a choice, because your friends are all there — the infamous “network effect”. This is Facebook’s currency, its source of strength but also a crucial dependency.

And this is what we often fail to realize: without its users — without you — Facebook would be nothing. But without Facebook, you would only be inconvenienced. Facebook needs you more than you need it.

What I’m against is the centralization of services; Facebook and Google are virtually everywhere today. Through share buttons, free services, mobile applications, login gateways and analytics, they are able to be present on virtually every website you visit. This gives them immense power and control. They get to unilaterally make decisions that affect our collective behavior, our expectations and our well-being.

the browser you’re reading this on (Chrome, Firefox, Links, whatever), the web server that’s hosting this website (Nginx), the operating system that this server runs on (Ubuntu), the programming tools used to make it all work (python, gcc, node.js…) — all of these things were created collectively by contributors all around the world, brought together by HTTP. And given away for free in the spirit of sharing.

The web is open by design and built to empower people. This is the web we’re breaking and replacing with one that subverts, manipulates and creates new needs and addiction.

It all comes down to one simple question: what do we want the web to be?

Do we want the web to be open, accessible, empowering and collaborative? Free, in the spirit of CERN’s decision in 1993 and of the open-source tools it’s built on? Or do we want it to be just another means of endless consumption, where people become eyeballs, targets and profiles? Where companies use your data to control your behavior, enabling a surveillance society?

For me, the choice is clear. And it’s something worth fighting for.

Non-Expert Explanation | Slate Star Codex

Source: Non-Expert Explanation | Slate Star Codex

Some knowledge is easy to transfer. “What is the thyroid?” Some expert should write an explanation, anyone interested can read it, and nobody else should ever worry about it again.

Other knowledge is near-impossible to transfer. What about social skills? There are books on social skills. But you can’t just read one and instantly become as charismatic as the author. At best they can hint at areas worth exploring. … even after reading the best, most perfect-fit social skills book in the world, it’s still not going to be enough. People need to ask questions. … And questioning requires mental fit at least as much as straight information-transfer does.

the process of coming to understand a field at all has to involve this pattern of back-and-forth questioning, approaching from multiple sides, devil-advocating, etc. Lots of the process will look the same whether you end out ultimately rejecting or accepting a truth; you’ve got to go through the same steps just to understand what you’re considering.

The Internet seems like an increasingly hostile place for this sort of thing.
… This is a shame. The authoritative-lecture format works for facts, but isn’t enough when you’ve got any subject more complicated than thyroid anatomy. Collaborative truth-seeking where people are throwing out ideas, trying to reconstruct arguments themselves, asking questions, and arguing – these are more promising, but they leave you open to accusations of reinventing the wheel, arrogantly dabbling in fields you don’t understand, or being too insular. When some of the topics involved are taboo, add the sins of “just asking questions” or “thinking it’s my job to educate you”. But unless you’re such a good lecturer that everybody will understand you on the first try, this is a necessary part of communicating hard things.

More: Extensions and Intensions – Less Wrong, by Eliezer Yudkowsky

You can’t capture in words all the details of the cognitive concept—as it exists in your mind—that lets you recognize things as tigers or nontigers. It’s too large. And you can’t point to all the tigers you’ve ever seen, let alone everything you would call a tiger.

The strongest definitions use a crossfire of intensional and extensional communication to nail down a concept. Even so, you only communicate maps to concepts, or instructions for building concepts—you don’t communicate the actual categories as they exist in your mind or in the world.

The Law of Continued Failure

The law of continued failure is the rule that says that if your country is incompetent enough to use a plaintext 9-numeric-digit password on all of your bank accounts and credit applications, your country is not competent enough to correct course after the next disaster in which a hundred million passwords are revealed. A civilization competent enough to correct course in response to that prod, to react to it the way you’d want them to react, is competent enough not to make the mistake in the first place. When a system fails massively and obviously, rather than subtly and at the very edges of competence, the next prod is not going to cause the system to suddenly snap into doing things intelligently.

There’s No Fire Alarm for Artificial General Intelligence – Machine Intelligence Research Institute

Source: There’s No Fire Alarm for Artificial General Intelligence – Machine Intelligence Research Institute

What is the function of a fire alarm? One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building. … [but] We don’t want to look panicky by being afraid of what isn’t an emergency, so we try to look calm while glancing out of the corners of our eyes to see how others are reacting, but of course they are also trying to look calm.

A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won’t lose face if you proceed to exit the building.

It’s now and then proposed that we ought to start reacting later to the issues of Artificial General Intelligence, because, it is said, we are so far away from it that it just isn’t possible to do productive work on it today. … the implicit alternative strategy on offer is: Wait for some unspecified future event that tells us AGI is coming near; and then we’ll all know that it’s okay to start working on AGI alignment.

This seems to me to be wrong on a number of grounds.

History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up. … Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima.

Progress is driven by peak knowledge, not average knowledge.

The future uses different tools, and can therefore easily do things that are very hard now, or do with difficulty things that are impossible now.

When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.

What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.

There is never going to be a time before the end when you can look around nervously, and see that it is now clearly common knowledge that you can talk about AGI being imminent, and take action and exit the building in an orderly fashion, without fear of looking stupid or frightened.

By saying we’re probably going to be in roughly this epistemic state until almost the end, I don’t mean to say we know that AGI is imminent, or that there won’t be important new breakthroughs in AI in the intervening time. I mean that it’s hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky.

AlphaGo Zero and the Hanson-Yudkowsky AI-Foom Debate

Source: AlphaGo Zero and the Foom Debate, by Eliezer Yudkowsky

AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn’t pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet.

The architecture has been simplified. Previous AlphaGo had a policy net that predicted good plays and a value net that evaluated positions, both feeding into lookahead using MCTS (Monte Carlo tree search: random probability-weighted plays out to the end of a game). AlphaGo Zero has one neural net that selects moves, and this net is trained by Paul Christiano-style capability amplification, playing out games against itself to learn new probabilities for winning moves.
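To make the shape of that loop concrete, here is a small self-contained sketch in Python. It is not DeepMind’s code: the game is a trivial Nim variant rather than Go, a lookup table stands in for the neural net, and plain policy-weighted rollouts stand in for MCTS. What it keeps is the AlphaGo Zero structure: a single model provides both move probabilities and a value estimate, and it is trained only on its own self-play search distributions and game outcomes, with no human games anywhere.

```python
import random
from collections import defaultdict

# Toy self-play training loop. The "game" is Nim: 7 stones, take 1 or 2 each
# turn, whoever takes the last stone wins. Purely illustrative stand-ins.
MOVES = (1, 2)

def legal_moves(stones):
    return [m for m in MOVES if m <= stones]

class TableNet:
    """Stand-in for the single policy+value network."""
    def __init__(self):
        self.policy = defaultdict(lambda: {m: 1.0 / len(MOVES) for m in MOVES})
        self.value = defaultdict(float)  # trained below, though the crude search ignores it

    def update(self, state, search_dist, outcome, lr=0.1):
        # The two AlphaGo Zero training targets: move the policy toward the
        # search's visit distribution, and the value toward the game's result.
        for m, p in search_dist.items():
            self.policy[state][m] += lr * (p - self.policy[state][m])
        self.value[state] += lr * (outcome - self.value[state])

def rollout(net, stones, player):
    """Play to the end using the current policy; return the winner (0 or 1)."""
    while True:
        moves = legal_moves(stones)
        weights = [net.policy[stones][m] for m in moves]
        stones -= random.choices(moves, weights=weights)[0]
        if stones == 0:
            return player
        player ^= 1

def search(net, stones, simulations=50):
    """Crude stand-in for MCTS: sample playouts for each first move and count
    wins for the player to move. (Real MCTS would also use the value head.)"""
    wins = {m: 1.0 for m in legal_moves(stones)}  # +1 smoothing
    for _ in range(simulations):
        move = random.choice(legal_moves(stones))
        if stones - move == 0 or rollout(net, stones - move, 1) == 0:
            wins[move] += 1
    total = sum(wins.values())
    return {m: w / total for m, w in wins.items()}

def self_play_game(net, start=7):
    history, stones, player = [], start, 0
    while stones > 0:
        dist = search(net, stones)
        history.append((stones, player, dist))
        moves, weights = zip(*dist.items())
        stones -= random.choices(moves, weights=weights)[0]
        if stones == 0:
            winner = player
        player ^= 1
    return history, winner

def train(games=200):
    net = TableNet()
    for _ in range(games):
        history, winner = self_play_game(net)
        for state, player, dist in history:
            net.update(state, dist, 1.0 if player == winner else -1.0)
    return net

if __name__ == "__main__":
    net = train()
    # From 7 stones the mover can force a win by taking 1 (leaving a multiple
    # of 3); the learned policy should come to prefer that move.
    print(net.policy[7])
```

The lookup table and policy-weighted rollouts keep the sketch short; in the real system both are replaced by a deep residual network and full Monte Carlo tree search guided by that network’s value output.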

the mighty human edifice of Go knowledge, the joseki and tactics developed over centuries of play, the experts teaching children from an early age, was entirely discarded by AlphaGo Zero with a subsequent performance improvement.

Response: What Evidence Is AlphaGo Zero Re AGI Complexity?, by Robin Hanson

Over the history of computer science, we have developed many general tools with simple architectures and built from other general tools, tools that allow super human performance on many specific tasks scattered across a wide range of problem domains. For example, we have superhuman ways to sort lists, and linear regression allows superhuman prediction from simple general tools like matrix inversion. Yet the existence of a limited number of such tools has so far been far from sufficient to enable anything remotely close to human level AGI.
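Hanson’s linear-regression example can be made concrete in a few lines. The sketch below (with synthetic data) fits a regression purely from matrix inversion via the normal equations w = (XᵀX)⁻¹Xᵀy: one simple general tool built from another.

```python
import numpy as np

# Linear regression as a "simple general tool" built from another general
# tool (matrix inversion), via the normal equations w = (X^T X)^-1 X^T y.
# The data below is synthetic and purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                      # 1000 observations, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)   # noisy linear signal

X1 = np.column_stack([X, np.ones(len(X))])          # add an intercept column
w = np.linalg.inv(X1.T @ X1) @ X1.T @ y             # fit by matrix inversion
print(w)  # recovers roughly [2.0, -1.0, 0.5, 0.0]
```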

I’m treating it as the difference between learning N simple general tools and learning N+1 such tools. … I disagree with the claim that “this single simple tool gives a bigger advantage on a wider range of tasks than we have seen with previous tools.”

RE: The Hanson-Yudkowsky AI-Foom Debate

In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”).