Shoshana Zuboff Explains the Age of Surveillance Capitalism | The Intercept

Source: “A Fundamentally Illegitimate Choice”: Shoshana Zuboff on the Age of Surveillance Capitalism | The Intercept, by Sam Biddle

RE: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, by Shoshana Zuboff

The clichéd refrain that if you’re “not paying for a product, you are the product”? Too weak, says Zuboff. You’re not technically the product, she explains over the course of several hundred tense pages, because you’re something even more degrading: an input for the real product, predictions about your future that are sold to the highest bidder so that this future can be altered.

Now we have markets of business customers that are selling and buying predictions of human futures. I believe in the values of human freedom and human autonomy as the necessary elements of a democratic society. As competition over these prediction products heats up, it’s clear that surveillance capitalists have discovered that the most predictive sources of data come from intervening in our lives, in our real-time actions, to shape our action in a direction that aligns with the kind of outcomes they want to guarantee to their customers. That’s where they’re making their money. These are bald-faced interventions in the exercise of human autonomy, what I call the “right to the future tense.” The very idea that I can decide what I want my future to be and design the actions that get me from here to there, that’s the very material essence of the idea of free will.

To the extent that we do need help and we do look to the internet, it is a fundamentally illegitimate choice that we are now forced to make as 21st-century citizens. In order to get the help I need, I’ve got to march through surveillance capitalism supply chains. Because Alexa and Google Home and every other gewgaw that has the word “smart” in front of it, every service that has “personalized” in front of it, are nothing but supply chain interfaces for the flow of raw material to be translated into data, to be fashioned into prediction products, to be sold in behavioral futures markets so that we end up funding our own domination. If we’re gonna fix this, no matter how much we feel like we need this stuff, we’ve got to get to a place where we are willing to say no.

The Dark Forest Theory of the Internet | Medium

Source: The Dark Forest Theory of the Internet | Medium, by Yancey Strickler

Imagine a dark forest at night. It’s deathly quiet. Nothing moves. Nothing stirs. This could lead one to assume that the forest is devoid of life. But of course, it’s not. The dark forest is full of life. It’s quiet because night is when the predators come out. To survive, the animals stay silent.

This is also what the internet is becoming: a dark forest. In response to the ads, the tracking, the trolling, the hype, and other predatory behaviors, we’re retreating to our dark forests of the internet, and away from the mainstream.

These are all spaces where depressurized conversation is possible because of their non-indexed, non-optimized, and non-gamified environments. The cultures of those spaces have more in common with the physical world than with the internet.

Milestones for me and my family were left unshared beyond our internet dark forests, even though many more friends and members of our families would’ve been happy to hear about them. Not sharing was my choice, of course, and I didn’t question it. My alienation from the mainstream was their loss, not mine. But did this choice also deprive me of some greater reward?

It’s possible, I suppose, that a shift away from the mainstream internet and into the dark forests could permanently limit the mainstream’s influence. It could delegitimize it. In some ways that’s the story of the internet’s effect on broadcast television. But we forget how powerful television still is. And those of us building dark forests risk underestimating how powerful the mainstream channels will continue to be, and how minor our havens are compared to their immensity.

The influence of Facebook, Twitter, and others is enormous and not going away. There’s a reason why the Russian military focused on these platforms when it wanted to manipulate public opinion: they have a real impact. The meaning and tone of these platforms change with who uses them. What kind of bowling alley it is depends on who goes there.

Should a significant percentage of the population abandon these spaces, nearly as many eyeballs will remain for those who stay to influence, and those who departed will lose influence over the larger world they still live in.

How to do hard things, by David R. MacIver

Source: How to do hard things, by David R. MacIver

“The Fully General System For Learning To Do Hard Things”. It’s a useful conceptual framework for how to get better at things that you currently find difficult. … The goal of the system is not to save you work; it’s to ensure that the work you do is useful.

The Single-Loop System

When you know what success looks like but cannot currently achieve it, the system works as follows:

  1. Find something that is like the hard thing but is easy.
  2. Modify the easy thing so that it is like the hard thing in exactly one way that you find hard.
  3. Do the modified thing until it is no longer hard.
  4. If you get stuck, do one of the following:
    1. Go back to step 2 and pick a different way in which the problem is hard.
    2. Recursively apply the general system for learning to do hard things to the thing you’re stuck on.
    3. Go ask an expert or a rubber duck for advice.
    4. If you’re still stuck after trying the first three, you may have hit some sort of natural difficulty limit and be unable to make progress.
  5. If the original hard thing is now easy, you’re done. If not, go back to step 2.

The reason this works much better than just practicing the hard thing is that it gives you a much more direct feedback loop. At any given time there is exactly one aspect of the problem you are trying to get better at, and you can focus on it to the exclusion of all else. When you are practicing something that is difficult in multiple ways, you will be bad at it in all of those ways; moreover, you will be worse at each of them than you would be if you had tried them on their own. Additionally, when you fail, you have to do a complicated root-cause analysis to figure out why.

Instead, by isolating one aspect of the problem that is difficult, you will fairly rapidly improve, or hit the limits of your ability.
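
To make the loop concrete, here is a minimal, runnable Python sketch of the single-loop system. It is only an illustration: the skill is modelled as a set of named hard aspects, an isolated aspect is assumed to become easy after a fixed number of drills, and step 1 (finding the easy analogue) is taken as already done. All names (single_loop, practice, REPS_TO_MASTER) are made up for the sketch, not anything from MacIver’s post.

```python
# A toy model of the single-loop system. Everything here is a
# hypothetical illustration: a "skill" is a set of named hard aspects,
# and an aspect stops being hard after a fixed number of drills.

REPS_TO_MASTER = 3  # assumption: an isolated aspect becomes easy after ~3 reps


def practice(aspect: str, reps: dict[str, int]) -> bool:
    """One practice session on a single isolated aspect.

    Returns True once the aspect is no longer hard (step 3's exit test).
    """
    reps[aspect] = reps.get(aspect, 0) + 1
    return reps[aspect] >= REPS_TO_MASTER


def single_loop(hard_aspects: set[str]) -> None:
    """Drill exactly one hard aspect at a time until none remain."""
    reps: dict[str, int] = {}
    while hard_aspects:                    # step 5: done when nothing is hard
        aspect = sorted(hard_aspects)[0]   # step 2: isolate exactly one aspect
        while not practice(aspect, reps):  # step 3: do it until it's easy
            pass
        hard_aspects.remove(aspect)
        print(f"{aspect!r} is no longer hard")


if __name__ == "__main__":
    # e.g. learning to give a conference talk
    single_loop({"speaking to a room", "slide design", "fielding questions"})
```

The point is the shape of the control flow: the inner loop gets a direct feedback signal about exactly one aspect at a time, which is what the paragraph above argues makes this kind of practice efficient.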

The Double-Loop System

If you don’t know what success looks like, you need to do double-loop learning, in which you mix improving your understanding of the problem with improving your ability to execute the solution.

  1. Apply the single loop system to the problem of improving your understanding of the problem space (e.g. consume lots of examples and learn to distinguish good from bad) in order to acquire a sense of good taste.
  2. Apply the single loop system to the problem of doing well according to your own sense of good taste.
  3. Get feedback on the result from others. Do they think you did it well? If yes, great! You’re good at the thing. If no, either improve your sense of taste or theirs. If you choose yours, go back to step 1 with the new example. If you choose theirs, apply the single loop system to the hard problem of convincing others that your thing is good.
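
In the same spirit, here is a toy sketch of the double-loop control flow, with taste and the quality of a piece of work collapsed into bare numbers. Every name and value (learn_taste, produce_to_taste, audience_bar) is hypothetical; only the structure, acquire taste, execute to it, then reconcile with outside feedback, follows the steps above.

```python
# A toy model of the double-loop system, under the same caveats as the
# single-loop sketch: "taste" and "work" are collapsed into numeric
# quality scores, and all names here are hypothetical.


def learn_taste(good_examples: list[float]) -> float:
    """Loop 1: study examples of good work to acquire a quality bar."""
    return sum(good_examples) / len(good_examples)


def produce_to_taste(taste: float, step: float = 0.5) -> float:
    """Loop 2: revise the work until it satisfies your own taste."""
    work = 0.0
    while work < taste:
        work += step  # each revision improves the piece a little
    return work


def double_loop(good_examples: list[float], audience_bar: float) -> float:
    """Step 3: reconcile your own taste with feedback from others."""
    taste = learn_taste(good_examples)
    while True:
        work = produce_to_taste(taste)
        if work >= audience_bar:  # others think you did it well: done
            return work
        # Feedback says the work falls short. This sketch always takes the
        # "improve your own taste" branch; the alternative branch
        # (persuading others that the work is good) is omitted for brevity.
        taste = audience_bar


if __name__ == "__main__":
    print(double_loop([6.0, 7.5, 8.0], audience_bar=9.0))
```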

Privacy Rights and Data Collection in a Digital Economy

Source: U.S. Senate Testimony | Idle Words, by Maciej Cegłowski

RE: U.S. Senate hearing on “Privacy Rights and Data Collection in a Digital Economy.”, by The Committee on Banking, Housing, and Urban Affairs

The sudden ubiquity of this architecture of mass surveillance, and its enshrinement as the default business model of the online economy, mean that we can no longer put off hard conversations about the threats it poses to liberty.

Adding to this urgency is the empirical fact that, while our online economy depends on the collection and permanent storage of highly personal data, we do not have the capacity to keep such large collections of user data safe over time.

While many individual data breaches are due to negligence or poor practices, their overall number reflects an uncomfortable truth well known to computer professionals—that our ability to attack computer systems far exceeds our ability to defend them, and will for the foreseeable future.

In the regulatory context, discussion of privacy invariably means data privacy—the idea of protecting designated sensitive material from unauthorized access.

It is true that, when it comes to protecting specific collections of data, the companies that profit most from the surveillance economy are the ones working hardest to defend them against unauthorized access.

But there is a second, more fundamental sense of the word privacy, one which until recently was so common and unremarkable that it would have made no sense to try to describe it.

That is the idea that there exists a sphere of life that should remain outside public scrutiny, in which we can be sure that our words, actions, thoughts and feelings are not being indelibly recorded. This includes not only intimate spaces like the home, but also the many semi-private places where people gather and engage with one another in the common activities of daily life—the workplace, church, club or union hall.

The tension between these interpretations of what privacy entails, and who is trying to defend it, complicates attempts to discuss regulation.

Tech companies will correctly point out that their customers have willingly traded their private data for an almost miraculous collection of useful services, services that have unquestionably made their lives better, and that the business model that allows them to offer these services for free creates far more value than harm for their customers.

Consumers will just as rightly point out that they never consented to be the subjects in an uncontrolled social experiment, that the companies engaged in reshaping our world have consistently refused to honestly discuss their business models or data collection practices, and that in a democratic society, profound social change requires consensus and accountability.

While it is too soon to draw definitive conclusions about the GDPR, there is a tension between its concept of user consent and the reality of a surveillance economy that is worth examining in more detail.

A key assumption of the consent model is that any user can choose to withhold consent from online services. But not all services are created equal; there are some that you really can’t say no to.

The latent potential of the surveillance economy as a toolkit for despotism cannot be exaggerated. The monitoring tools we see in repressive regimes are not ‘dual use’ technologies—they are single use technologies, working as designed, except for a different master.


Also: Think You’re Discreet Online? Think Again | Opinion | The New York Times, by Zeynep Tufekci

Because of technological advances and the sheer amount of data now available about billions of other people, discretion no longer suffices to protect your privacy. Computer algorithms and network analyses can now infer, with a sufficiently high degree of accuracy, a wide range of things about you that you may have never disclosed, including your moods, your political beliefs, your sexual orientation and your health. There is no longer such a thing as individually “opting out” of our privacy-compromised world.

Such tools are already being marketed for use in hiring employees, for detecting shoppers’ moods and predicting criminal behavior. Unless they are properly regulated, in the near future we could be hired, fired, granted or denied insurance, accepted to or rejected from college, rented housing and extended or denied credit based on facts that are inferred about us.

This is worrisome enough when it involves correct inferences. But because computational inference is a statistical technique, it also often gets things wrong, and it is hard, perhaps impossible, to pinpoint the source of the error, for these algorithms offer little to no insight into how they operate.