The Dark Forest Theory of the Internet | Medium

Source: The Dark Forest Theory of the Internet | Medium, by Yancey Strickler

Imagine a dark forest at night. It’s deathly quiet. Nothing moves. Nothing stirs. This could lead one to assume that the forest is devoid of life. But of course, it’s not. The dark forest is full of life. It’s quiet because night is when the predators come out. To survive, the animals stay silent.

This is also what the internet is becoming: a dark forest. In response to the ads, the tracking, the trolling, the hype, and other predatory behaviors, we’re retreating to our dark forests of the internet, and away from the mainstream.

These dark forests are all spaces where depressurized conversation is possible because of their non-indexed, non-optimized, and non-gamified environments. The cultures of those spaces have more in common with the physical world than with the internet.

Milestones for me and my family were left unshared beyond our internet dark forests, even though many more friends and members of our families would’ve been happy to hear about them. Not sharing was my choice, of course, and I didn’t question it. My alienation from the mainstream was their loss, not mine. But did this choice also deprive me of some greater reward?

It’s possible, I suppose, that a shift away from the mainstream internet and into the dark forests could permanently limit the mainstream’s influence. It could delegitimize it. In some ways that’s the story of the internet’s effect on broadcast television. But we forget how powerful television still is. And those of us building dark forests risk underestimating how powerful the mainstream channels will continue to be, and how minor our havens are compared to their immensity.

The influence of Facebook, Twitter, and others is enormous and not going away. There’s a reason why the Russian military focused on these platforms when they wanted to manipulate public opinion: they have a real impact. The meaning and tone of these platforms change with who uses them. What kind of bowling alley it is depends on who goes there.

Should a significant percentage of the population abandon these spaces, nearly as many eyeballs will remain for those who stay to influence, and the influence of those who departed on the larger world they still live in will shrink.

What comes after “open source”

Source: What comes after “open source”, by Steve Klabnik

note that I seamlessly switched above from talking about what Free Software and Open Source are, to immediately talking about licenses. This is because these two things are effectively synonymous.

So why is it a problem that the concepts of free software and open source are intrinsically tied to licenses? It’s that the aims and goals of both of these movements are about distribution and therefore consumption, but what people care about most today is about the production of software.

Most developers don’t understand open source to be a particular license that certain software artifacts are in compliance with, but an attitude, an ideology. And that ideology isn’t just about the consumption of the software, but also its production.

I’m still, ultimately, left with more questions than answers. But I do think I’ve properly identified the problem: many developers conceive of software freedom as something larger than purely a license that kicks in on redistribution. This is the new frontier for those who are thinking about furthering the goals of the free software and open source movements. Our old tools are inadequate, and I’m not sure that the needed replacements work, or even exist.

Viral Outrage Is Collapsing Our Worlds | The Atlantic

Source: Viral Outrage Is Collapsing Our Worlds | The Atlantic, by Conor Friedersdorf

The ability to slip into a domain and adopt whatever values and norms are appropriate while retaining identities in other domains is something most Americans value, both to live in peace amid difference and for personal reasons.

I wonder whether ongoing debates about matters as varied as Facebook user-data practices, “the right to be forgotten,” NSA data collection, and any number of public-shaming controversies are usefully considered under the umbrella framework of “How is new technology affecting our ability to keep our various worlds from colliding when we don’t want them to, and what, if anything, should we do about that?”

What would the implications be of adopting the norm that it is often wrong, or only rarely appropriate, to rob an individual of the ability to slip into a given domain and adopt whatever values and norms are appropriate while retaining their identities in other domains?

What would be the worst consequences? How might we shift the cultural equilibrium to value domain-slipping more highly while recognizing its practical and moral limits? What tradeoffs are involved?

The digital revolution isn’t over but has turned into something else | Edge

Source: The digital revolution isn’t over but has turned into something else | Edge, by George Dyson

Once it was simple: programmers wrote the instructions that were supplied to the machines. Since the machines were controlled by these instructions, those who wrote the instructions controlled the machines.

We imagine that individuals, or individual algorithms, are still behind the curtain somewhere, in control. We are fooling ourselves.

Nature uses digital coding for the storage, replication, recombination, and error correction of sequences of nucleotides, but relies on analog coding and analog computing for intelligence and control.

Digital computers deal with integers, binary sequences, deterministic logic, algorithms, and time that is idealized into discrete increments. Analog computers deal with real numbers, non-deterministic logic, and continuous functions, including time as it exists as a continuum in the real world. … Digital computing, intolerant of error or ambiguity, depends upon precise definitions and error correction at every step. Analog computing not only tolerates errors and ambiguities, but thrives on them. Digital computers, in a technical sense, are analog computers, so hardened against noise that they have lost their immunity to it. Analog computers embrace noise; a real-world neural network needs a certain level of noise to work.

Nature’s answer to those who sought to control nature through programmable machines is to allow us to build machines whose nature is beyond programmable control.

The Information World War

Source: The Digital Maginot Line | ribbonfarm, by Renee DiResta

The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless).

In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools.

Combatants evolve with remarkable speed, because digital munitions are very close to free. In fact, because of the digital advertising ecosystem, information warfare may even turn a profit. There’s very little incentive not to try everything: this is a revolution that is being A/B tested. The most visible battlespaces are our online forums — Twitter, Facebook, and YouTube — but the activity is increasingly spreading to old-school direct action on the streets, in traditional media outlets, and behind closed doors, as state-sponsored trolls recruit and manipulate activists, launder narratives, and instigate protests.

The combatants want to normalize the idea that the platforms shouldn’t be allowed to set rules of engagement because in the short term, it’s only the platforms that can.

Meanwhile, regular civilian users view these platforms as ordinary extensions of physical public and social spaces – the new public square, with a bit of a pollution problem. Academic leaders and technologists wonder if faster fact-checking might solve the problem, and attempt to engage in good-faith debate about whether moderation is censorship. There’s a fundamental disconnect here, driven by underestimation and misinterpretation. The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.

ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.

The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content … then activated collections of bots and sockpuppets

Since running spammy automated accounts is no longer a good use of resources, sophisticated operators have moved on to new tactics. … Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead.

The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater.

The key problem is this: platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors.

Platforms cannot continue to operate as if all users are basically the same; they have to develop constant awareness of how various combatant types will abuse the new features that they roll out, and build detection of combatant tactics into the technology they’re creating to police the problem. … They must recognize that they are battlespaces, and as such, must build the policing capabilities that limit the actions of malicious combatants while protecting the actual rights of their real civilian users.

AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right- and left-wing authority figures – thought oligarchs speaking to entirely separate groups.

An admirable commitment to the principle of free speech in peace time turns into a sucker position against adversarial psy-ops in wartime. We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.

We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure

More: Common-Knowledge Attacks on Democracy, by Henry Farrell and Bruce Schneier