The illusion of certainty, by Rory Sutherland

Source: The illusion of certainty, by Rory Sutherland

at stake is the difference between deterministic and probabilistic improvement. If you engage engineers, you don’t know what you are going to get. You may be unlucky and get nothing. Or their solution may be so outlandish that it is hard to compare with other competing solutions. On average, though, what you get will be more valuable than the gains produced by some tedious restructuring enshrined in a fat PowerPoint deck.

But in business, let alone in government, it is only in crises that people find a budget for probabilistic interventions of this kind (in peacetime, nobody would have given Barnes Wallis the time of day). The reason is that both bureaucrats and business people are heavily attracted to the illusion of certainty. Standard cost-cutting ‘efficiencies’ can usually be ‘proven’ to work in advance; more interesting lines of enquiry come with career-threatening unknowability.

One problem with this pretense of certainty is that cost savings are more easily quantified than potential gains.

for a long time, the ratio between ‘explore’ and ‘exploit’ has been badly out of whack.
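The ‘explore/exploit’ framing comes from the multi-armed bandit literature: exploiting means backing the option with the best known payoff, exploring means spending some budget on uncertain bets. As a loose illustration of the trade-off Sutherland has in mind (my sketch, not his; all numbers invented), an epsilon-greedy strategy reserves a fixed fraction of decisions for exploration:

```python
import random

# Epsilon-greedy sketch of the explore/exploit trade-off (illustrative only).
# Each "arm" is a project with an unknown true payoff; epsilon is the share
# of the budget reserved for probabilistic bets.
def run(true_means, rounds=1000, epsilon=0.1):
    counts = [0] * len(true_means)
    totals = [0.0] * len(true_means)
    for _ in range(rounds):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(len(true_means))           # explore
        else:
            arm = max(range(len(true_means)),
                      key=lambda i: totals[i] / counts[i])    # exploit
        counts[arm] += 1
        totals[arm] += random.gauss(true_means[arm], 1.0)     # noisy payoff
    return counts

# With epsilon = 0, an early lucky arm gets ridden forever; a small
# exploration budget lets a better but noisier option be discovered at all.
print(run([0.5, 1.0, 0.8]))
```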

use [an] ‘evidence-based’ data-model up to a point, but correct for the fact that it is incomplete, temporary and weighted to the past. Institutionalised humans obtain a false sense of certainty by assuming … that what is optimal in a one-off transaction in a certain present is also optimal at scale, in an uncertain, long-term future.

We’re Banning Facial Recognition. We’re Missing the Point. | The New York Times | Opinion

Source: We’re Banning Facial Recognition. We’re Missing the Point. | The New York Times | Opinion, by Bruce Schneier

The whole point of modern surveillance is to treat people differently, and facial recognition technologies are only a small part of that.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. … But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars.

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. … It can be purchasing data, internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are — using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.
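To make the correlation step concrete: once two independently collected datasets share any stable identifier, they can be merged into a single profile with no cooperation between the collectors. A toy sketch (mine, not Schneier’s; all records and field names are invented):

```python
# Toy sketch of the "correlation" step: two unrelated datasets that share a
# stable identifier (here a MAC address) can be joined into one profile.
# All records and field names below are invented for illustration.

wifi_sightings = [
    {"mac": "aa:bb:cc:01", "location": "bus stop 12", "time": "08:04"},
]
purchases = [
    {"mac": "aa:bb:cc:01", "item": "coffee", "store": "kiosk 3"},
]

profiles = {}
for record in wifi_sightings + purchases:
    profiles.setdefault(record["mac"], {}).update(record)

print(profiles["aa:bb:cc:01"])
# {'mac': 'aa:bb:cc:01', 'location': 'bus stop 12', 'time': '08:04',
#  'item': 'coffee', 'store': 'kiosk 3'}
```

The shared identifier does all the work; neither dataset is sensitive on its own, but the join is.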

The whole purpose of this process is for companies — and governments — to treat individuals differently.

Regulating this system means addressing all three steps of the process… The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible. Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. … Finally, we need better rules about when and how it is permissible for companies to discriminate.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.

Artificial Personas and Public Discourse | Schneier on Security

Source: Artificial Personas and Public Discourse | Schneier on Security, by Bruce Schneier

it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It’s writing news stories, particularly in sports and finance. It’s talking with customers on merchant websites.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos — sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally.

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries.

In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them — maybe half — were fake, using stolen identities.

The most important lesson from the 2016 election about misinformation isn’t that misinformation occurred; it is how cheap and easy misinforming people was. … Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

The case for … cities that aren’t dystopian surveillance states | Cory Doctorow

Source: The case for … cities that aren’t dystopian surveillance states | The Guardian, by Cory Doctorow

Imagine your smartphone knew everything about the city – but the city didn’t know anything about you.

Why isn’t it creepy for you to know when the next bus is due, but it is creepy for the bus company to know that you’re waiting for a bus? It all comes down to whether you are a sensor – or a thing to be sensed.
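The asymmetry Doctorow describes is architectural. In the non-creepy version, the timetable ships to the phone and the "next bus" question is answered locally, so no query ever reveals who is waiting where. A minimal sketch of that design (my illustration; the data format and names are invented):

```python
from datetime import time

# Sketch of the "you sense the city" design: the phone downloads the full
# timetable once and answers "when is the next bus?" on the device, so the
# bus company never learns which stop you are standing at. Data is invented.

timetable = {  # stop id -> departure times, fetched once for the whole city
    "stop_12": [time(8, 5), time(8, 20), time(8, 35)],
}

def next_bus(stop, now):
    # Runs entirely locally; nothing about 'stop' or 'now' is uploaded.
    return next((t for t in timetable[stop] if t >= now), None)

print(next_bus("stop_12", time(8, 10)))  # 08:20:00
```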

homes were sensing and actuating long before the “internet of things” emerged. Thermostats, light switches, humidifiers, combi boilers … our homes are stuffed full of automated tools that no one thinks to call “smart,” largely because they aren’t terrible enough to earn the appellation.

Instead, these were oriented around serving us, rather than observing or controlling us… In your home, you are not a thing, you are a person, and the things around you exist for your comfort and benefit, not the other way around.

Shouldn’t it be that way in our cities?

As is so often the case with technology, the most important consideration isn’t what the technology does: it’s who the technology does it to, and who it does it for. The sizzle reel for a smart city always involves a cut to the control room, where the wise, cool-headed technocrats get a god’s-eye view over the city they’ve instrumented and kitted out with electronic ways of reaching into the world and rearranging its furniture.

It’s a safe bet that the people who make those videos imagine themselves as one of the controllers watching the monitors – not as one of the plebs whose movements are being fed to the cameras that feed the monitors. It’s a safe bet that most of us would like that kind of god’s-eye view into our cities, and with a little tweaking, we could have it.

This is an example of how a smart city could work: a place through which you move in relative anonymity, identified only when needed, and under conditions that allow for significant controls over what can be done with your data.

If it sounds utopian, it’s only because of how far we have come from the idea of a city being designed to serve its demos, rather than its lordly masters. We must recover that idea. As a professional cyberpunk dystopian writer, I’m here to tell you that our ideas were intended as warnings, not suggestions.

The Value of Grey Thinking | Farnam Street

Source: The Value of Grey Thinking | Farnam Street

Reality is all grey area. All of it. There are very few black and white answers and no solutions without second-order consequences.

It’s only once you can begin divorcing yourself from good-and-bad, black-and-white, category X&Y type thinking that your understanding of reality starts to fit together properly. Putting things on a continuum, assessing the scale of their importance and quantifying their effects, understanding both the good and the bad, is the way to do it. Understanding the other side of the argument better than your own, a theme we hammer on ad nauseam, is the way to do it. Because truth always lies somewhere in between, and the discomfort of being uncertain is preferable to the certainty of being wrong.

quantitative thinking isn’t really about math; it’s about the idea that “the dose makes the poison.” … Nearly all things are OK in some dose but not OK in another dose. That is the way of the world, and why almost everything connected to practical reality must be quantified, at least roughly.