Complicating the Narratives – The Whole Story

Source: Complicating the Narratives – The Whole Story

What if journalists covered controversial issues differently — based on how humans actually behave when they are polarized and suspicious? … The idea is to revive complexity in a time of false simplicity.

How did you come to have your political views?

Haidt identifies six moral foundations that form the basis of political thought: care, fairness, liberty, loyalty, authority and sanctity.

What is dividing us?
How should we decide?
How did you come to that?
What is oversimplified about this issue?
How has this conflict affected your life?
What do you think the other side wants?
What’s the question nobody is asking?

listen not just to what [people] say — but to their “gap words,” or the things that they don’t say.

listen for specific clues or “signposts,” which are usually symptoms of deeper, hidden meaning. Signposts include words like “always” or “never,” any sign of emotion, the use of metaphors, statements of identity, words that get repeated or any signs of confusion or ambiguity. When you hear one of these clues, identify it explicitly and ask for more.

double check — give the person a distillation of what you thought they meant and see what they say.

Did the Victorians have faster reactions? – Mind Hacks

Source: Did the Victorians have faster reactions? – Mind Hacks, by Tom Stafford
RE: Woodley, M. A., te Nijenhuis, J., & Murphy, R. (2015). The Victorians were still faster than us. Commentary: Factors influencing the latency of simple reaction time. Frontiers in Human Neuroscience, 9, 452.

measurements of “simple reaction times” (SRTs)

(Woodley et al, 2015, Figure 1, “Secular SRT slowing across four large, representative studies from the UK spanning a century. Bubble-size is proportional to sample size. Combined N = 6622.”)
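The figure caption describes a trend fit across studies where each study's weight is proportional to its sample size. As an illustrative sketch only (the data below are invented, not Woodley et al.'s, and the function name is mine), a sample-size-weighted least-squares slope of that kind can be computed as:

```python
def weighted_slope(years, mean_srt_ms, ns):
    """Sample-size-weighted least-squares slope: the kind of fit behind a
    meta-analytic 'secular slowing' estimate, in ms of SRT change per year.
    Each study contributes its mean SRT, weighted by its sample size n."""
    wsum = sum(ns)
    xbar = sum(n * x for n, x in zip(ns, years)) / wsum
    ybar = sum(n * y for n, y in zip(ns, mean_srt_ms)) / wsum
    num = sum(n * (x - xbar) * (y - ybar)
              for n, x, y in zip(ns, years, mean_srt_ms))
    den = sum(n * (x - xbar) ** 2 for n, x in zip(ns, years))
    return num / den

# Made-up example: three equally sized studies, SRT rising over a century.
print(weighted_slope([1900, 1950, 2000], [180, 200, 220], [100, 100, 100]))
# → 0.4 (ms per year)
```

A positive slope would correspond to the "secular slowing" the commentary argues for; the debate in the post is about whether the underlying study means are comparable enough for such a fit to be meaningful.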

Trump’s top economic adviser has ditched the Phillips curve—and it’s not crazy

Source: Trump’s top economic adviser has ditched the Phillips curve—and it’s not crazy

For decades, the world’s central bankers have all but lived and died by the Phillips curve, which predicted inflation and wage growth reasonably well until the 1980s. Since then, however, the relationship between the variables it links has been more complicated.

Some economists argue (paywall) that the ways in which we measure the variables at play, like wage growth, unemployment and inflation, need to change—not the underlying theory. But questioning the theory—and perhaps arguing against it—is no longer an arrestable offense.
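For reference, the relationship in question is usually written in its textbook expectations-augmented form, pi = pi_e − beta · (u − u_n): inflation runs below expectations when unemployment exceeds its natural rate. The function and parameter values below are illustrative, not from the article:

```python
def phillips_inflation(expected_inflation, unemployment, natural_rate, beta=0.5):
    """Textbook expectations-augmented Phillips curve:
    pi = pi_e - beta * (u - u_n).
    beta (here an arbitrary 0.5) measures the strength of the
    inflation-unemployment trade-off; the article's point is that this
    trade-off has looked much weaker since the 1980s."""
    return expected_inflation - beta * (unemployment - natural_rate)

# With 2% expected inflation and unemployment 2 points above the natural
# rate: 2 - 0.5 * 2 = 1% predicted inflation.
print(phillips_inflation(2.0, 6.0, 4.0))  # → 1.0
```

The "flattening" debate amounts to asking whether beta has fallen toward zero or whether we are simply measuring u and pi badly.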

Ways to think about machine learning

Source: Ways to think about machine learning, by Benedict Evans

[Thinking about the invention of relational databases] is a good grounding way to think about machine learning today – it’s a step change in what we can do with computers, and that will be part of many different products for many different companies. Eventually, pretty much everything will have ML somewhere inside and no-one will care.

An important parallel here is that though relational databases had economy of scale effects, there were limited network or ‘winner takes all’ effects.

with each wave of automation, we imagine we’re creating something anthropomorphic or something with general intelligence. In the 1920s and 30s we imagined steel men walking around factories holding hammers, and in the 1950s we imagined humanoid robots walking around the kitchen doing the housework. We didn’t get robot servants – we got washing machines.

Washing machines are robots, but they’re not ‘intelligent’. They don’t know what water or clothes are. Moreover, they’re not general purpose even in the narrow domain of washing … Equally, machine learning lets us solve classes of problem that computers could not usefully address before, but each of those problems will require a different implementation, and different data, a different route to market, and often a different company. Each of them is a piece of automation. Each of them is a washing machine.

one of my colleagues suggested that machine learning will be able to do anything you could train a dog to do, which is also a useful way to think about AI bias (What exactly has the dog learnt? What was in the training data? Are you sure? How do you ask?), but also limited because dogs do have general intelligence and common sense, unlike any neural network we know how to build. Andrew Ng has suggested that ML will be able to do anything you could do in less than one second. Talking about ML does tend to be a hunt for metaphors, but I prefer the metaphor that this gives you infinite interns, or, perhaps, infinite ten-year-olds.

In a sense, this is what automation always does; Excel didn’t give us artificial accountants, Photoshop and InDesign didn’t give us artificial graphic designers and indeed steam engines didn’t give us artificial horses. (In an earlier wave of ‘AI’, chess computers didn’t give us a grumpy middle-aged Russian in a box.) Rather, we automated one discrete task, at massive scale.