Concept-Shaped Holes Can Be Impossible To Notice | Slate Star Codex

Source: Concept-Shaped Holes Can Be Impossible To Notice | Slate Star Codex

When I see other people making a big deal out of seemingly-minor problems, I’m in this weird superposition between thinking I’ve avoided them so easily I missed their existence, and thinking I’ve fallen into them so thoroughly I’m like the fish who can’t see water.

And when I see other people struggling to understand seemingly-obvious concepts, I’m in this weird superposition between thinking I’m so far beyond them that I grasped it effortlessly, and thinking I’m so far beneath them that I haven’t even realized there’s a problem.

There are concepts nobody gets on the first reading, concepts you have to have explained to you again and again until finally one of the explanations clicks and you can reconstruct it out of loose pieces in your own head.

And there are concept-shaped holes you don’t notice that you have. You can talk to an anosmic person about smell for years on end, and they’re still not going to realize they’ve got a big hole where that concept should be. You can give high-school me an entire class about atomization, and he can ace the relevant test, and he’s still not going to know what atomization is.

Put these together, and you have cause for concern. If you learn about something, and it seems trivial and boring, but lots of other people think it’s interesting and important – well, it could be so far beneath you that you’d internalized all its lessons already. Or it could be so far beyond you that you’re not even thinking on the same level as the people who talk about it.

Burn The Programmer! – Charlie’s Diary

Source: Burn The Programmer! – Charlie’s Diary

RE: Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”

I don’t have to imprecate dark and terrible forces in order to use my PS4, unless you count Sony’s latest privacy policy. My lovely new iPad is famously intuitive, not a quality one would ascribe to The Lesser Key Of Solomon. But. … That’s not so much the case for the other technology I interact with.

So there’s this class of people in the world who can do incredible things – like, say, teaching a car to drive itself. Or indeed crafting a literal Magician’s Broom to clean their towers – I mean, apartments.

And they do this by immersing themselves in obscure, difficult learning that on the face of it makes no sense to the average person.

They can cause harm to people thousands of miles away using weirdly-named incantations – like “WannaCry”.

They summon and control alien entities called “AIs”. They don’t always perfectly control those entities.

And they can amass unimaginable wealth and power by using these arcane skills.

What happens next?

As I watch 2017 unfold in all its craziness, I do start wondering whether the conversation should be less about robots, and more about straight-up magic. About a world which is increasingly splitting into those who can wield magic, those who can pay the magicians, and those who just use the things magic enables.

Business questions engineers should ask when interviewing at ML/AI companies

Source: Business questions engineers should ask when interviewing at ML/AI companies

Though framed for engineers interviewing at ML/AI companies, the questions are broadly applicable.

  1. Why does anyone need this?
  2. How was this problem being solved before?
  3. How many users have you spoken to? What have you learned from them?
  4. How do you make money?
  5. How will you grow? How will anyone find out about you?
  6. How big is this market?
  7. What is defensible about the business?

Does Age Bring Wisdom? | Slate Star Codex

Source: Does Age Bring Wisdom? | Slate Star Codex

Wisdom seems like the accumulation of [high-level frames and heuristics that organize other concepts], or changes in higher-level heuristics you get once you’ve had enough of those. I look back on myself now vs. ten years ago and notice I’ve become more cynical, more mellow, and more prone to believing things are complicated. … All these seem like convincing insights. But most of them are in the direction of elite opinion. There’s an innocent explanation for this: intellectual elites are pretty wise, so as I grow wiser I converge to their position. But the non-innocent explanation is that I’m not getting wiser, I’m just getting better socialized. Maybe in medieval Europe, the older I grew, the more I would realize that the Pope was right about everything.

If I accept my intellectual changes as “gaining wisdom”, shouldn’t I also believe that old people are wiser than I am? …
I remember when I was twenty, I thought the only reason adults were less utopian than me was their hidebound, rose-colored, self-serving biases. Pretty big coincidence that I was wrong then, but I’m right about everyone older than me now.

It would be pretty awkward if everything we thought was “gaining wisdom with age” was just “brain receptors consistently functioning differently with age”. If we were to find that this were true – and furthermore, that the young version was intact and the older version was just the result of some kind of decay or oxidation or something – could I trust those results? Intuitively, going back to earlier habits of mind would feel inherently regressive, like going back to drawing on the wall with crayons.

After Universal Basic Income, The Flood – Simon Sarris – Medium

Source: After Universal Basic Income, The Flood – Simon Sarris – Medium

What if we implement UBI and it makes everything worse?

Large systems have difficulty adapting quickly, or at all, and they miss the nuance of local conditions. When a large system fails, it can fail millions or billions of people at once.

If your small hippie commune fails, you can always rejoin the capitalist hellscape, or whatever everybody did in the ’80s. On the other hand, if UBI has been running for 20 years and fails…

How do you make it flexible and easy to replace if it isn’t working, a few decades on? You don’t build a nuclear power plant (or even a dam) without a plan for what to do if it fails catastrophically. Any serious UBI plan needs the same thing: a contingency for what to do if the program runs out of money, cannot distribute it, or needs to somehow draw down and close its doors.

The absence of contingency is a fatal design flaw. Top-down complexity has a cost. If UBI fails 10–30 years into the future, we may have a non-trivial percentage of the population that has never done any work and suddenly needs to. Since any UBI program failure would mean something like “we ran out of money”, failure may be catastrophic for communities that produced nothing and have no means of even trucking in subsistence food.

For grand schemes, good intentions are not enough. Contingency plans are a must, and robust or antifragile plans are preferred.