Source: There’s No Fire Alarm for Artificial General Intelligence – Machine Intelligence Research Institute
What is the function of a fire alarm? One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building. … [but] We don’t want to look panicky by being afraid of what isn’t an emergency, so we try to look calm while glancing out of the corners of our eyes to see how others are reacting, but of course they are also trying to look calm.
…
A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, and you know you won't lose face if you proceed to exit the building.
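One standard way to make the you-know-I-know sense precise, borrowing the usual convention from epistemic logic (the essay itself does not spell this out formally): let $E(p)$ denote "everyone knows $p$." Common knowledge of $p$ is then the infinite conjunction

$$C(p) \;=\; E(p) \,\wedge\, E(E(p)) \,\wedge\, E(E(E(p))) \,\wedge\, \cdots$$

Everyone knows there is a fire, everyone knows that everyone knows, and so on at every depth; it is that unbounded regress, not the first-order evidence alone, that makes it socially safe to react.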
It’s now and then proposed that we ought to start reacting later to the issues of Artificial General Intelligence, because, it is said, we are so far away from it that it just isn’t possible to do productive work on it today. … the implicit alternative strategy on offer is: Wait for some unspecified future event that tells us AGI is coming near; and then we’ll all know that it’s okay to start working on AGI alignment.
This seems to me to be wrong on a number of grounds.
History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up. … Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima.
Progress is driven by peak knowledge, not average knowledge.
The future uses different tools, and can therefore easily do things that are very hard now, or do with difficulty things that are impossible now.
When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.
What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.
…
There is never going to be a time before the end when you can look around nervously, and see that it is now clearly common knowledge that you can talk about AGI being imminent, and take action and exit the building in an orderly fashion, without fear of looking stupid or frightened.
By saying we’re probably going to be in roughly this epistemic state until almost the end, I don’t mean to say we know that AGI is imminent, or that there won’t be important new breakthroughs in AI in the intervening time. I mean that it’s hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky.
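A toy model makes the "same epistemic state as before" point vivid (this is an illustrative assumption, not a formal model from the essay): suppose the total number of insights needed for AGI, $N$, has a geometric prior, $P(N = n) = (1-q)\,q^{\,n-1}$ for $n \ge 1$. After $k$ breakthroughs have arrived and AGI has not, the remaining count $R = N - k$ satisfies

$$P(R = r \mid N > k) \;=\; \frac{P(N = k + r)}{P(N > k)} \;=\; \frac{(1-q)\,q^{\,k+r-1}}{q^{\,k}} \;=\; (1-q)\,q^{\,r-1} \;=\; P(N = r).$$

Under a memoryless prior like this, the distribution over how many insights remain after any number of breakthroughs is identical to the original distribution over how many were needed in the first place: each milestone arrives, and the forecast for what is left does not sharpen at all.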