There are no free lunches, but organic lunches are super expensive: Why the tradeoffs constraining human cognition do not limit artificial superintelligences

Source: Hypermagical Ultraomnipotence, by Anni Leskela

In this post, I argue against a brand of AI risk skepticism, recently promoted by Kevin Kelly in Wired and expanded by Erik Hoel on his blog, that is based on what we know about organic, biologically evolved intelligence and its constraints.

Below, “cognition” usually just refers to the skillsets related to predicting and influencing our actual world.

If value alignment fails, we don’t know how competent an inhuman AI needs to be to reach existentially threatening powers.

The [intelligence] growth rate doesn’t need to be literally exponential to pose an existential risk. With or without intentional treachery, we will still be unable to comprehend what’s going on after a while of recursive improvement, and roughly linear or irregular growth could still outpace what we can keep track of. And … the eventual results could look rather explosive.
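To make the growth-rate point concrete, here is a minimal Python sketch (my illustration, not from the post; every rate and threshold in it is an arbitrary assumption): if a system produces new capability faster than a fixed human rate of comprehension, the backlog of un-understood capability diverges whether the growth curve is linear or exponential.

```python
# Illustrative sketch, not from the original post: all rates and thresholds
# below are arbitrary assumptions chosen purely to make the point concrete.
# Even roughly linear capability growth diverges from a fixed human
# comprehension rate; a literal exponential is not required.

def years_until_untrackable(ai_output_per_year, human_capacity_per_year,
                            backlog_limit=10.0, horizon=1000):
    """First year in which the backlog of un-understood capability
    exceeds backlog_limit, or None if it never does within the horizon."""
    backlog = 0.0
    for year in range(1, horizon + 1):
        backlog += ai_output_per_year(year) - human_capacity_per_year
        backlog = max(backlog, 0.0)  # humans can't bank unused capacity
        if backlog > backlog_limit:
            return year
    return None

linear = lambda t: 0.5 * t         # roughly linear growth in new capability
exponential = lambda t: 1.05 ** t  # a literally exponential alternative

print(years_until_untrackable(linear, human_capacity_per_year=3.0))       # -> 12
print(years_until_untrackable(exponential, human_capacity_per_year=3.0))  # -> 33
```

With these particular made-up parameters, the slow exponential actually crosses the threshold later than the linear curve, which is exactly the point: what matters is outpacing oversight, not the literal shape of the growth curve.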

A superintelligence doesn’t need to do human-style thinking to be dangerous.

There are eventual constraints on intelligences implemented in silicon too, but it seems to me that these are unlikely to bind before such systems are far ahead of us. The materials, and especially the algorithms and directions, of a developing superintelligence are intentionally chosen and optimized for useful cognition, not for replicating in the primordial soup and proliferating in the organic world, with its weird restrictions such as metabolism, pathogens, and communities of similar brains you need to cooperate with to get anything done.