Cognition all the way down

Source: Aeon, by Michael Levin and Daniel C Dennett, edited by Nigel Warburton

Biology’s next great horizon is to understand cells, tissues and organisms as agents with agendas (even if unthinking ones)

Isaac Newton’s laws are great for predicting the path of a ball placed at the top of a hill, but they’re useless for understanding what a mouse at the top of a hill will do. One way to make a mistake is to attribute goal-directedness to a system that lacks it, like the ball; the other way is to fail to attribute goal-directedness to a system that has it, like the mouse. This kind of teleophobia significantly holds back our ability to predict and control complex systems, because it prevents discovery of their most efficient internal controls or pressure points.

In a phrase that will need careful unpacking, individual cells are not just building blocks, like the basic parts of a ratchet or pump; they have extra competences that turn them into (unthinking) agents that, thanks to information they have on board, can assist in their own assembly into larger structures, and in other large-scale projects that they needn’t understand.

Agents, in this carefully limited perspective, need not be conscious, need not understand, need not have minds, but they do need to be structured to exploit physical regularities that enable them to use information (following the laws of computation) to perform tasks, beginning with the fundamental task of self-preservation, which involves not just providing themselves with the energy needed to wield their tools, but also adjusting to their local environments in ways that advance their prospects.

The point is not to anthropomorphise morphogenesis – the point is to naturalise cognition. There is nothing magic that humans (or other smart animals) do that doesn’t have a phylogenetic history. Taking evolution seriously means asking what cognition looked like all the way back. Modern data in the field of basal cognition makes it impossible to maintain an artificial dichotomy of ‘real’ and ‘as-if’ cognition. There is one continuum along which all living systems (and many nonliving ones) can be placed, with respect to how much thinking they can do.

It’s all about goals: single cells’ homeostatic goals are roughly the size of one cell, pursued with limited memory and little capacity for anticipation. Tissues, organs, brains, animals and swarms (like anthills) form various kinds of minds that can represent, remember and reach for bigger goals. This conceptual scheme enables us to look past irrelevant details of the materials or backstory of their construction, and to focus on what’s important for being a cognitive agent with some degree of sophistication: the scale of its goals. Agents can combine into networks, scaling their tiny, local goals into more grandiose ones belonging to a larger, unified self. And of course, any cognitive agent can be made up of smaller agents, each with its own limits on the size and complexity of what it’s working towards.
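
To make the notion of a ‘scale of goals’ concrete, here is a minimal sketch in Python – a toy model of our own devising, not anything cells literally compute. Each agent’s goal is a homeostatic setpoint it can sense and restore locally; the collective’s larger goal is simply that all of its members sit at their setpoints at once. The names (HomeostaticAgent, Collective, setpoint) are illustrative inventions.

class HomeostaticAgent:
    """A minimal agent: it senses one local variable and nudges it toward a goal."""
    def __init__(self, setpoint, state):
        self.setpoint = setpoint   # the goal: the value it 'works' to maintain
        self.state = state         # the local variable it can sense and act on

    def step(self):
        # Simple negative feedback: close a fraction of the gap to the setpoint.
        error = self.setpoint - self.state
        self.state += 0.5 * error
        return abs(error)

class Collective:
    """A network of agents whose joint goal is bigger than any single member's."""
    def __init__(self, agents):
        self.agents = agents

    def step(self):
        # Each member pursues only its own small, local goal...
        errors = [agent.step() for agent in self.agents]
        # ...while the collective's larger goal is a property of the whole pattern:
        # every member near its setpoint at the same time.
        return max(errors)

cells = [HomeostaticAgent(setpoint=1.0, state=s) for s in (0.2, 0.6, 1.4)]
tissue = Collective(cells)
for _ in range(10):
    residual = tissue.step()
print(f"largest remaining error after 10 steps: {residual:.4f}")

The only point of the sketch is that the ‘bigger goal’ is a condition defined over the whole network, not something any individual member represents or needs to understand.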