From science fiction to science fact: understanding the technological singularity
Cybernetics, slime molds and the ultimate physical limits of AI
This short essay follows a previous one about the hype around emergent abilities in AI-powered systems, such as large language models. Here, I would like to discuss another highly debated (potential) effect of emergent abilities in AI systems: the technological singularity.
It is a fascinating concept that has attracted the attention of scientists and philosophers for about seven decades. After defining what the technological singularity is, I will try to show why I think it remains a problem only in philosophical terms, or in some abstract space where physical reality does not (yet?) play a specific role.
It will be a quick journey through cybernetics, cognitive sciences, the ideas of Maturana & Varela and Clark & Chalmers, slime molds and evolutionary dynamics.
What is technological singularity?
This short video is interesting, and I share its take-home message, but it requires some background knowledge to be understood: let’s dig into it.
In 1958, Stanislaw Ulam, a renowned mathematician and physicist, reported an earlier conversation with John von Neumann, one of the most important mathematicians of the 20th century and a pioneer of cybernetics, in which the idea of a singularity was mentioned (likely for the first time?):
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. — Source: S. Ulam
A few years later, Irving J. Good proposed a speculative argument that can be summarized as follows:
assume that humans are able to create an AI “more intelligent” than any human
if such an AI is more intelligent than the humans who created it, then it will be able to design “even more intelligent” machines
the creation process can be iterated to generate “more and more ultra-intelligent” machines
In effect, Good proposed a self-improvement feedback loop that could result in an intelligence explosion, where the AI becomes vastly more intelligent than humans, potentially within a short span of time. Consequently, our initial hypothetical AI would ignite technological progress at an unimaginable rate, leading to a singularity. The Internet and some old-fashioned magazines are full of speculative papers building on top of this simple argument, and we are not going to explore them, since the underlying idea is the same.
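To make the shape of that idea concrete, here is a minimal numerical sketch. Everything in it is an illustrative assumption (each generation is taken to be twice as capable as the previous one and to design its successor in half the time); it is not a model from Good’s paper, just a rendering of the feedback loop.

```python
# Toy rendering of the intelligence-explosion argument.
# Illustrative assumptions: each generation doubles in capability and halves
# the time needed to design its successor.
capability, design_time, elapsed = 1.0, 10.0, 0.0
for generation in range(1, 31):
    elapsed += design_time       # time spent designing this generation
    capability *= 2.0            # the new machine is "more intelligent"
    design_time /= 2.0           # ...and designs its own successor faster
    if generation in (1, 10, 30):
        print(f"gen {generation:2d}: capability {capability:.1e}, elapsed {elapsed:.4f} time units")
# Elapsed time converges to 20 units (a geometric series) while capability grows
# without bound: the runaway happens in finite time only because the assumptions
# ignore any physical cost of building the next machine.
```

The whole “explosion” follows from the assumed halving of design times, which is exactly the kind of assumption questioned in the rest of this essay.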
Good argued that, to design such an AI, we first need to “understand more about the human brain or human thought or both”, and he proposed to attack this problem with subassembly theory, a modified version of Hebb’s cell assembly theory. Remarkably, he wrote:
The subassembly theory sheds light on the physical embodiment of memory and meaning, and there can be little doubt that both needs embodiment in an ultra-intelligent machine. — Source: I. Good
thus recognizing the importance of the physical embodiment of memory and meaning. This will be the subject of the next section; here, it is worth remarking on an important point missing from Good’s argument (and from subsequent ones): what does it mean to be “more intelligent”? Most of the speculative discussion about the technological singularity takes for granted that we know exactly what an intelligent system is and how to quantify intelligence. It may be disappointing to many, but it seems that we are still far from that point.
Let us consider that, roughly and simply speaking, an intelligent system is characterized by a memory and by adaptive responses to its environment, e.g., by taking some form of decision to choose the best option among a set of alternatives. How would you react to the discovery that there are unicellular organisms without a nervous system that satisfy these requirements? A slime mold can solve a maze, externalize memory and discover optimal routes to connect points in space (under some conditions, of course), solving problems equivalent to the ones faced by engineers, like designing the Tokyo railway system. According to this rough definition, such behaviors make slime molds appear remarkably intelligent.
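For readers curious about the mechanism, here is a minimal sketch in the spirit of the Physarum model of Tero and collaborators, where tube conductivities are reinforced by the flux they carry and decay otherwise. The graph, lengths and parameters below are made up for illustration; this is a toy, not the published model.

```python
import numpy as np

# Toy Physarum-style solver: two candidate routes from a food source (node 0)
# to a food sink (node 3): 0-1-3 (total length 2) and 0-2-3 (total length 3).
# Edge format: (node_i, node_j, length); all values are illustrative.
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 2.0)]
n, source, sink, inflow = 4, 0, 3, 1.0

D = np.ones(len(edges))   # tube conductivities ("thickness" of each tube)
dt = 0.1

for _ in range(200):
    # Weighted graph Laplacian for the current conductivities.
    L = np.zeros((n, n))
    for k, (i, j, length) in enumerate(edges):
        w = D[k] / length
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w

    # Kirchhoff balance: solve for pressures with the sink grounded (p = 0).
    b = np.zeros(n)
    b[source], b[sink] = inflow, -inflow
    keep = [node for node in range(n) if node != sink]
    p = np.zeros(n)
    p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])

    # Flux through each tube; reinforce used tubes, let idle ones decay.
    for k, (i, j, length) in enumerate(edges):
        Q = D[k] / length * (p[i] - p[j])
        D[k] += dt * (abs(Q) - D[k])

for (i, j, _), d in zip(edges, D):
    print(f"edge {i}-{j}: conductivity {d:.3f}")
# Expected outcome: tubes on the shorter route 0-1-3 persist, 0-2-3 withers away.
```

No central controller evaluates routes: the shortest path emerges from local reinforcement of flux, which is precisely why such behavior looks “intelligent” under the rough definition above.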
Brain, Mind and Environment
Good realized that embodiment might be a necessary ingredient to achieve ultra-intelligence. It is interesting to relate this to a later theory, proposed by Humberto Maturana and Francisco Varela in the ‘70s and ‘80s and largely inspired by cybernetics and systems theory, in which they argue that all living systems are autonomous, self-referring and self-constructing closed systems. They introduced the term autopoiesis to refer to such properties. For Maturana, cognition is a biological phenomenon that characterizes all living systems: life and cognition are deeply intertwined. Cognition cannot be limited to higher-order mental processes or thinking but is inherent in the very nature of any living system.
At this point it becomes clear that living organisms are complex adaptive systems that are both embodied and autopoietic. The body of a living organism is a substrate that is also a dynamical system continuously engaging with the environment: this interaction induces adaptation and, consequently, provides an operational way to maintain the underlying organization and sustain the organism’s existence. It follows that embodiment can be seen as a manifestation of autopoiesis: our bodies, by means of the sub-systems needed to sense and interact with the environment, are central to the autopoietic processes that support our cognitive functions and intelligent behavior. Therefore, there is an irreducible connection between our physical being, the processes that sustain it, and our cognitive abilities. These arguments were further developed by Varela and collaborators in the ‘90s to introduce enaction:
[…] cognition is not the representation of a pre-given world by a pre-given mind but is rather the enactment of a world and a mind on the basis of a history of the variety of actions that a being in the world performs
Thus, enaction emphasizes the inseparable relationship between an organism and its surroundings, suggesting that cognition emerges through the active engagement and coupling of that organism with its environment. It is not only what one thinks that matters, but also what one does.
Therefore, cognition is not just a mental process and intelligence is not simply related to computational power: brain, mind and environment are intimately intertwined. Let’s summarize:
Ultra-intelligence requires embodiment: not only brain matters, but also physical structures and processes (Good, 1965)
Organisms are autopoietic, life and cognition are not separable (Maturana & Varela, 1980)
Organisms and their environment shape each other, cognition is (also) enaction (Varela, 1991)
The above ideas have laid the groundwork for a modern approach to cognitive science, where embodiment and enaction are fundamental ingredients, together with embedding into an external environment. To those three ingredients, we should add the extension of an organism into its environment. This idea was proposed by Andy Clark and David Chalmers in 1998 in their famous “The extended mind” thesis: the mind is not limited to the brain; it also exists in the feedback loops with the environment. They acknowledge the importance of the brain in generating conscious experiences, but at the same time they argue that the mind extends beyond the boundaries of the brain and is influenced by factors beyond its physical structure.
Trivial examples include using a notebook to write down your ideas or a computer to write your book, as well as creating an album to collect your meaningful pictures. By using external devices, we are extending our cognition beyond the brain and its embodied structural and dynamical processes.
AI in 2023
At this point, the meaning of the short video above should become clear, and we can go back to the AI hypothesized by von Neumann and Good.
On the one hand, there is no evidence that we can disregard more than half a century of developments in cognitive science. To the best of my limited knowledge, we should still consider that life and cognition are entangled, and that cognition needs to be embedded, embodied, enacted and extended. Nevertheless, it is worth remarking that several other theories make up the field of cognitive science. For instance, as alternatives to the aforementioned embodied cognition there are the computational theory of mind, where the mind is seen as an information-processing system (like a computing machine), and connectionism, which emphasizes the emergence of cognitive processes from parallel operations distributed across interconnected networks of neurons. Many other theories have been proposed, although most of them do not directly relate to the emergence of consciousness in artificial settings.
On the other hand, the realization of the technological singularity relies on the development of an ultra-intelligent machine, which, in turn:
could heavily rely on embodiment, as suggested by Good
is based on a notion of intelligence that cannot be quantified (we are neglecting here the Turing Test, since it does not provide a proof of thinking machines) and, consequently, cannot be falsified
assumes an infinite amount of physical resources that can be used
It is worth noting that, if we take into account the scientific and philosophical knowledge about intelligence and, more generally, cognitive processes, current AI-powered systems are still far from being intelligent or developing consciousness. That said, the power reached by these systems in performing some activities is remarkable, to say the least:
they predict molecular phenotypes from DNA sequences alone better than the state of the art
they find foundational algorithms (such as sorting) that are faster than ours
they discover faster algorithms to multiply matrices
they produce reliable mathematical proofs of theorems
they design novel quantum physics experiments
they are able to design novel lethal biochemical weapons
Note that qualitatively different architectures are involved in the studies listed above. It is interesting that some of them are planned to be integrated with large language models, as in Google’s Gemini project.
Finally, let’s dig deeper into the point about physical resources. It is reasonable to assume (unless proven otherwise) that physical laws are at play, leading to a decrease in the rate at which we (or the AI) can design "more intelligent" machines over time. This has also been acknowledged by Chalmers, who wrote a philosophical analysis of the singularity in 2010:
Of course the laws of physics impose limitations here. If the currently accepted laws of relativity and quantum mechanics are correct—or even if energy is finite in a classical universe—then we cannot expect the principles above to be indefinitely extensible. But even with these physical limitations in place, the arguments give some reason to think that both speed and intelligence might be pushed to the limits of what is physically possible. — Source: D. Chalmers
What these physical limitations are is still unknown. However, in 2000, Seth Lloyd, a physicist working at MIT, proposed a way to calculate the ultimate physical limits to computation. For those who think that AI does not need embodiment, enaction, embedding and extension to achieve the singularity, and that computation alone is sufficient, Lloyd’s work can be applied to estimate the physical limits of the singularity. He imposed only the constraints due to known universal constants, such as the speed of light for exchanging information, the quantum scale at which computation is performed, and so on. Lloyd’s calculations consider very extreme physical settings, such as hypothetical laptops compressed down to their Schwarzschild radius, i.e., turned into black-hole computers. His conclusions are nonetheless remarkable:
If, as seems highly unlikely, it is possible to extrapolate the exponential progress of Moore's law into the future, then it will take only 250 years to make up the 40 orders of magnitude in performance between current computers that perform 10^10 operations per second on 10^10 bits and our 1-kg ultimate laptop that performs 10^51 operations per second on 10^31 bits. — Source: S. Lloyd
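As a back-of-the-envelope check of those figures (the doubling times below are my own assumptions, not numbers from the paper): the operations-per-second limit comes from the 2E/(πħ) bound that Lloyd uses, with E the rest-mass energy of a 1-kg computer, and the 40 orders of magnitude correspond to roughly 133 doublings.

```python
from math import log10, pi

# Constants and the rest-mass energy of a 1-kg "ultimate laptop".
c = 2.998e8            # speed of light, m/s
hbar = 1.055e-34       # reduced Planck constant, J*s
E = 1.0 * c**2         # E = m c^2 for m = 1 kg, in joules

# Bound on elementary operations per second: 2E / (pi * hbar).
ops_per_second = 2 * E / (pi * hbar)
print(f"ultimate 1-kg laptop: ~{ops_per_second:.1e} ops/s")   # ~5e50, i.e. ~10^51

# Years of Moore's-law doubling needed to cover 40 orders of magnitude,
# for two assumed doubling times (1.5 and 2 years).
doublings = 40 / log10(2)                       # ~133 doublings
for years_per_doubling in (1.5, 2.0):
    print(f"{years_per_doubling} years/doubling -> {doublings * years_per_doubling:.0f} years in total")
```

Lloyd’s 250 years sits between the two estimates, corresponding to a doubling time of just under two years.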
A plausible future?
If you made it to this point, you deserve an alternative to the singularity hype.
Disruptive technologies have emerged throughout the whole history of humankind. To mention a few representative ones, consider: (i) the printing press, revolutionizing the spread of information and knowledge; (ii) the steam engine, transforming transportation, manufacturing, agriculture, …; (iii) the discovery of electricity, advancing communication, lighting and the development of electrical devices; (iv) the Internet, revolutionizing global communication, information sharing and commerce.
In each case, humans “evolved” and adapted to the corresponding changes. It is worth remarking that, more recently, such changes have become more frequent and some of them require adaptation times longer than we might think. Think, for instance, about online social media: they have transformed the way we communicate and, consequently, they act as catalysts for a variety of processes, from speeding up collective intelligence to spreading disinformation that can lead to societal unrest.
The current advances in AI are disruptive, similarly to the aforementioned technologies. However, looking at our history, it is plausible to guess that humankind will evolve accordingly and adapt in a variety of possible ways. Nowadays, we already use our smartphones and cloud services as physical and digital extensions of ourselves: compared with the ‘90s, we are already somewhat different. We will likely continue to change and find a way to co-exist with intelligent or ultra-intelligent machines. However, this is a purely speculative (and rather optimistic) take, and it does not necessarily rule out potential dystopian futures.
In fact, another significant assumption about the pathway towards the technological singularity is that each ultra-intelligent machine will design a “more intelligent” machine at an increasing rate, following a law of accelerating returns:
Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage. As a result, the rate of progress of an evolutionary process increases exponentially over time. — Source: R. Kurzweil
However, it might simply be the case that the increasing complexity of such ultra-intelligent machines will come with increasing resource costs to maintain their functioning. In this case, a law of diminishing returns applies, where costs outweigh benefits and evolution stops. This idea is in agreement with the theory proposed by Joseph Tainter, an archeologist who applied it to understand the collapse of several ancient civilizations. Note that such outcomes might even be understood from a mathematical perspective, under some simplifying conditions and an adequate mapping. For instance, a simple variation of the famous logistic equation can lead to punctuated evolution and a rich repertoire of other possible dynamics.
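As a minimal sketch of the contrast (the equations and parameters below are illustrative assumptions, not the specific variation referenced above), compare pure positive feedback with growth throttled by a maintenance-cost term, which turns the dynamics into a discrete logistic equation:

```python
import numpy as np

# Accelerating returns vs. growth capped by maintenance costs (illustrative).
steps, g, cost = 60, 0.2, 0.01           # growth rate g; cost per unit of complexity

x_acc = np.empty(steps); x_acc[0] = 1.0  # Kurzweil-style positive feedback
x_dim = np.empty(steps); x_dim[0] = 1.0  # logistic-type growth with a cost term
for t in range(steps - 1):
    x_acc[t + 1] = x_acc[t] * (1 + g)                               # exponential explosion
    x_dim[t + 1] = x_dim[t] + g * x_dim[t] * (1 - cost * x_dim[t])  # returns diminish

print(f"after {steps} steps: accelerating ~{x_acc[-1]:.0f}, with maintenance costs ~{x_dim[-1]:.0f}")
# The second trajectory saturates near 1/cost (= 100 here): once maintaining the
# existing complexity eats the gains, growth stalls. Pushing the growth parameter
# of this discrete logistic map well above 2 yields oscillations and chaotic
# dynamics rather than a smooth explosion.
```

Which regime applies to ultra-intelligent machines is, of course, exactly the open question.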
Punctuated evolution should not be confused with the punctuated equilibrium proposed by Eldredge and Gould. Nevertheless, there are interesting points in that evolutionary theory that could be worth relating to the foreseen disruptive changes due to AI.
For another post, maybe.
Therefore, instead of a technological singularity, I would bet on reaching an evolutionary stage in which our society becomes increasingly dependent on a technology that is ultimately unsustainable. This will force us to either simplify our systems or face the potential collapse of our society.
Which is a less optimistic take, I agree.