Foundations of network science and complex systems
Time-lagged recurrence: A data-driven method to estimate the predictability of dynamical systems
This paper presents a smart and practical way to measure how predictable complex systems are, depending on their current state and scale, without needing to know the exact equations behind them. That does not make it “model-free”, as you might learn from these two older posts:
I'm especially happy to have handled its publication as an editor, as it opens up new possibilities for exploring predictability in everything from simple models to real-world climate systems.
Nonlinear dynamical systems are ubiquitous in nature and they are hard to forecast. Not only may they be sensitive to small perturbations in their initial conditions, but they are often composed of processes acting at multiple scales. Classical approaches based on the Lyapunov spectrum rely on the knowledge of the dynamic forward operator, or of a data-derived approximation of it. This operator is typically unknown, or the data are too noisy to derive its faithful representation. Here, we propose a data-driven approach to analyze the local predictability of dynamical systems. This method, based on the concept of recurrence, is closely linked to the well-established framework of local dynamical indices. When applied to both idealized systems and real-world datasets arising from large-scale atmospheric fields, our approach proves its effectiveness in estimating local predictability. Additionally, we discuss its relationship with other local dynamical indices, and how it reveals the scale-dependent nature of predictability. Furthermore, we explore its link to information theory, its extension that includes a weighting strategy, and its real-time application. We believe these aspects collectively demonstrate its potential as a powerful diagnostic tool for complex systems.
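To get a feel for the core idea, here is a minimal sketch in Python of a recurrence-based predictability score. It is my own toy illustration, not the authors' exact estimator: it finds past analogs of the current state within a small radius and asks what fraction of their time-lagged images stay close to the current state's own future. The function name, the quantile-based radius and the trajectory format are assumptions made for the example.

```python
import numpy as np

def local_predictability(traj, t, lag, eps_quantile=0.02):
    """Toy recurrence-based predictability score for the state traj[t].

    traj : (T, d) array of observed states -- no model equations needed.
    Returns the fraction of past analogs of traj[t] whose images after
    `lag` steps are still analogs of traj[t + lag]. Values near 1 suggest
    the neighborhood evolves coherently (more predictable at this scale);
    values near 0 suggest rapid spreading (less predictable).
    """
    T = len(traj)
    assert 0 <= t < T - lag, "need room for the lagged image"

    # Indices whose lagged image exists, and their distance to the current state.
    valid = np.arange(T - lag)
    d_now = np.linalg.norm(traj[valid] - traj[t], axis=1)

    # Analogs (recurrences): the closest eps-quantile of states, excluding t itself.
    eps = np.quantile(d_now, eps_quantile)
    analogs = valid[(d_now <= eps) & (valid != t)]
    if len(analogs) == 0:
        return np.nan

    # Fraction of analogs that, after `lag` steps, are still close to traj[t + lag].
    d_future = np.linalg.norm(traj[analogs + lag] - traj[t + lag], axis=1)
    eps_future = np.quantile(
        np.linalg.norm(traj[valid + lag] - traj[t + lag], axis=1), eps_quantile
    )
    return float(np.mean(d_future <= eps_future))
```

Mapping such a score along, say, a Lorenz-63 trajectory while varying `lag` is a quick way to see the state- and scale-dependence of predictability that the paper quantifies far more carefully.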
Evolution
Recently, PNAS has published a special issue on “Collective Artificial Intelligence and Evolutionary Dynamics” which would deserve a dedicated #ComplexityThoughts issue.
I have enjoyed reading it, but I cannot cover every single paper, so I have focused on the ones that are explicitly linked to complex systems.
Collective artificial intelligence and evolutionary dynamics
Collective behavior is ubiquitous and highly structured in the natural world, allowing individuals to coordinate and cooperate in pursuit of common aims. The field of evolutionary game theory helps explain how structured collective behavior emerges in humans and other animals. But results from evolutionary game theory are typically restricted to simple and stylized problems. To be sure, simple models have been incredibly useful for understanding natural systems, developing hypotheses, and designing experiments to test hypotheses. At the same time, the field of multiagent research in AI has recently seen explosive growth, allowing researchers to model collective behavior in very complex domains using agents trained with reinforcement learning. There is a broad qualitative similarity between the problems addressed in these two fields, so that bridging them may provide theoretical guarantees about algorithms in reinforcement learning while extending the reach of evolutionary game theory. Synergy between these fields should help us to understand and reliably engineer collective behavior in complex domains.
Collective cooperative intelligence
[…] large collectives complicate the emergence and robustness of cooperation […] a key challenge for future research remains identifying principles of collective information processing and collective action that offer robust pathways to cooperation in large groups and across scales of organization
Cooperation at scale is critical for achieving a sustainable future for humanity. However, achieving collective, cooperative behavior—in which intelligent actors in complex environments jointly improve their well-being—remains poorly understood. Complex systems science (CSS) provides a rich understanding of collective phenomena, the evolution of cooperation, and the institutions that can sustain both. Yet, much of the theory in this area fails to fully consider individual-level complexity and environmental context—largely for the sake of tractability and because it has not been clear how to do so rigorously. These elements are well captured in multiagent reinforcement learning (MARL), which has recently put focus on cooperative (artificial) intelligence. However, typical MARL simulations can be computationally expensive and challenging to interpret. In this perspective, we propose that bridging CSS and MARL affords new directions forward. Both fields can complement each other in their goals, methods, and scope. MARL offers CSS concrete ways to formalize cognitive processes in dynamic environments. CSS offers MARL improved qualitative insight into emergent collective phenomena. We see this approach as providing the necessary foundations for a proper science of collective, cooperative intelligence. We highlight work that is already heading in this direction and discuss concrete steps for future research.
Heterogeneity, reinforcement learning, and chaos in population games
The emergence of chaotic behavior through coupled learning dynamics is a problem of fundamental importance that lies at the intersection of diverse fields such as social sciences, complexity science, mathematics, and artificial intelligence. Here, we provably show such a phenomenon in a setting where a large and diverse population of agents with strongly aligned interests learn concurrently. The collective dynamics can be provably chaotic, destabilizing the socially optimal equilibria and resulting in performance losses for all individuals and the society as a whole. Driving these results is a population-wide ergodic convergence where the time-average of the population-average behavior provably converges to its unique equilibrium value, despite the fact that the time-average behavior of any single agent may not converge.
Inspired by the challenges at the intersection of Evolutionary Game Theory and Machine Learning, we investigate a class of discrete-time multiagent reinforcement learning (MARL) dynamics in population/nonatomic congestion games, where agents have diverse beliefs and learn at different rates. These congestion games, a well-studied class of potential games, are characterized by individual agents having negligible effects on system performance, strongly aligned incentives, and well-understood advantageous properties of Nash equilibria. Despite the presence of static Nash equilibria, we demonstrate that MARL dynamics with heterogeneous learning rates can deviate from these equilibria, exhibiting instability and even chaotic behavior and resulting in increased social costs. Remarkably, even within these chaotic regimes, we show that the time-averaged macroscopic behavior converges to exact Nash equilibria, thus linking the microscopic dynamic complexity with traditional equilibrium concepts. By employing dynamical systems techniques, we analyze the interaction between individual-level adaptation and population-level outcomes, paving the way for studying heterogeneous learning dynamics in discrete time across more complex game scenarios.
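Since the abstract packs a lot in, here is a tiny, hypothetical simulation (mine, not the authors' model) of the flavor of result they prove: many agents in a symmetric two-route congestion game update their choice with multiplicative weights at heterogeneous learning rates. With aggressive rates the day-to-day population share can oscillate, yet its running time-average settles near the 50/50 Nash split. The learning-rate range and cost functions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T = 1000, 2000
eta = rng.uniform(0.5, 6.0, size=N)   # heterogeneous learning rates (assumed range)
p = rng.uniform(0.2, 0.8, size=N)     # each agent's probability of taking route A

shares, running = [], 0.0
for t in range(T):
    x = p.mean()                       # fraction of traffic on route A
    cost_A, cost_B = x, 1.0 - x        # linear congestion costs; the Nash split is 0.5

    # Per-agent multiplicative-weights update over the two routes.
    wA = p * np.exp(-eta * cost_A)
    wB = (1.0 - p) * np.exp(-eta * cost_B)
    p = wA / (wA + wB)

    running += x
    shares.append(x)

print("last 5 daily shares on A:", np.round(shares[-5:], 3))   # may keep oscillating
print("time-averaged share on A:", round(running / T, 3))      # typically near 0.5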
Tabula rasa agents display emergent in-group behavior
Theories on group-bias often posit an internal preparedness to bias one’s cognition to favor the in-group (often envisioned as a product of evolution). In contrast, other theories suggest that group-biases can emerge from nonspecialized cognitive processes. These perspectives have historically been difficult to disambiguate given that observed behavior can often be attributed to innate processes, even when groups are experimentally assigned. Here, we use modern techniques from the field of AI that allow us to ask what group biases can be expected from a learning agent that is a pure blank slate without any intrinsic social biases, and whose lifetime of experiences can be tightly controlled. This is possible because deep reinforcement-learning agents learn to convert raw sensory input (i.e. pixels) to reward-driven action, a unique feature among cognitive models. We find that blank slate agents do develop group biases based on arbitrary group differences (i.e. color). We show that the bias develops as a result of familiarity of experience and depends on the visual patterns becoming associated with reward through interaction. The bias artificial agents display is not a static reflection of the bias in their stream of experiences. In this minimal environment, the bias can be overcome given enough positive experiences, although unlearning the bias takes longer than acquiring it. Further, we show how this style of tabula rasa group behavior model can be used to test fine-grained predictions of psychological theories.
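The mechanism the authors point at (bias built from familiarity and reward association, and slower to unlearn than to learn) can be caricatured with a toy that is far simpler than their pixel-based deep RL agents. The sketch below is purely my illustration, with made-up numbers: a blank-slate bandit learner whose early interactions with an arbitrarily "blue" group happen to pay off more, and which then needs longer to shed the preference once the contingencies flip, simply because it rarely samples the disfavored group anymore.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(q, temp=0.2):
    z = np.exp((q - q.max()) / temp)
    return z / z.sum()

Q = np.zeros(2)      # blank slate: no initial preference for group 0 (blue) or 1 (red)
alpha = 0.1          # learning rate (assumed)
pref = []            # probability of approaching the blue group over time

def run_phase(reward_means, steps):
    for _ in range(steps):
        probs = softmax(Q)
        choice = rng.choice(2, p=probs)
        reward = rng.normal(reward_means[choice], 0.1)
        Q[choice] += alpha * (reward - Q[choice])
        pref.append(probs[0])

run_phase([1.0, 0.5], 500)   # early life: blue partners happen to be more rewarding
run_phase([0.5, 1.0], 500)   # environment flips: red partners now pay more

acquired = next((t for t, p in enumerate(pref[:500]) if p > 0.8), None)
shed = next((t for t, p in enumerate(pref[500:]) if p < 0.5), None)
print("steps to acquire the blue preference:", acquired)
print("steps to shed it after the flip     :", shed)
# shedding is usually slower in this toy: the out-group is sampled rarely,
# so its value estimate is updated slowly
```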
Picking strategies in games of cooperation
Evolutionary game theory (EGT) has been pivotal in the study of cooperation, offering formal models that account for how cooperation may arise in groups of selfish, but simple agents. This is done by inspecting the complex dynamics arising from simple interactions between a few strategies in a large population. As such, the strategies at stake are typically hand-picked by the modeler, resulting in a system with many more individuals in the population than strategies available to them. In the presence of noise and with multiple equilibria, the choice of strategies can considerably alter the emergent dynamics. As a result, model outcomes may not be robust to how the strategy set is chosen, sometimes misrepresenting the conditions required for cooperation to emerge. We propose three principles that can lead to a more systematic choice of the strategies in EGT models of cooperation. These are the inclusion of all computationally equivalent strategies; explicit microeconomic models of interactions, and a connection between stylized facts and model assumptions. Further, we argue that new methods arising in AI may offer a promising path toward richer models. These richer models can push the field of cooperation forward together with the principles described above. At the same time, AI may benefit from connecting to the more abstract models of EGT. We provide and discuss examples to substantiate these claims.
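A concrete way to see why the strategy set matters is to run replicator dynamics on the same repeated prisoner's dilemma with and without a reciprocal strategy in the menu. The sketch below is my own minimal example (a 10-round donation game with b = 3, c = 1), not taken from the paper: with only ALLC and ALLD available, defection takes over; once TFT is hand-picked into the set, cooperation can persist.

```python
import numpy as np

def replicator(payoff, x0, steps=20000, dt=0.01):
    """Euler-integrated replicator dynamics: x_i grows when it beats the average."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        f = payoff @ x                  # expected payoff of each strategy
        x += dt * x * (f - x @ f)
        x = np.clip(x, 0.0, None)
        x /= x.sum()
    return np.round(x, 3)

# Total payoffs over a 10-round donation game (b = 3, c = 1).
#              vs ALLC  vs ALLD  vs TFT
A3 = np.array([[20.0,  -10.0,  20.0],    # ALLC
               [30.0,    0.0,   3.0],    # ALLD
               [20.0,   -1.0,  20.0]])   # TFT

A2 = A3[:2, :2]   # the same game if the modeler hand-picks only ALLC and ALLD

print("ALLC/ALLD only:", replicator(A2, [0.6, 0.4]))       # defection takes over
print("ALLC/ALLD/TFT :", replicator(A3, [0.1, 0.3, 0.6]))  # reciprocity survives
```

Swapping a single strategy in or out flips the qualitative conclusion, which is exactly the fragility the authors want modelers to handle more systematically.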
Patterns and Evolutionary Consequences of Pleiotropy
Pleiotropy refers to the phenomenon of one gene or one mutation affecting multiple phenotypic traits. While the concept of pleiotropy is as old as Mendelian genetics, functional genomics has finally allowed the first glimpses of the extent of pleiotropy for a large fraction of genes in a genome. After describing conceptual and operational difficulties in quantifying pleiotropy and the pros and cons of various methods for measuring pleiotropy, I review empirical data on pleiotropy, which generally show an L-shaped distribution of the degree of pleiotropy (i.e., the number of traits affected), with most genes having low pleiotropy. I then review the current understanding of the molecular basis of pleiotropy. In the rest of the review, I discuss evolutionary consequences of pleiotropy, focusing on advances in topics including the cost of complexity, regulatory versus coding evolution, environmental pleiotropy and adaptation, evolution of ageing and other seemingly harmful traits, and evolutionary resolution of pleiotropy.
Ecosystems
Energetics and evolutionary fitness
It has long been recognized that energy is the currency of evolution, but contrasting conceptions of the relationship between energy and adaptation have yielded different interpretations. In the equal fitness paradigm (EFP), fitness (defined as the energetic equivalent of surviving offspring per generation) is held to be a constant within and between species in a steady-state, zero-sum closed system of constant energy availability. The fossil record, however, indicates that living space and energy availability and access have increased over time in response to pervasive natural selection that favors traits conferring as much power (energy per unit time) as possible, given the constraints imposed by external conditions and other organisms. Through various collaborative relationships and power-enhancing innovations, the inevitable tradeoffs among reproductive parameters envisioned in the EFP are relaxed enough that allocation to offspring can vary in space and time. We suggest that the EFP applies only under highly specific conditions of local constancy of energy availability and not to the biosphere as a whole. We note that fitness, however defined, need therefore not be constant and is difficult to measure in extant species and to infer with proxies in fossil taxa.
Anthropology
Yes, it was almost inevitable that human dynamics of the past would be featured here at some point, since our past shapes our future.
Major expansion in the human niche preceded out of Africa dispersal
This increased ability to adapt to new habitats, ranging from the extremes of equatorial forests to arid deserts, would have allowed these populations of humans the ecological flexibility to tackle a range of new environmental conditions encountered during the expansion out of Africa, allowing them to succeed where earlier migrations out of Africa had previously faltered

All contemporary Eurasians trace most of their ancestry to a small population that dispersed out of Africa about 50,000 years ago (ka) [1–9]. By contrast, fossil evidence attests to earlier migrations out of Africa [10–15]. These lines of evidence can only be reconciled if early dispersals made little to no genetic contribution to the later, major wave. A key question therefore concerns what factors facilitated the successful later dispersal that led to long-term settlement beyond Africa. Here we show that a notable expansion in human niche breadth within Africa precedes this later dispersal. We assembled a pan-African database of chronometrically dated archaeological sites and used species distribution models (SDMs) to quantify changes in the bioclimatic niche over the past 120,000 years. We found that the human niche began to expand substantially from 70 ka and that this expansion was driven by humans increasing their use of diverse habitat types, from forests to arid deserts. Thus, humans dispersing out of Africa after 50 ka were equipped with a distinctive ecological flexibility among hominins as they encountered climatically challenging habitats, providing a key mechanism for their adaptive success.
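For readers who want to play with the niche-breadth idea (not the authors' SDM workflow, which is far more careful), here is a hypothetical sketch: take dated occurrences with a couple of bioclimatic variables attached, bin them into time windows, and use a generalized-variance proxy for how broad the occupied climate space is. The data below are synthetic and built to contain a fake post-70 ka expansion purely so the output has something to show.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a dated-site database: age (ka) plus two bioclimatic
# variables (e.g. mean annual temperature, annual precipitation) per site.
# A real analysis would use published site coordinates and paleoclimate data.
n_sites = 3000
age_ka = rng.uniform(0, 120, n_sites)
spread = np.where(age_ka < 70, 2.0, 1.0)        # fake signal: wider niche after 70 ka
climate = rng.normal(0, 1, (n_sites, 2)) * spread[:, None]

def niche_breadth(points):
    """Crude breadth proxy: generalized standard deviation of occupied climates."""
    if len(points) < 5:
        return np.nan
    cov = np.cov(points, rowvar=False)
    return np.sqrt(np.linalg.det(cov))

# Sliding 10 ka windows, oldest to youngest.
for start in range(110, -1, -10):
    mask = (age_ka >= start) & (age_ka < start + 10)
    print(f"{start:3d}-{start + 10:3d} ka  breadth ~ {niche_breadth(climate[mask]):.2f}")
```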
AI and Data Science
Deep-learning-aided dismantling of interdependent networks
Identifying the minimal set of nodes whose removal breaks a complex network apart, also referred to as the network dismantling problem, is a highly non-trivial task with applications in multiple domains. Whereas network dismantling has been extensively studied over the past decade, research has primarily focused on the optimization problem for single-layer networks, neglecting that many, if not all, real networks display multiple layers of interdependent interactions. In such networks, the optimization problem is fundamentally different as the effect of removing nodes propagates within and across layers in a way that cannot be predicted using a single-layer perspective. Here we propose a dismantling algorithm named MultiDismantler, which leverages multiplex network representation and deep reinforcement learning to optimally dismantle multilayer interdependent networks. MultiDismantler is trained on small synthetic graphs; when applied to large, either real or synthetic, networks, it displays exceptional dismantling performance, clearly outperforming all existing benchmark algorithms. We show that MultiDismantler is effective in guiding strategies for the containment of diseases in social networks characterized by multiple layers of social interactions. Also, we show that MultiDismantler is useful in the design of protocols aimed at delaying the onset of cascading failures in interdependent critical infrastructures.
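To make the optimization target concrete, here is a small baseline in Python/networkx, emphatically not MultiDismantler itself, just a crude degree-based heuristic on a toy two-layer system: removals hit both layers at once, and damage is tracked through the size of the mutually connected giant component. Sizes and parameters are assumptions for the example.

```python
import networkx as nx

def mcgc_size(layers, alive):
    """Mutually connected giant component: nodes that belong to the giant
    component of every layer simultaneously, iterated to a fixed point."""
    nodes = set(alive)
    changed = True
    while changed and nodes:
        changed = False
        for g in layers:
            sub = g.subgraph(nodes)
            if sub.number_of_nodes() == 0:
                return 0
            giant = max(nx.connected_components(sub), key=len)
            if len(giant) < len(nodes):
                nodes = set(giant)
                changed = True
    return len(nodes)

# Toy interdependent system: two ER layers over the same 300 nodes.
n = 300
layers = [nx.erdos_renyi_graph(n, 4.0 / n, seed=s) for s in (1, 2)]

alive = set(range(n))
# Crude, non-adaptive baseline: remove nodes by their summed degree across layers.
for step in range(60):
    target = max(alive, key=lambda v: sum(g.degree(v) for g in layers))
    alive.remove(target)
    if (step + 1) % 10 == 0:
        print(f"removed {step + 1:3d} nodes -> MCGC size {mcgc_size(layers, alive)}")
```

A learned policy like the one in the paper is evaluated against exactly this kind of curve: how fast the mutually connected component collapses per node removed.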