If you find Complexity Thoughts interesting, follow us! Click on the Like button, leave a comment, repost on Substack or share this post. It is the only feedback I get for this free service: the frequency and quality of this newsletter rely on social interactions. Thank you!
→ Don’t miss the podcast version of this post: click on “Spotify/Apple Podcast” above!
Foundations of network science and complex systems
Effective One-Dimensional Reduction of Multicompartment Complex Systems Dynamics
Many real-world problems—from disease outbreaks to infrastructure failure and ecosystem collapse—can be studied using compartmental models. In these models, the system is broken down into categories, or compartments, and the dynamics of how individuals or units move between these compartments are analyzed. This approach is particularly useful for studying phase transitions, where a small change can lead to a large, often irreversible shift in the system. However, existing mathematical tools struggle to fully capture the complexity of systems with many compartments, especially when randomness plays a big role. To overcome this, we develop a new, unified framework using the Doi-Peliti path integral method, a technique borrowed from quantum field theory.
Our approach maps complex multicompartment systems onto a simpler Hamiltonian system, a formalism often used in physics to describe energy dynamics. In the stationary limit, our method simplifies the entire system to a 1D model that still captures the critical behavior. This allows us to analyze complex models more easily, and we confirm that our method recovers known results. We test it on several examples, especially in ecology and epidemiology, showing that it works well even in challenging cases.
This framework provides a general and physically grounded way to study random behavior in complex systems. Looking ahead, our work opens the door to more accurate and comprehensive studies of phase transitions in a wide variety of systems.
This is one paper from our lab, in collaboration with a former member of the lab. I am really proud of this work, since we have exploited the combination of the path integral representation of stochastic processes with a dimension reduction scheme to better understand the critical properties and the stationary behavior of coupled compartmental stochastic processes. This formalism could provide new perspectives and physical insights into, and beyond, the characterization of phase transitions and fluctuations of multi-compartment complex systems.
A broad class of systems, including ecological, epidemiological, and sociological ones, is characterized by populations of individuals assigned to specific categories, e.g., a chemical species, an opinion, or an epidemic state, that are modeled as compartments. Because of interactions and intrinsic dynamics, the system units are allowed to change category, leading to concentrations varying over time with complex behavior, typical of reaction-diffusion systems. While compartmental modeling provides a powerful framework for studying the dynamics of such populations and describing the spatiotemporal evolution of a system, it mostly relies on deterministic mean-field descriptions to deal with systems with many degrees of freedom. Here, we propose a method to alleviate some of the limitations of compartmental models by capitalizing on tools originating from quantum physics to systematically reduce multidimensional systems to an effective one-dimensional representation. Using this reduced system, we are able not only to investigate the mean-field dynamics and their critical behavior, but also to study stochastic representations that capture fundamental features of the system. We demonstrate the validity of our formalism by studying the critical behavior of models widely adopted to study epidemic, ecological, and economic systems.
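For readers who have never met the Doi-Peliti formalism, here is a minimal sketch of how it looks for the simplest one-compartment case, branching A → 2A at rate λ and death A → ∅ at rate μ (my own textbook-style illustration, written with one common sign convention, not an excerpt from the paper):

```latex
% Coherent-state action for the fields \phi(t), \bar\phi(t)
S[\bar\phi,\phi] = \int \mathrm{d}t\,\Big[\,\bar\phi\,\partial_t\phi - H(\bar\phi,\phi)\,\Big],
\qquad
H(\bar\phi,\phi) = \lambda\,\bar\phi(\bar\phi-1)\,\phi + \mu\,(1-\bar\phi)\,\phi .
% Evaluating the saddle point at \bar\phi = 1 recovers the mean-field rate equation:
\partial_t \phi = \frac{\partial H}{\partial\bar\phi}\Big|_{\bar\phi=1} = (\lambda-\mu)\,\phi .
```

The paper's contribution is to carry this kind of construction through for many coupled compartments and then reduce the resulting Hamiltonian system to an effective one-dimensional description in the stationary limit.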
Emergent hypernetworks in weakly coupled oscillators
Networks of weakly coupled oscillators have had a profound impact on our understanding of complex systems. Studies on model reconstruction from data have shown prevalent contributions from hypernetworks with triplet and higher-order interactions among oscillators, even though such models were originally defined as oscillator networks with pairwise interactions. Here, we show that hypernetworks can spontaneously emerge even in the presence of pairwise, albeit nonlinear, coupling, given certain triplet frequency resonance conditions. The results are demonstrated in experiments with electrochemical oscillators and in simulations with integrate-and-fire neurons. By developing a comprehensive theory, we uncover the mechanism for emergent hypernetworks by identifying the frequency resonance conditions that allow, and those that forbid, their appearance. Furthermore, it is shown that microscopic linear (difference) coupling among units results in coupled mean fields, which have sufficient nonlinearity to facilitate hypernetworks. Our findings shed light on the apparent abundance of hypernetworks and provide a constructive way to predict and engineer their emergence.
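A schematic way to see where a triplet term can come from, even with purely pairwise coupling (my own back-of-the-envelope illustration, not the derivation in the paper): when the pairwise coupling is nonlinear, its expansion produces products of oscillator terms and hence combination phases such as θ_j + θ_k − 2θ_i. In the averaged, slow-phase description such a term survives only when it is slow, i.e. when a triplet resonance condition of the form

```latex
\omega_j + \omega_k - 2\,\omega_i \approx 0
```

is met; the surviving term then couples three units at once and acts as an effective hyperedge, while non-resonant combinations average out (the "forbidden" conditions).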
The bunkbed conjecture is false
The bunkbed conjecture, proposed in 1985, addresses whether certain probabilities in network models, specifically in percolation theory, exhibit a predictable monotonic behavior. While the conjecture has been supported in specific cases, its general validity remained an open problem. This study presents a counterexample, disproving the conjecture and revealing that the expected monotonic behavior does not hold universally. The disproof is based on the careful analysis of a particular hypergraph percolation model. Our results add to the understanding of probabilistic models on graphs and hypergraphs, particularly in percolation theory.
We give an explicit counterexample to the bunkbed conjecture introduced by Kasteleyn in 1985. The counterexample is given by a planar graph on 7,222 vertices and is built on the recent work of Hollom (2024).
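To make the conjectured inequality concrete, here is a small Monte Carlo sketch that estimates both sides of it for a toy graph (the graph, the edge probability and the choice of posts are my own illustrative assumptions, nothing like the 7,222-vertex counterexample):

```python
import random
import networkx as nx

def bunkbed(G, posts):
    """Build the bunkbed graph: two copies of G plus vertical 'posts'."""
    B = nx.Graph()
    for u, v in G.edges():
        B.add_edge((u, 0), (v, 0))   # lower bunk
        B.add_edge((u, 1), (v, 1))   # upper bunk
    for u in posts:
        B.add_edge((u, 0), (u, 1))   # vertical post
    return B

def connect_prob(B, s, t, p, n_samples=20000):
    """Monte Carlo estimate of P(s <-> t) under bond percolation with parameter p."""
    hits = 0
    for _ in range(n_samples):
        H = nx.Graph()
        H.add_nodes_from(B.nodes())
        H.add_edges_from(e for e in B.edges() if random.random() < p)
        if nx.has_path(H, s, t):
            hits += 1
    return hits / n_samples

# Toy example: a 6-cycle with posts at every vertex (not a counterexample!)
G = nx.cycle_graph(6)
B = bunkbed(G, posts=G.nodes())
u, v, p = 0, 3, 0.5
same_level = connect_prob(B, (u, 0), (v, 0), p)   # P(u0 <-> v0)
other_level = connect_prob(B, (u, 0), (v, 1), p)  # P(u0 <-> v1)
print(f"P(u0<->v0) ~ {same_level:.3f}   P(u0<->v1) ~ {other_level:.3f}")
# The conjecture asserts the first probability is always >= the second;
# the paper exhibits a planar graph where this fails (by a tiny margin).
```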
Understanding Braess’ Paradox in power grids
Check also this research highlight.

The ongoing energy transition requires power grid extensions to connect renewable generators to consumers and to transfer power among distant areas. The process of grid extension requires a large investment of resources and is supposed to make grid operation more robust. Yet, counter-intuitively, increasing the capacity of existing lines or adding new lines may also reduce the overall system performance and even promote blackouts due to Braess' paradox. Braess' paradox was theoretically modeled but not yet proven in realistically scaled power grids. Here, we present an experimental setup demonstrating Braess' paradox in an AC power grid and show how it constrains ongoing large-scale grid extension projects. We present a topological theory that reveals the key mechanism and predicts Braessian grid extensions from the network structure. These results offer a theoretical method to understand Braess' paradox, together with practical guidelines that help prevent unsuitable infrastructure and support the systematic planning of grid extensions.
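As a reminder of the mechanism, the classic road-network version of Braess' paradox (a textbook analogy, not the AC power-flow model of the paper) already shows the effect with simple arithmetic: 4000 drivers travel from A to B over two routes, each made of one congestible link with travel time x/100 minutes (x drivers on the link) and one fixed 45-minute link.

```latex
% Before the extension: drivers split evenly, 2000 per route
T_{\text{before}} = \tfrac{2000}{100} + 45 = 65 \ \text{min per driver}
% After adding a zero-delay shortcut between the two congestible links,
% the dominant strategy routes everyone over both congestible links:
T_{\text{after}} = \tfrac{4000}{100} + \tfrac{4000}{100} = 80 \ \text{min per driver}
```

Adding capacity makes the equilibrium worse for everyone; in a power grid the analogue is a redistribution of flows that can push another line towards overload.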
Synchronization stability in simplicial complexes of near-identical systems
Assessing the stability of synchronization is a fundamental task when studying networks of dynamical systems. However, this becomes challenging when the coupled systems are not exactly identical, as is always the case in practical settings. Here we introduce an extension of the Master Stability Function to determine near-synchronization stability within simplicial complexes of nearly identical systems coupled by synchronization-noninvasive functions. We validate our method on a simplicial complex of Lorenz oscillators, finding a good correspondence between the predicted regions of stability and those observed via direct simulation. This confirms the correctness of our approach, making it a valuable tool for the evaluation of real-world systems, in which differences between the constitutive elements are unavoidable.
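For context, this is a sketch of the standard Master Stability Function for identical units with pairwise coupling, which the paper generalizes (my own summary of the textbook construction, not the extended formalism of the paper). For N units obeying \(\dot{x}_i = F(x_i) - \sigma \sum_j L_{ij} H(x_j)\), coupled through a graph Laplacian L, perturbations around the synchronous trajectory s(t) decouple into Laplacian modes:

```latex
% Variational equation for the k-th mode (Laplacian eigenvalue \lambda_k)
\dot{\delta x}_k = \big[\, DF(s) - \sigma\lambda_k\, DH(s) \,\big]\,\delta x_k ,
% and synchronization is linearly stable when the largest Lyapunov exponent
% \Lambda(\alpha) of this equation satisfies
\Lambda(\sigma\lambda_k) < 0 \quad \text{for all } k \ge 2 .
```

The extension in the paper replaces the pairwise Laplacian with the coupling structure of a simplicial complex and accounts for small parameter mismatches between the units.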
Ecosystems
Optimization hardness constrains ecological transients
Distinct species can serve overlapping functions in complex ecosystems. For example, multiple cyanobacteria species within a microbial mat might serve to fix nitrogen. Here, we show mathematically that such functional redundancy can arbitrarily delay an ecosystem’s approach to equilibrium. We draw a mathematical analogy between this difficult equilibration process and the complexity of computer algorithms like matrix inversion or numerical optimization. We show that this computational complexity manifests as transient chaos in an ecosystem’s dynamics, allowing us to develop scaling laws for the expected length of transients in complex ecosystems. Transient chaos also produces strong sensitivity in the duration and route that the system takes towards equilibrium, affecting the ecosystem’s response to perturbations. Our results highlight the physical implications of computational complexity for large biological networks.
Moreover, transient chaos implies that even an ecosystem that has successfully reached equilibrium is vulnerable to disruption, because transiently chaotic systems can undergo extended excursions away from equilibrium if subjected to perturbations exceeding a finite threshold. Our results thus highlight that classical equilibrium-based analysis methods fail to fully characterize high-dimensional ecosystems, for which the steady state is the exception, not the rule.
Living systems operate far from equilibrium, yet few general frameworks provide global bounds on biological transients. In high-dimensional biological networks like ecosystems, long transients arise from the separate timescales of interactions within versus among subcommunities. Here, we use tools from computational complexity theory to frame equilibration in complex ecosystems as the process of solving an analogue optimization problem. We show that functional redundancies among species in an ecosystem produce difficult, ill-conditioned problems, which physically manifest as transient chaos. We find that the recent success of dimensionality reduction methods in describing ecological dynamics arises due to preconditioning, in which fast relaxation decouples from slow solving timescales. In evolutionary simulations, we show that selection for steady-state species diversity produces ill-conditioning, an effect quantifiable using scaling relations originally derived for numerical analysis of complex optimization problems. Our results demonstrate the physical toll of computational constraints on biological dynamics.
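To see why ill-conditioning translates into long transients, here is a minimal numerical sketch (a generic gradient-flow toy model of my own, not the ecological model used in the paper): relaxation towards the minimum of a quadratic potential slows down roughly in proportion to the condition number of its Hessian.

```python
import numpy as np

def relaxation_time(cond, dim=20, dt=0.1, tol=1e-6, max_steps=1_000_000):
    """Steps needed for the gradient flow x' = -A x to reach |x| < tol,
    where A is positive definite with prescribed condition number."""
    rng = np.random.default_rng(0)
    # Eigenvalues log-spaced between 1/cond and 1, so cond(A) = cond
    eigs = np.logspace(-np.log10(cond), 0, dim)
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    A = Q @ np.diag(eigs) @ Q.T
    x = rng.standard_normal(dim)
    x /= np.linalg.norm(x)
    for step in range(max_steps):
        if np.linalg.norm(x) < tol:
            return step
        x = x - dt * (A @ x)   # explicit Euler step of the gradient flow
    return max_steps

for cond in (10, 100, 1000):
    print(f"condition number {cond:5d} -> {relaxation_time(cond):8d} steps to equilibrate")
```

The slowest eigenmode decays at rate 1/cond, so the equilibration time grows linearly with the condition number; in the paper's framing, functional redundancy among species is what drives the effective condition number up.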
Biological systems
Leaf venation network evolution across clades and scales
Leaf venation architecture varies greatly among living and fossil plants. However, we still have a limited understanding of when, why and in which clades new architectures arose and how they impacted leaf functioning. Using data from 1,000 extant and extinct (fossil) plants, we reconstructed approximately 400 million years of venation evolution across clades and vein sizes. Overall, venation networks evolved from having fewer veins and less smooth loops to having more veins and smoother loops, but these changes only occurred in small and medium vein sizes. The diversity of architectural designs increased biphasically, first peaking in the Paleozoic, then decreasing during the Cretaceous, then increasing again in the Cenozoic, when recent angiosperm lineages initiated a second and ongoing phase of diversification. Vein evolution was not associated with temperature and CO2 fluctuations but was associated with insect diversification. Our results highlight the complexity of the evolutionary trajectory and potential drivers of venation network architecture.
This study provided an updated and data-rich perspective on leaf venation networks over 400 million years. It showed the contingency, complexity, scale dependence and nonlinearity of the evolutionary process, building a more detailed understanding of plant evolution.
Human behavior
How media competition fuels the spread of misinformation
Competition among news sources over public opinion can incentivize them to resort to misinformation. Sharing misinformation may lead to a short-term gain in audience engagement but ultimately damages the credibility of the source, resulting in a loss of audience. To understand the rationale behind news sources sharing misinformation, we model the competition between sources as a zero-sum sequential game, where news sources decide whether to share factual information or misinformation. Each source influences individuals based on their credibility, the veracity of the article, and the individual’s characteristics. We analyze this game through the concept of quantal response equilibrium, which accounts for the bounded rationality of human decision-making. The analysis shows that the resulting equilibria reproduce the credibility-opinion distribution of real-world news sources, with hyperpartisan sources spreading the majority of misinformation. Our findings provide insights for policymakers to mitigate the spread of misinformation and promote a more factual information landscape.
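As a pointer to the solution concept, here is a minimal sketch of a logit quantal response equilibrium for a generic two-player, two-action zero-sum game, computed by damped fixed-point iteration (the payoff numbers and the rationality parameter are my own toy assumptions, not the source-competition game of the paper):

```python
import numpy as np

def logit_qre(payoff, lam=2.0, iters=2000, step=0.1):
    """Logit QRE of a zero-sum bimatrix game: the row player receives payoff[i, j],
    the column player receives -payoff[i, j]. Returns mixed strategies (p, q)."""
    n, m = payoff.shape
    p, q = np.full(n, 1 / n), np.full(m, 1 / m)
    for _ in range(iters):
        # Expected payoff of each pure action against the opponent's current mix
        u_row = payoff @ q
        u_col = -(p @ payoff)
        # Logit (softmax) quantal responses with rationality parameter lam
        br_p = np.exp(lam * u_row); br_p /= br_p.sum()
        br_q = np.exp(lam * u_col); br_q /= br_q.sum()
        # Damped fixed-point iteration towards the equilibrium
        p = (1 - step) * p + step * br_p
        q = (1 - step) * q + step * br_q
    return p, q

# Hypothetical "share facts vs. share misinformation" payoffs (illustrative numbers)
payoff = np.array([[1.0, -0.5],
                   [0.5,  0.2]])
p, q = logit_qre(payoff, lam=2.0)
print("row mix:", np.round(p, 3), " column mix:", np.round(q, 3))
```

In the limit of large lam the quantal response approaches a best response (Nash behavior); finite lam captures the bounded rationality the paper invokes.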
Earth & global systems
Human influence on climate detectable in the late 19th century
Here we pose a simple question: when could scientists have first known that fossil fuel burning was significantly altering global climate?
When could scientists have first known that fossil fuel burning was significantly altering global climate? We attempt to answer this question by performing a thought experiment with model simulations of historical climate change. We assume that the capability to monitor global-scale changes in atmospheric temperature existed as early as 1860 and that the instruments available in this hypothetical world had the same accuracy as today’s satellite-borne microwave radiometers. We then apply a pattern-based “fingerprint” method to disentangle human and natural effects on climate. A human-caused stratospheric cooling signal would have been identifiable by approximately 1885, before the advent of gas-powered cars. Our results suggest that a discernible human influence on atmospheric temperature has likely existed for over 130 y.
The physics of the heat-trapping properties of CO2 were established in the mid-19th century, as fossil fuel burning rapidly increased atmospheric CO2 levels. To date, however, research has not probed when climate change could have been detected if scientists in the 19th century had the current models and observing network. We consider this question in a thought experiment with state-of-the-art climate models. We assume that the capability to make accurate measurements of atmospheric temperature changes existed in 1860, and then apply a standard “fingerprint” method to determine the time at which a human-caused climate change signal was first detectable. Pronounced cooling of the mid- to upper stratosphere, mainly driven by anthropogenic increases in carbon dioxide, would have been identifiable with high confidence by approximately 1885, before the advent of gas-powered cars. These results arise from the favorable signal-to-noise characteristics of the mid- to upper stratosphere, where the signal of human-caused cooling is large and the pattern of this cooling differs markedly from patterns of intrinsic variability. Even if our monitoring capability in 1860 had not been global, and high-quality stratospheric temperature measurements existed for Northern Hemisphere mid-latitudes only, it still would have been feasible to detect human-caused stratospheric cooling by 1894, only 34 y after the assumed start of climate monitoring. Our study provides strong evidence that a discernible human influence on atmospheric temperature has likely existed for over 130 y.
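For intuition about the pattern-based approach, here is a schematic sketch of a fingerprint detection calculation on entirely synthetic data (a deliberately simplified version of my own; the paper uses model-derived fingerprints and realistic noise estimates): project yearly anomaly maps onto a fixed fingerprint pattern, fit a trend to the projection time series, and compare it with trends obtained from unforced control variability.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_years = 500, 60

# Hypothetical spatial "fingerprint" of the forced signal (normalized)
fingerprint = rng.standard_normal(n_grid)
fingerprint /= np.linalg.norm(fingerprint)

def trend(series):
    """Least-squares linear trend per time step."""
    t = np.arange(series.size)
    return np.polyfit(t, series, 1)[0]

def projection_trend(maps):
    """Project each yearly anomaly map onto the fingerprint, then fit a trend."""
    return trend(maps @ fingerprint)

def make_noise():
    """Unforced 'control run': internal variability only."""
    return rng.standard_normal((n_years, n_grid))

# "Observations": internal variability plus a slowly growing forced signal
signal_amplitude = 0.05
forced = np.outer(np.arange(n_years) * signal_amplitude, fingerprint)
obs = make_noise() + forced

# Null distribution of trends from many unforced control segments
null_trends = np.array([projection_trend(make_noise()) for _ in range(500)])
snr = projection_trend(obs) / null_trends.std()
print(f"signal-to-noise ratio ~ {snr:.1f} (detection is typically claimed once S/N exceeds ~2-3)")
```

The detection year in the paper is essentially the first year at which this kind of signal-to-noise ratio, computed on the simulated stratospheric cooling pattern, crosses the significance threshold.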