If you find Complexity Thoughts interesting, follow us! Click on the Like button, leave a comment, repost on Substack or share this post. It is the only feedback I get for this free service. The frequency and quality of this newsletter rely on social interactions. Thank you!
→ Don’t miss the podcast version of this post: click on “Spotify/Apple Podcast” above!
Foundations of network science and complex systems
Probabilistic alignment of multiple networks
OK, let us imagine that our problem is to match neurons across (different measured instances of) brains, or users across social networks, when all we have is messy, incomplete structure and a lot of guesswork. That’s the network alignment problem, and most existing tools just throw heuristics at it.
This paper does something refreshingly different: it takes a principled, probabilistic approach. Instead of forcing a single "best" match, it samples the full space of plausible alignments, uncovering structure where others miss it. No black-box hacks: just a grounded, transparent model that actually works when things get noisy or weird.
You can read more about it on Substack too.

The network alignment problem appears in many areas of science and involves finding the optimal mapping between nodes in two or more networks, so as to identify corresponding entities across networks. We propose a probabilistic approach to the problem of network alignment, as well as the corresponding inference algorithms. Unlike heuristic approaches, our approach is transparent in that all model assumptions are explicit; therefore, it is amenable to extension and fine-tuning by incorporating contextual information that is relevant to a given alignment problem. Also in contrast to current approaches, our method does not yield a single alignment, but rather the whole posterior distribution over alignments. We show that using the whole posterior leads to correct matching of nodes, even in situations where the single most plausible alignment mismatches them. Our approach opens the door to a whole new family of network alignment algorithms, and to their application to problems for which existing methods are perhaps inappropriate.
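To make the "posterior over alignments" idea concrete, here is a minimal toy sketch (not the paper's model or inference algorithm): we score an alignment between two small graphs by how many edges it preserves, and run a simple Metropolis walk over node permutations so that we end up with a *distribution* of visited alignments rather than a single best match. The graphs, the score, and the temperature parameter `beta` are all illustrative assumptions.

```python
import math
import random

# Toy sketch (not the paper's model): score an alignment by the number of
# edges two small graphs agree on under a node permutation, and sample
# permutations with a Metropolis walk so we obtain a distribution over
# alignments instead of a single point estimate.

edges_a = {(0, 1), (1, 2), (2, 3), (3, 0)}  # a 4-cycle
edges_b = {(0, 1), (1, 2), (2, 3), (3, 0)}  # the same 4-cycle

def norm(e):
    return tuple(sorted(e))

def score(perm):
    """Number of edges of graph A mapped onto edges of graph B."""
    mapped = {norm((perm[u], perm[v])) for u, v in edges_a}
    return len(mapped & {norm(e) for e in edges_b})

def sample_posterior(steps=20000, beta=2.0, seed=0):
    rng = random.Random(seed)
    perm = list(range(4))
    counts = {}
    for _ in range(steps):
        # Propose swapping two node labels.
        i, j = rng.sample(range(4), 2)
        new = perm[:]
        new[i], new[j] = new[j], new[i]
        # Metropolis acceptance with likelihood proportional to exp(beta * score).
        if rng.random() < math.exp(beta * (score(new) - score(perm))):
            perm = new
        key = tuple(perm)
        counts[key] = counts.get(key, 0) + 1
    return counts

counts = sample_posterior()
best = max(counts, key=counts.get)
print(best, score(list(best)))
```

Note how the sampler keeps visiting *all* edge-preserving permutations of the cycle (its automorphisms), which is exactly the kind of ambiguity a single "best" alignment would hide.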
Quantum Systems
Unveiling the importance of nonshortest paths in quantum networks
Quantum networks (QNs) exhibit stronger connectivity than predicted by classical percolation, yet the origin of this phenomenon remains unexplored. We apply a statistical physics model—concurrence percolation—to uncover the origin of stronger connectivity on hierarchical scale-free networks, the (U, V) flowers. These networks allow full analytical control over path connectivity through two adjustable path-length parameters, U ≤ V. This precise control enables us to determine critical exponents well beyond current simulation limits, revealing that classical and concurrence percolations, while both satisfying the hyperscaling relation, fall into distinct universality classes. This distinction arises from how they “superpose” parallel, nonshortest path contributions into overall connectivity. Concurrence percolation, unlike its classical counterpart, is sensitive to nonshortest paths and shows higher resilience to detours as these paths lengthen. This enhanced resilience is also observed in real-world hierarchical, scale-free internet networks. Our findings highlight a crucial principle for QN design: When nonshortest paths are abundant, they notably enhance QN connectivity beyond what is achievable with classical percolation.
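For readers unfamiliar with (U, V) flowers, here is a small self-contained sketch of the *classical* baseline only (concurrence percolation itself is not reproduced here): we grow a flower by repeatedly replacing every edge with two parallel paths of U and V edges, then estimate by Monte Carlo the probability that the two root nodes stay connected when each edge is kept with probability p. The parameter values and trial counts are illustrative choices.

```python
import random

# Classical bond-percolation baseline on a (U, V) flower (not the paper's
# concurrence percolation). Generation rule: every edge is replaced by two
# parallel paths of U and V edges; here U = V = 2.

def uv_flower(u=2, v=2, generations=3):
    edges = [(0, 1)]
    next_node = 2
    for _ in range(generations):
        new_edges = []
        for a, b in edges:
            for length in (u, v):
                prev = a
                for _ in range(length - 1):  # intermediate nodes on the path
                    new_edges.append((prev, next_node))
                    prev = next_node
                    next_node += 1
                new_edges.append((prev, b))
        edges = new_edges
    return edges

def connected(edges, kept, a=0, b=1):
    # Walk the kept edges from a and check whether b is reachable.
    adj = {}
    for e, keep in zip(edges, kept):
        if keep:
            adj.setdefault(e[0], []).append(e[1])
            adj.setdefault(e[1], []).append(e[0])
    stack, seen = [a], {a}
    while stack:
        n = stack.pop()
        if n == b:
            return True
        for m in adj.get(n, []):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return False

def crossing_probability(p, trials=400, seed=1):
    rng = random.Random(seed)
    edges = uv_flower()
    hits = sum(
        connected(edges, [rng.random() < p for _ in edges])
        for _ in range(trials)
    )
    return hits / trials

print(crossing_probability(0.2), crossing_probability(0.95))
```

The sharp contrast between low and high p illustrates the classical crossing transition that concurrence percolation is being compared against.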
Evolution
Evolutionary tinkering enriches the hierarchical and nested structures in amino acid sequences
Genetic information often exhibits hierarchical and nested relationships, achieved through the reuse of repetitive subsequences such as duplicons and transposable elements, a concept termed “evolutionary tinkering” by François Jacob. Current bioinformatics tools often struggle to capture these, particularly the nested, relationships. To address this, we utilized ladderpath, an approach within the broader category of algorithmic information theory, introducing two key measures: order rate 𝜂 for characterizing sequence pattern repetitions and regularities, and ladderpath-complexity 𝜅 for assessing hierarchical and nested richness. Our analysis of amino acid sequences revealed that humans have more sequences with higher 𝜅 values, and proteins with many intrinsically disordered regions exhibit increased 𝜂 values. Additionally, it was found that extremely long sequences with low 𝜂 are rare. We hypothesize that this arises from varied duplication and mutation frequencies across different evolutionary stages, which in turn suggests a zigzag pattern for the evolution of protein complexity. This is supported by simulations and studies of protein families such as ubiquitin and NBPF, implying species-specific or environment-influenced protein elongation strategies. The ladderpath approach offers a quantitative lens to understand evolutionary tinkering and reuse, shedding light on the generative aspects of biological structures.
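The ladderpath measures 𝜂 and 𝜅 come from an algorithmic-information construction that is not reproduced here; as a rough illustrative stand-in for "how much a sequence reuses repeated subsequences", one can compare compressed size to raw size, since highly repetitive sequences compress well while random-looking ones do not. The sequences and the proxy itself are assumptions for illustration only.

```python
import random
import zlib

# Illustrative proxy only: not the ladderpath order rate. A sequence built
# by reusing one short motif compresses far better than a pseudorandom
# sequence over the 20 amino-acid alphabet.

def repetitiveness(seq: str) -> float:
    """1 - compressed/raw size; higher means more internal repetition."""
    raw = seq.encode()
    comp = zlib.compress(raw, 9)
    return max(0.0, 1.0 - len(comp) / len(raw))

repeaty = "MKV" * 200  # heavy reuse of a single short motif
rng = random.Random(0)
randomish = "".join(rng.choices("ACDEFGHIKLMNPQRSTVWY", k=600))

print(round(repetitiveness(repeaty), 3), round(repetitiveness(randomish), 3))
```

This captures only the crudest intuition behind 𝜂 (regularity from reuse); the ladderpath framework additionally quantifies the *hierarchical and nested* organization of that reuse, which a flat compression ratio cannot see.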
Ecosystems
Thresholds for ecological responses to global change do not emerge from empirical data
Bonus: if you can’t access it via the journal, click here
To understand ecosystem responses to anthropogenic global change, a prevailing framework is the definition of threshold levels of pressure, above which response magnitudes and their variances increase disproportionately. However, we lack systematic quantitative evidence as to whether empirical data allow definition of such thresholds. Here, we summarize 36 meta-analyses measuring more than 4,600 global change impacts on natural communities. We find that threshold transgressions were rarely detectable, either within or across meta-analyses. Instead, ecological responses were characterized mostly by progressively increasing magnitude and variance when pressure increased. Sensitivity analyses with modelled data revealed that minor variances in the response are sufficient to preclude the detection of thresholds from data, even if they are present. The simulations reinforced our contention that global change biology needs to abandon the general expectation that system properties allow defining thresholds as a way to manage nature under global change. Rather, highly variable responses, even under weak pressures, suggest that ‘safe-operating spaces’ are unlikely to be quantifiable.
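The point that "minor variances in the response are sufficient to preclude the detection of thresholds" can be illustrated with a toy simulation (not the paper's meta-analytic data): we plant a genuine step-like threshold at pressure 0.6, add noise, and fit a brute-force breakpoint. With small noise the breakpoint is recovered; with noise of the same magnitude as the step, the estimates scatter across replicates. All numbers here are illustrative assumptions.

```python
import random
import statistics

# Toy illustration: a true step response at pressure 0.6 plus Gaussian noise,
# fitted by exhaustive search over a two-segment constant-mean model.

def simulate(noise_sd, seed):
    rng = random.Random(seed)
    xs = [i / 50 for i in range(51)]  # pressure gradient 0..1
    ys = [(0.0 if x < 0.6 else 1.0) + rng.gauss(0, noise_sd) for x in xs]
    return xs, ys

def fit_breakpoint(xs, ys):
    """Best single breakpoint minimizing the two-segment sum of squares."""
    best_x, best_sse = None, float("inf")
    for k in range(2, len(xs) - 2):
        left, right = ys[:k], ys[k:]
        sse = sum((y - statistics.mean(left)) ** 2 for y in left) \
            + sum((y - statistics.mean(right)) ** 2 for y in right)
        if sse < best_sse:
            best_x, best_sse = xs[k], sse
    return best_x

# Low noise: the threshold is found close to its true location.
print(fit_breakpoint(*simulate(noise_sd=0.05, seed=0)))

# Noise comparable to the step: the estimate varies from replicate to replicate.
estimates = [fit_breakpoint(*simulate(noise_sd=1.0, seed=s)) for s in range(30)]
print(min(estimates), max(estimates))
```

Even this crude setup shows why a threshold that truly exists can be statistically invisible once response variance is moderate, which is the paper's central caution for "safe-operating-space" reasoning.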
Biological Systems
Open problems in synthetic multicellularity
Multicellularity is one of the major evolutionary transitions, and its rise provided the ingredients for the emergence of a biosphere inhabited by complex organisms. Over the last decades, the potential for bioengineering multicellular systems has been instrumental in interrogating nature and exploring novel paths to regeneration, disease, cognition, and behaviour. Here, we provide a list of open problems that encapsulate many of the ongoing and future challenges in the field and suggest conceptual approaches that may facilitate progress.
One major challenge for synthetic multicellular designs is associated with the lack of predictability that would be in place if the genotype-phenotype map were simple and different scales were reducible to lower-level entities.

The “Waddington demon”: an idealized entity trying to predict higher-scale structures (organs, embryos or organisms) from the observable microscopic dynamics (genes, gene interactions and early developmental states). This cartoon summarizes the difficulties in predicting multicellular complexity, both within developmental biology and in bioengineering. Because of emergent phenomena, such a prediction might be difficult to achieve unless we use the right scale, ignoring the details on the lower levels.
Human behavior
Order–disorder transition in multidirectional crowds
Human crowds can assume various dynamical states: flowing, congested, chaotic, self-organized, etc. The dynamical characteristics impact the safety of the crowd, but predicting what type of pedestrian flow ensues in a given situation is not straightforward. Here, we characterize the transition from disorderly motion to self-organized order in multidirectional crowds, e.g. on an urban plaza. The nature of the flow depends on the geometry of the concourse; more precisely, on the angular spread parameter, which quantifies the distribution of walking directions. Through mathematical analysis, agent-based simulations, and controlled crowd experiments, we show that the order–disorder transition occurs at a predictable value of the angular spread, and we measure how the loss of order reduces the efficiency of motion.
One of the archetypal examples of active flows is a busy concourse crossed by people moving in different directions according to their personal destinations. When the crowd is isotropic—comprising individuals moving in all different directions—the collective motion is disordered. In contrast, if it is possible to identify two dominant directions of motion, for example in a corridor, the crowd spontaneously organizes into contraflowing lanes or stripes. In this article, we characterize the physics of the transition between these two distinct phases by using a synergy of theoretical analysis, numerical simulations, and stylized experiments. We develop a hydrodynamic theory for collisional flows of heterogeneous populations, and we analyze the stability of the disordered configuration. We identify an order–disorder transition occurring as population heterogeneity exceeds a theoretical threshold determined by the collision avoidance maneuvers of the crowd. Our prediction for the onset of pedestrian ordering is consistent with results of agent-based simulations and controlled experiments with human crowds.
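A simple way to see the role of angular spread (this is an illustrative toy, not the paper's hydrodynamic theory or its order parameter) is to draw walking directions from two opposite "lanes", each blurred by a spread sigma, and measure the nematic order S = |⟨exp(2iθ)⟩|, which treats opposite directions as equivalent: S stays near 1 for narrow spreads (lane-like order) and decays toward 0 as directions become isotropic. The two-lane setup and sigma values are assumptions for illustration.

```python
import cmath
import math
import random

# Toy order parameter for a bidirectional crowd: directions cluster around
# two opposite lanes (0 and pi), blurred by an angular spread sigma.
# Nematic order S = |<exp(2i*theta)>| identifies theta with theta + pi,
# so perfectly contraflowing lanes still score S = 1.

def nematic_order(sigma, n=4000, seed=0):
    rng = random.Random(seed)
    total = 0j
    for _ in range(n):
        base = 0.0 if rng.random() < 0.5 else math.pi  # pick one of two lanes
        theta = rng.gauss(base, sigma)
        total += cmath.exp(2j * theta)
    return abs(total / n)

for sigma in (0.1, 0.5, 2.0):
    print(sigma, round(nematic_order(sigma), 3))
```

The smooth decay of S with sigma is only a caricature; the paper's result is sharper, locating a genuine order–disorder transition at a predictable angular spread set by collision-avoidance maneuvers.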
(Explainable) AI-based methods and applications
Graph coloring framework to mitigate cascading failure in complex networks
Cascading failures pose a significant threat to the stability and functionality of complex systems, making their mitigation a crucial area of research. While existing strategies aim to enhance network robustness, identifying an optimal set of critical nodes that mediates the cascade for protection remains a challenging task. Here, we present a robust and pragmatic framework that effectively mitigates the cascading failures by strategically identifying and securing critical nodes within the network. Our approach leverages a graph coloring technique to identify the critical nodes using the local network topology, and results in a minimal set of critical nodes to be protected yet maximally effective in mitigating the cascade thereby retaining a large fraction of the network intact. Our method outperforms existing mitigation strategies across diverse network configurations and failure scenarios. An extensive empirical validation using real-world networks highlights the practical utility of our framework, offering a promising tool for enhancing network robustness in complex systems.
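As a minimal sketch of the graph-coloring ingredient (not the paper's actual framework or its cascade model), one can greedily color a graph largest-degree-first so that adjacent nodes get different colors, then treat one color class as a spread-out candidate set to protect: same-color nodes are pairwise non-adjacent, so protecting them touches many distinct neighborhoods. The example graph and the "protect the largest class" rule are illustrative assumptions.

```python
# Greedy graph coloring (largest-degree-first), then pick the most-used
# color class as a pairwise non-adjacent candidate set of nodes to protect.

def greedy_coloring(adj):
    order = sorted(adj, key=lambda n: -len(adj[n]))  # high degree first
    color = {}
    for node in order:
        used = {color[m] for m in adj[node] if m in color}
        c = 0
        while c in used:  # smallest color not used by a colored neighbor
            c += 1
        color[node] = c
    return color

# A small undirected graph as an adjacency list (illustrative).
adj = {
    0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [3, 5], 5: [4],
}
color = greedy_coloring(adj)
largest_class = max(
    set(color.values()), key=lambda c: sum(1 for v in color.values() if v == c)
)
protect = [n for n, c in color.items() if c == largest_class]
print(color, protect)
```

By construction no two protected nodes share an edge, which is the structural property the coloring contributes; the paper's contribution lies in showing how to combine such local-topology information into a minimal yet maximally effective critical-node set for cascade mitigation.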
If "threshold transgressions are rarely detectable", what does this mean for tipping points as a type of ecological threshold?