If you find Complexity Thoughts valuable, click the Like button, leave a comment, repost on Substack, or share this post. It is the only feedback I receive for this free service.
The frequency and quality of this newsletter rely on these social interactions. Thank you!
DALL·E 3 representation of this issue's content
In a nutshell
In this issue we have a range of studies spanning topics from the dynamics of physical aging in disordered systems to the evolution of strategic behavior on dynamic social networks. We touch upon advances in neuroscience, notably brain-wide behavioral representations, the (adversarial) role of cortical interneurons in sensory learning, and evidence (from box jellyfish) that associative learning does not require complex neuronal circuitry. For bio-inspired computing, we highlight the emergence of structural balance in sparse neural networks. We conclude with a study underscoring the ecological crisis and emphasizing the urgent need for corrective actions against mass extinction events.
Network science and complex systems foundations
Logarithmic aging via instability cascades in disordered systems
Tackling intermittent behavior, instabilities and self-similarity in cascades:
Many complex and disordered systems fail to reach equilibrium after they have been quenched or perturbed. Instead, they sluggishly relax toward equilibrium at an ever-slowing, history-dependent rate, a process termed physical aging. The microscopic processes underlying the dynamic slow-down during aging and the reason for its similar occurrence in different systems remain poorly understood. Here, we reveal the structural mechanism underlying logarithmic aging in disordered mechanical systems through experiments in crumpled sheets and simulations of a disordered network of bistable elastic elements. We show that under load, the system self-organizes to a metastable state poised on the verge of an instability, where it can remain for long, but finite, times. The system’s relaxation is intermittent, advancing via rapid sequences of instabilities, grouped into self-similar, aging avalanches. Crucially, the quiescent dwell times between avalanches grow in proportion to the system’s age, due to a slow increase of the lowest effective energy barrier, which leads to logarithmic aging.
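A minimal toy sketch (not the paper's model) of why age-proportional dwell times produce logarithmic aging: if the k-th quiescent interval is a fixed fraction c of the current age, event times grow geometrically, so the number of relaxation events completed by time t grows only as log(t). The value of c below is an arbitrary illustrative choice.

```python
import numpy as np

# Toy illustration: dwell times proportional to age imply logarithmic relaxation.
# If the next quiescent interval is c * t (age-proportional), event times grow
# geometrically, t_{k+1} = (1 + c) * t_k, so the event count by time t ~ log(t).

c = 0.1          # assumed ratio of dwell time to current age (illustrative)
t = 1.0          # initial age after the quench (arbitrary units)
event_times = []
while t < 1e6:
    t *= (1.0 + c)           # next instability occurs after a dwell time c * t
    event_times.append(t)

event_times = np.array(event_times)
n_events = np.arange(1, len(event_times) + 1)

# Relaxation proxy: each avalanche relieves a comparable amount of stress, so
# accumulated relaxation tracks the event count, which is ~ log(t) / log(1 + c).
predicted = np.log(event_times / event_times[0]) / np.log(1.0 + c)
print("max deviation from log law:", np.max(np.abs(n_events - 1 - predicted)))
```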
Unifying pairwise interactions in complex dynamics
To read with the accompanying News&Views:
Scientists have developed hundreds of techniques to measure the interactions between pairs of processes in complex systems, but these computational methods—from contemporaneous correlation coefficients to causal inference methods—define and formulate interactions differently, using distinct quantitative theories that remain largely disconnected. Here we introduce a large assembled library of 237 statistics of pairwise interactions, and assess their behavior on 1,053 multivariate time series from a wide range of real-world and model-generated systems. Our analysis highlights commonalities between disparate mathematical formulations of interactions, providing a unified picture of a rich interdisciplinary literature. Using three real-world case studies, we then show that simultaneously leveraging diverse methods can uncover those most suitable for addressing a given problem, facilitating interpretable understanding of the quantitative formulation of pairwise dependencies that drive successful performance. Our results and accompanying software enable comprehensive analysis of time-series interactions by drawing on decades of diverse methodological contributions.
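The point of the paper is that very different statistics all claim to quantify the "interaction" between a pair of time series. As a hedged illustration using only NumPy/SciPy (not the paper's accompanying software), the sketch below computes three of the simplest formulations, contemporaneous linear correlation, rank correlation, and lagged cross-correlation, on a synthetic pair where one series follows the other with a delay; all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic pair: y follows x with a lag of 5 samples plus noise (illustrative values).
n, lag, phi = 1000, 5, 0.9
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()            # a slowly varying AR(1) "driver"
y = np.roll(x, lag) + rng.normal(scale=0.5, size=n)
y[:lag] = rng.normal(scale=0.5, size=lag)           # overwrite the wrapped-around samples

# Three rather different formulations of a "pairwise interaction":
pearson_r, _ = stats.pearsonr(x, y)                 # linear contemporaneous dependence
spearman_r, _ = stats.spearmanr(x, y)               # monotonic (rank) dependence
lags = range(-20, 21)
xcorr = [np.corrcoef(x[:n - k], y[k:])[0, 1] if k >= 0
         else np.corrcoef(x[-k:], y[:n + k])[0, 1] for k in lags]
best_lag = list(lags)[int(np.argmax(xcorr))]        # lag-aware, directed dependence

print(f"Pearson r = {pearson_r:.2f}, Spearman rho = {spearman_r:.2f}, "
      f"peak cross-correlation at lag {best_lag}")
```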
Strategy evolution on dynamic networks
Temporal dynamics can reshape evolutionary behavior (see also the accompanying News&Views):
Models of strategy evolution on static networks help us understand how population structure can promote the spread of traits like cooperation. One key mechanism is the formation of altruistic spatial clusters, where neighbors of a cooperative individual are likely to reciprocate, which protects prosocial traits from exploitation. However, most real-world interactions are ephemeral and subject to exogenous restructuring, so that social networks change over time. Strategic behavior on dynamic networks is difficult to study, and much less is known about the resulting evolutionary dynamics. Here we provide an analytical treatment of cooperation on dynamic networks, allowing for arbitrary spatial and temporal heterogeneity. We show that transitions among a large class of network structures can favor the spread of cooperation, even if each individual social network would inhibit cooperation when static. Furthermore, we show that spatial heterogeneity tends to inhibit cooperation, whereas temporal heterogeneity tends to promote it. Dynamic networks can have profound effects on the evolution of prosocial traits, even when individuals have no agency over network structures.
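For readers who want to experiment, here is a rough Monte Carlo sketch, not the paper's analytical treatment: a donation game with death-birth updating on a network that periodically switches between two random structures. Population size, benefit-to-cost ratio, selection strength, and switching period are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation: donation game with death-birth updating on a network
# that alternates between two random structures (not the paper's framework).

N, b, c, delta = 50, 6.0, 1.0, 0.05      # population size, benefit, cost, selection strength

def random_graph(p=0.12):
    A = (rng.random((N, N)) < p).astype(float)
    A = np.triu(A, 1)
    return A + A.T                        # symmetric adjacency, no self-loops

A1, A2 = random_graph(), random_graph()   # the two structures we alternate between
strategy = rng.integers(0, 2, size=N)     # 1 = cooperator, 0 = defector

def payoffs(A, s):
    deg = np.maximum(A.sum(1), 1)
    gains = (A @ s) * b                   # benefits received from cooperating neighbors
    costs = s * deg * c                   # cost paid per neighbor if cooperating
    return (gains - costs) / deg          # average payoff per interaction

switch_every, steps = 200, 20000
for t in range(steps):
    A = A1 if (t // switch_every) % 2 == 0 else A2
    i = rng.integers(N)                   # a random individual dies
    neigh = np.flatnonzero(A[i])
    if len(neigh) == 0:
        continue
    f = np.exp(delta * payoffs(A, strategy)[neigh])   # neighbors compete for the empty site
    strategy[i] = strategy[rng.choice(neigh, p=f / f.sum())]

print("final cooperator fraction:", strategy.mean())
```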
Inferring Topology of Networks With Hidden Dynamic Variables
Inferring the network topology from the dynamics of interacting units constitutes a topical challenge that drives research on its theory and applications across physics, mathematics, biology, and engineering. Most current inference methods rely on time series data recorded from all dynamical variables in the system. In applications, often only some of these time series are accessible, while other units or variables of all units are hidden, i.e., inaccessible or unobserved. For instance, in AC power grids, frequency measurements are often easily available, whereas determining the phase relations among the oscillatory units requires much more effort. Here, we propose a network inference method that allows us to reconstruct the full network topology even if all units exhibit hidden variables. We illustrate the approach in terms of a basic AC power grid model with two variables per node, the local phase angle and the local instantaneous frequency. Based solely on frequency measurements, we infer the underlying network topology as well as the relative phases that are inaccessible to measurement. The presented method may be enhanced to include systems with more complex coupling functions and additional parameters such as losses in power grid models. These results may thus contribute to developing and applying novel network inference approaches in engineering, biology, and beyond.
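To make the setup concrete, here is a hedged sketch of the kind of basic AC power-grid (swing-equation) test case described above, with two variables per node: the phase angle and the instantaneous frequency. It only simulates the forward dynamics and records what a frequency sensor would measure; the paper's actual contribution, reconstructing the coupling topology and the hidden relative phases from those measurements, is not reproduced here. The network, couplings, and power injections are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal forward simulation of a swing-equation power-grid model with two
# variables per node: phase angle theta_i and instantaneous frequency omega_i.

N = 8
K = np.zeros((N, N))
edges = [(i, (i + 1) % N) for i in range(N)] + [(0, 4), (2, 6)]   # small ring plus shortcuts
for i, j in edges:
    K[i, j] = K[j, i] = 1.5                        # assumed uniform coupling strength
P = rng.uniform(-1, 1, size=N)
P -= P.mean()                                      # balanced net power injections
alpha, dt, T = 0.5, 0.01, 5000                     # damping, time step, number of steps

theta = rng.uniform(-0.1, 0.1, size=N)
omega = np.zeros(N)
freq_record = np.empty((T, N))                     # what a frequency sensor would measure
for t in range(T):
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    domega = P - alpha * omega + coupling          # swing equation with unit inertia
    omega += dt * domega
    theta += dt * omega
    freq_record[t] = omega

print("frequencies settle near a synchronous state:", np.round(freq_record[-1], 3))
```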
Connecting cooperative transport by ants with the physics of self-propelled particles
Switching the perspective and treating multiple microscopic ants as a single self-propelled particle!
Paratrechina longicornis ants are known for their ability to cooperatively transport large food items. Previous studies have focused on the behavioral rules of individual ants and explained the efficient coordination using the coupled-carrier model. In contrast to this microscopic description, we instead treat the transported object as a single self-propelled particle characterized by its velocity magnitude and angle. We experimentally observe P. longicornis ants cooperatively transporting loads of varying radii. By analyzing the statistical features of the load's movement, we show that its salient properties are well captured by a set of Langevin equations describing a self-propelled particle. We relate the parameters of our macroscopic model to microscopic properties of the system. While the autocorrelation time of the velocity direction increases with group size, the autocorrelation time of the speed has a maximum at an intermediate group size. This corresponds to the critical slowdown close to the phase transition identified in the coupled-carrier model. Our findings illustrate that a self-propelled particle model can effectively characterize a system of interacting individuals.
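As a rough illustration of what such a macroscopic description looks like, the sketch below integrates a generic self-propelled particle with Langevin dynamics: an Ornstein-Uhlenbeck speed around a preferred value and a diffusing heading. The parameters are illustrative placeholders, not the values fitted to the ant experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic self-propelled-particle sketch: the carried load is reduced to a speed v
# and a heading phi. Speed relaxes toward a preferred value v0 (Ornstein-Uhlenbeck);
# the heading undergoes rotational diffusion. Parameters are illustrative only.

v0, tau_v, sigma_v = 1.0, 2.0, 0.3     # preferred speed, speed correlation time, speed noise
sigma_phi = 0.3                        # angular noise (sets the direction correlation time)
dt, T = 0.01, 20000

v, phi = v0, 0.0
xy = np.zeros((T, 2))
for t in range(1, T):
    v += dt * (v0 - v) / tau_v + sigma_v * np.sqrt(dt) * rng.normal()
    phi += sigma_phi * np.sqrt(dt) * rng.normal()
    xy[t] = xy[t - 1] + dt * v * np.array([np.cos(phi), np.sin(phi)])

print("net displacement over", T * dt, "time units:", np.round(xy[-1], 2))
```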
Network and systems neuroscience
Brain-wide representations of behavior spanning multiple timescales and states in C. elegans
Changes in an animal’s behavior and internal state are accompanied by widespread changes in activity across its brain. However, how neurons across the brain encode behavior and how this is impacted by state is poorly understood. We recorded brain-wide activity and the diverse motor programs of freely moving C. elegans and built probabilistic models that explain how each neuron encodes quantitative behavioral features. By determining the identities of the recorded neurons, we created an atlas of how the defined neuron classes in the C. elegans connectome encode behavior. Many neuron classes have conjunctive representations of multiple behaviors. Moreover, although many neurons encode current motor actions, others integrate recent actions. Changes in behavioral state are accompanied by widespread changes in how neurons encode behavior, and we identify these flexible nodes in the connectome. Our results provide a global map of how the cell types across an animal’s brain encode its behavior.
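A toy illustration of what an "encoding model" means in this context (a deliberately simplified linear stand-in, not the study's probabilistic models): regress a simulated neuron's activity on behavioral features such as instantaneous velocity, head curvature, and an integrated recent-action term, then score the fit.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "encoding model": explain one neuron's activity as a function of behavioral
# features, here instantaneous velocity, head curvature, and a recent-history term.
# Purely illustrative; the study fits probabilistic models per neuron class.

T = 2000
velocity = np.convolve(rng.normal(size=T), np.ones(50) / 50, mode="same")    # smooth behavior
head_curve = np.convolve(rng.normal(size=T), np.ones(30) / 30, mode="same")
recent_vel = np.convolve(velocity, np.ones(100) / 100, mode="same")          # integrated recent action

# Simulated neuron that conjunctively encodes current velocity and recent velocity.
activity = 1.2 * velocity + 0.8 * recent_vel + 0.1 * rng.normal(size=T)

X = np.column_stack([velocity, head_curve, recent_vel, np.ones(T)])
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)
r2 = 1 - np.var(activity - X @ coef) / np.var(activity)
print("fitted weights (vel, curvature, recent vel, bias):", np.round(coef, 2), " R^2 =", round(r2, 3))
```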
Associative learning in the box jellyfish Tripedalia cystophora
Does associative learning require complex neuronal circuitry? Short answer: No!
Associative learning, such as classical or operant conditioning, has never been unequivocally associated with animals outside bilaterians, e.g., vertebrates, arthropods, or mollusks. Learning modulates behavior and is imperative for survival in the vast majority of animals. Obstacle avoidance is one of several visually guided behaviors in the box jellyfish, Tripedalia cystophora Conant, 1897 (Cnidaria: Cubozoa), and it is intimately associated with foraging between prop roots in their mangrove habitat. The obstacle avoidance behavior (OAB) is a species-specific defense reaction (SSDR) for T. cystophora, so identifying such SSDR is essential for testing the learning capacity of a given animal. Using the OAB, we show that box jellyfish performed associative learning (operant conditioning). We found that the rhopalial nervous system is the learning center and that T. cystophora combines visual and mechanical stimuli during operant conditioning. Since T. cystophora has a dispersed central nervous system lacking a conventional centralized brain, our work challenges the notion that associative learning requires complex neuronal circuitry. Moreover, since Cnidaria is the sister group to Bilateria, it suggests the intriguing possibility that advanced neuronal processes, like operant conditioning, are a fundamental property of all nervous systems.
A role for cortical interneurons as adversarial discriminators
The brain learns representations of sensory information from experience, but the algorithms by which it does so remain unknown. One popular theory formalizes representations as inferred factors in a generative model of sensory stimuli, meaning that learning must improve this generative model and inference procedure. This framework underlies many classic computational theories of sensory learning, such as Boltzmann machines, the Wake/Sleep algorithm, and a more recent proposal that the brain learns with an adversarial algorithm that compares waking and dreaming activity. However, in order for such theories to provide insights into the cellular mechanisms of sensory learning, they must be first linked to the cell types in the brain that mediate them. In this study, we examine whether a subtype of cortical interneurons might mediate sensory learning by serving as discriminators, a crucial component in an adversarial algorithm for representation learning. We describe how such interneurons would be characterized by a plasticity rule that switches from Hebbian plasticity during waking states to anti-Hebbian plasticity in dreaming states. Evaluating the computational advantages and disadvantages of this algorithm, we find that it excels at learning representations in networks with recurrent connections but scales poorly with network size. This limitation can be partially addressed if the network also oscillates between evoked activity and generative samples on faster timescales. Consequently, we propose that an adversarial algorithm with interneurons as discriminators is a plausible and testable strategy for sensory learning in biological systems.
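A toy sketch of the core idea, under strong simplifying assumptions and not the paper's circuit model: a single discriminator unit receives "wake" (data-evoked) and "dream" (generated) activity patterns, applies Hebbian updates in the former case and anti-Hebbian updates in the latter, and thereby learns to respond differently to the two distributions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy discriminator interneuron (illustrative, not the paper's circuit model):
# Hebbian weight updates on wake (data-evoked) patterns, anti-Hebbian updates on
# dream (generated) patterns, so the unit learns to tell the two apart.

d, eta, steps = 20, 0.01, 5000
mu_wake = rng.normal(size=d)                   # assumed mean of data-evoked activity
mu_dream = np.zeros(d)                         # generated activity, initially mismatched
w = 0.01 * rng.normal(size=d)                  # small random initial weights

for _ in range(steps):
    x_wake = mu_wake + rng.normal(size=d)
    x_dream = mu_dream + rng.normal(size=d)
    w += eta * np.tanh(w @ x_wake) * x_wake    # Hebbian during waking
    w -= eta * np.tanh(w @ x_dream) * x_dream  # anti-Hebbian during dreaming
    w *= 0.999                                 # weak decay keeps the weights bounded

wake_out = np.mean([np.tanh(w @ (mu_wake + rng.normal(size=d))) for _ in range(500)])
dream_out = np.mean([np.tanh(w @ (mu_dream + rng.normal(size=d))) for _ in range(500)])
# The sign of the wake response depends on initialization; the separation is the point.
print(f"mean output on wake inputs: {wake_out:.2f}, on dream inputs: {dream_out:.2f}")
```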
Bio-inspired computing
Universal structural patterns in sparse recurrent neural networks
Can structural balance emerge in recurrent neural networks? Yes:
Sparse neural networks can achieve performance comparable to fully connected networks but need less energy and memory, showing great promise for deploying artificial intelligence in resource-limited devices. While significant progress has been made in recent years in developing approaches to sparsify neural networks, artificial neural networks are notorious as black boxes, and it remains an open question whether well-performing neural networks have common structural features. Here, we analyze the evolution of recurrent neural networks (RNNs) trained by different sparsification strategies and for different tasks, and explore the topological regularities of these sparsified networks. We find that the optimized sparse topologies share a universal pattern of signed motifs, RNNs evolve towards structurally balanced configurations during sparsification, and structural balance can improve the performance of sparse RNNs in a variety of tasks. Such structural balance patterns also emerge in other state-of-the-art models, including neural ordinary differential equation networks and continuous-time RNNs. Taken together, our findings not only reveal universal structural features accompanying optimized network sparsification but also offer an avenue for optimal architecture searching.
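A hedged sketch of how one can quantify structural balance in a signed weight matrix: count the closed triads whose product of edge signs is positive (balanced) versus negative (unbalanced). For the random sparse matrix generated below, the balanced fraction should hover near one half; the paper's finding is that trained sparse RNNs drift toward markedly more balanced configurations. The sign symmetrization and density used here are simplifying assumptions, not the paper's exact motif analysis.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)

# Sketch: measure structural balance of a sparse signed weight matrix by counting
# triads whose product of edge signs is positive (balanced) vs negative (unbalanced).

N, density = 40, 0.15
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < density)   # sparse signed "RNN" weights
np.fill_diagonal(W, 0.0)
S = np.sign(W + W.T)                           # symmetrized sign pattern for the undirected check

balanced = unbalanced = 0
for i, j, k in combinations(range(N), 3):
    if S[i, j] and S[j, k] and S[i, k]:        # only fully connected triads
        if S[i, j] * S[j, k] * S[i, k] > 0:
            balanced += 1
        else:
            unbalanced += 1

total = balanced + unbalanced
print(f"closed triads: {total},  balanced fraction: {balanced / max(total, 1):.2f}")
```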
Ecosystems
Mutilation of the tree of life via mass extinction of animal genera
We are in the sixth mass extinction event. Unlike the previous five, this one is caused by the overgrowth of a single species, Homo sapiens. Although the episode is often viewed as an unusually fast (in evolutionary time) loss of species, it is much more threatening, because beyond that loss, it is causing rapid mutilation of the tree of life, where entire branches (collections of species, genera, families, and so on) and the functions they perform are being lost. It is changing the trajectory of evolution globally and destroying the conditions that make human life possible. It is an irreversible threat to the persistence of civilization and the livability of future environments for H. sapiens. Instant corrective actions are required.