Complexity Thoughts: Issue #20
Unraveling complexity: building knowledge, one paper at a time
If you find Complexity Thoughts valuable, click the Like button, leave a comment, repost on Substack, or share this post. It is the only feedback I receive for this free service.
The frequency and quality of this newsletter rely on social interactions. Thank you!
DALL·E 3 representation of this issue’s content
In a nutshell
This issue starts with a historical perspective on the 1970s emergence of chaos theory, highlighting how key contributors influenced one another in combining nonlinear dynamical systems theory with numerical simulations. Another foundational study uses automatic differentiation to optimize control of nonequilibrium systems, a significant advance for nanodevice design. A third work introduces spatially embedded recurrent neural networks, illustrating how structural and functional features of brain networks can emerge from basic biological optimization processes.
We have a Perspective addressing the necessity of social contexts for developing human-like artificial intelligence, proposing that natural intelligence evolves through collective living and social interactions. For evolutionary biology, there is a paper revisiting fitness landscape theory, suggesting that rugged landscapes can actually facilitate Darwinian evolution, contrary to traditional beliefs. For neuroscience, a study proposes a role for bioelectric fields in memory formation, potentially acting as control parameters. The 'Expensive Brain' hypothesis is explored, discussing strategies for managing the brain's high metabolic demands. In the physics of living systems, the interplay between mechanochemical patterns and glassy dynamics in cellular monolayers is analyzed, revealing insights into tissue behavior. Concerning epidemiology, the effectiveness of the COVID-19 Scenario Modeling Hub during the pandemic is highlighted, showcasing its role in guiding policy decisions and the relevance of modeling complex systems, especially during a crisis. From the social sciences, we have: (i) a study emphasizing the importance of structured information-sharing networks in reducing medical errors, demonstrating their impact on clinical decision-making; (ii) a piece about the polarization of political opinions, seen through the lenses of statistical physics and network science, featuring many interesting recent works. Lots of food for thought!
Thanks for reading Complexity Thoughts! Subscribe for free to receive new posts and support my work.
I’d like to mention that this week #ComplexityThoughts was presented during an international conference (thanks Luis Rocha for the support!), where I gave a keynote talk about a framework to analyze network function, beyond structure. I'll write more about this in a dedicated post, hopefully before Xmas.
Foundations of network science and complex systems
What a nice idea! A very entertaining piece about how some of the main actors in the field influenced each other with ideas and research. What an exciting period the 1970s were!
Writing a history of a scientific theory is always difficult because it requires one to focus on some key contributors and to “reconstruct” some supposed influences. In the 1970s, a new way of performing science under the name “chaos” emerged, combining the mathematics of nonlinear dynamical systems theory with numerical simulations. To provide a direct testimony of how contributors can be influenced by other scientists or works, we here collected some writings about the early times of a few contributors to chaos theory. The purpose is to exhibit the diversity of their paths and to bring some elements—which were never published—illustrating the atmosphere of this period. Some peculiarities of chaos theory are also discussed.
“If synthetic nanotechnology is ever to rival the performance of biological systems, we must understand how to harness nonequilibrium physics, tuning external control parameters to achieve optimal results”. Could not be said better…
Controlling the evolution of nonequilibrium systems to minimize dissipated heat or work is a key goal for designing nanodevices, in both nanotechnology and biology. Progress in computing optimal protocols to extremize thermodynamic variables has, thus far, been limited to either simple systems or near-equilibrium evolution. Here, we present an approach for computing optimal protocols based on automatic differentiation. Our methodology is applicable to complex systems and multidimensional protocols and is valid arbitrarily far from equilibrium. We validate our method by reproducing theoretical optimal protocols for a Brownian particle in a time-varying harmonic trap. We also compute departures from near-equilibrium behavior for magnetization reversal on an Ising lattice and for barrier crossing driven by a harmonic trap, which is used to represent a range of biological processes including biomolecular unfolding reactions. Algorithms based on automatic differentiation outperform the near-equilibrium theory for far-from-equilibrium magnetization reversal and for driven barrier crossing beyond the linear regime. The optimal protocol for far-from-equilibrium driven barrier crossing is found to hasten the approach to, and slow the departure from, the barrier region compared to the near-equilibrium theoretical protocol. We demonstrate the utility of our method in a real-world use case by reducing the work required to unfold a DNA hairpin in the coarse-grained oxDNA model and improving its nonequilibrium free-energy landscape reconstruction compared to a naive linear protocol.
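The core idea in the abstract above, treating the simulated trajectory as a differentiable function of the control protocol and descending the gradient of the work, can be sketched in a few lines. The toy below drags a deterministic overdamped particle with a harmonic trap and optimizes the trap trajectory between fixed endpoints; finite-difference gradients stand in for true automatic differentiation, and every parameter (the stiffness `k`, time step `dt`, displacement `d`) is an illustrative choice, not taken from the paper.

```python
import numpy as np

def dissipated_work(protocol, k=1.0, dt=0.05, x0=0.0):
    """Total work done on an overdamped particle (friction = 1) dragged by a
    harmonic trap H = k/2 (x - lambda)^2 whose centre follows `protocol`."""
    x, W, lam_prev = x0, 0.0, protocol[0]
    for lam in protocol[1:]:
        # work increment: dW = (dH/dlambda) * dlambda = -k (x - lambda) dlambda
        W += -k * (x - lam_prev) * (lam - lam_prev)
        # Euler step of the deterministic overdamped dynamics x' = -k (x - lambda)
        x += -k * (x - lam) * dt
        lam_prev = lam
    return W

def optimize_protocol(n_steps=40, d=2.0, iters=500, lr=0.05, eps=1e-5):
    """Gradient descent on the interior protocol points; finite differences
    stand in for the automatic differentiation used in the paper."""
    protocol = np.linspace(0.0, d, n_steps + 1)  # start from the naive linear ramp
    for _ in range(iters):
        base = dissipated_work(protocol)
        grad = np.zeros_like(protocol)
        for i in range(1, n_steps):  # endpoints stay pinned at 0 and d
            p = protocol.copy()
            p[i] += eps
            grad[i] = (dissipated_work(p) - base) / eps
        protocol -= lr * grad
    return protocol

linear = np.linspace(0.0, 2.0, 41)
opt = optimize_protocol()
print(dissipated_work(opt) < dissipated_work(linear))  # True: the optimized ramp costs less work
```

The same structure carries over to the stochastic, multidimensional case: one simply averages the work over noisy trajectories and lets an autodiff framework supply exact gradients.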
Brain networks exist within the confines of resource limitations. As a result, a brain network must overcome the metabolic costs of growing and sustaining the network within its physical space, while simultaneously implementing its required information processing. Here, to observe the effect of these processes, we introduce the spatially embedded recurrent neural network (seRNN). seRNNs learn basic task-related inferences while existing within a three-dimensional Euclidean space, where the communication of constituent neurons is constrained by a sparse connectome. We find that seRNNs converge on structural and functional features that are also commonly found in primate cerebral cortices. Specifically, they converge on solving inferences using modular small-world networks, in which functionally similar units spatially configure themselves to utilize an energetically efficient mixed-selective code. Because these features emerge in unison, seRNNs reveal how many common structural and functional brain motifs are strongly intertwined and can be attributed to basic biological optimization processes. seRNNs incorporate biophysical constraints within a fully artificial system and can serve as a bridge between structural and functional research communities to move neuroscientific understanding forwards.
An interesting perspective, discussing why human-like artificial intelligence cannot emerge without specific complex ingredients and boundary conditions. I have written short essays about something like this from a complementary perspective, here and here.
Traditionally, cognitive and computer scientists have viewed intelligence solipsistically, as a property of unitary agents devoid of social context. Given the success of contemporary learning algorithms, we argue that the bottleneck in artificial intelligence (AI) advancement is shifting from data assimilation to novel data generation. We bring together evidence showing that natural intelligence emerges at multiple scales in networks of interacting agents via collective living, social relationships and major evolutionary transitions, which contribute to novel data generation through mechanisms such as population pressures, arms races, Machiavellian selection, social learning and cumulative culture. Many breakthroughs in AI exploit some of these processes, from multi-agent structures enabling algorithms to master complex games such as Capture-The-Flag and StarCraft II, to strategic communication in the game Diplomacy and the shaping of AI data streams by other AIs. Moving beyond a solipsistic view of agency to integrate these mechanisms could provide a path to human-like compounding innovation through ongoing novel data generation.
Do rugged landscapes impair adaptive evolution? This is a long-standing question, at least since Wright introduced the concept of the fitness landscape (1932). If you never have, you can read Wright's original paper here. To answer this question, the authors map a large empirical landscape and discover that it is highly rugged; nevertheless, it shows features of a smooth manifold, including likely reachability of its peaks. Wow!
Fitness landscape theory predicts that rugged landscapes with multiple peaks impair Darwinian evolution, but experimental evidence is limited. In this study, we used genome editing to map the fitness of >260,000 genotypes of the key metabolic enzyme dihydrofolate reductase in the presence of the antibiotic trimethoprim, which targets this enzyme. The resulting landscape is highly rugged and harbors 514 fitness peaks. However, its highest peaks are accessible to evolving populations via abundant fitness-increasing paths. Different peaks share large basins of attraction that render the outcome of adaptive evolution highly contingent on chance events. Our work shows that ruggedness need not be an obstacle to Darwinian evolution but can reduce its predictability. If true in general, the complexity of optimization problems on realistic landscapes may require reappraisal.
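To get a feel for how a rugged landscape can still be navigable, here is a toy sketch using a small random NK model (not the paper's empirical DHFR landscape): we count local fitness peaks and check how often greedy fitness-increasing walks reach the global peak. The genotype length `L`, epistasis parameter `K`, and seed are arbitrary choices for illustration.

```python
import itertools, random

random.seed(0)
L, K = 8, 2  # genotype length and epistasis parameter of a toy NK landscape

# each locus i interacts with K random other loci; contributions are random
neighbours = [random.sample([j for j in range(L) if j != i], K) for i in range(L)]
tables = [{} for _ in range(L)]

def fitness(g):
    total = 0.0
    for i in range(L):
        key = (g[i],) + tuple(g[j] for j in neighbours[i])
        if key not in tables[i]:
            tables[i][key] = random.random()  # lazily drawn, then memoized
        total += tables[i][key]
    return total / L

genotypes = list(itertools.product((0, 1), repeat=L))
fit = {g: fitness(g) for g in genotypes}

def mutants(g):
    """All one-mutation neighbours of genotype g."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]

# local peaks: genotypes with no fitter one-mutant neighbour
peaks = [g for g in genotypes if all(fit[m] < fit[g] for m in mutants(g))]

def adaptive_walk(g):
    """Greedy walk that always steps to the fittest improving neighbour."""
    while True:
        best = max(mutants(g), key=fit.get)
        if fit[best] <= fit[g]:
            return g
        g = best

global_peak = max(genotypes, key=fit.get)
hits = sum(adaptive_walk(g) == global_peak for g in genotypes)
print(len(peaks), "local peaks;", f"{hits / len(genotypes):.0%} of walks reach the global peak")
```

Even with many local peaks, a sizeable fraction of starting genotypes typically has a monotone fitness-increasing path to the global peak, which is the qualitative point the paper makes at a vastly larger empirical scale.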
I find it exciting that, besides synaptic transmission, information transfer might be driven by bioelectric fields in some top-down way, making such fields effective control parameters:
It is increasingly clear that memories are distributed across multiple brain areas. Such “engram complexes” are important features of memory formation and consolidation. Here, we test the hypothesis that engram complexes are formed in part by bioelectric fields that sculpt and guide the neural activity and tie together the areas that participate in engram complexes. Like the conductor of an orchestra, the fields influence each musician or neuron and orchestrate the output, the symphony. Our results use the theory of synergetics, machine learning, and data from a spatial delayed saccade task and provide evidence for in vivo ephaptic coupling in memory representations.
Metabolically speaking, the human brain is one of the most energetically costly organs in the body, consuming nearly 20% of our metabolic energy, despite comprising only 2% of our body mass. How can we explain this fact from an evolutionary perspective?
The Expensive Brain hypothesis provides a unifying framework to explain the fact that the metabolic costs of a relatively large brain must be met by any combination of increased total energy turnover or reduced energy allocation to another expensive function such as digestion, locomotion, or production (growth and reproduction) → (see this paper)
In this new Perspective piece, the authors review a variety of studies supporting three strategies:
How have animals managed to maintain metabolically expensive brains given the volatile and fleeting availability of calories in the natural world? Here we review studies in support of three strategies that involve: 1) a reallocation of energy from peripheral tissues and functions to cover the costs of the brain, 2) an implementation of energy-efficient neural coding, enabling the brain to operate at reduced energy costs, and 3) efficient use of costly neural resources during food scarcity. Collectively, these studies reveal a heterogeneous set of energy-saving mechanisms that make energy-costly brains fit for survival.
Physics of living systems
Living tissues are characterized by an intrinsically mechanochemical interplay of active physical forces and complex biochemical signaling pathways. Either feature alone can give rise to complex emergent phenomena, for example, mechanically driven glassy dynamics and rigidity transitions, or chemically driven reaction-diffusion instabilities. An important question is how to quantitatively assess the contribution of these different cues to the large-scale dynamics of biological materials. We address this in Madin-Darby canine kidney (MDCK) monolayers, considering both mechanochemical feedback between extracellular signal-regulated kinase (ERK) signaling activity and cellular density as well as a mechanically active tissue rheology via a self-propelled vertex model. We show that the relative strength of active migration forces to mechanochemical couplings controls a transition from a uniform active glass to periodic spatiotemporal waves. We parametrize the model from published experimental data sets on MDCK monolayers and use it to make new predictions on the correlation functions of cellular dynamics and the dynamics of topological defects associated with the oscillatory phase of cells. Interestingly, MDCK monolayers are best described by an intermediary parameter region in which both mechanochemical couplings and noisy active propulsion have a strong influence on the dynamics. Finally, we study how tissue rheology and ERK waves produce feedback on one another and uncover a mechanism via which tissue fluidity can be controlled by mechanochemical waves at both the local and global levels.
If you didn’t know about its existence, the COVID-19 Scenario Modeling Hub has played a major role during the COVID-19 pandemic in using modeling to inform policy and decision makers. Dozens of leading groups in infectious disease modeling periodically submitted their projections to the US CDC: this paper summarizes the main results and the ensemble method used to outperform each model in isolation. (See also this 2022 PNAS paper for other results.)
Our ability to forecast epidemics far into the future is constrained by the many complexities of disease systems. Realistic longer-term projections may, however, be possible under well-defined scenarios that specify the future state of critical epidemic drivers. Since December 2020, the U.S. COVID-19 Scenario Modeling Hub (SMH) has convened multiple modeling teams to make months ahead projections of SARS-CoV-2 burden, totaling nearly 1.8 million national and state-level projections. Here, we find SMH performance varied widely as a function of both scenario validity and model calibration. We show scenarios remained close to reality for 22 weeks on average before the arrival of unanticipated SARS-CoV-2 variants invalidated key assumptions. An ensemble of participating models that preserved variation between models (using the linear opinion pool method) was consistently more reliable than any single model in periods of valid scenario assumptions, while projection interval coverage was near target levels. SMH projections were used to guide pandemic response, illustrating the value of collaborative hubs for longer-term scenario projections.
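The linear opinion pool mentioned in the abstract combines models by mixing their predictive distributions, rather than averaging their point predictions, so between-model disagreement survives in the ensemble's uncertainty. A minimal sketch with made-up forecast samples (all numbers are illustrative, not from the Hub):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictive samples from three disagreeing models (made-up numbers)
model_draws = [
    rng.normal(100, 10, 10_000),   # model A
    rng.normal(140, 10, 10_000),   # model B
    rng.normal(120, 15, 10_000),   # model C
]

# Linear opinion pool: the ensemble distribution is the equal-weight mixture of
# the member distributions, so we simply pool all draws together.
pool = np.concatenate(model_draws)

# The pool's variance exceeds the average within-model variance by exactly the
# spread of the model means, so disagreement widens the projection intervals.
within = np.mean([d.var() for d in model_draws])
print(pool.var() > within)               # True
print(np.percentile(pool, [2.5, 97.5]))  # a 95% projection interval
```

Averaging the three means into a single point forecast would discard this disagreement, which is why mixture-style ensembles tend to be better calibrated when no single model can be trusted.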
Errors in clinical decision-making are disturbingly common. Here, we show that structured information–sharing networks among clinicians significantly reduce diagnostic errors, and improve treatment recommendations, as compared to groups of individual clinicians engaged in independent reflection. Our findings show that these improvements are not a result of simple regression to the group mean. Instead, we find that within structured information–sharing networks, the worst clinicians improved significantly while the best clinicians did not decrease in quality. These findings offer implications for the use of social network technologies to reduce diagnostic errors and improve treatment recommendations among clinicians.
This is a very interesting overview of recent results about polarization, through the lens of statistical physics, complexity science, and network science. It fails to mention some very important works in the field, but it also provides good pointers within a consistent narrative.
Oldies but goodies
Cellular functions, such as signal transmission, are carried out by ‘modules’ made up of many species of interacting molecules. Understanding how modules work has depended on combining phenomenological analysis with molecular studies. General principles that govern the structure and behaviour of modules may be discovered with help from synthetic sciences such as engineering and computer science, from stronger interactions between experiment and theory in cell biology, and from an appreciation of evolutionary constraints.