If you find Complexity Thoughts interesting, follow us! Click on the Like button, leave a comment, repost on Substack or share this post. It is the only feedback I can get for this free service. The frequency and quality of this newsletter rely on social interactions. Thank you!
→ Don’t miss the podcast version of this post: click on “Spotify/Apple Podcast” above!
Foundations of network science and complex systems
Why cellular computations challenge our design principles

Biological systems inherently perform computations, inspiring synthetic biologists to engineer biological systems capable of executing predefined computational functions for diverse applications. Typically, this involves applying principles from the design of conventional silicon-based computers to create novel biological systems, such as genetic Boolean gates and circuits. However, the natural evolution of biological computation has not adhered to these principles, and this distinction warrants careful consideration. Here, we explore several concepts connecting computational theory, living cells, and computers, which may offer insights into the development of increasingly sophisticated biological computations. While conventional computers approach theoretical limits, solving nearly all problems that are computationally solvable, biological computers have the opportunity to outperform them in specific niches and problem domains. Crucially, biocomputation does not necessarily need to scale to rival or replicate the capabilities of electronic computation. Rather, efforts to re-engineer biology must recognise that life has evolved and optimised itself to solve specific problems using its own principles. Consequently, intelligently designed cellular computations will diverge from traditional computing in both implementation and application.

Genetic designs for stochastic and probabilistic biocomputing
Well, this should go in a section like “Synthetic biology”, but I think there is enough material here to deserve a spot as a foundations paper.
The programming of computations in living cells is achieved by manipulating information flows within genetic networks. Typically, gene expression is discretized into high and low levels, representing 0 and 1 logic values to encode a single bit of information. However, molecular signaling and computation in living systems operate dynamically, stochastically, and continuously, challenging this binary paradigm. While stochastic and probabilistic models of computation address these complexities, there is a lack of work unifying these concepts to implement computations tailored to these features of living matter. Here we design genetic networks for stochastic and probabilistic computing, developing the underlying theory. Moving beyond the digital framework, we propose random pulses and probabilistic bits (p-bits) as better candidates for encoding and processing information in genetic networks. Encoding information in the frequency of expression bursts offers robustness to noise, while p-bits enable unique circuit designs with features like invertibility. We illustrate these advantages by designing circuits and providing mathematical models and simulations to demonstrate their functionality. Our approach to stochastic and probabilistic computing not only advances our understanding of information processing in biological systems but also opens new possibilities for designing genetic circuits with enhanced capabilities.
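To make the p-bit idea concrete: a p-bit outputs 1 with a probability set by its input, so the encoded value survives averaging even though every individual readout is random. The sketch below is a minimal caricature, not the paper's genetic implementation; the sigmoid drive, the `p_bit` name, and all parameter values are illustrative assumptions.

```python
import math
import random

def p_bit(drive, rng):
    """A probabilistic bit: returns 1 with probability sigmoid(drive).

    'drive' stands in for an input signal (e.g. an activator
    concentration in a genetic circuit); the name is illustrative.
    """
    p_one = 1.0 / (1.0 + math.exp(-drive))
    return 1 if rng.random() < p_one else 0

# Any single readout is noisy, but averaging many stochastic samples
# recovers the encoded probability -- the robustness-to-noise argument.
rng = random.Random(42)
samples = [p_bit(2.0, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)  # close to sigmoid(2.0), about 0.88
```

Because the information lives in a probability rather than a fixed level, occasional flipped readouts shift the estimate only slightly instead of corrupting the bit outright.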
Biological Systems
Self-reproduction is one of the most fundamental features of natural life. This study introduces a biochemistry-free method for creating self-reproducing polymeric vesicles. In this process, nonamphiphilic molecules are mixed and illuminated with green light, initiating polymerization into amphiphiles that self-assemble into vesicles. These vesicles evolve through feedback between polymerization, degradation, and chemiosmotic gradients, resulting in self-reproduction. As vesicles grow, they polymerize their contents, leading to their partial release and their reproduction into new vesicles, exhibiting a loose form of heritable variation. This process mimics key aspects of living systems, offering a path for developing a broad class of abiotic, life-like systems.
We investigate mechanisms for the observed nonlinear growth in the number of polymer vesicles generated during a photo-Reversible Addition-Fragmentation Chain Transfer-based polymerization-induced self-assembly (PISA) reaction. Our experimental results reveal the presence of a self-reproduction process during which chemically active polymer protocells are chemically and autonomously generated in a light-stimulated one-pot reaction that starts from a homogeneous blend of non-self-assembling molecules and which, as observed microscopically, form vesicular objects that grow and multiply (reproduce) during irradiation with green light (530 nm) as the reaction proceeds. By using a filtration-based protocol, our experiments demonstrate that the self-reproduction process occurs concomitantly with the PISA process and results in a nonlinear increase in the number of polymer vesicles during photopolymerization which can only be ascribed to their reproduction via polymeric spores ejected from previously existing first-generation vesicles. The second and subsequent generations’ vesicles also self-reproduce and continue the process of population growth.
Polygenic prediction and gene regulation networks
Exploring the degree to which phenotypic variation, influenced by intrinsic nonlinear biological mechanisms, can be accurately captured using statistical methods is essential for advancing our comprehension of complex biological systems and predicting their functionality. Here, we examine this issue by combining a computational model of gene regulation networks with a linear additive prediction model, akin to polygenic scores utilized in genetic analyses. Inspired by the variational framework of quantitative genetics, we create a population of individual networks possessing identical topology yet showcasing diversity in regulatory strengths. By discerning which regulatory connections determine the prediction of phenotypes, we contextualize our findings within the framework of core and peripheral causal determinants, as proposed by the omnigenic model of complex traits. We establish connections between our results and concepts such as global sensitivity and local stability in dynamical systems, alongside the notion of sloppy parameters in biological models. Furthermore, we explore the implications of our investigation for the broader discourse surrounding the role of epistatic interactions in the prediction of complex phenotypes.
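The paper's core setup, a linear additive predictor fit to phenotypes generated by a nonlinear gene regulation network, can be caricatured in a few lines. This is a sketch under stated assumptions: a toy two-gene sigmoid network stands in for the paper's regulatory model, and ordinary least squares stands in for the polygenic-score fit; none of the numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def phenotype(w):
    """Steady-state expression of a toy 2-gene cross-regulatory loop.

    Iterating x <- sigmoid(W x) is a stand-in for the paper's gene
    regulation dynamics; the exact functional form is illustrative.
    """
    x = np.full(2, 0.5)
    W = w.reshape(2, 2)
    for _ in range(200):
        x = 1.0 / (1.0 + np.exp(-W @ x))
    return x[0]  # read out one gene's expression as the "trait"

# A population sharing the same network topology but varying in
# regulatory strengths (the analogue of standing genetic variation).
W_pop = rng.normal(0.0, 1.0, size=(500, 4))
y = np.array([phenotype(w) for w in W_pop])

# Linear additive predictor, akin to a polygenic score:
# y_hat = a + sum_k beta_k * w_k, fit by least squares.
X = np.column_stack([np.ones(len(y)), W_pop])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1.0 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
```

The gap between `r2` and 1 is exactly the variance that the additive model cannot capture, which is where the paper's discussion of epistasis, core versus peripheral regulators, and sloppy parameters enters.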
Neuroscience
Neuron–astrocyte associative memory
Recent experiments have challenged the belief that glial cells, which compose at least half of brain cells, are just passive support structures. Despite this, a clear understanding of how neurons and glia work together for brain function is missing. To close this gap, we present a theory of neuron–astrocyte networks for memory processing, using the Dense Associative Memory framework. Our findings suggest that astrocytes can serve as natural units for implementing this network in biological “hardware.” Astrocytes enhance the memory capacity of the network. This boost originates from storing memories in the network of astrocytic processes, not just in synapses, as commonly believed. These process-to-process communications likely occur in the brain and could help explain its impressive memory processing capabilities.
Key ideas in machine learning and AI drew initial inspiration from neuroscience, including neural networks, convolutional nets, threshold linear (ReLU) units, and dropout. Yet it is debatable whether neuroscience research from the last fifty years has significantly influenced or informed machine learning. Astrocytes, along with other biological structures such as dendrites and neuromodulators, may offer a fresh source of inspiration for building state-of-the-art AI systems.
Astrocytes, the most abundant type of glial cell, play a fundamental role in memory. Despite most hippocampal synapses being contacted by an astrocyte, there are no current theories that explain how neurons, synapses, and astrocytes might collectively contribute to memory function. We demonstrate that fundamental aspects of astrocyte morphology and physiology naturally lead to a dynamic, high-capacity associative memory system. The neuron–astrocyte networks generated by our framework are closely related to popular machine learning architectures known as Dense Associative Memories. By adjusting the connectivity pattern, the model developed here yields a family of associative memory networks that includes a Dense Associative Memory and a Transformer as two limiting cases. In the known biological implementations of Dense Associative Memories, the ratio of stored memories to the number of neurons remains constant, despite the growth of the network size. Our work demonstrates that neuron–astrocyte networks follow a superior memory scaling law, outperforming known biological implementations of Dense Associative Memory. Our model suggests an exciting and previously unnoticed possibility that memories could be stored, at least in part, within the network of astrocyte processes rather than solely in the synaptic weights between neurons.
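For readers unfamiliar with Dense Associative Memories, here is a minimal sketch of the classical Krotov–Hopfield version that the neuron–astrocyte framework builds on: store P bipolar patterns and update spins to lower the energy E = -Σ_μ F(ξ^μ · σ), with a rapidly growing F. The cubic F, the network size, and the corruption level are illustrative choices; this is not the paper's astrocytic extension.

```python
import numpy as np

rng = np.random.default_rng(1)

N, P, n = 100, 30, 3                   # neurons, stored patterns, interaction order
xi = rng.choice([-1, 1], size=(P, N))  # stored memories

def F(x):
    # Rapidly growing separation function; F(x) = x**n with n > 2
    # is what gives Dense Associative Memories their high capacity.
    return x ** n

def update(sigma):
    """One asynchronous sweep: flip each spin toward the state that
    lowers the energy E = -sum_mu F(xi_mu . sigma)."""
    sigma = sigma.copy()
    for i in range(N):
        # Input to spin i from each memory, excluding spin i itself.
        field = xi @ sigma - xi[:, i] * sigma[i]
        sigma[i] = np.sign(F(field + xi[:, i]).sum()
                           - F(field - xi[:, i]).sum()) or 1
    return sigma

# Retrieve a stored memory from a corrupted cue (20% of spins flipped).
cue = xi[0].copy()
flip = rng.choice(N, size=20, replace=False)
cue[flip] *= -1
recalled = update(cue)
overlap = (recalled @ xi[0]) / N  # 1.0 would mean perfect recall
```

The scaling claim in the abstract is about how large P can grow with N before retrieval fails; the astrocytic process network is argued to push that limit beyond what synapse-only implementations achieve.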
Influence of topology on the critical behavior of hierarchical modular neuronal networks
Understanding how the brain maintains stable, yet flexible, activity is a central question in neuroscience. While previous work suggests that criticality, when neurons are poised near a phase transition, supports optimal brain function, how network architecture affects this condition remains unclear. Here, we study hierarchical modular neuronal networks composed of stochastic spiking neurons with adaptive dynamics. We show that network topology significantly influences critical behavior, with sparse modular architectures sustaining criticality more robustly than fully connected ones. Our simulations reveal that homeostatic mechanisms can stabilize activity near criticality, even as modular interactions introduce structural inhomogeneities. These inhomogeneities can produce quasicritical dynamics and Griffiths-like phases, broadening the range of near-critical behavior. Our work highlights the role of structural organization in shaping emergent brain dynamics and offers new insights into how biological networks may tune themselves to operate near criticality.
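The notion of criticality invoked here can be illustrated with a branching-process caricature of spiking activity: if each active neuron triggers on average m others, then m = 1 is the critical point where activity avalanches become scale-free. The sketch below uses this drastic simplification (no modules, no adaptation, no homeostasis); all parameters are illustrative.

```python
import random

def avalanche_size(m, rng, k=10, cap=10_000):
    """Total activity in one avalanche of a branching process.

    Each active neuron excites each of k downstream neurons with
    probability m/k, so m is the branching ratio; m = 1 is critical.
    The cap truncates the rare huge avalanches at criticality.
    """
    active, size = 1, 1
    while active and size < cap:
        active = sum(1 for _ in range(active * k) if rng.random() < m / k)
        size += active
    return size

rng = random.Random(0)
sub = [avalanche_size(0.7, rng) for _ in range(2000)]   # subcritical
crit = [avalanche_size(1.0, rng) for _ in range(2000)]  # critical

mean_sub = sum(sub) / len(sub)    # theory: 1 / (1 - m) = 3.33 for m = 0.7
mean_crit = sum(crit) / len(crit) # heavy-tailed, far larger than mean_sub
```

Structural inhomogeneity enters when different modules sit at slightly different effective m; regions locally above and below 1 are what give rise to the quasicritical and Griffiths-like stretching of the near-critical regime.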
A computational approach to evaluate how molecular mechanisms impact large-scale brain activity

Assessing the impact of pharmaceutical compounds on brain activity is a critical issue in contemporary neuroscience. Currently, no systematic approach exists for evaluating these effects in whole-brain models, which typically focus on macroscopic phenomena, while pharmaceutical interventions operate at the molecular scale. Here we address this issue by presenting a computational approach for brain simulations using biophysically grounded mean-field models that integrate membrane conductances and synaptic receptors, showcased in the example of anesthesia. We show that anesthetics targeting GABAA and NMDA receptors can switch brain activity to generalized slow-wave patterns, as observed experimentally in deep anesthesia. To validate our models, we demonstrate that these slow-wave states exhibit reduced responsiveness to external stimuli and functional connectivity constrained by anatomical connectivity, mirroring experimental findings in anesthetized states across species. Our approach, founded on mean-field models that incorporate molecular realism, provides a robust framework for understanding how molecular-level drug actions impact whole-brain dynamics.
Modular arrangement of synaptic and intrinsic homeostatic plasticity within visual cortical circuits
Homeostatic plasticity maintains normal brain functions by constraining various network features of neural circuits, yet how this is realized on the cellular level remains unknown. Synaptic and intrinsic forms of homeostatic plasticity adjust different aspects of neuronal excitability to preserve circuit excitation–inhibition balance; here, we show that they sense distinct aspects of network activity in visual circuits and can thus be independently recruited by visual experiences that shape specific network features. This modular arrangement allows these homeostatic mechanisms to regulate distinct network features and ensures that neural circuits are resilient to a wide range of perturbations.
Neocortical circuits use synaptic and intrinsic forms of homeostatic plasticity to stabilize key features of network activity, but whether these different homeostatic mechanisms act redundantly or can be independently recruited to stabilize different network features is unknown. Here, we used pharmacological and genetic perturbations both in vitro and in vivo to determine whether synaptic scaling and intrinsic homeostatic plasticity (IHP) are arranged and recruited in a hierarchical or modular manner within layer 2/3 (L2/3) pyramidal neurons in the rodent primary visual cortex (V1). Surprisingly, although the expression of synaptic scaling and IHP was dependent on overlapping signaling pathways, they could be independently recruited by manipulating spiking activity or NMDA receptor (NMDAR) signaling, respectively. Further, we found that changes in visual experience that affect NMDAR activation but not mean firing selectively trigger IHP, without recruiting synaptic scaling. These findings support a modular model in which synaptic scaling and IHP respond to and stabilize distinct aspects of network activity.
Human behavior
Comparative evaluation of behavioral epidemic models using COVID-19 data
Modeling the interplay between human behavior and infectious disease transmission remains one of the key challenges in epidemiology. In this study, we evaluate the performance of three mechanistic behavioral epidemic models designed to address this issue. We compare data-driven and analytical approaches across the first COVID-19 wave, spanning nine diverse locations and two modeling tasks. While the optimal model may vary depending on factors such as data availability and geography, our findings show that approaches explicitly modeling behavioral feedback mechanisms often outperform data-driven approaches, even when considering data quality and the increased numbers of free parameters of these models.

Characterizing the feedback linking human behavior and the transmission of infectious diseases (i.e., behavioral changes) remains a significant challenge in computational and mathematical epidemiology. Existing behavioral epidemic models often lack real-world data calibration and cross-model performance evaluation in both retrospective analysis and forecasting. In this study, we systematically compare the performance of three mechanistic behavioral epidemic models across nine geographies and two modeling tasks during the first wave of COVID-19, using various metrics. The first model, a Data-Driven Behavioral Feedback Model, incorporates behavioral changes by leveraging mobility data to capture variations in contact patterns. The second and third models are Analytical Behavioral Feedback Models, which simulate the feedback loop either through the explicit representation of different behavioral compartments within the population or by utilizing an effective nonlinear force of infection. Our results do not identify a single best model overall, as performance varies based on factors such as data availability, data quality, and the choice of performance metrics. While the Data-Driven Behavioral Feedback Model incorporates substantial real-time behavioral information, the Analytical Compartmental Behavioral Feedback Model often demonstrates superior or equivalent performance in both retrospective fitting and out-of-sample forecasts. Overall, our work offers guidance for future approaches and methodologies to better integrate behavioral changes into the modeling and projection of epidemic dynamics.
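The "effective nonlinear force of infection" idea can be sketched with a minimal SIR model whose transmission rate is damped as prevalence rises, so that rising case counts suppress contacts. The 1/(1 + alpha*i) damping form and all parameter values below are illustrative assumptions, not the specification used in the study.

```python
def simulate(beta0=0.5, gamma=0.2, alpha=0.0, days=200, dt=0.1, i0=1e-4):
    """Forward-Euler SIR with prevalence-dependent transmission.

    beta0 / (1 + alpha * i) is an assumed nonlinear force-of-infection
    form: effective transmission drops as prevalence i rises
    (alpha = 0 recovers the classic SIR model).
    """
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        beta = beta0 / (1.0 + alpha * i)  # behavioral feedback
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r

peak_plain, final_plain = simulate(alpha=0.0)    # no behavioral feedback
peak_behav, final_behav = simulate(alpha=200.0)  # strong feedback
# Feedback flattens the curve: lower peak prevalence and smaller
# attack rate than the fixed-beta epidemic with the same R0 = 2.5.
```

The Analytical Compartmental Behavioral Feedback Model in the study replaces this single damping function with explicit behavioral compartments, while the data-driven model replaces it with observed mobility; the comparison in the paper is over which representation tracks real epidemics better.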